
Hierarchy-CLIP

[CVPR 2023] Improving Zero-shot Generalization and Robustness of Multi-modal Models

Improving Zero-shot Generalization and Robustness of Multi-modal Models
Yunhao Ge*, Jie Ren*, Andrew Gallagher, Yuxiao Wang, Ming-Hsuan Yang, Hartwig Adam, Laurent Itti, Balaji Lakshminarayanan, Jiaping Zhao (* = equal contribution)
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023


Figure: Our zero-shot classification pipeline consists of 2 steps: confidence estimation via self-consistency (left block) and top-down and bottom-up label augmentation using the WordNet hierarchy (right block).
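As a rough illustration of the first step (not the paper's exact implementation), the sketch below assumes we already have one set of CLIP zero-shot logits per prompt template for a single image; the fraction of templates that agree on the top-1 class serves as the self-consistency confidence, and low-confidence images are the ones routed to the hierarchy-based label augmentation step. The function name self_consistency_confidence and the plain-numpy input layout are illustrative assumptions.

import numpy as np

def self_consistency_confidence(logits_per_prompt):
    # logits_per_prompt: [num_prompts, num_classes] zero-shot logits for one image,
    # e.g. one row per CLIP prompt template (hypothetical input layout).
    preds = logits_per_prompt.argmax(axis=-1)                   # top-1 class per prompt
    votes = np.bincount(preds, minlength=logits_per_prompt.shape[-1])
    confidence = votes.max() / len(preds)                       # fraction of prompts that agree
    return confidence, int(votes.argmax())

# Toy usage with random logits: images whose confidence falls below a threshold
# would be passed to the top-down / bottom-up label augmentation step.
rng = np.random.default_rng(0)
conf, pred = self_consistency_confidence(rng.normal(size=(8, 10)))
print(f"confidence={conf:.2f}, predicted class={pred}")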


Figure: Typical failure modes in cases where the top-5 prediction was correct but the top-1 prediction was wrong.

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/gyhandy/Hierarchy-CLIP.git
cd Hierarchy-CLIP
  • Install the required library (scenic):
git clone https://github.com/google-research/scenic.git
cd scenic
pip install .

Load the dataset:

  • Most of the datasets we used in the paper can be loaded via tensorflow_datasets with our provided function:
  • dset = load_dataset('imagenet2012')
    Note: please make sure you have a registered ImageNet account.
  • You can also download ImageNet first, process it with tensorflow_datasets, and load it with the function (see the sketch after this list):
    dset = load_dataset_from(data_dir='YOUR/LOCAL/PATH/imagenet2012', dataset='imagenet2012', split='validation')
    If you want to use other datasets (paper Table 2), e.g., Caltech-101, Food-101, Flowers-102, CIFAR-100, please use/rewrite our function load_dataset_info():
  • # caltech101
    caltech101_dset, caltech101_dset_info = load_dataset_info('caltech101', split='test')
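If you prefer to bypass the helper functions, the same data can be loaded with tensorflow_datasets directly. This is a minimal sketch (not part of the repo) that assumes ImageNet has already been downloaded and prepared under the local path shown above.

import tensorflow_datasets as tfds

# Load the ImageNet-2012 validation split from a local TFDS data directory
# (placeholder path taken from the instructions above).
dset = tfds.load('imagenet2012', split='validation',
                 data_dir='YOUR/LOCAL/PATH/imagenet2012')

for example in dset.take(1):
    image, label = example['image'], example['label']
    print(image.shape, int(label))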

Download WordNet hierarchy information to build top-down and bottom-up prompt augmentation:
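The details are in the repo's own scripts; as a rough sketch of the idea (under the assumption that NLTK's copy of WordNet is acceptable), the snippet below collects the parent (hypernym) and child (hyponym) names of an ImageNet class, which is the hierarchy information the top-down and bottom-up prompt augmentation builds on. The helper name hierarchy_labels is illustrative and not part of the repo.

import nltk
from nltk.corpus import wordnet as wn

nltk.download('wordnet', quiet=True)  # fetch WordNet data on first use

def hierarchy_labels(wnid):
    # wnid: an ImageNet WordNet id such as 'n01440764' (tench).
    synset = wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))
    parents = [name.replace('_', ' ')
               for h in synset.hypernyms() for name in h.lemma_names()]   # hypernym (parent) names
    children = [name.replace('_', ' ')
                for h in synset.hyponyms() for name in h.lemma_names()]   # hyponym (child) names
    return parents, children

parents, children = hierarchy_labels('n01440764')
print('parents:', parents[:5])
print('children:', children[:5])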

Code

We provide a Colab notebook; all details are in the following:

Hierarcy_Clip.ipynb

Contact / Cite

Got questions? We would love to answer them! Please reach out by email. You may cite us in your research as:

@inproceedings{ge2023improving,
  title={Improving Zero-shot Generalization and Robustness of Multi-modal Models},
  author={Ge, Yunhao and Ren, Jie and Gallagher, Andrew and Wang, Yuxiao and Yang, Ming-Hsuan and Adam, Hartwig and Itti, Laurent and Lakshminarayanan, Balaji and Zhao, Jiaping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11093--11101},
  year={2023}
}
