Add ICCV2023 Paper (PromptStyler) #3

Open: wants to merge 1 commit into `main`
README.md: 20 changes (13 additions & 7 deletions)
@@ -29,6 +29,7 @@ If you would like to contribute to our repository or have any questions/advice,
- [Semi/Weak/Un-Supervised Domain Generalization](#semiweakun-supervised-domain-generalization)
- [Open/Heterogeneous Domain Generalization](#openheterogeneous-domain-generalization)
- [Federated Domain Generalization](#federated-domain-generalization)
- [Source-free Domain Generalization](#source-free-domain-generalization)
- [Applications](#applications)
- [Person Re-Identification](#person-re-identification)
- [Face Recognition \& Anti-Spoofing](#face-recognition--anti-spoofing)
@@ -412,6 +413,11 @@ If you would like to contribute to our repository or have any questions/advice,
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https://github.com/liuquande/FedDG-ELCFS)] (**FedDG**) [147]
- Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Collaborative_Optimization_and_Aggregation_for_Decentralized_Domain_Generalization_and_Adaptation_ICCV_2021_paper.pdf)] (**COPDA**) [159]

## Source-free Domain Generalization
> Source-free domain generalization aims to improve a model's generalization capability to arbitrary unseen domains without exploiting any source domain data.

- PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization [[ICCV 2023](https://arxiv.org/abs/2307.15199)] [[Project Page](https://PromptStyler.github.io/)] (**PromptStyler**) [231]

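For intuition, the sketch below illustrates the prompt-driven style generation idea behind PromptStyler: learnable pseudo-style word vectors are optimized so that their style features are mutually diverse while style-content prompt features stay consistent with the corresponding class features; the synthesized text features can then supervise a classifier that is later applied to image features. This is only a minimal sketch: the tiny mean-pooling encoder stands in for CLIP's frozen text encoder, the styles are trained jointly here rather than one at a time, and all names and hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)
dim, n_styles, n_classes = 64, 8, 3

def encode(token_embs):
    """Toy frozen 'text encoder': mean-pool a token-embedding sequence, then L2-normalize.
    A stand-in for the frozen CLIP text encoder used by PromptStyler."""
    return F.normalize(token_embs.mean(dim=-2), dim=-1)

# Fixed class-name embeddings (in the real method these come from tokenized class names).
class_embs = F.normalize(torch.randn(n_classes, dim), dim=-1)
class_feat = encode(class_embs.unsqueeze(-2))                      # (C, dim) content features

style_words = nn.Parameter(0.02 * torch.randn(n_styles, dim))      # learnable pseudo-style words
opt = torch.optim.SGD([style_words], lr=0.1)

for step in range(200):
    style_feat = encode(style_words.unsqueeze(-2))                 # (K, dim), like "a S_k style of a"
    # Style diversity: push different style features toward orthogonality.
    sim = style_feat @ style_feat.t()
    loss_style = (sim - torch.diag(torch.diagonal(sim))).abs().mean()
    # Content consistency: "a S_k style of a [class c]" must stay closest to class c.
    pair = torch.stack([style_words.unsqueeze(1).expand(-1, n_classes, -1),
                        class_embs.unsqueeze(0).expand(n_styles, -1, -1)], dim=-2)  # (K, C, 2, dim)
    sc_feat = encode(pair)                                          # (K, C, dim)
    logits = sc_feat @ class_feat.t()                               # (K, C, C)
    target = torch.arange(n_classes).expand(n_styles, -1)
    loss_content = F.cross_entropy(logits.reshape(-1, n_classes), target.reshape(-1))
    loss = loss_style + loss_content
    opt.zero_grad()
    loss.backward()
    opt.step()

# The synthesized (style, class) text features sc_feat would then supervise a linear
# classifier that is applied at test time to image features from the paired image encoder.
```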
## Applications
### Person Re-Identification
- Deep Domain-Adversarial Image Generation for Domain Generalisation [[AAAI 2020](https://ojs.aaai.org/index.php/AAAI/article/download/7003/6857)] [[Code](https://github.com/KaiyangZhou/Dassl.pytorch)]
@@ -448,7 +454,7 @@ If you would like to contribute to our repository or have any questions/advice,
| 2020 | **ICLR:** [55], [83], [218]; **ICLR:** [126]; **CVPR:** [22], [27], [79], [106]; **ICML:** [105]; **ECCV:** [14], [15], [28], [57], [64], [94], [99], [104]; **NeurIPS:** [75], [86], [112], [181] |
| 2021 | **ICLR:** [19], [56], [59], [134], [175], [196]; **ICLR:** [139], [171], [221]; **CVPR:** [12], [13], [115], [116], [117], [118], [119], [132], [141], [147], [153], [160], [168], [187], [193]; IJCAI: [155], [195], [230]; **ICML:** [73], [190], [217]; **ICCV:** [129], [130], [133], [135], [138], [142], [143], [148], [149], [150], [158], [159], [194]; **MM:** [131], [137], [146], [157]; **NeurIPS:** [136], [145], [152], [154], [198], [199], [200], [201], [202], [203], [204], [205], [206], [207], [208], [228], [225] |
| 2022 | **AAAI:** [140]; **ICLR:** [213], [224]; **CVPR:** [69], [182], [214]; **ICML:** [173]; **MM:** [211] |
| 2023 | **WACV:** [215]; **ICLR:** [223] |
| 2023 | **WACV:** [215]; **ICLR:** [223]; **ICCV:** [231] |

| Top Journal | Papers |
| ---- | ---- |
@@ -483,11 +489,11 @@ If you would like to contribute to our repository or have any questions/advice,
| **Colored MNIST** [[165]](https://arxiv.53yu.com/pdf/1907.02893.pdf) | Handwritten digit recognition; 3 domains: {0.1, 0.3, 0.9}; 70,000 samples of dimension (2, 28, 28); 2 classes | [82], [138], [140], [149], [152], [154], [165], [171], [173], [190], [200], [202], [214], [216], [217], [219], [220], [222], [224] |
| **Rotated MNIST** [[6]](http://openaccess.thecvf.com/content_iccv_2015/papers/Ghifary_Domain_Generalization_for_ICCV_2015_paper.pdf) ([original](https://github.com/Emma0118/mate)) | Handwritten digit recognition; 6 domains with rotated degree: {0, 15, 30, 45, 60, 75}; 7,000 samples of dimension (1, 28, 28); 10 classes | [5], [6], [15], [35], [53], [55], [63], [71], [73], [74], [76], [77], [86], [90], [105], [107], [138], [140], [170], [173], [202], [204], [206], [216], [222], [224] |
| **Digits-DG** [[28]](https://arxiv.org/pdf/2007.03304) | Handwritten digit recognition; 4 domains: {MNIST [[29]](http://lushuangning.oss-cn-beijing.aliyuncs.com/CNN%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97/Gradient-Based_Learning_Applied_to_Document_Recognition.pdf), MNIST-M [[30](http://proceedings.mlr.press/v37/ganin15.pdf)], SVHN [[31](https://research.google/pubs/pub37648.pdf)], SYN [[30](http://proceedings.mlr.press/v37/ganin15.pdf)]}; 24,000 samples; 10 classes | [21], [25], [27], [28], [34], [35], [55], [59], [63], [94], [98], [116], [118], [130], [141], [142], [146], [151], [153], [157], [158], [159], [160], [166], [168], [179], [189], [203], [209], [210] |
| **VLCS** [[16]](http://openaccess.thecvf.com/content_iccv_2013/papers/Fang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf) ([1](https://drive.google.com/uc?id=1skwblH1_okBwxWxmRsp9_qi15hyPpxg8); or [original](https://www.mediafire.com/file/7yv132lgn1v267r/vlcs.tar.gz/file)) | Object recognition; 4 domains: {Caltech [[8]](http://www.vision.caltech.edu/publications/Fei-FeiCompVIsImageU2007.pdf), LabelMe [[9]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-007-0090-8.pdf&casa_token=n3w4Sen-huAAAAAA:sJY2dHreDGe2V4KE9jDehftM1W-Sn1z8bqeF_WK8Q9t4B0dFk5OXEAlIP7VYnr8UfiWLAOPG7dK0ZveYWs8), PASCAL [[10]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-009-0275-4.pdf&casa_token=Zb6LfMuhy_sAAAAA:Sqk_aoTWdXx37FQjUFaZN9ZMQxrUhqO2S_HbOO2a9BKtejW7CMekg-3PDVw6Yjw7BZqihyjP0D_Y6H2msBo), SUN [[11]](https://dspace.mit.edu/bitstream/handle/1721.1/60690/Oliva_SUN%20database.pdf?sequence=1&isAllowed=y)}; 10,729 samples of dimension (3, 224, 224); 5 classes; about 3.6 GB | [2], [6], [7], [14], [15], [18], [60], [61], [64], [67], [68], [70], [71], [74], [76], [77], [81], [83], [86], [91], [98], [99], [101], [102], [103], [117], [118], [126], [127], [131], [132], [136], [138], [140], [142], [145], [146], [148], [149], [161], [170], [173], [174], [184], [190], [195], [199], [201], [202], [203], [209], [216], [217], [222], [223], [224] |
| **VLCS** [[16]](http://openaccess.thecvf.com/content_iccv_2013/papers/Fang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf) ([1](https://drive.google.com/uc?id=1skwblH1_okBwxWxmRsp9_qi15hyPpxg8); or [original](https://www.mediafire.com/file/7yv132lgn1v267r/vlcs.tar.gz/file)) | Object recognition; 4 domains: {Caltech [[8]](http://www.vision.caltech.edu/publications/Fei-FeiCompVIsImageU2007.pdf), LabelMe [[9]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-007-0090-8.pdf&casa_token=n3w4Sen-huAAAAAA:sJY2dHreDGe2V4KE9jDehftM1W-Sn1z8bqeF_WK8Q9t4B0dFk5OXEAlIP7VYnr8UfiWLAOPG7dK0ZveYWs8), PASCAL [[10]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-009-0275-4.pdf&casa_token=Zb6LfMuhy_sAAAAA:Sqk_aoTWdXx37FQjUFaZN9ZMQxrUhqO2S_HbOO2a9BKtejW7CMekg-3PDVw6Yjw7BZqihyjP0D_Y6H2msBo), SUN [[11]](https://dspace.mit.edu/bitstream/handle/1721.1/60690/Oliva_SUN%20database.pdf?sequence=1&isAllowed=y)}; 10,729 samples of dimension (3, 224, 224); 5 classes; about 3.6 GB | [2], [6], [7], [14], [15], [18], [60], [61], [64], [67], [68], [70], [71], [74], [76], [77], [81], [83], [86], [91], [98], [99], [101], [102], [103], [117], [118], [126], [127], [131], [132], [136], [138], [140], [142], [145], [146], [148], [149], [161], [170], [173], [174], [184], [190], [195], [199], [201], [202], [203], [209], [216], [217], [222], [223], [224], [231] |
| **Office31+Caltech** [[32]](https://linkspringer.53yu.com/content/pdf/10.1007/978-3-642-15561-1_16.pdf) ([1](https://drive.google.com/file/d/14OIlzWFmi5455AjeBZLak2Ku-cFUrfEo/view)) | Object recognition; 4 domains: {Amazon, Webcam, DSLR, Caltech}; 4,652 samples in 31 classes (office31) or 2,533 samples in 10 classes (office31+caltech); 51 MB | [6], [35], [67], [68], [70], [71], [80], [91], [96], [119], [131], [167] |
| **OfficeHome** [[20]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Venkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1uY0pj7oFsjMxRwaD3Sxy0jgel0fsYXLC); or [original](https://drive.google.com/file/d/0B81rNlvomiwed0V1YUxQdC1uOTg/view?resourcekey=0-2SNWq0CDAuWOBRRBL7ZZsw)) | Object recognition; 4 domains: {Art, Clipart, Product, Real World}; 15,588 samples of dimension (3, 224, 224); 65 classes; 1.1 GB | [19], [54], [28], [34], [55], [58], [60], [61], [64], [80], [92], [94], [98], [101], [118], [126], [130], [131], [132], [133], [137], [138], [140], [146], [148], [156], [159], [160], [162], [163], [167], [173], [174], [178], [179], [182], [184], [189], [190], [199], [201], [202], [203], [206], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230] |
| **PACS** [[2]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Deeper_Broader_and_ICCV_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1JFr8f805nMUelQWWmfnJR3y4_SYoN5Pd); or [original](https://drive.google.com/drive/folders/0B6x7gtvErXgfUU1WcGY5SzdwZVk?resourcekey=0-2fvpQY_QSyJf2uIECzqPuQ)) | Object recognition; 4 domains: {photo, art_painting, cartoon, sketch}; 9,991 samples of dimension (3, 224, 224); 7 classes; 174 MB | [1], [2], [4], [5], [14], [15], [18], [19], [34], [54], [28], [35], [55], [56], [57], [58], [59], [60], [61], [64], [69], [73], [77], [80], [81], [82], [83], [84], [86], [90], [92], [94], [96], [98], [99], [101], [102], [104], [105], [116], [117], [118], [127], [129], [130], [131], [132], [136], [137], [138], [139], [140], [142], [145], [146], [148], [149], [153], [156], [157], [158], [159], [160], [161], [162], [163], [167], [170], [171], [173], [174], [178], [179], [180], [182], [184], [189], [190], [195], [199], [200], [201], [202], [203], [206], [209], [210], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230] |
| **DomainNet** [[33](https://openaccess.thecvf.com/content_ICCV_2019/papers/Peng_Moment_Matching_for_Multi-Source_Domain_Adaptation_ICCV_2019_paper.pdf)] ([clipart](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/clipart.zip), [infograph](http://csr.bu.edu/ftp/visda/2019/multi-source/infograph.zip), [painting](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/painting.zip), [quick-draw](http://csr.bu.edu/ftp/visda/2019/multi-source/quickdraw.zip), [real](http://csr.bu.edu/ftp/visda/2019/multi-source/real.zip), and [sketch](http://csr.bu.edu/ftp/visda/2019/multi-source/sketch.zip); or [original](http://ai.bu.edu/M3SDA/)) | Object recognition; 6 domains: {clipart, infograph, painting, quick-draw, real, sketch}; 586,575 samples of dimension (3, 224, 224); 345 classes; 1.2 GB + 4.0 GB + 3.4 GB + 439 MB + 5.6 GB + 2.5 GB | [34], [57], [69], [104], [119], [130], [131], [132], [133], [138], [140], [150], [173], [182], [189], [201], [202], [203], [216], [222], [223], [224], [230] |
| **OfficeHome** [[20]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Venkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1uY0pj7oFsjMxRwaD3Sxy0jgel0fsYXLC); or [original](https://drive.google.com/file/d/0B81rNlvomiwed0V1YUxQdC1uOTg/view?resourcekey=0-2SNWq0CDAuWOBRRBL7ZZsw)) | Object recognition; 4 domains: {Art, Clipart, Product, Real World}; 15,588 samples of dimension (3, 224, 224); 65 classes; 1.1 GB | [19], [54], [28], [34], [55], [58], [60], [61], [64], [80], [92], [94], [98], [101], [118], [126], [130], [131], [132], [133], [137], [138], [140], [146], [148], [156], [159], [160], [162], [163], [167], [173], [174], [178], [179], [182], [184], [189], [190], [199], [201], [202], [203], [206], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230], [231] |
| **PACS** [[2]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Deeper_Broader_and_ICCV_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1JFr8f805nMUelQWWmfnJR3y4_SYoN5Pd); or [original](https://drive.google.com/drive/folders/0B6x7gtvErXgfUU1WcGY5SzdwZVk?resourcekey=0-2fvpQY_QSyJf2uIECzqPuQ)) | Object recognition; 4 domains: {photo, art_painting, cartoon, sketch}; 9,991 samples of dimension (3, 224, 224); 7 classes; 174 MB | [1], [2], [4], [5], [14], [15], [18], [19], [34], [54], [28], [35], [55], [56], [57], [58], [59], [60], [61], [64], [69], [73], [77], [80], [81], [82], [83], [84], [86], [90], [92], [94], [96], [98], [99], [101], [102], [104], [105], [116], [117], [118], [127], [129], [130], [131], [132], [136], [137], [138], [139], [140], [142], [145], [146], [148], [149], [153], [156], [157], [158], [159], [160], [161], [162], [163], [167], [170], [171], [173], [174], [178], [179], [180], [182], [184], [189], [190], [195], [199], [200], [201], [202], [203], [206], [209], [210], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230], [231] |
| **DomainNet** [[33](https://openaccess.thecvf.com/content_ICCV_2019/papers/Peng_Moment_Matching_for_Multi-Source_Domain_Adaptation_ICCV_2019_paper.pdf)] ([clipart](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/clipart.zip), [infograph](http://csr.bu.edu/ftp/visda/2019/multi-source/infograph.zip), [painting](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/painting.zip), [quick-draw](http://csr.bu.edu/ftp/visda/2019/multi-source/quickdraw.zip), [real](http://csr.bu.edu/ftp/visda/2019/multi-source/real.zip), and [sketch](http://csr.bu.edu/ftp/visda/2019/multi-source/sketch.zip); or [original](http://ai.bu.edu/M3SDA/)) | Object recognition; 6 domains: {clipart, infograph, painting, quick-draw, real, sketch}; 586,575 samples of dimension (3, 224, 224); 345 classes; 1.2 GB + 4.0 GB + 3.4 GB + 439 MB + 5.6 GB + 2.5 GB | [34], [57], [69], [104], [119], [130], [131], [132], [133], [138], [140], [150], [173], [182], [189], [201], [202], [203], [216], [222], [223], [224], [230], [231] |
| **mini-DomainNet** [[34]](https://arxiv.53yu.com/pdf/2003.07325) | Object recognition; a smaller and less noisy version of DomainNet; 4 domains: {clipart, painting, real, sketch}; 140,006 samples | [34], [130], [156], [157], [210] |
**ImageNet-Sketch** [[35]](https://arxiv.53yu.com/pdf/1903.06256) | Object recognition; 2 domains: {real, sketch}; 50,000 samples | [64] |
**VisDA-17** [[36](https://arxiv.53yu.com/pdf/1710.06924)] | Object recognition; 3 domains of synthetic-to-real generalization; 280,157 samples | [119], [182] |
@@ -497,7 +503,7 @@ If you would like to contribute to our repository or have any questions/advice,
**SYNTHIA** [[42]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Ros_The_SYNTHIA_Dataset_CVPR_2016_paper.pdf) | Semantic segmentation; 15 domains with 4 locations and 5 weather conditions; 2,700 samples | [27], [62], [115], [141], [151], [185], [193] |
**GTA5-Cityscapes** [[43]](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-46475-6_7), [[44]](http://openaccess.thecvf.com/content_cvpr_2016/papers/Cordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf) | Semantic segmentation; 2 domains of synthetic-to-real generalization; 29,966 samples | [62], [115], [185], [193] |
**Cityscapes-ACDC** [[44]](http://openaccess.thecvf.com/content_cvpr_2016/papers/Cordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf) ([original](https://acdc.vision.ee.ethz.ch/overview)) | Semantic segmentation; real life domain shifts, ACDC contains four different weather conditions: rain, fog, snow, night | [215] |
**Terra Incognita (TerraInc)** [[45]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Beery_Recognition_in_Terra_ECCV_2018_paper.pdf) ([1](https://lilablobssc.blob.core.windows.net/caltechcameratraps/eccv_18_all_images_sm.tar.gz) and [2](https://lilablobssc.blob.core.windows.net/caltechcameratraps/labels/caltech_camera_traps.json.zip); or [original](https://lila.science/datasets/caltech-camera-traps)) | Animal classification; 4 domains captured at different geographical locations: {L100, L38, L43, L46}; 24,788 samples of dimension (3, 224, 224); 10 classes; 6.0 GB + 8.6 MB | [132], [136], [138], [140], [173], [201], [202], [207], [212], [214], [216], [222], [223], [224] |
**Terra Incognita (TerraInc)** [[45]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Beery_Recognition_in_Terra_ECCV_2018_paper.pdf) ([1](https://lilablobssc.blob.core.windows.net/caltechcameratraps/eccv_18_all_images_sm.tar.gz) and [2](https://lilablobssc.blob.core.windows.net/caltechcameratraps/labels/caltech_camera_traps.json.zip); or [original](https://lila.science/datasets/caltech-camera-traps)) | Animal classification; 4 domains captured at different geographical locations: {L100, L38, L43, L46}; 24,788 samples of dimension (3, 224, 224); 10 classes; 6.0 GB + 8.6 MB | [132], [136], [138], [140], [173], [201], [202], [207], [212], [214], [216], [222], [223], [224], [231] |
**Market-Duke** [[46]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf), [[47]](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-48881-3_2) | Person re-identification; cross-dataset re-ID; heterogeneous DG with 2 domains; 69,079 samples | [12], [13], [28], [55], [56], [58], [114], [144], [187], [208] |
<!-- **UCF-HMDB** [[40](https://arxiv.53yu.com/pdf/1212.0402.pdf?ref=https://githubhelp.com)], [[41](https://dspace.mit.edu/bitstream/handle/1721.1/69981/Poggio-HMDB.pdf?sequence=1&isAllowed=y)] | Action recognition | 2 domains with 12 overlapping actions; 3809 samples | | -->
<!-- **Face** [22] | >5M | 9 | Face recognition | Combination of 9 face datasets | |
@@ -525,7 +531,7 @@ Feel free to contribute to our repository.

- If you would like to *correct mistakes*, please do it directly;
- If you would like to *add/update papers*, please finish the following tasks (if necessary):
1. Find the max index (current max: **[230]**, not used: none), and create a new one.
1. Find the max index (current max: **[231]**, not used: none), and create a new one.
2. Update [Publications](#publications).
3. Update [Papers](#papers).
4. Update [Datasets](#datasets).