<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Haoqing Wang</title>
<meta name="author" content="Haoqing Wang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2290%22>🌐</text></svg>">
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Haoqing Wang</name>
</p>
<p> I am a PhD candidate (expected to graduate in June 2024) at the <a href="https://www.cis.pku.edu.cn/">School of Intelligence Science and Technology</a>, <a href="https://www.pku.edu.cn/">Peking University</a>, where I am a member of the National Key Lab of General AI, advised by Professor <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>. Previously, I received my Bachelor of Science in mathematics from <a href="https://www.buaa.edu.cn/">Beihang University</a>.
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a>  / 
<a href="https://scholar.google.com.hk/citations?user=A2kCYnUAAAAJ&hl=zh-CN">Google Scholar</a>  / 
<a href="https://github.com/Haoqing-Wang">Github</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<img style="width:40%;max-width:40%" alt="profile photo" src="images/HaoqingW_2.jpg" class="hoverZoomLink">
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:100%;vertical-align:middle">
<heading>Research</heading>
<p>
I am mainly interested in multi-modal and single-modal self-supervised representation learning (i.e., large model pre-training) and its downstream tasks in computer vision, such as few-shot learning, detection, and segmentation.
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/FORT.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Focus Your Attention when Few-Shot Classification</papertitle>
<br>
<strong>Haoqing Wang</strong>, Shibo Jie, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>NeurIPS</em>, 2023. CCF-A.
<p> <a href="https://openreview.net/forum?id=uFlE0qgtRO">paper</a> / <a href="">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/Low_Prec.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy</papertitle>
<br>
Shibo Jie, <strong>Haoqing Wang</strong>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>ICCV</em>, 2023. CCF-A.
<p> <a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Jie_Revisiting_the_Parameter_Efficiency_of_Adapters_from_the_Perspective_of_ICCV_2023_paper.pdf">paper</a> / <a href="https://github.com/JieShibo/PETL-ViT">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/LocalMIM.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Masked Image Modeling with Local Multi-Scale Reconstruction</papertitle>
<br>
<strong>Haoqing Wang</strong>, <a href="https://scholar.google.com.hk/citations?user=TkSZQ6gAAAAJ&hl=zh-CN&oi=ao">Yehui Tang</a>, <a href="https://www.wangyunhe.site/">Yunhe Wang</a>, <a href="https://scholar.google.com.hk/citations?user=UnAbd4gAAAAJ&hl=zh-CN&oi=ao">Jianyuan Guo</a>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>, <a href="https://scholar.google.com.hk/citations?user=vThoBVcAAAAJ&hl=zh-CN&oi=ao">Kai Han</a>
<br>
<em>CVPR</em>, 2023 <font color="red"><strong>(<a href="works/email.pdf">Highlight</a> Presentation, Top 2.6%)</strong></font>. CCF-A.
<p> <a href="https://arxiv.org/pdf/2303.05251v1.pdf">paper</a> / <a href="https://github.com/Haoqing-Wang/LocalMIM">code</a> / <a href="https://zhuanlan.zhihu.com/p/613629304">blog</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/ATA.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Towards well-generalizing meta-learning via adversarial task augmentation</papertitle>
<br>
<strong>Haoqing Wang</strong>, Huiyu Mai, Yuhang Gong, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>Artificial Intelligence</em>, 103875, 2023. CCF-A, IF=14.05.
<p> <a href="https://www.sciencedirect.com/science/article/pii/S0004370223000218">paper</a> / <a href="https://github.com/Haoqing-Wang/CDFSL-ATA">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/CPNWCP.png" alt="blind-date" width="120" height="130"></td>
<td width="75%" valign="middle">
<papertitle>Contrastive Prototypical Network with Wasserstein Confidence Penalty</papertitle>
<br>
<strong>Haoqing Wang</strong>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>ECCV</em>, 2022. CCF-B.
<p> <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136790654.pdf">paper</a> / <a href="https://github.com/Haoqing-Wang/CPNWCP">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/InfoCL.png" alt="blind-date" width="120" height="100"></td>
<td width="75%" valign="middle">
<papertitle>Rethinking minimal sufficient representation in contrastive learning</papertitle>
<br>
<strong>Haoqing Wang</strong>, <a href="https://www.microsoft.com/en-us/research/people/xunguo/">Xun Guo</a>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>, <a href="https://www.microsoft.com/en-us/research/people/yanlu/" >Yan Lu</a>
<br>
<em>CVPR</em>, 2022 <font color="red"><strong>(<a href="https://cvpr2022.thecvf.com/orals-624-am">Oral</a> Presentation, Top 4.2%)</strong></font>. CCF-A.
<br>
<p> <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Rethinking_Minimal_Sufficient_Representation_in_Contrastive_Learning_CVPR_2022_paper.pdf">paper</a> / <a href="https://github.com/Haoqing-Wang/InfoCL">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/ATA_IJCAI.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Cross-domain few-shot classification via adversarial task augmentation</papertitle>
<br>
<strong>Haoqing Wang</strong>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>IJCAI</em>, 2021. CCF-A.
<p> <a href="https://www.ijcai.org/proceedings/2021/0149.pdf">paper</a> / <a href="https://github.com/Haoqing-Wang/CDFSL-ATA">code</a> / <a href="https://zhuanlan.zhihu.com/p/370583079">blog</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/Di2Vec.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Distributed representations of diseases based on co-occurrence relationship</papertitle>
<br>
<strong>Haoqing Wang</strong>, Huiyu Mai, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>, Chao Yang, <a href="https://www.researchgate.net/profile/Luxia-Zhang-3">Luxia Zhang</a>, Huai-yu Wang
<br>
<em>Expert Systems with Applications</em> 183, 115418, 2021. CCF-C, IF=8.665.
<p> <a href="https://www.sciencedirect.com/science/article/pii/S095741742100837X">paper</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/FSLSTM.png" alt="blind-date" width="120" height="100"></td>
<td width="75%" valign="middle">
<papertitle>Few-shot learning with LSSVM base learner and transductive modules</papertitle>
<br>
<strong>Haoqing Wang</strong>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
arXiv preprint arXiv:2009.05786
<p> <a href="https://arxiv.org/pdf/2009.05786.pdf">paper</a> / <a href="https://github.com/Haoqing-Wang/FSLSTM">code</a> </p>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="works/NART.png" alt="blind-date" width="120" height="120"></td>
<td width="75%" valign="middle">
<papertitle>Fast structured decoding for sequence models</papertitle>
<br>
<a href="http://www.cs.cmu.edu/~zhiqings/">Zhiqing Sun</a>, <a href="https://people.eecs.berkeley.edu/~zhuohan/">Zhuohan Li</a>, <strong>Haoqing Wang</strong>, <a href="https://zi-lin.com/">Zi Lin</a>, <a href="https://dihe-pku.github.io/">Di He</a>, <a href="https://scholar.google.com.hk/citations?user=tRoAxlsAAAAJ&hl=zh-CN">Zhi-Hong Deng</a>
<br>
<em>NeurIPS</em>, 2019. CCF-A.
<p> <a href="https://proceedings.neurips.cc/paper/2019/file/74563ba21a90da13dacf2a73e3ddefa7-Paper.pdf">paper</a> / <a href="https://github.com/Edward-Sun/structured-nart">code</a> </p>
</td>
</tr>
</tbody></table>
<table width="100%" align="center" border="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Internships</heading>
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="images/tongyi.jpg" alt="clean-usnob" width="200" height="100"></td>
<td width="75%" valign="middle">
Vision Intelligence Lab
<br>
2023/06/01-2024/02/19, advised by <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=02H8RBIAAAAJ">Kang Zhao</a>
<br>
During the internship, I conducted research on video generation with diffusion models, with a focus on talking head generation.
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="images/ark.png" alt="clean-usnob" width="200" height="100"></td>
<td width="75%" valign="middle">
Algorithm Application Department
<br>
2022/06/15-2023/03/16, advised by <a href="https://scholar.google.com.hk/citations?user=vThoBVcAAAAJ&hl=zh-CN&oi=ao">Kai Han</a>
<br>
During the internship, I conducted research on masked image modeling and proposed a new pretext task, local multi-scale reconstruction, to accelerate representation learning. This work was accepted to CVPR 2023 as a Highlight presentation (Top 2.6%).
</td>
</tr>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle"><img src="images/msra.png" alt="clean-usnob" width="200" height="100"></td>
<td width="75%" valign="middle">
<a href="https://www.microsoft.com/en-us/research/group/multimedia-search-and-mining/">Multimedia Search and Mining Group</a>
<br>
2021/07/20-2022/04/11, advised by <a href="https://www.microsoft.com/en-us/research/people/xunguo/">Xun Guo</a>
<br>
During the internship, I conducted research on contrastive learning, revealing its shortcomings from a theoretical perspective and proposing solutions. This work was accepted to CVPR 2022 as an Oral presentation (Top 4.2%).
</td>
</tr>
</tbody></table>
<table width="100%" align="center" border="0" cellpadding="20"><tbody>
<tr>
<td>
<heading>Professional Services</heading>
<p>
1. Reviewer for CVPR, ICCV, NeurIPS, ICLR, ICML, ECCV, IJCV, TMM, ...
</p>
<p>
2. Editor of <a href="http://www.nihds.pku.edu.cn/info/1039/1853.htm">"Introduction to Health Data Science"</a>, responsible for "Chapter 9. Machine Learning".
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>