<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Pan Zhang</title>
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/tailwind.min.css" rel="stylesheet">
<!-- Font Awesome is required for the fa-twitter icons used in the News list below -->
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" rel="stylesheet">
</head>
<body class="bg-white font-sans leading-normal tracking-normal">
<header class="bg-white border-b border-gray-200">
<nav class="container mx-auto px-4 sm:px-6 lg:px-8">
<div class="flex justify-between h-16">
<div class="flex">
<div class="flex-shrink-0 flex items-center">
<a href="#" class="text-gray-800 hover:text-gray-900">Homepage</a>
</div>
<div class="hidden sm:ml-6 sm:flex sm:space-x-8">
<a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">About Me</a>
<a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">News</a>
<a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Publications</a>
<a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Honors and Awards</a>
<a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Education</a>
</div>
</div>
</div>
</nav>
</header>
<main class="container mx-auto px-4 sm:px-6 lg:px-8 py-8">
<div class="flex flex-col md:flex-row">
<div class="md:w-1/4 mb-8 md:mb-0">
<img src="https://picsum.photos/300/300" alt="Profile Picture" class="rounded-full mx-auto mb-4">
<h2 class="text-xl font-bold mb-2">Pan Zhang</h2>
<p class="text-gray-600 mb-4">Shanghai AI Laboratory</p>
<p class="text-gray-600 mb-4">Researcher at Shanghai AI Laboratory.</p>
<ul class="list-disc pl-4 mb-4">
<li>Shanghai, China</li>
<li>Email</li>
<li>Google Scholar</li>
</ul>
</div>
<div class="md:w-3/4">
<h2 class="text-xl font-bold mb-2">Short Bio</h2>
<p class="text-gray-600 mb-4">I am currently a researcher at Shanghai AI Laboratory (shlab). I received my Ph.D. in 2022 through the Joint Ph.D. Program between Microsoft Research Asia (MSRA) and the University of Science and Technology of China (USTC), and my Bachelor of Engineering degree from USTC in 2017. I joined Shanghai AI Laboratory in July 2022.</p>
<p class="text-gray-600 mb-4">My research interests include multimodal large language models and image/video generation and editing.</p>
<p class="text-gray-600 mb-4">We are seeking long-term internship candidates and research collaborations. Please email me if you would like to join us.</p>
</div>
</div>
<div class="mt-8">
<h2 class="text-xl font-bold mb-2">News</h2>
<ul class="list-disc pl-4 mb-4">
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.03: The InternLM-XComposer series has received 1,300+ GitHub stars. <span class="text-red-500"><i class="fab fa-twitter"></i></span> InternLM-XComposer2 has been commercially adopted by ByteDance.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.02: The ShareGPT4V model and dataset have been downloaded 100,000+ times within one month.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.02: Three papers accepted by CVPR 2024. Alpha-CLIP was strongly accepted by all reviewers.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.01: We release InternLM-XComposer2, the first 7B model that matches or even surpasses GPT-4V and Gemini Pro in certain assessments.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.09: We release InternLM-XComposer, a vision-language large model for advanced text-image comprehension and composition.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.07: V3Det, the first ten-thousand-class object detection dataset, is accepted by ICCV 2023 as an Oral paper.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.03: Two papers accepted by CVPR 2023.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.02: One paper accepted by SIGGRAPH Asia 2023.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2022.07: One paper accepted by ECCV 2022.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2022.06: One paper accepted by TPAMI.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2021.06: CoCosNet v2 is selected as a CVPR 2021 Best Paper Candidate.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2021.02: CoCosNet v2 and ProDA are accepted by CVPR 2021; CoCosNet v2 is an Oral paper.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2020.10: Bring-Old-Photos-Back-to-Life has received 14,000+ GitHub stars.</li>
<li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2020.03: CoCosNet and Bring-Old-Photos-Back-to-Life are accepted by CVPR 2020 as Oral papers.</li>
</ul>
</div>
</main>
</body>
</html>