new
AtlasWang committed Jun 4, 2024
1 parent 3f882ea commit bfbd0b9
Showing 3 changed files with 15 additions and 9 deletions.
6 changes: 3 additions & 3 deletions group.html
@@ -432,7 +432,6 @@ <h2>Visiting Scholars</h2>
</div>
<!-- <div class="row justify-content-md-left" style="margin-top: 0px"> -->
<ul>
<li> <a href="https://simoneangarano.github.io/">Simone Angarano</a>, Ph.D. student in ML at Politecnico di Torino, Italy, Aug 2023 - present </li>
<li> <a href="https://scholar.google.com/citations?user=Ihn-OugAAAAJ&hl=en">Gabriel Jacob Perin</a>, undergraduate student in CS at University of São Paulo, Brazil, Apr 2024 - present </li>
</ul>
<!-- </div> -->
@@ -529,7 +528,7 @@ <h2>Alumni</h2>
<span class="d-block" style="padding-top:10px"><a
href="https://sandbox3aster.github.io/">Junru Wu (Ph.D) </a> </span>
<span class="d-block"> Spring 2018 - Summer 2023 </span>
<span class="date-read">Current: Research Engineer, Google Research NYC</span>
<span class="date-read">Current: Research Engineer, Google DeepMind</span>
</div>
</div>
</div>
@@ -695,9 +694,10 @@ <h2>Alumni</h2>
<ul>
Visitors/Interns
<ul>
<li> <a href="https://simoneangarano.github.io/">Simone Angarano</a>, Ph.D. student in ML at Politecnico di Torino, Italy, Aug 2023 - May 2024</li>
<li> Yee Yang Tee, Ph.D. student, EEE@Nanyang Technological University, Singapore, visiting VITA during Aug 2022 - Jan 2023 </li>
<li> Artur André Oliveira, Ph.D. student, CS@University of São Paulo, Brazil, visiting VITA during Dec 2021 - May 2022 </li>
<li> Shuai Yang, Ph.D. student, CS@Peking University, visiting VITA during Sep 2018 - Sep 2019</li>
<li> <a href="https://williamyang1991.github.io/">Shuai Yang</a>, Ph.D. student, CS@Peking University, visiting VITA during Sep 2018 - Sep 2019</li>

<br>
<li>Saebyeol Shin, undergraduate, Sungkyunkwan University, South Korea, Fall 2023 [remote] (Next Move: Ph.D. student, CS@Cornell)</li>
6 changes: 6 additions & 0 deletions index.html
@@ -179,6 +179,12 @@ <h2>News</h2>
<p>
</ul>

<b style="color:rgb(68, 68, 68)">[Jun. 2024]</b>
<ul style="margin-bottom:5px">
<li> 1 Nature Communications Medicine paper (clinical SSL for echocardiography) accepted</li>
<li> The Ph.D. dissertation of VITA alumnus Dr. <a href="https://chenwydj.github.io/">Wuyang Chen</a> has been selected to receive the INNS Doctoral Dissertation Award</li>
</ul>
<b style="color:rgb(68, 68, 68)">[May. 2024]</b>
<ul style="margin-bottom:5px">
12 changes: 6 additions & 6 deletions publication.html
@@ -202,16 +202,16 @@ <h2>Conference Paper</h2>
<div class="trend-entry d-flex">
<div class="trend-contents">
<ul>
<li>R. Cai*, S. Muralidharan, G. Heinrich, H. Yin, Z. Wang, J. Kautz, and P. Molchanov<br> <b style="color:rgb(71, 71, 71)">“Flextron: Many-in-One Flexible Large Language Model”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>R. Cai*, Y. Tian, Z. Wang, and B. Chen<br> <b style="color:rgb(71, 71, 71)">Learning to Compress Long Contexts by Dropping-In Convolutions</b><br>International Conference on Machine Learning (ICML), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>R. Cai*, S. Muralidharan, G. Heinrich, H. Yin, Z. Wang, J. Kautz, and P. Molchanov<br> <b style="color:rgb(71, 71, 71)">“Flextron: Many-in-One Flexible Large Language Model”</b><br>International Conference on Machine Learning (ICML), 2024. (Oral) <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>R. Cai*, Y. Tian, Z. Wang, and B. Chen<br> <b style="color:rgb(71, 71, 71)">LoCoCo: Dropping In Convolutions for Long Context Compression</b><br>International Conference on Machine Learning (ICML), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>L. Yin*, A. Jaiswal*, S. Liu*, S. Kundu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs Difficult Downstream Tasks in LLMs”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2310.02277">[Paper]</a> <a href="https://github.com/VITA-Group/Junk_DNA_Hypothesis">[Code] </a> </li>
<li>L. Yin*, Y. Wu, Z. Zhang*, C. Hsieh, Y. Wang, Y. Jia, G. Li, A. Jaiswal*, M. Pechenizkiy, Y. Liang, M. Bendersky, Z. Wang, and S. Liu*<br> <b style="color:rgb(71, 71, 71)">“Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2310.05175">[Paper]</a> <a href="https://github.com/luuyin/OWL">[Code] </a> </li>
<li>R. Chen*, T. Zhao, A. Jaiswal*, N. Shah, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“LLaGA: Large Language and Graph Assistant”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.08170">[Paper]</a> <a href="https://github.com/VITA-Group/LLaGA">[Code] </a> </li>
<li>J. Hong*, J. Duan, C. Zhang, Z. Li*, C. Xie, K. Lieberman, J. Diffenderfer, B. Bartoldson, A. Jaiswal*, K. Xu, B. Kailkhura, D. Hendrycks, D. Song, Z. Wang, and B. Li<br> <b style="color:rgb(71, 71, 71)">“Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2403.15447">[Paper]</a> <a href="https://decoding-comp-trust.github.io/">[Code] </a> </li>
<li>Z. Li*, S. Liu*, T. Chen*, A. Jaiswal*, Z. Zhang*, D. Wang, R. Krishnamoorthi, S. Chang, Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>J. Zhao, Z. Zhang*, B. Chen, Z. Wang, A. Anandkumar, and Y. Tian<br> <b style="color:rgb(71, 71, 71)">“GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2403.03507">[Paper]</a> <a href="https://github.com/jiaweizzhao/GaLore">[Code] </a> </li>
<li>J. Zhao, Z. Zhang*, B. Chen, Z. Wang, A. Anandkumar, and Y. Tian<br> <b style="color:rgb(71, 71, 71)">“GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection”</b><br>International Conference on Machine Learning (ICML), 2024. (Oral) <a href="https://arxiv.org/abs/2403.03507">[Paper]</a> <a href="https://github.com/jiaweizzhao/GaLore">[Code] </a> </li>
<li>H. Dong, X. Yang, Z. Zhang*, Z. Wang, Y. Chi, and B. Chen<br> <b style="color:rgb(71, 71, 71)">“Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.09398">[Paper]</a> <a href="https://github.com/hdong920/LESS">[Code] </a> </li>
<li>Y. Zhang, P. Li, J. Hong*, J. Li, Y. Zhang, W. Zheng*, P. Chen, J. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, and T. Chen*<br> <b style="color:rgb(71, 71, 71)">“Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.11592">[Paper]</a> <a href="https://github.com/ZO-Bench/ZO-LLM?tab=readme-ov-file">[Code] </a> </li>
<li>Y. Zhang, P. Li, J. Hong*, J. Li, Y. Zhang, W. Zheng*, P. Chen, J. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, and T. Chen*<br> <b style="color:rgb(71, 71, 71)">“Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark”</b><br>International Conference on Machine Learning (ICML), 2024. <a href="https://arxiv.org/abs/2402.11592">[Paper]</a> <a href="https://github.com/ZO-Bench/ZO-LLM">[Code] </a> </li>
<li>P. Wang*, D. Xu*, Z. Fan*, D. Wang, S. Mohan, F. Iandola, R. Ranjan, Y. Li, Q. Liu, Z. Wang, and V. Chandra
<br> <b style="color:rgb(71, 71, 71)">"Taming Mode Collapse in Score Distillation for Text-to-3D Generation”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2401.00909">[Paper]</a> <a href="https://vita-group.github.io/3D-Mode-Collapse/">[Code] </a> </li>
<li>M. Varma, P. Wang*, Z. Fan*, Z. Wang, H. Su, and R. Ramamoorthi<br> <b style="color:rgb(71, 71, 71)">"Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
@@ -220,8 +220,8 @@ <h2>Conference Paper</h2>
<li>M. Ohanyan, H. Manukyan, Z. Wang, S. Navasardyan, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>X. Xu, J. Guo, Z. Wang, G. Huang, I. Essa, and H. Shi<br> <b style="color:rgb(71, 71, 71)">"Prompt-Free Diffusion: Taking 'Text' out of Text-to-Image Diffusion Models”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. <a href="https://arxiv.org/abs/2305.16223">[Paper]</a> <a href="https://github.com/SHI-Labs/Prompt-Free-Diffusion">[Code] </a> </li>
<li>M. D'Incà, E. Peruzzo, M. Mancini, D. Xu*, V. Goel, X. Xu, Z. Wang, H. Shi, and N. Sebe<br> <b style="color:rgb(71, 71, 71)">"OpenBias: Open-set Bias Detection in Generative Models”</b><br>IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. (Highlight) <a href="https://arxiv.org/abs/2404.07990">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/OpenBias">[Code] </a> </li>
<li>Z. Zhang*, S. Liu*, R. Chen*, B. Kailkhura, B. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="">[Paper]</a> <a href="">[Code] </a> </li>
<li>Y. Yang, N. Bhatt*, T. Ingebrand, W. Ward, S. Carr, Z. Wang, and U. Topcu<br> <b style="color:rgb(71, 71, 71)">"Fine-Tuning Language Models Using Formal Methods Feedback”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="https://arxiv.org/abs/2310.18239">[Paper]</a> <a href="">[Code] </a> </li>
<li>Z. Zhang*, S. Liu*, R. Chen*, B. Kailkhura, B. Chen, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="https://proceedings.mlsys.org/paper_files/paper/2024/file/bbb7506579431a85861a05fff048d3e1-Paper-Conference.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/Q-Hitter">[Code] </a> </li>
<li>Y. Yang, N. Bhatt*, T. Ingebrand, W. Ward, S. Carr, Z. Wang, and U. Topcu<br> <b style="color:rgb(71, 71, 71)">"Fine-Tuning Language Models Using Formal Methods Feedback”</b><br>Conference on Machine Learning and Systems (MLSys), 2024. <a href="https://proceedings.mlsys.org/paper_files/paper/2024/file/b0131b6ee02a00b03fc3320176fec8f5-Paper-Conference.pdf">[Paper]</a> <a href="">[Code] </a> </li>
<li>A. Jaiswal*, Z. Gan, X. Du, B. Zhang, Z. Wang, and Y. Yang<br> <b style="color:rgb(71, 71, 71)">"Compressing LLMs: The Truth is Rarely Pure and Never Simple”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=B9klVS7Ddk">[Paper]</a> <a href="https://github.com/VITA-Group/llm-kick">[Code] </a> </li>
<li>J. Hong*, J. Wang, C. Zhang, Z. Li*, B. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"DP-OPT: Make Large Language Model Your Differentially-Private Prompt Engineer”</b><br>International Conference on Learning Representations (ICLR), 2024. (Spotlight) <a href="https://openreview.net/forum?id=Ifz3IgsEPX">[Paper]</a> <a href="https://github.com/VITA-Group/DP-OPT">[Code] </a> </li>
<li>Y. Jiang*, H. Tang, J. Chang, L. Song, Z. Wang, and L. Cao<br> <b style="color:rgb(71, 71, 71)">"Efficient-3DiM: Learning a Generalizable Single-image Novel-view Synthesizer in One Day”</b><br>International Conference on Learning Representations (ICLR), 2024. <a href="https://openreview.net/forum?id=3eFMnZ3N4J">[Paper]</a> <a href="">[Code] </a> </li>
