
I-I Reading Group

The I-I (iDEA-iSAIL) group is a statistical learning and data mining reading group at UIUC, coordinated by Prof. Hanghang Tong and Prof. Jingrui He. The main purpose of this group is to educate and inform its members about recent advances in machine learning and data mining.

Regular Meeting Time: Wed 11 am at Thomas M. Siebel Center, Room 4102.

Unless otherwise notified, our regular weekly meeting for Fall 2019 is Wed 11 am–12 pm, SC 4102. If you would like to present at an upcoming meeting, please email dzhou21/lecheng4 [at] illinois [dot] edu, or submit a pull request adding a row to the table below!

Schedule for 2019 Fall:

| Dates | Presenters | Topics | Materials |
| --- | --- | --- | --- |
| Sep 4, 2019 | Hansheng Ren | Kick-Off Meeting | - |
| Sep 11, 2019 | Xue Hu & Dongqi Fu | Data Poisoning Attacks | - |
| Sep 18, 2019 | Dawei Zhou & Lecheng Zheng | Recent Advances of Transformer Machine | - |
| Sep 25, 2019 | Yao Zhou & Jun Wu | TBD | - |
| Oct 2, 2019 | MAA-members | Active Interpretation of Disparate Alternatives (AIDA) | - |
| Oct 9, 2019 | Zhe Xu | - | - |
| Oct 16, 2019 | Si Zhang & Qinghai Zhou | - | - |
| Oct 23, 2019 | Yu Wang & Ziye Zhu | Language Model in NLP | - |
| Nov 6, 2019 | Lihui Liu | - | - |
| Nov 13, 2019 | - | - | - |
| Nov 20, 2019 | - | - | - |
| Nov 27, 2019 | NA | Thanksgiving | - |
| Dec 4, 2019 | - | - | - |

Recommended Flows

Introduce 1–2 Research Papers:

  • 20 min: Introduction & Background (motivating examples, literature review)
  • 10 min: Problem Description (give a formal definition of the studied problems)
  • 30 min: Brainstorming Discussion (propose potential approaches based on your own knowledge)
  • 30 min: Algorithm (description of the algorithms in the papers)
  • 30 min: Critical Discussion (pros & cons of your ideas and the existing ones)

Survey a Research Topic:

  • 20 min: Introduction & Background (motivating examples, literature review)
  • 20 min: Problem/Subproblem Description (give a formal definition of the studied problems)
  • 60 min: Review (high-level discussion of the existing work)
  • 20 min: Conclusion & Future Directions

Covered topics/papers in the past:

Generative Deep Learning:

  • Martín Arjovsky, Soumith Chintala, Léon Bottou: Wasserstein Generative Adversarial Networks. ICML 2017: 214-223 
  • Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, Aaron C. Courville: Improved Training of Wasserstein GANs. NIPS 2017: 5767-5777
  • Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, Jure Leskovec: GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. ICML 2018 (arXiv:1802.08773)
  • Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann: NetGAN: Generating Graphs via Random Walks. ICML 2018: 609-618 

Robustness:

  • Eric Wong, J. Zico Kolter: Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope. ICML 2018: 5283-5292.  

Meta Learning:

  • Chelsea Finn, Pieter Abbeel, Sergey Levine: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML 2017: 1126-1135. 

Fairness Learning:

  • Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, Adam Tauman Kalai: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NIPS 2016: 4349-4357.  
  • Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, Cynthia Dwork: Learning Fair Representations. ICML (3) 2013: 325-333.  

Adversarial Attacks:

  • Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song: Adversarial Attack on Graph Structured Data. ICML 2018: 1123-1132.
  • Daniel Zügner, Amir Akbarnejad, Stephan Günnemann: Adversarial Attacks on Neural Networks for Graph Data. KDD 2018: 2847-2856. 
  • Guanhong Tao, Shiqing Ma, Yingqi Liu, Xiangyu Zhang: Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples. NeurIPS 2018: 7728-7739 

