| Rank | Avg. Rating | Title | Ratings | Std. Dev. | Decision |
|---|---|---|---|---|---|
| 1 | 8.67 | Generating High Fidelity Images With Subscale Pixel Networks And Multidimensional Upscaling | 7, 10, 9 | 1.25 | Accept (Oral) |
| 2 | 8.67 | Alista: Analytic Weights Are As Good As Learned Weights In Lista | 10, 7, 9 | 1.25 | Accept (Poster) |
| 3 | 8.33 | Benchmarking Neural Network Robustness To Common Corruptions And Perturbations | 7, 9, 9 | 0.94 | Accept (Poster) |
| 4 | 8.33 | On Random Deep Weight-tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, And Implications To Training | 9, 8, 8 | 0.47 | Accept (Oral) |
| 5 | 8.00 | Posterior Attention Models For Sequence To Sequence Learning | 8, 9, 7 | 0.82 | Accept (Poster) |
| 6 | 8.00 | Pay Less Attention With Lightweight And Dynamic Convolutions | 8, 8, 8 | 0.00 | Accept (Oral) |
| 7 | 8.00 | Slimmable Neural Networks | 8, 9, 7 | 0.82 | Accept (Poster) |
| 8 | 8.00 | A Unified Theory Of Early Visual Representations From Retina To Cortex Through Anatomically Constrained Deep Cnns | 8, 8, 8 | 0.00 | Accept (Oral) |
| 9 | 8.00 | Ordered Neurons: Integrating Tree Structures Into Recurrent Neural Networks | 9, 7, 8 | 0.82 | Accept (Oral) |
| 10 | 8.00 | Temporal Difference Variational Auto-encoder | 8, 9, 7 | 0.82 | Accept (Oral) |
| 11 | 8.00 | Enabling Factorized Piano Music Modeling And Generation With The Maestro Dataset | 8, 8, 8 | 0.00 | Accept (Oral) |
| 12 | 8.00 | Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse Rl, And Gans By Constraining Information Flow | 6, 10, 8 | 1.63 | Accept (Poster) |
| 13 | 8.00 | Near-optimal Representation Learning For Hierarchical Reinforcement Learning | 8, 9, 7 | 0.82 | Accept (Poster) |
| 14 | 8.00 | Ba-net: Dense Bundle Adjustment Networks | 9, 7, 8 | 0.82 | Accept (Oral) |
| 15 | 8.00 | Understanding And Improving Interpolation In Autoencoders Via An Adversarial Regularizer | 7, 8, 9 | 0.82 | Accept (Poster) |
| 16 | 8.00 | Snip: Single-shot Network Pruning Based On Connection Sensitivity | 8, 7, 9 | 0.82 | Accept (Poster) |
| 17 | 8.00 | Meta-learning Update Rules For Unsupervised Representation Learning | 8, 8, 8 | 0.00 | Accept (Oral) |
| 18 | 8.00 | Large Scale Gan Training For High Fidelity Natural Image Synthesis | 8, 7, 9 | 0.82 | Accept (Oral) |
| 19 | 8.00 | Unsupervised Learning Of The Set Of Local Maxima | 8, 8, 8 | 0.00 | Accept (Poster) |
| 20 | 8.00 | An Empirical Study Of Example Forgetting During Deep Neural Network Learning | 9, 8, 7 | 0.82 | Accept (Poster) |
| 21 | 7.67 | Learning Robust Representations By Projecting Superficial Statistics Out | 7, 7, 9 | 0.94 | Accept (Oral) |
| 22 | 7.67 | Automatically Composing Representation Transformations As A Means For Generalization | 7, 9, 7 | 0.94 | Accept (Poster) |
| 23 | 7.67 | Identifying And Controlling Important Neurons In Neural Machine Translation | 7, 10, 6 | 1.70 | Accept (Poster) |
| 24 | 7.67 | Towards Robust, Locally Linear Deep Networks | 8, 8, 7 | 0.47 | Accept (Poster) |
| 25 | 7.67 | Deep Decoder: Concise Image Representations From Untrained Non-convolutional Networks | 8, 8, 7 | 0.47 | Accept (Poster) |
| 26 | 7.67 | Lagging Inference Networks And Posterior Collapse In Variational Autoencoders | 7, 8, 8 | 0.47 | Accept (Poster) |
| 27 | 7.67 | A Variational Inequality Perspective On Generative Adversarial Networks | 8, 8, 7 | 0.47 | Accept (Poster) |
| 28 | 7.67 | Robustness May Be At Odds With Accuracy | 8, 7, 8 | 0.47 | Accept (Poster) |
| 29 | 7.67 | Knockoffgan: Generating Knockoffs For Feature Selection Using Generative Adversarial Networks | 6, 10, 7 | 1.70 | Accept (Oral) |
| 30 | 7.67 | Adaptive Input Representations For Neural Language Modeling | 7, 8, 8 | 0.47 | Accept (Poster) |
| 31 | 7.67 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | 5, 9, 9 | 1.89 | Accept (Oral) |
| 32 | 7.67 | Critical Learning Periods In Deep Networks | 9, 8, 6 | 1.25 | Accept (Poster) |
| 33 | 7.67 | Composing Complex Skills By Learning Transition Policies | 7, 9, 7 | 0.94 | Accept (Poster) |
| 34 | 7.67 | Supervised Community Detection With Line Graph Neural Networks | 6, 9, 8 | 1.25 | Accept (Poster) |
| 35 | 7.67 | Learning Deep Representations By Mutual Information Estimation And Maximization | 7, 7, 9 | 0.94 | Accept (Oral) |
| 36 | 7.67 | Smoothing The Geometry Of Probabilistic Box Embeddings | 8, 8, 7 | 0.47 | Accept (Oral) |
| 37 | 7.67 | A2bcd: Asynchronous Acceleration With Optimal Complexity | 7, 7, 9 | 0.94 | Accept (Poster) |
| 38 | 7.67 | Kernel Change-point Detection With Auxiliary Deep Generative Models | 8, 8, 7 | 0.47 | Accept (Poster) |
| 39 | 7.67 | Imagenet-trained Cnns Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy And Robustness | 7, 8, 8 | 0.47 | Accept (Oral) |
| 40 | 7.67 | Slalom: Fast, Verifiable And Private Execution Of Neural Networks In Trusted Hardware | 7, 7, 9 | 0.94 | Accept (Oral) |
| 41 | 7.67 | Sparse Dictionary Learning By Dynamical Neural Networks | 6, 9, 8 | 1.25 | Accept (Poster) |
| 42 | 7.50 | On The Minimal Supervision For Training Any Binary Classifier From Only Unlabeled Data | 7, 8, 8, 7 | 0.50 | Accept (Poster) |
| 43 | 7.50 | Exploration By Random Network Distillation | 4, 9, 10, 7 | 2.29 | Accept (Poster) |
| 44 | 7.33 | Dimensionality Reduction For Representing The Knowledge Of Probabilistic Models | 6, 7, 9 | 1.25 | Accept (Poster) |
| 45 | 7.33 | Probabilistic Recursive Reasoning For Multi-agent Reinforcement Learning | 8, 7, 7 | 0.47 | Accept (Poster) |
| 46 | 7.33 | Approximability Of Discriminators Implies Diversity In Gans | 8, 7, 7 | 0.47 | Accept (Poster) |
| 47 | 7.33 | Evaluating Robustness Of Neural Networks With Mixed Integer Programming | 7, 8, 7 | 0.47 | Accept (Poster) |
| 48 | 7.33 | Biologically-plausible Learning Algorithms Can Scale To Large Datasets | 9, 9, 4 | 2.36 | Accept (Poster) |
| 49 | 7.33 | Diagnosing And Enhancing Vae Models | 9, 6, 7 | 1.25 | Accept (Poster) |
| 50 | 7.33 | Learning To Navigate The Web | 7, 8, 7 | 0.47 | Accept (Poster) |
| 51 | 7.33 | Transferring Knowledge Across Learning Processes | 6, 8, 8 | 0.94 | Accept (Oral) |
| 52 | 7.33 | Improving Differentiable Neural Computers Through Memory Masking, De-allocation, And Link Distribution Sharpness Control | 8, 7, 7 | 0.47 | Accept (Poster) |
| 53 | 7.33 | Towards Metamerism Via Foveated Style Transfer | 7, 8, 7 | 0.47 | Accept (Poster) |
| 54 | 7.33 | Variance Reduction For Reinforcement Learning In Input-driven Environments | 7, 9, 6 | 1.25 | Accept (Poster) |
| 55 | 7.33 | Quaternion Recurrent Neural Networks | 8, 7, 7 | 0.47 | Accept (Poster) |
| 56 | 7.33 | Promp: Proximal Meta-policy Search | 6, 7, 9 | 1.25 | Accept (Poster) |
| 57 | 7.33 | Label Super-resolution Networks | 7, 6, 9 | 1.25 | Accept (Poster) |
| 58 | 7.33 | Learning Self-imitating Diverse Policies | 8, 6, 8 | 0.94 | Accept (Poster) |
| 59 | 7.33 | Learning Protein Sequence Embeddings Using Information From Structure | 7, 7, 8 | 0.47 | Accept (Poster) |
| 60 | 7.33 | Diffusion Scattering Transforms On Graphs | 6, 9, 7 | 1.25 | Accept (Poster) |
| 61 | 7.33 | Deep Frank-wolfe For Neural Network Optimization | 7, 7, 8 | 0.47 | Accept (Poster) |
| 62 | 7.33 | Gradient Descent Aligns The Layers Of Deep Linear Networks | 7, 9, 6 | 1.25 | Accept (Poster) |
| 63 | 7.33 | Recurrent Experience Replay In Distributed Reinforcement Learning | 7, 7, 8 | 0.47 | Accept (Poster) |
| 64 | 7.33 | Large-scale Study Of Curiosity-driven Learning | 6, 9, 7 | 1.25 | Accept (Poster) |
| 65 | 7.33 | Learning Localized Generative Models For 3d Point Clouds Via Graph Convolution | 9, 6, 7 | 1.25 | Accept (Poster) |
| 66 | 7.33 | Prior Convictions: Black-box Adversarial Attacks With Bandits And Priors | 7, 8, 7 | 0.47 | Accept (Poster) |
| 67 | 7.33 | Learning Latent Superstructures In Variational Autoencoders For Deep Multidimensional Clustering | 8, 7, 7 | 0.47 | Accept (Poster) |
| 68 | 7.33 | Learning Grid Cells As Vector Representation Of Self-position Coupled With Matrix Representation Of Self-motion | 8, 7, 7 | 0.47 | Accept (Poster) |
| 69 | 7.33 | Clarinet: Parallel Wave Generation In End-to-end Text-to-speech | 9, 6, 7 | 1.25 | Accept (Poster) |
| 70 | 7.33 | Dynamic Sparse Graph For Efficient Deep Learning | 8, 7, 7 | 0.47 | Accept (Poster) |
| 71 | 7.33 | Learning To Remember More With Less Memorization | 7, 8, 7 | 0.47 | Accept (Oral) |
| 72 | 7.33 | Gan Dissection: Visualizing And Understanding Generative Adversarial Networks | 7, 7, 8 | 0.47 | Accept (Poster) |
| 73 | 7.33 | Detecting Egregious Responses In Neural Sequence-to-sequence Models | 7, 7, 8 | 0.47 | Accept (Poster) |
| 74 | 7.33 | Deep Layers As Stochastic Solvers | 7, 7, 8 | 0.47 | Accept (Poster) |
| 75 | 7.33 | Small Nonlinearities In Activation Functions Create Bad Local Minima In Neural Networks | 7, 7, 8 | 0.47 | Accept (Poster) |
| 76 | 7.33 | Efficient Training On Very Large Corpora Via Gramian Estimation | 7, 8, 7 | 0.47 | Accept (Poster) |
| 77 | 7.33 | Diversity Is All You Need: Learning Skills Without A Reward Function | 8, 7, 7 | 0.47 | Accept (Poster) |
| 78 | 7.33 | Instagan: Instance-aware Image-to-image Translation | 7, 8, 7 | 0.47 | Accept (Poster) |
| 79 | 7.33 | Time-agnostic Prediction: Predicting Predictable Video Frames | 7, 8, 7 | 0.47 | Accept (Poster) |
| 80 | 7.33 | Learning To Schedule Communication In Multi-agent Reinforcement Learning | 7, 8, 7 | 0.47 | Accept (Poster) |
| 81 | 7.33 | No Training Required: Exploring Random Encoders For Sentence Classification | 7, 7, 8 | 0.47 | Accept (Poster) |
| 82 | 7.33 | Lanczosnet: Multi-scale Deep Graph Convolutional Networks | 7, 7, 8 | 0.47 | Accept (Poster) |
| 83 | 7.33 | The Neuro-symbolic Concept Learner: Interpreting Scenes, Words, And Sentences From Natural Supervision | 7, 6, 9 | 1.25 | Accept (Oral) |
| 84 | 7.33 | How Powerful Are Graph Neural Networks? | 7, 7, 8 | 0.47 | Accept (Oral) |
| 85 | 7.25 | Episodic Curiosity Through Reachability | 7, 8, 6, 8 | 0.83 | Accept (Poster) |
| 86 | 7.00 | Strokenet: A Neural Painting Environment | 7, 8, 6 | 0.82 | Accept (Poster) |
| 87 | 7.00 | Discriminator-actor-critic: Addressing Sample Inefficiency And Reward Bias In Adversarial Imitation Learning | 8, 6, 7 | 0.82 | Accept (Poster) |
| 88 | 7.00 | An Analytic Theory Of Generalization Dynamics And Transfer Learning In Deep Linear Networks | 8, 7, 6 | 0.82 | Accept (Poster) |
| 89 | 7.00 | Feature Intertwiner For Object Detection | 5, 9, 7 | 1.63 | Accept (Poster) |
| 90 | 7.00 | Learning Neural Pde Solvers With Convergence Guarantees | 7, 8, 6 | 0.82 | Accept (Poster) |
| 91 | 7.00 | Knowledge Flow: Improve Upon Your Teachers | 6, 8, 7 | 0.82 | Accept (Poster) |
| 92 | 7.00 | Multilingual Neural Machine Translation With Knowledge Distillation | 7, 7, 7 | 0.00 | Accept (Poster) |
| 93 | 7.00 | Texttovec: Deep Contextualized Neural Autoregressive Topic Models Of Language With Distributed Compositional Prior | 7, 8, 6 | 0.82 | Accept (Poster) |
| 94 | 7.00 | Supervised Policy Update For Deep Reinforcement Learning | 9, 6, 6 | 1.41 | Accept (Poster) |
| 95 | 7.00 | Gansynth: Adversarial Neural Audio Synthesis | 6, 7, 8 | 0.82 | Accept (Poster) |
| 96 | 7.00 | Lemonade: Learned Motif And Neuronal Assembly Detection In Calcium Imaging Videos | 8, 5, 8 | 1.41 | Accept (Poster) |
| 97 | 7.00 | The Comparative Power Of Relu Networks And Polynomial Kernels In The Presence Of Sparse Latent Structure | 7, 7, 7 | 0.00 | Accept (Poster) |
| 98 | 7.00 | Execution-guided Neural Program Synthesis | 7, 7, 7 | 0.00 | Accept (Poster) |
| 99 | 7.00 | Deterministic Variational Inference For Robust Bayesian Neural Networks | 7, 7, 7 | 0.00 | Accept (Oral) |
| 100 | 7.00 | Distributional Concavity Regularization For Gans | 7, 8, 6, 7 | 0.71 | Accept (Poster) |
| 101 | 7.00 | Som-vae: Interpretable Discrete Representation Learning On Time Series | 9, 6, 6 | 1.41 | Accept (Poster) |
| 102 | 7.00 | Variational Autoencoders With Jointly Optimized Latent Dependency Structure | 7, 6, 8 | 0.82 | Accept (Poster) |
| 103 | 7.00 | Learning Sparse Relational Transition Models | 6, 7, 8 | 0.82 | Accept (Poster) |
| 104 | 7.00 | Adversarial Domain Adaptation For Stable Brain-machine Interfaces | 9, 5, 7 | 1.63 | Accept (Poster) |
| 105 | 7.00 | The Role Of Over-parametrization In Generalization Of Neural Networks | 7, 7, 7 | 0.00 | Accept (Poster) |
| 106 | 7.00 | Differentiable Learning-to-normalize Via Switchable Normalization | 7, 7, 7 | 0.00 | Accept (Poster) |
| 107 | 7.00 | Stochastic Optimization Of Sorting Networks Via Continuous Relaxations | 8, 7, 6 | 0.82 | Accept (Poster) |
| 108 | 7.00 | A Statistical Approach To Assessing Neural Network Robustness | 6, 7, 8 | 0.82 | Accept (Poster) |
| 109 | 7.00 | Darts: Differentiable Architecture Search | 6, 7, 8 | 0.82 | Accept (Poster) |
| 110 | 7.00 | Learning Concise Representations For Regression By Evolving Networks Of Trees | 7, 6, 8 | 0.82 | Accept (Poster) |
| 111 | 7.00 | Padam: Closing The Generalization Gap Of Adaptive Gradient Methods In Training Deep Neural Networks | 6, 6, 9 | 1.41 | Reject |
| 112 | 7.00 | A Universal Music Translation Network | 8, 7, 6 | 0.82 | Accept (Poster) |
| 113 | 7.00 | Deep Learning 3d Shapes Using Alt-az Anisotropic 2-sphere Convolution | 6, 8, 7 | 0.82 | Accept (Poster) |
| 114 | 7.00 | Energy-constrained Compression For Deep Neural Networks Via Weighted Sparse Projection And Layer Input Masking | 7, 7, 7 | 0.00 | Accept (Poster) |
| 115 | 7.00 | Deep Graph Infomax | 9, 5, 7 | 1.63 | Accept (Poster) |
| 116 | 7.00 | On The Universal Approximability And Complexity Bounds Of Quantized Relu Neural Networks | 7, 6, 8 | 0.82 | Accept (Poster) |
| 117 | 7.00 | Global-to-local Memory Pointer Networks For Task-oriented Dialogue | 8, 8, 5 | 1.41 | Accept (Poster) |
| 118 | 7.00 | Self-monitoring Navigation Agent Via Auxiliary Progress Estimation | 8, 6, 7 | 0.82 | Accept (Poster) |
| 119 | 7.00 | Signsgd Via Zeroth-order Oracle | 8, 7, 6 | 0.82 | Accept (Poster) |
| 120 | 7.00 | Learning Particle Dynamics For Manipulating Rigid Bodies, Deformable Objects, And Fluids | 8, 6, 7 | 0.82 | Accept (Poster) |
| 121 | 7.00 | Generative Code Modeling With Graphs | 7, 7, 7 | 0.00 | Accept (Poster) |
| 122 | 7.00 | The Deep Weight Prior | 6, 8, 7 | 0.82 | Accept (Poster) |
| 123 | 7.00 | Bounce And Learn: Modeling Scene Dynamics With Real-world Bounces | 6, 7, 8 | 0.82 | Accept (Poster) |
| 124 | 7.00 | Quasi-hyperbolic Momentum And Adam For Deep Learning | 7, 6, 8 | 0.82 | Accept (Poster) |
| 125 | 7.00 | Integer Networks For Data Compression With Latent-variable Models | 6, 7, 8 | 0.82 | Accept (Poster) |
| 126 | 7.00 | Deep Online Learning Via Meta-learning: Continual Adaptation For Model-based Rl | 7, 7, 7 | 0.00 | Accept (Poster) |
| 127 | 7.00 | Are Adversarial Examples Inevitable? | 7, 8, 6 | 0.82 | Accept (Poster) |
| 128 | 7.00 | Learning To Screen For Fast Softmax Inference On Large Vocabulary Neural Networks | 7, 6, 8 | 0.82 | Accept (Poster) |
| 129 | 7.00 | Information-directed Exploration For Deep Reinforcement Learning | 7, 7, 7 | 0.00 | Accept (Poster) |
| 130 | 7.00 | Rotdcf: Decomposition Of Convolutional Filters For Rotation-equivariant Deep Networks | 7, 7, 7 | 0.00 | Accept (Poster) |
| 131 | 7.00 | Theoretical Analysis Of Auto Rate-tuning By Batch Normalization | 7, 7, 7 | 0.00 | Accept (Poster) |
| 132 | 7.00 | Visual Semantic Navigation Using Scene Priors | 7, 7, 7 | 0.00 | Accept (Poster) |
| 133 | 7.00 | Woulda, Coulda, Shoulda: Counterfactually-guided Policy Search | 7, 7, 7 | 0.00 | Accept (Poster) |
| 134 | 7.00 | Function Space Particle Optimization For Bayesian Neural Networks | 7, 7, 7 | 0.00 | Accept (Poster) |
| 135 | 7.00 | Eidetic 3d Lstm: A Model For Video Prediction And Beyond | 7, 7, 7 | 0.00 | Accept (Poster) |
| 136 | 7.00 | Wizard Of Wikipedia: Knowledge-powered Conversational Agents | 7, 6, 8 | 0.82 | Accept (Poster) |
| 137 | 7.00 | Meta-learning Probabilistic Inference For Prediction | 7, 6, 8 | 0.82 | Accept (Poster) |
| 138 | 7.00 | Don't Settle For Average, Go For The Max: Fuzzy Sets And Max-pooled Word Vectors | 8, 8, 5 | 1.41 | Accept (Poster) |
| 139 | 7.00 | Solving The Rubik's Cube With Approximate Policy Iteration | 7, 7, 7 | 0.00 | Accept (Poster) |
| 140 | 7.00 | Learning A Meta-solver For Syntax-guided Program Synthesis | 7, 7, 7 | 0.00 | Accept (Poster) |
| 141 | 7.00 | Rotate: Knowledge Graph Embedding By Relational Rotation In Complex Space | 7, 7, 7 | 0.00 | Accept (Poster) |
| 142 | 7.00 | Generative Question Answering: Learning To Answer The Whole Question | 7, 6, 8 | 0.82 | Accept (Poster) |
| 143 | 7.00 | Local Sgd Converges Fast And Communicates Little | 8, 5, 8 | 1.41 | Accept (Poster) |
| 144 | 7.00 | Ffjord: Free-form Continuous Dynamics For Scalable Reversible Generative Models | 7, 7, 7 | 0.00 | Accept (Oral) |
| 145 | 7.00 | Adashift: Decorrelation And Convergence Of Adaptive Learning Rate Methods | 6, 6, 9 | 1.41 | Accept (Poster) |
| 146 | 7.00 | What Do You Learn From Context? Probing For Sentence Structure In Contextualized Word Representations | 7, 7, 7 | 0.00 | Accept (Poster) |
| 147 | 7.00 | Modeling Uncertainty With Hedged Instance Embeddings | 7, 7, 7 | 0.00 | Accept (Poster) |
| 148 | 7.00 | Learning Implicitly Recurrent Cnns Through Parameter Sharing | 8, 7, 6 | 0.82 | Accept (Poster) |
| 149 | 7.00 | Arm: Augment-reinforce-merge Gradient For Stochastic Binary Networks | 8, 6, 7 | 0.82 | Accept (Poster) |
| 150 | 7.00 | On The Loss Landscape Of A Class Of Deep Neural Networks With No Bad Local Valleys | 7, 8, 6 | 0.82 | Accept (Poster) |
| 151 | 7.00 | Riemannian Adaptive Optimization Methods | 7, 7, 7 | 0.00 | Accept (Poster) |
| 152 | 7.00 | Learning To Learn Without Forgetting By Maximizing Transfer And Minimizing Interference | 6, 8, 7 | 0.82 | Accept (Poster) |
| 153 | 7.00 | G-sgd: Optimizing Relu Neural Networks In Its Positively Scale-invariant Space | 7, 7, 7 | 0.00 | Accept (Poster) |
| 154 | 7.00 | Reasoning About Physical Interactions With Object-oriented Prediction And Planning | 5, 7, 9 | 1.63 | Accept (Poster) |
| 155 | 7.00 | Hindsight Policy Gradients | 7, 7, 7 | 0.00 | Accept (Poster) |
| 156 | 7.00 | Unsupervised Domain Adaptation For Distance Metric Learning | 8, 5, 8 | 1.41 | Accept (Poster) |
| 157 | 7.00 | Learning Mixed-curvature Representations In Product Spaces | 7, 7, 7 | 0.00 | Accept (Poster) |
| 158 | 7.00 | Auxiliary Variational Mcmc | 7, 7, 7 | 0.00 | Accept (Poster) |
| 159 | 7.00 | Unsupervised Speech Recognition Via Segmental Empirical Output Distribution Matching | 7, 7, 7 | 0.00 | Accept (Poster) |
| 160 | 7.00 | On Computation And Generalization Of Generative Adversarial Networks Under Spectrum Control | 8, 6, 7 | 0.82 | Accept (Poster) |
| 161 | 7.00 | Optimal Control Via Neural Networks: A Convex Approach | 6, 8, 7 | 0.82 | Accept (Poster) |
| 162 | 7.00 | Whitening And Coloring Batch Transform For Gans | 7, 7, 7 | 0.00 | Accept (Poster) |
| 163 | 7.00 | Deep, Skinny Neural Networks Are Not Universal Approximators | 6, 8, 7 | 0.82 | Accept (Poster) |
| 164 | 7.00 | Nadpex: An On-policy Temporally Consistent Exploration Method For Deep Reinforcement Learning | 8, 6, 7 | 0.82 | Accept (Poster) |
| 165 | 7.00 | Learning To Solve Circuit-sat: An Unsupervised Differentiable Approach | 6, 8, 7 | 0.82 | Accept (Poster) |
| 166 | 7.00 | A Convergence Analysis Of Gradient Descent For Deep Linear Neural Networks | 7, 7, 7 | 0.00 | Accept (Poster) |
| 167 | 7.00 | Learning A Sat Solver From Single-bit Supervision | 7, 7, 7 | 0.00 | Accept (Poster) |
| 168 | 7.00 | Generating Multiple Objects At Spatially Distinct Locations | 6, 8, 7 | 0.82 | Accept (Poster) |
| 169 | 7.00 | K For The Price Of 1: Parameter-efficient Multi-task And Transfer Learning | 7, 6, 8 | 0.82 | Accept (Poster) |
| 170 | 7.00 | Bias-reduced Uncertainty Estimation For Deep Neural Classifiers | 7, 7, 7 | 0.00 | Accept (Poster) |
| 171 | 7.00 | Probabilistic Neural-symbolic Models For Interpretable Visual Question Answering | 8, 6, 7 | 0.82 | Reject |
| 172 | 7.00 | Representation Degeneration Problem In Training Natural Language Generation Models | 7, 7, 7 | 0.00 | Accept (Poster) |
| 173 | 7.00 | Neural Network Gradient-based Learning Of Black-box Function Interfaces | 7, 7, 7 | 0.00 | Accept (Poster) |
| 174 | 7.00 | A Data-driven And Distributed Approach To Sparse Signal Representation And Recovery | 8, 7, 6 | 0.82 | Accept (Poster) |
| 175 | 7.00 | Relaxed Quantization For Discretized Neural Networks | 7, 7, 7 | 0.00 | Accept (Poster) |
| 176 | 7.00 | Invariant And Equivariant Graph Networks | 8, 4, 9 | 2.16 | Accept (Poster) |
| 177 | 7.00 | Dyrep: Learning Representations Over Dynamic Graphs | 6, 7, 8 | 0.82 | Accept (Poster) |
| 178 | 7.00 | The Laplacian In Rl: Learning Representations With Efficient Approximations | 7, 7, 7 | 0.00 | Accept (Poster) |
| 179 | 7.00 | Learning Recurrent Binary/ternary Weights | 6, 8, 7 | 0.82 | Accept (Poster) |
| 180 | 7.00 | How Important Is A Neuron | 7, 7, 7 | 0.00 | Accept (Poster) |
| 181 | 6.80 | Subgradient Descent Learns Orthogonal Dictionaries | 7, 7, 7, 7, 6 | 0.40 | Accept (Poster) |
| 182 | 6.75 | Unsupervised Learning Via Meta-learning | 7, 6, 8, 6 | 0.83 | Accept (Poster) |
| 183 | 6.75 | Bayesian Deep Convolutional Networks With Many Channels Are Gaussian Processes | 7, 7, 7, 6 | 0.43 | Accept (Poster) |
| 184 | 6.75 | Deterministic Pac-bayesian Generalization Bounds For Deep Networks Via Generalizing Noise-resilience | 8, 7, 7, 5 | 1.09 | Accept (Poster) |
| 185 | 6.67 | Structured Adversarial Attack: Towards General Implementation And Better Interpretability | 7, 7, 6 | 0.47 | Accept (Poster) |
| 186 | 6.67 | Adaptive Estimators Show Information Compression In Deep Neural Networks | 7, 6, 7 | 0.47 | Accept (Poster) |
| 187 | 6.67 | Tree-structured Recurrent Switching Linear Dynamical Systems For Multi-scale Modeling | 7, 7, 6 | 0.47 | Accept (Poster) |
| 188 | 6.67 | Residual Non-local Attention Networks For Image Restoration | 7, 7, 6 | 0.47 | Accept (Poster) |
| 189 | 6.67 | Cem-rl: Combining Evolutionary And Gradient-based Methods For Policy Search | 6, 7, 7 | 0.47 | Accept (Poster) |
| 190 | 6.67 | Marginal Policy Gradients: A Unified Family Of Estimators For Bounded Action Spaces With Applications | 7, 6, 7 | 0.47 | Accept (Poster) |
| 191 | 6.67 | Relgan: Relational Generative Adversarial Networks For Text Generation | 6, 8, 6 | 0.94 | Accept (Poster) |
| 192 | 6.67 | Defensive Quantization: When Efficiency Meets Robustness | 7, 6, 7 | 0.47 | Accept (Poster) |
| 193 | 6.67 | Policy Transfer With Strategy Optimization | 7, 7, 6 | 0.47 | Accept (Poster) |
| 194 | 6.67 | Big-little Net: An Efficient Multi-scale Feature Representation For Visual And Speech Recognition | 7, 6, 7 | 0.47 | Accept (Poster) |
| 195 | 6.67 | Universal Transformers | 6, 6, 8 | 0.94 | Accept (Poster) |
| 196 | 6.67 | Active Learning With Partial Feedback | 7, 6, 7 | 0.47 | Accept (Poster) |
| 197 | 6.67 | There Are Many Consistent Explanations Of Unlabeled Data: Why You Should Average | 6, 8, 6 | 0.94 | Accept (Poster) |
| 198 | 6.67 | Unsupervised Control Through Non-parametric Discriminative Rewards | 8, 5, 7 | 1.25 | Accept (Poster) |
| 199 | 6.67 | On The Convergence Of A Class Of Adam-type Algorithms For Non-convex Optimization | 7, 7, 6 | 0.47 | Accept (Poster) |
| 200 | 6.67 | Adaptivity Of Deep Relu Network For Learning In Besov And Mixed Smooth Besov Spaces: Optimal Rate And Curse Of Dimensionality | 8, 6, 6 | 0.94 | Accept (Poster) |
| 201 | 6.67 | Predicting The Generalization Gap In Deep Networks With Margin Distributions | 5, 9, 6 | 1.70 | Accept (Poster) |
| 202 | 6.67 | A Mean Field Theory Of Batch Normalization | 7, 6, 7 | 0.47 | Accept (Poster) |
| 203 | 6.67 | Don't Let Your Discriminator Be Fooled | 7, 7, 6 | 0.47 | Accept (Poster) |
| 204 | 6.67 | L-shapley And C-shapley: Efficient Model Interpretation For Structured Data | 7, 7, 6 | 0.47 | Accept (Poster) |
| 205 | 6.67 | Hyperbolic Attention Networks | 6, 7, 7 | 0.47 | Accept (Poster) |
| 206 | 6.67 | Learning To Make Analogies By Contrasting Abstract Relational Structure | 6, 7, 7 | 0.47 | Accept (Poster) |
| 207 | 6.67 | Meta-learning For Stochastic Gradient Mcmc | 7, 7, 6 | 0.47 | Accept (Poster) |
| 208 | 6.67 | Directed-info Gail: Learning Hierarchical Policies From Unsegmented Demonstrations Using Directed Information | 6, 6, 8 | 0.94 | Accept (Poster) |
| 209 | 6.67 | Building Dynamic Knowledge Graphs From Text Using Machine Reading Comprehension | 6, 7, 7 | 0.47 | Accept (Poster) |
| 210 | 6.67 | Proxquant: Quantized Neural Networks Via Proximal Operators | 8, 7, 5 | 1.25 | Accept (Poster) |
| 211 | 6.67 | Emergent Coordination Through Competition | 7, 7, 6 | 0.47 | Accept (Poster) |
| 212 | 6.67 | Doubly Reparameterized Gradient Estimators For Monte Carlo Objectives | 7, 7, 6 | 0.47 | Accept (Poster) |
| 213 | 6.67 | Learning To Understand Goal Specifications By Modelling Reward | 7, 7, 6 | 0.47 | Accept (Poster) |
| 214 | 6.67 | Off-policy Evaluation And Learning From Logged Bandit Feedback: Error Reduction Via Surrogate Policy | 6, 8, 6 | 0.94 | Accept (Poster) |
| 215 | 6.67 | Improving Mmd-gan Training With Repulsive Loss Function | 6, 7, 7 | 0.47 | Accept (Poster) |
| 216 | 6.67 | Probgan: Towards Probabilistic Gan With Theoretical Guarantees | 6, 5, 9 | 1.70 | Accept (Poster) |
| 217 | 6.67 | Three Mechanisms Of Weight Decay Regularization | 6, 7, 7 | 0.47 | Accept (Poster) |
| 218 | 6.67 | Hierarchical Rl Using An Ensemble Of Proprioceptive Periodic Policies | 6, 7, 7 | 0.47 | Accept (Poster) |
| 219 | 6.67 | Detecting Adversarial Examples Via Neural Fingerprinting | 5, 9, 6 | 1.70 | Reject |
| 220 | 6.67 | Diversity-sensitive Conditional Generative Adversarial Networks | 7, 6, 7 | 0.47 | Accept (Poster) |
| 221 | 6.67 | Optimal Completion Distillation For Sequence Learning | 7, 7, 6 | 0.47 | Accept (Poster) |
| 222 | 6.67 | Flowqa: Grasping Flow In History For Conversational Machine Comprehension | 7, 6, 7 | 0.47 | Accept (Poster) |
| 223 | 6.67 | Towards The First Adversarially Robust Neural Network Model On Mnist | 7, 7, 6 | 0.47 | Accept (Poster) |
| 224 | 6.67 | Sample Efficient Adaptive Text-to-speech | 7, 7, 6 | 0.47 | Accept (Poster) |
| 225 | 6.67 | Latent Convolutional Models | 6, 7, 7 | 0.47 | Accept (Poster) |
| 226 | 6.67 | Minimal Images In Deep Neural Networks: Fragile Object Recognition In Natural Images | 7, 7, 6 | 0.47 | Accept (Poster) |
| 227 | 6.67 | Universal Stagewise Learning For Non-convex Problems With Convergence On Averaged Solutions | 8, 6, 6 | 0.94 | Accept (Poster) |
| 228 | 6.67 | Learning Multimodal Graph-to-graph Translation For Molecule Optimization | 7, 7, 6 | 0.47 | Accept (Poster) |
| 229 | 6.67 | Autoloss: Learning Discrete Schedule For Alternate Optimization | 7, 6, 7 | 0.47 | Accept (Poster) |
| 230 | 6.67 | Efficient Lifelong Learning With A-gem | 7, 6, 7 | 0.47 | Accept (Poster) |
| 231 | 6.67 | Spherical Cnns On Unstructured Grids | 6, 7, 7 | 0.47 | Accept (Poster) |
| 232 | 6.67 | Differentiable Perturb-and-parse: Semi-supervised Parsing With A Structured Variational Autoencoder | 8, 7, 5 | 1.25 | Accept (Poster) |
| 233 | 6.67 | Practical Lossless Compression With Latent Variables Using Bits Back Coding | 6, 6, 8 | 0.94 | Accept (Poster) |
| 234 | 6.67 | Analysis Of Quantized Models | 6, 7, 7 | 0.47 | Accept (Poster) |
| 235 | 6.67 | Detecting Memorization In Relu Networks | 5, 6, 9 | 1.70 | Reject |
| 236 | 6.67 | Snas: Stochastic Neural Architecture Search | 6, 7, 7 | 0.47 | Accept (Poster) |
| 237 | 6.67 | Pate-gan: Generating Synthetic Data With Differential Privacy Guarantees | 7, 6, 7 | 0.47 | Accept (Poster) |
| 238 | 6.67 | Principled Deep Neural Network Training Through Linear Programming | 6, 6, 8 | 0.94 | Reject |
| 239 | 6.67 | Cot: Cooperative Training For Generative Modeling Of Discrete Data | 7, 7, 6 | 0.47 | Reject |
| 240 | 6.67 | On The Turing Completeness Of Modern Neural Network Architectures | 6, 7, 7 | 0.47 | Accept (Poster) |
| 241 | 6.67 | Layoutgan: Generating Graphic Layouts With Wireframe Discriminators | 7, 7, 6 | 0.47 | Accept (Poster) |
| 242 | 6.67 | Learning Factorized Multimodal Representations | 7, 7, 6 | 0.47 | Accept (Poster) |
| 243 | 6.67 | Phase-aware Speech Enhancement With Deep Complex U-net | 6, 7, 7 | 0.47 | Accept (Poster) |
| 244 | 6.67 | Go Gradient For Expectation-based Objectives | 7, 7, 6 | 0.47 | Accept (Poster) |
| 245 | 6.67 | Analyzing Inverse Problems With Invertible Neural Networks | 7, 6, 7 | 0.47 | Accept (Poster) |
| 246 | 6.67 | Deep Reinforcement Learning With Relational Inductive Biases | 6, 7, 7 | 0.47 | Accept (Poster) |
| 247 | 6.67 | Janossy Pooling: Learning Deep Permutation-invariant Functions For Variable-size Inputs | 7, 5, 8 | 1.25 | Accept (Poster) |
| 248 | 6.67 | Improving Generalization And Stability Of Generative Adversarial Networks | 7, 7, 6 | 0.47 | Accept (Poster) |
| 249 | 6.67 | Preconditioner On Matrix Lie Group For Sgd | 8, 5, 7 | 1.25 | Accept (Poster) |
| 250 | 6.67 | Deep Anomaly Detection With Outlier Exposure | 6, 6, 8 | 0.94 | Accept (Poster) |
| 251 | 6.67 | Attention, Learn To Solve Routing Problems! | 7, 6, 7 | 0.47 | Accept (Poster) |
| 252 | 6.67 | Learning What And Where To Attend | 6, 6, 8 | 0.94 | Accept (Poster) |
| 253 | 6.67 | Query-efficient Hard-label Black-box Attack: An Optimization-based Approach | 7, 6, 7 | 0.47 | Accept (Poster) |
| 254 | 6.67 | Recall Traces: Backtracking Models For Efficient Reinforcement Learning | 7, 7, 6 | 0.47 | Accept (Poster) |
| 255 | 6.67 | Learning To Infer And Execute 3d Shape Programs | 6, 7, 7 | 0.47 | Accept (Poster) |
| 256 | 6.67 | Dom-q-net: Grounded Rl On Structured Language | 7, 7, 6 | 0.47 | Accept (Poster) |
| 257 | 6.67 | Toward Understanding The Impact Of Staleness In Distributed Machine Learning | 4, 9, 7 | 2.05 | Accept (Poster) |
| 258 | 6.67 | Graph Hypernetworks For Neural Architecture Search | 7, 6, 7 | 0.47 | Accept (Poster) |
| 259 | 6.67 | A Generative Model For Electron Paths | 8, 4, 8 | 1.89 | Accept (Poster) |
| 260 | 6.67 | Bayesian Prediction Of Future Street Scenes Using Synthetic Likelihoods | 6, 8, 6 | 0.94 | Accept (Poster) |
| 261 | 6.67 | Disjoint Mapping Network For Cross-modal Matching Of Voices And Faces | 7, 6, 7 | 0.47 | Accept (Poster) |
| 262 | 6.67 | Complement Objective Training | 5, 8, 7 | 1.25 | Accept (Poster) |
| 263 | 6.67 | Value Propagation Networks | 7, 6, 7 | 0.47 | Accept (Poster) |
| 264 | 6.67 | Trellis Networks For Sequence Modeling | 7, 6, 7 | 0.47 | Accept (Poster) |
| 265 | 6.67 | Non-vacuous Generalization Bounds At The Imagenet Scale: A Pac-bayesian Compression Approach | 6, 6, 8 | 0.94 | Accept (Poster) |
| 266 | 6.67 | Contingency-aware Exploration In Reinforcement Learning | 6, 7, 7 | 0.47 | Accept (Poster) |
| 267 | 6.67 | Context-adaptive Entropy Model For End-to-end Optimized Image Compression | 7, 7, 6 | 0.47 | Accept (Poster) |
| 268 | 6.67 | Learning Finite State Representations Of Recurrent Policy Networks | 6, 7, 7 | 0.47 | Accept (Poster) |
| 269 | 6.67 | Do Deep Generative Models Know What They Don't Know? | 7, 6, 7 | 0.47 | Accept (Poster) |
| 270 | 6.67 | Learning Two-layer Neural Networks With Symmetric Inputs | 7, 6, 7 | 0.47 | Accept (Poster) |
| 271 | 6.67 | Minimal Random Code Learning: Getting Bits Back From Compressed Model Parameters | 7, 6, 7 | 0.47 | Accept (Poster) |
| 272 | 6.67 | Noodl: Provable Online Dictionary Learning And Sparse Coding | 7, 6, 7 | 0.47 | Accept (Poster) |
| 273 | 6.67 | Approximating Cnns With Bag-of-local-features Models Works Surprisingly Well On Imagenet | 6, 7, 7 | 0.47 | Accept (Poster) |
| 274 | 6.67 | Understanding Straight-through Estimator In Training Activation Quantized Neural Nets | 7, 7, 6 | 0.47 | Accept (Poster) |
| 275 | 6.67 | Antisymmetricrnn: A Dynamical System View On Recurrent Neural Networks | 7, 7, 6 | 0.47 | Accept (Poster) |
| 276 | 6.67 | The Limitations Of Adversarial Training And The Blind-spot Attack | 7, 7, 6 | 0.47 | Accept (Poster) |
| 277 | 6.67 | A Rotation-equivariant Convolutional Neural Network Model Of Primary Visual Cortex | 7, 5, 8 | 1.25 | Accept (Poster) |
| 278 | 6.67 | Generalized Tensor Models For Recurrent Neural Networks | 6, 7, 7 | 0.47 | Accept (Poster) |
| 279 | 6.67 | Adversarial Attacks On Graph Neural Networks Via Meta Learning | 7, 7, 6 | 0.47 | Accept (Poster) |
| 280 | 6.67 | Training For Faster Adversarial Robustness Verification Via Inducing Relu Stability | 8, 7, 5 | 1.25 | Accept (Poster) |
| 281 | 6.67 | Adv-bnn: Improved Adversarial Defense Through Robust Bayesian Neural Network | 7, 6, 7 | 0.47 | Accept (Poster) |
| 282 | 6.67 | Initialized Equilibrium Propagation For Backprop-free Training | 5, 8, 7 | 1.25 | Accept (Poster) |
| 283 | 6.67 | Learning To Design Rna | 6, 6, 8 | 0.94 | Accept (Poster) |
| 284 | 6.67 | Adef: An Iterative Algorithm To Construct Adversarial Deformations | 7, 7, 6 | 0.47 | Accept (Poster) |
| 285 | 6.67 | Stable Opponent Shaping In Differentiable Games | 8, 6, 6 | 0.94 | Accept (Poster) |
| 286 | 6.67 | Spigan: Privileged Adversarial Learning From Simulation | 6, 7, 7 | 0.47 | Accept (Poster) |
| 287 | 6.67 | Metropolis-hastings View On Variational Inference And Adversarial Training | 5, 6, 9 | 1.70 | Reject |
| 288 | 6.67 | Beyond Pixel Norm-balls: Parametric Adversaries Using An Analytically Differentiable Renderer | 7, 7, 6 | 0.47 | Accept (Poster) |
| 289 | 6.67 | Adaptive Posterior Learning: Few-shot Learning With A Surprise-based Memory Module | 6, 7, 7 | 0.47 | Accept (Poster) |
| 290 | 6.67 | Glue: A Multi-task Benchmark And Analysis Platform For Natural Language Understanding | 7, 5, 8 | 1.25 | Accept (Poster) |
| 291 | 6.67 | Looking For Elmo's Friends: Sentence-level Pretraining Beyond Language Modeling | 5, 7, 8 | 1.25 | Reject |
| 292 | 6.67 | Misgan: Learning From Incomplete Data With Generative Adversarial Networks | 7, 6, 7 | 0.47 | Accept (Poster) |
| 293 | 6.50 | Gradient Descent Provably Optimizes Over-parameterized Neural Networks | 3, 8, 8, 7 | 2.06 | Accept (Poster) |
| 294 | 6.50 | Relational Forward Models For Multi-agent Learning | 7, 6, 7, 6 | 0.50 | Accept (Poster) |
| 295 | 6.50 | Dynamic Channel Pruning: Feature Boosting And Suppression | 7, 6, 7, 6 | 0.50 | Accept (Poster) |
| 296 | 6.50 | Learning Protein Structure With A Differentiable Simulator | 6, 7, 7, 6 | 0.50 | Accept (Oral) |
| 297 | 6.50 | Preferences Implicit In The State Of The World | 6, 7, 6, 7 | 0.50 | Accept (Poster) |
| 298 | 6.50 | Peernets: Exploiting Peer Wisdom Against Adversarial Attacks | 7, 6 | 0.50 | Accept (Poster) |
| 299 | 6.33 | Multilingual Neural Machine Translation With Soft Decoupled Encoding | 6, 6, 7 | 0.47 | Accept (Poster) |
| 300 | 6.33 | Analysing Mathematical Reasoning Abilities Of Neural Models | 7, 6, 6 | 0.47 | Accept (Poster) |
| 301 | 6.33 | Minimum Divergence Vs. Maximum Margin: An Empirical Comparison On Seq2seq Models | 5, 7, 7 | 0.94 | Accept (Poster) |
| 302 | 6.33 | Self-tuning Networks: Bilevel Optimization Of Hyperparameters Using Structured Best-response Functions | 7, 6, 6 | 0.47 | Accept (Poster) |
| 303 | 6.33 | Learning Disentangled Representations With Reference-based Variational Autoencoders | 7, 6, 6 | 0.47 | Reject |
| 304 | 6.33 | Remember And Forget For Experience Replay | 7, 6, 6 | 0.47 | Reject |
| 305 | 6.33 | Dpsnet: End-to-end Deep Plane Sweep Stereo | 7, 6, 6 | 0.47 | Accept (Poster) |
| 306 | 6.33 | On Tighter Generalization Bounds For Deep Neural Networks: Cnns, Resnets, And Beyond | 5, 7, 7 | 0.94 | Reject |
| 307 | 6.33 | Measuring Compositionality In Representation Learning | 6, 6, 7 | 0.47 | Accept (Poster) |
| 308 | 6.33 | Reward Constrained Policy Optimization | 6, 7, 6 | 0.47 | Accept (Poster) |
| 309 | 6.33 | Regularized Learning For Domain Adaptation Under Label Shifts | 7, 6, 6 | 0.47 | Accept (Poster) |
| 310 | 6.33 | A Differentiable Self-disambiguated Sense Embedding Model Via Scaled Gumbel Softmax | 7, 6, 6 | 0.47 | Reject |
| 311 | 6.33 | Preventing Posterior Collapse With Delta-vaes | 6, 7, 6 | 0.47 | Accept (Poster) |
| 312 | 6.33 | Efficient Augmentation Via Data Subsampling | 6, 7, 6 | 0.47 | Accept (Poster) |
| 313 | 6.33 | Double Viterbi: Weight Encoding For High Compression Ratio And Fast On-chip Reconstruction For Deep Neural Network | 6, 6, 7 | 0.47 | Accept (Poster) |
| 314 | 6.33 | Rethinking The Value Of Network Pruning | 6, 6, 7 | 0.47 | Accept (Poster) |
| 315 | 6.33 | Aligning Artificial Neural Networks To The Brain Yields Shallow Recurrent Architectures | 5, 7, 7 | 0.94 | Reject |
| 316 | 6.33 | Equi-normalization Of Neural Networks | 7, 7, 5 | 0.94 | Accept (Poster) |
| 317 | 6.33 | Multi-domain Adversarial Learning | 5, 8, 6 | 1.25 | Accept (Poster) |
| 318 | 6.33 | Information Theoretic Lower Bounds On Negative Log Likelihood | 6, 7, 6 | 0.47 | Accept (Poster) |
| 319 | 6.33 | Dialogwae: Multimodal Response Generation With Conditional Wasserstein Auto-encoder | 7, 7, 5 | 0.94 | Accept (Poster) |
| 320 | 6.33 | Monge-ampère Flow For Generative Modeling | 7, 6, 6 | 0.47 | Reject |
| 321 | 6.33 | Nlprolog: Reasoning With Weak Unification For Natural Language Question Answering | 7, 5, 7 | 0.94 | Reject |
| 322 | 6.33 | Attentive Neural Processes | 6, 6, 7 | 0.47 | Accept (Poster) |
| 323 | 6.33 | Scalable Unbalanced Optimal Transport Using Generative Adversarial Networks | 6, 7, 6 | 0.47 | Accept (Poster) |
| 324 | 6.33 | Structured Neural Summarization | 6, 6, 7 | 0.47 | Accept (Poster) |
| 325 | 6.33 | Laplacian Networks: Bounding Indicator Function Smoothness For Neural Networks Robustness | 9, 5, 5 | 1.89 | Reject |
| 326 | 6.33 | Accumulation Bit-width Scaling For Ultra-low Precision Training Of Deep Networks | 6, 6, 7 | 0.47 | Accept (Poster) |
| 327 | 6.33 | Direct Optimization Through For Discrete Variational Auto-encoder | 7, 7, 5 | 0.94 | Reject |
| 328 | 6.33 | Fluctuation-dissipation Relations For Stochastic Gradient Descent | 8, 5, 6 | 1.25 | Accept (Poster) |
| 329 | 6.33 | Rnns Implicitly Implement Tensor-product Representations | 7, 6, 6 | 0.47 | Accept (Poster) |
| 330 | 6.33 | From Hard To Soft: Understanding Deep Network Nonlinearities Via Vector Quantization And Statistical Inference | 6, 6, 7 | 0.47 | Accept (Poster) |
| 331 | 6.33 | Von Mises-fisher Loss For Training Sequence To Sequence Models With Continuous Outputs | 6, 7, 6 | 0.47 | Accept (Poster) |
| 332 | 6.33 | Proxylessnas: Direct Neural Architecture Search On Target Task And Hardware | 6, 6, 7 | 0.47 | Accept (Poster) |
| 333 | 6.33 | Discriminator Rejection Sampling | 7, 6, 6 | 0.47 | Accept (Poster) |
| 334 | 6.33 | Visceral Machines: Risk-aversion In Reinforcement Learning With Intrinsic Physiological Rewards | 6, 6, 7 | 0.47 | Accept (Poster) |
| 335 | 6.33 | Fixup Initialization: Residual Learning Without Normalization | 7, 5, 7 | 0.94 | Accept (Poster) |
| 336 | 6.33 | Algorithmic Framework For Model-based Deep Reinforcement Learning With Theoretical Guarantees | 7, 6, 6 | 0.47 | Accept (Poster) |
| 337 | 6.33 | Understanding Composition Of Word Embeddings Via Tensor Decomposition | 7, 6, 6 | 0.47 | Accept (Poster) |
| 338 | 6.33 | Learning To Simulate | 6, 6, 7 | 0.47 | Accept (Poster) |
| 339 | 6.33 | Temporal Gaussian Mixture Layer For Videos | 6, 6, 7 | 0.47 | Reject |
| 340 | 6.33 | Dher: Hindsight Experience Replay For Dynamic Goals | 6, 7, 6 | 0.47 | Accept (Poster) |
| 341 | 6.33 | L2-nonexpansive Neural Networks | 8, 6, 5 | 1.25 | Accept (Poster) |
| 342 | 6.33 | Generating Liquid Simulations With Deformation-aware Neural Networks | 7, 7, 5 | 0.94 | Accept (Poster) |
| 343 | 6.33 | Camou: Learning Physical Vehicle Camouflages To Adversarially Attack Detectors In The Wild | 4, 8, 7 | 1.70 | Accept (Poster) |
| 344 | 6.33 | Timbretron: A Wavenet(cyclegan(cqt(audio))) Pipeline For Musical Timbre Transfer | 4, 7, 8 | 1.70 | Accept (Poster) |
| 345 | 6.33 | Synthetic Datasets For Neural Program Synthesis | 7, 6, 6 | 0.47 | Accept (Poster) |
| 346 | 6.33 | Delta: Deep Learning Transfer Using Feature Map With Attention For Convolutional Networks | 7, 6, 6 | 0.47 | Accept (Poster) |
| 347 | 6.33 | Neural Speed Reading With Structural-jump-lstm | 7, 5, 7 | 0.94 | Accept (Poster) |
| 348 | 6.33 | Policy Generalization In Capacity-limited Reinforcement Learning | 7, 7, 5 | 0.94 | Reject |
| 349 | 6.33 | Large Scale Graph Learning From Smooth Signals | 7, 5, 7 | 0.94 | Accept (Poster) |
| 350 | 6.33 | Post Selection Inference With Incomplete Maximum Mean Discrepancy Estimator | 6, 5, 8 | 1.25 | Accept (Poster) |
| 351 | 6.33 | Stable Recurrent Models | 7, 6, 6 | 0.47 | Accept (Poster) |
| 352 | 6.33 | On The Relation Between The Sharpest Directions Of Dnn Loss And The Sgd Step Length | 6, 6, 7 | 0.47 | Accept (Poster) |
| 353 | 6.33 | Learning To Represent Edits | 7, 6, 6 | 0.47 | Accept (Poster) |
| 354 | 6.33 | On Self Modulation For Generative Adversarial Networks | 7, 5, 7 | 0.94 | Accept (Poster) |
| 355 | 6.33 | Sgd Converges To Global Minimum In Deep Learning Via Star-convex Path | 6, 5, 8 | 1.25 | Accept (Poster) |
| 356 | 6.33 | Neural Graph Evolution: Towards Efficient Automatic Robot Design | 5, 8, 6 | 1.25 | Accept (Poster) |
| 357 | 6.33 | The Relativistic Discriminator: A Key Element Missing From Standard Gan | 6, 6, 7 | 0.47 | Accept (Poster) |
| 358 | 6.33 | Augmented Cyclic Adversarial Learning For Low Resource Domain Adaptation | 8, 6, 5 | 1.25 | Accept (Poster) |
| 359 | 6.33 | Seq2slate: Re-ranking And Slate Optimization With Rnns | 6, 6, 7 | 0.47 | Reject |
| 360 | 6.33 | A Novel Variational Family For Hidden Non-linear Markov Models | 5, 8, 6 | 1.25 | Reject |
| 361 | 6.33 | Hierarchical Visuomotor Control Of Humanoids | 5, 8, 6 | 1.25 | Accept (Poster) |
| 362 | 6.33 | Single Shot Neural Architecture Search Via Direct Sparse Optimization | 6, 6, 7 | 0.47 | Reject |
| 363 | 6.33 | Beyond Greedy Ranking: Slate Optimization Via List-cvae | 6, 6, 7 | 0.47 | Accept (Poster) |
| 364 | 6.33 | Local Critic Training Of Deep Neural Networks | 6, 6, 7 | 0.47 | Reject |
| 365 | 6.33 | On The Sensitivity Of Adversarial Robustness To Input Data Distributions | 7, 5, 7 | 0.94 | Accept (Poster) |
| 366 | 6.33 | A Rotation And A Translation Suffice: Fooling Cnns With Simple Transformations | 8, 6, 5 | 1.25 | Reject |
| 367 | 6.33 | Verification Of Non-linear Specifications For Neural Networks | 7, 5, 7 | 0.94 | Accept (Poster) |
| 368 | 6.33 | Visual Reasoning By Progressive Module Networks | 6, 7, 6 | 0.47 | Accept (Poster) |
| 369 | 6.33 | Hierarchical Interpretations For Neural Network Predictions | 7, 6, 6 | 0.47 | Accept (Poster) |
| 370 | 6.33 | Robust Estimation Via Generative Adversarial Networks | 7, 5, 7 | 0.94 | Accept (Poster) |
| 371 | 6.33 | Large-scale Answerer In Questioner's Mind For Visual Dialog Question Generation | 6, 6, 7 | 0.47 | Accept (Poster) |
| 372 | 6.33 | Stochastic Gradient Descent Learns State Equations With Nonlinear Activations | 7, 5, 7 | 0.94 | Reject |
| 373 | 6.33 | Selfless Sequential Learning | 7, 6, 6 | 0.47 | Accept (Poster) |
| 374 | 6.33 | Mae: Mutual Posterior-divergence Regularization For Variational Autoencoders | 7, 6, 6 | 0.47 | Accept (Poster) |
| 375 | 6.33 | Information Asymmetry In Kl-regularized Rl | 7, 5, 7 | 0.94 | Accept (Poster) |
| 376 | 6.33 | Poincare Glove: Hyperbolic Word Embeddings | 6, 6, 7 | 0.47 | Accept (Poster) |
| 377 | 6.33 | From Language To Goals: Inverse Reinforcement Learning For Vision-based Instruction Following | 5, 5, 9 | 1.89 | Accept (Poster) |
| 378 | 6.33 | Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method For Image Restoration | 6, 6, 7 | 0.47 | Accept (Poster) |
| 379 | 6.33 | Soft Q-learning With Mutual-information Regularization | 7, 6, 6 | 0.47 | Accept (Poster) |
| 380 | 6.33 | M^3rl: Mind-aware Multi-agent Management Reinforcement Learning | 7, 6, 6 | 0.47 | Accept (Poster) |
| 381 | 6.33 | Invariance And Inverse Stability Under Relu | 6, 6, 7 | 0.47 | Reject |
| 382 | 6.33 | Diversity And Depth In Per-example Routing Models | 7, 6, 6 | 0.47 | Accept (Poster) |
| 383 | 6.33 | Revealing Interpretable Object Representations From Human Behavior | 7, 7, 5 | 0.94 | Accept (Poster) |
| 384 | 6.33 | Learning Factorized Representations For Open-set Domain Adaptation | 6, 6, 7 | 0.47 | Accept (Poster) |
| 385 | 6.33 | Functional Variational Bayesian Neural Networks | 7, 6, 6 | 0.47 | Accept (Poster) |
| 386 | 6.33 | Emi: Exploration With Mutual Information Maximizing State And Action Embeddings | 5, 7, 7 | 0.94 | Reject |
| 387 | 6.33 | Modeling The Long Term Future In Model-based Reinforcement Learning | 6, 6, 7 | 0.47 | Accept (Poster) |
| 388 | 6.33 | Deepobs: A Deep Learning Optimizer Benchmark Suite | 6, 6, 7 | 0.47 | Accept (Poster) |
| 389 | 6.33 | Empirical Bounds On Linear Regions Of Deep Rectifier Networks | 6, 7, 6 | 0.47 | Reject |
| 390 | 6.33 | Signsgd With Majority Vote Is Communication Efficient And Fault Tolerant | 6, 6, 7 | 0.47 | Accept (Poster) |
| 391 | 6.33 | Babyai: A Platform To Study The Sample Efficiency Of Grounded Language Learning | 6, 7, 6 | 0.47 | Accept (Poster) |
| 392 | 6.33 | Excessive Invariance Causes Adversarial Vulnerability | 7, 6, 6 | 0.47 | Accept (Poster) |
| 393 | 6.33 | Overcoming The Disentanglement Vs Reconstruction Trade-off Via Jacobian Supervision | 7, 7, 5 | 0.94 | Accept (Poster) |
| 394 | 6.33 | Feature-wise Bias Amplification | 6, 7, 6 | 0.47 | Accept (Poster) |
| 395 | 6.33 | Why Do Deep Convolutional Networks Generalize So Poorly To Small Image Transformations? | 7, 7, 5 | 0.94 | Reject |
| 396 | 6.33 | Hierarchical Generative Modeling For Controllable Speech Synthesis | 8, 6, 5 | 1.25 | Accept (Poster) |
| 397 | 6.33 | Multi-step Retriever-reader Interaction For Scalable Open-domain Question Answering | 6, 6, 7 | 0.47 | Accept (Poster) |
| 398 | 6.33 | Improved Gradient Estimators For Stochastic Discrete Variables | 7, 6, 6 | 0.47 | Reject |
| 399 | 6.33 | Characterizing Audio Adversarial Examples Using Temporal Dependency | 6, 6, 7 | 0.47 | Accept (Poster) |
| 400 | 6.33 | Data-dependent Coresets For Compressing Neural Networks With Applications To Generalization Bounds | 6, 7, 6 | 0.47 | Accept (Poster) |
| 401 | 6.33 | Meta-learning With Latent Embedding Optimization | 6, 5, 8 | 1.25 | Accept (Poster) |
| 402 | 6.33 | Probabilistic Planning With Sequential Monte Carlo Methods | 8, 6, 5 | 1.25 | Accept (Poster) |
| 403 | 6.33 | Learning What You Can Do Before Doing Anything | 7, 6, 6 | 0.47 | Accept (Poster) |
| 404 | 6.33 | Model-predictive Policy Learning With Uncertainty Regularization For Driving In Dense Traffic | 6, 6, 7 | 0.47 | Accept (Poster) |
| 405 | 6.33 | Spreading Vectors For Similarity Search | 6, 7, 6 | 0.47 | Accept (Poster) |
| 406 | 6.33 | Learning When To Communicate At Scale In Multiagent Cooperative And Competitive Tasks | 7, 6, 6 | 0.47 | Accept (Poster) |
| 407 | 6.33 | Opportunistic Learning: Budgeted Cost-sensitive Learning From Data Streams | 6, 6, 7 | 0.47 | Accept (Poster) |
| 408 | 6.33 | The Singular Values Of Convolutional Layers | 8, 4, 7 | 1.70 | Accept (Poster) |
| 409 | 6.33 | Exemplar Guided Unsupervised Image-to-image Translation With Semantic Consistency | 6, 5, 8 | 1.25 | Accept (Poster) |
| 410 | 6.33 | Learning-based Frequency Estimation Algorithms | 7, 6, 6 | 0.47 | Accept (Poster) |
| 411 | 6.33 | Max-mig: An Information Theoretic Approach For Joint Learning From Crowds | 6, 6, 7 | 0.47 | Accept (Poster) |
| 412 | 6.33 | Multiple-attribute Text Rewriting | 7, 6, 6 | 0.47 | Accept (Poster) |
| 413 | 6.33 | Harmonizing Maximum Likelihood With Gans For Multimodal Conditional Generation | 8, 7, 4 | 1.70 | Accept (Poster) |
| 414 | 6.33 | Variational Autoencoder With Arbitrary Conditioning | 7, 6, 6 | 0.47 | Accept (Poster) |
| 415 | 6.33 | A New Dog Learns Old Tricks: Rl Finds Classic Optimization Algorithms | 6, 6, 7 | 0.47 | Accept (Poster) |
| 416 | 6.33 | Generating Multi-agent Trajectories Using Programmatic Weak Supervision | 7, 6, 6 | 0.47 | Accept (Poster) |
| 417 | 6.25 | Maximal Divergence Sequential Autoencoder For Binary Software Vulnerability Detection | 6, 7, 6, 6 | 0.43 | Accept (Poster) |
| 418 | 6.25 | Neural Tts Stylization With Adversarial And Collaborative Games | 6, 6, 6, 7 | 0.43 | Accept (Poster) |
| 419 | 6.25 | Competitive Experience Replay | 5, 7, 6, 7 | 0.83 | Accept (Poster) |
| 420 | 6.25 | Bayesian Policy Optimization For Model Uncertainty | 5, 6, 7, 7 | 0.83 | Accept (Poster) |
| 421 | 6.25 | Sinkhorn Autoencoders | 5, 6, 7, 7 | 0.83 | Reject |
| 422 | 6.25 | Two-timescale Networks For Nonlinear Value Function Approximation | 6, 7, 6, 6 | 0.43 | Accept (Poster) |
| 423 | 6.25 | Lyapunov-based Safe Policy Optimization | 6, 6, 8, 5 | 1.09 | Reject |
| 424 | 6.25 | Towards Consistent Performance On Atari Using Expert Demonstrations | 6, 5, 7, 7 | 0.83 | Reject |
| 425 | 6.00 | Learning Implicit Generative Models By Teaching Explicit Ones | 7, 5, 6 | 0.82 | Reject |
| 426 | 6.00 | Emerging Disentanglement In Auto-encoder Based Unsupervised Image Content Transfer | 6, 6, 6 | 0.00 | Accept (Poster) |
| 427 | 6.00 | Projective Subspace Networks For Few-shot Learning | 6, 6, 6 | 0.00 | Reject |
| 428 | 6.00 | Environment Probing Interaction Policies | 6, 6, 6 | 0.00 | Accept (Poster) |
| 429 | 6.00 | Stcn: Stochastic Temporal Convolutional Networks | 6, 6, 6 | 0.00 | Accept (Poster) |
| 430 | 6.00 | Capsule Graph Neural Network | 6, 6, 6 | 0.00 | Accept (Poster) |
| 431 | 6.00 | Top-down Neural Model For Formulae | 6, 6, 6 | 0.00 | Accept (Poster) |
| 432 | 6.00 | Tarmac: Targeted Multi-agent Communication | 6, 6, 6 | 0.00 | Reject |
| 433 | 6.00 | Coarse-grain Fine-grain Coattention Network For Multi-evidence Question Answering | 7, 7, 4 | 1.41 | Accept (Poster) |
| 434 | 6.00 | Learning Programmatically Structured Representations With Perceptor Gradients | 7, 6, 5 | 0.82 | Accept (Poster) |
| 435 | 6.00 | Graph Transformer | 6, 6, 6 | 0.00 | Reject |
| 436 | 6.00 | Feed-forward Propagation In Probabilistic Neural Networks With Categorical And Max Layers | 6, 6, 6 | 0.00 | Accept (Poster) |
| 437 | 6.00 | Discriminative Active Learning | 8, 6, 4 | 1.63 | Reject |
| 438 | 6.00 | Neural Logic Machines | 6, 7, 5 | 0.82 | Accept (Poster) |
| 439 | 6.00 | Improving Sequence-to-sequence Learning Via Optimal Transport | 6, 7, 5 | 0.82 | Accept (Poster) |
| 440 | 6.00 | Backpropamine: Training Self-modifying Neural Networks With Differentiable Neuromodulated Plasticity | 4, 9, 5 | 2.16 | Accept (Poster) |
| 441 | 6.00 | Learning To Propagate Labels: Transductive Propagation Network For Few-shot Learning | 5, 6, 7 | 0.82 | Accept (Poster) |
| 442 | 6.00 | Neural Mmo: A Massively Multiplayer Game Environment For Intelligent Agents | 6, 5, 7 | 0.82 | Reject |
| 443 | 6.00 | Learnable Embedding Space For Efficient Neural Architecture Compression | 5, 7, 6 | 0.82 | Accept (Poster) |
| 444 | 6.00 | Ib-gan: Disentangled Representation Learning With Information Bottleneck Gan | 7, 7, 4 | 1.41 | Reject |
| 445 | 6.00 | Interpolation-prediction Networks For Irregularly Sampled Time Series | 6, 6, 6 | 0.00 | Accept (Poster) |
| 446 | 6.00 | Learning Models For Visual 3d Localization With Implicit Mapping | 7, 5, 6 | 0.82 | Reject |
| 447 | 6.00 | Transfer Learning For Related Reinforcement Learning Tasks Via Image-to-image Translation | 7, 7, 4 | 1.41 | Reject |
| 448 | 6.00 | Countering Language Drift Via Grounding | 6, 6, 6 | 0.00 | Reject |
| 449 | 6.00 | H-detach: Modifying The Lstm Gradient Towards Better Optimization | 5, 6, 7 | 0.82 | Accept (Poster) |
| 450 | 6.00 | Rigorous Agent Evaluation: An Adversarial Approach To Uncover Catastrophic Failures | 6, 6, 6 | 0.00 | Accept (Poster) |
| 451 | 6.00 | Multi-class Classification Without Multi-class Labels | 6, 7, 5 | 0.82 | Accept (Poster) |
| 452 | 6.00 | Dadam: A Consensus-based Distributed Adaptive Gradient Method For Online Optimization | 8, 4, 6 | 1.63 | Reject |
| 453 | 6.00 | A Biologically Inspired Visual Working Memory For Deep Networks | 4, 5, 9 | 2.16 | Reject |
| 454 | 6.00 | Multi-agent Dual Learning | 6, 6, 6 | 0.00 | Accept (Poster) |
| 455 | 6.00 | Dirichlet Variational Autoencoder | 6, 5, 7 | 0.82 | Reject |
| 456 | 6.00 | Graph Convolutional Network With Sequential Attention For Goal-oriented Dialogue Systems | 5, 6, 7 | 0.82 | Reject |
| 457 | 6.00 | Unsupervised Neural Multi-document Abstractive Summarization Of Reviews | 5, 4, 9 | 2.16 | Reject |
| 458 | 6.00 | Semi-supervised Learning With Multi-domain Sentiment Word Embeddings | 6, 6, 6 | 0.00 | Reject |
| 459 | 6.00 | Guiding Policies With Language Via Meta-learning | 6, 6, 6 | 0.00 | Accept (Poster) |
| 460 | 6.00 | Improving Sentence Representations With Multi-view Frameworks | 7, 6, 5 | 0.82 | Reject |
| 461 | 6.00 | Estimating Information Flow In Dnns | 7, 7, 4 | 1.41 | Reject |
| 462 | 6.00 | Identifying Bias In Ai Using Simulation | 5, 7, 6 | 0.82 | Reject |
| 463 | 6.00 | Graph Wavelet Neural Network | 4, 7, 7 | 1.41 | Accept (Poster) |
| 464 | 6.00 | Recurrent Kalman Networks: Factorized Inference In High-dimensional Deep Feature Spaces | 6, 6, 6 | 0.00 | Reject |
| 465 | 6.00 | Learning Procedural Abstractions And Evaluating Discrete Latent Temporal Structure | 6, 5, 7 | 0.82 | Accept (Poster) |
| 466 | 6.00 | Adversarial Information Factorization | 6, 6, 6 | 0.00 | Reject |
| 467 | 6.00 | Bnn+: Improved Binary Network Training | 8, 6, 4 | 1.63 | Reject |
| 468 | 6.00 | An Empirical Study Of Binary Neural Networks' Optimisation | 8, 6, 4 | 1.63 | Accept (Poster) |
| 469 | 6.00 | Graph U-net | 7, 4, 7 | 1.41 | Reject |
| 470 | 6.00 | Distribution-interpolation Trade Off In Generative Models | 6, 7, 5 | 0.82 | Accept (Poster) |
| 471 | 6.00 | A Closer Look At Few-shot Classification | 6, 6, 6 | 0.00 | Accept (Poster) |
| 472 | 6.00 | Decoupled Weight Decay Regularization | 6, 7, 5 | 0.82 | Accept (Poster) |
| 473 | 6.00 | An Adaptive Homeostatic Algorithm For The Unsupervised Learning Of Visual Features | 5, 4, 9 | 2.16 | Reject |
| 474 | 6.00 | Efficient Two-step Adversarial Defense For Deep Neural Networks | 5, 6, 7 | 0.82 | Reject |
| 475 | 6.00 | Cramer-wold Autoencoder | 5, 7, 6 | 0.82 | Reject |
| 476 | 6.00 | Precision Highway For Ultra Low-precision Quantization | 6, 7, 5 | 0.82 | Reject |
| 477 | 6.00 | Graphseq2seq: Graph-sequence-to-sequence For Neural Machine Translation | 6, 6, 6 | 0.00 | Reject |
| 478 | 6.00 | Learning Multi-level Hierarchies With Hindsight | 6, 7, 5 | 0.82 | Accept (Poster) |
| 479 | 6.00 | The Variational Deficiency Bottleneck | 5, 7, 6 | 0.82 | Reject |
| 480 | 6.00 | Universal Successor Features Approximators | 7, 5, 6 | 0.82 | Accept (Poster) |
| 481 | 6.00 | Deep Lagrangian Networks: Using Physics As Model Prior For Deep Learning | 7, 4, 7 | 1.41 | Accept (Poster) |
| 482 | 6.00 | Neural Program Repair By Jointly Learning To Localize And Repair | 6, 7, 5 | 0.82 | Accept (Poster) |
| 483 | 6.00 | Measuring And Regularizing Networks In Function Space | 6, 6, 6 | 0.00 | Accept (Poster) |
| 484 | 6.00 | Anytime Minibatch: Exploiting Stragglers In Online Distributed Optimization | 4, 7, 7 | 1.41 | Accept (Poster) |
| 485 | 6.00 | Stochastic Gradient Push For Distributed Deep Learning | 6, 6, 6 | 0.00 | Reject |
| 486 | 6.00 | A Direct Approach To Robust Deep Learning Using Adversarial Networks | 5, 7, 6 | 0.82 | Accept (Poster) |
| 487 | 6.00 | Gamepad: A Learning Environment For Theorem Proving | 4, 7, 7 | 1.41 | Accept (Poster) |
| 488 | 6.00 | Don’t Judge A Book By Its Cover - On The Dynamics Of Recurrent Neural Networks | 5, 7, 6 | 0.82 | Reject |
| 489 | 6.00 | Uncovering Surprising Behaviors In Reinforcement Learning Via Worst-case Analysis | 5, 7, 6 | 0.82 | Reject |
| 490 | 6.00 | Language Model Pre-training For Hierarchical Document Representations | 6, 6, 6 | 0.00 | Reject |
| 491 | 6.00 | Manifold Mixup: Learning Better Representations By Interpolating Hidden States | 6, 4, 8 | 1.63 | Reject |
| 492 | 6.00 | Dimension-free Bounds For Low-precision Training | 6, 6, 6 | 0.00 | Reject |
| 493 | 6.00 | Kernel Rnn Learning (kernl) | 5, 7, 6 | 0.82 | Accept (Poster) |
| 494 | 6.00 | Datnet: Dual Adversarial Transfer For Low-resource Named Entity Recognition | 6, 6, 6 | 0.00 | Reject |
| 495 | 6.00 | Optimistic Mirror Descent In Saddle-point Problems: Going The Extra (gradient) Mile | 7, 6, 5 | 0.82 | Accept (Poster) |
| 496 | 6.00 | Deep Convolutional Networks As Shallow Gaussian Processes | 5, 8, 5 | 1.41 | Accept (Poster) |
| 497 | 6.00 | On The Computational Inefficiency Of Large Batch Sizes For Stochastic Gradient Descent | 5, 8, 5 | 1.41 | Reject |
| 498 | 6.00 | Wasserstein Barycenter Model Ensembling | 6, 6, 6 | 0.00 | Accept (Poster) |
| 499 | 6.00 | Computing Committor Functions For The Study Of Rare Events Using Deep Learning With Importance Sampling | 6, 6, 5, 7 | 0.71 | Reject |
| 500 | 6.00 | Scaling Shared Model Governance Via Model Splitting | 4, 5, 9 | 2.16 | Reject |
| 501 | 6.00 | Generative Feature Matching Networks | 6, 6, 6, 6 | 0.00 | Reject |
| 502 | 6.00 | Mixed Precision Quantization Of Convnets Via Differentiable Neural Architecture Search | 5, 6, 7, 6 | 0.71 | Reject |
| 503 | 6.00 | Alignment Based Mathching Networks For One-shot Classification And Open-set Recognition | 7, 6, 7, 4 | 1.22 | Reject |
| 504 | 6.00 | Unsupervised Adversarial Image Reconstruction | 6, 8, 4 | 1.63 | Accept (Poster) |
| 505 | 6.00 | Adversarial Reprogramming Of Neural Networks | 4, 6, 8 | 1.63 | Accept (Poster) |
| 506 | 6.00 | Reinforcement Learning With Perturbed Rewards | 6, 6, 6 | 0.00 | Reject |
| 507 | 6.00 | Variational Bayesian Phylogenetic Inference | 6, 5, 7 | 0.82 | Accept (Poster) |
| 508 | 6.00 | Efficient Multi-objective Neural Architecture Search Via Lamarckian Evolution | 6, 6, 6 | 0.00 | Accept (Poster) |
| 509 | 6.00 | On-policy Trust Region Policy Optimisation With Replay Buffers | 7, 6, 5 | 0.82 | Reject |
| 510 | 6.00 | A Max-affine Spline Perspective Of Recurrent Neural Networks | 6, 6, 6 | 0.00 | Accept (Poster) |
| 511 | 6.00 | Explicit Information Placement On Latent Variables Using Auxiliary Generative Modelling Task | 6, 7, 5 | 0.82 | Reject |
| 512 | 6.00 | Code2seq: Generating Sequences From Structured Representations Of Code | 6, 7, 5 | 0.82 | Accept (Poster) |
| 513 | 6.00 | Overcoming Catastrophic Forgetting For Continual Learning Via Model Adaptation | 5, 6, 7 | 0.82 | Accept (Poster) |
| 514 | 6.00 | Dl2: Training And Querying Neural Networks With Logic | 7, 5, 6 | 0.82 | Reject |
| 515 | 6.00 | Cost-sensitive Robustness Against Adversarial Examples | 5, 5, 8 | 1.41 | Accept (Poster) |
| 516 | 6.00 | Robust Conditional Generative Adversarial Networks | 6, 6, 6 | 0.00 | Accept (Poster) |
| 517 | 6.00 | Unsupervised Discovery Of Parts, Structure, And Dynamics | 6, 6, 7, 5 | 0.71 | Accept (Poster) |
| 518 | 6.00 | Learning To Learn With Conditional Class Dependencies | 6, 8, 4 | 1.63 | Accept (Poster) |
| 519 | 6.00 | Aggregated Momentum: Stability Through Passive Damping | 7, 6, 5 | 0.82 | Accept (Poster) |
| 520 | 6.00 | Discovery Of Natural Language Concepts In Individual Units Of Cnns | 6, 6, 6 | 0.00 | Accept (Poster) |
| 521 | 6.00 | Generative Predecessor Models For Sample-efficient Imitation Learning | 6, 5, 7 | 0.82 | Accept (Poster) |
| 522 | 6.00 | Image Deformation Meta-network For One-shot Learning | 5, 7, 6 | 0.82 | N/A |
| 523 | 6.00 | Adversarial Vulnerability Of Neural Networks Increases With Input Dimension | 6, 4, 9, 5 | 1.87 | Reject |
| 524 | 6.00 | How To Train Your Maml | 5, 6, 7 | 0.82 | Accept (Poster) |
| 525 | 6.00 | Learning Kolmogorov Models For Binary Random Variables | 5, 5, 8 | 1.41 | Reject |
| 526 | 6.00 | Unsupervised Hyper-alignment For Multilingual Word Embeddings | 5, 6, 7 | 0.82 | Accept (Poster) |
| 527 | 6.00 | Adversarial Imitation Via Variational Inverse Reinforcement Learning | 6, 6, 6 | 0.00 | Accept (Poster) |
| 528 | 6.00 | Improving The Generalization Of Adversarial Training With Domain Adaptation | 6, 6, 6 | 0.00 | Accept (Poster) |
| 529 | 6.00 | Formal Limitations On The Measurement Of Mutual Information | 8, 6, 4 | 1.63 | Reject |
| 530 | 6.00 | Online Hyperparameter Adaptation Via Amortized Proximal Optimization | 6, 5, 7 | 0.82 | Reject |
| 531 | 6.00 | Machine Translation With Weakly Paired Bilingual Documents | 6, 5, 7 | 0.82 | Reject |
| 532 | 6.00 | Greedy Attack And Gumbel Attack: Generating Adversarial Examples For Discrete Data | 3, 6, 8, 7 | 1.87 | Reject |
| 533 | 6.00 | Variance Networks: When Expectation Does Not Meet Your Expectations | 6, 6, 6 | 0.00 | Accept (Poster) |
| 534 | 6.00 | Shallow Learning For Deep Networks | 6, 5, 7 | 0.82 | Reject |
| 535 | 6.00 | Success At Any Cost: Value Constrained Model-free Continuous Control | 7, 5, 6 | 0.82 | Reject |
| 536 | 6.00 | Language Modeling Teaches You More Syntax Than Translation Does: Lessons Learned Through Auxiliary Task Analysis | 6, 5, 7 | 0.82 | Reject |
| 537 | 6.00 | Hierarchical Reinforcement Learning Via Advantage-weighted Information Maximization | 5, 6, 7 | 0.82 | Accept (Poster) |
| 538 | 6.00 | Mean-field Analysis Of Batch Normalization | 7, 6, 5 | 0.82 | Reject |
| 539 | 6.00 | Learning From Positive And Unlabeled Data With A Selection Bias | 7, 6, 5 | 0.82 | Accept (Poster) |
| 540 | 6.00 | A Kernel Random Matrix-based Approach For Sparse Pca | 5, 7, 6 | 0.82 | Accept (Poster) |
541 |
6.00 |
Per-tensor Fixed-point Quantization Of The Back-propagation Algorithm |
7, 3, 8 |
2.16 |
Accept (Poster) |
542 |
6.00 |
Fortified Networks: Improving The Robustness Of Deep Networks By Modeling The Manifold Of Hidden Representations |
4, 5, 9, 6 |
1.87 |
Reject |
543 |
6.00 |
A Comprehensive, Application-oriented Study Of Catastrophic Forgetting In Dnns |
5, 6, 7 |
0.82 |
Accept (Poster) |
544 |
6.00 |
Interactive Agent Modeling By Learning To Probe |
6, 6, 6, 6 |
0.00 |
Reject |
545 |
6.00 |
Learning Heuristics For Automated Reasoning Through Reinforcement Learning |
5, 6, 7 |
0.82 |
Reject |
546 |
6.00 |
Stochastic Prediction Of Multi-agent Interactions From Partial Observations |
6, 6, 6 |
0.00 |
Accept (Poster) |
547 |
6.00 |
Combinatorial Attacks On Binarized Neural Networks |
5, 6, 7 |
0.82 |
Accept (Poster) |
548 |
6.00 |
Invase: Instance-wise Variable Selection Using Neural Networks |
6, 6, 6 |
0.00 |
Accept (Poster) |
549 |
5.75 |
On The Margin Theory Of Feedforward Neural Networks |
5, 5, 6, 7 |
0.83 |
Reject |
550 |
5.75 |
Neural Networks For Modeling Source Code Edits |
5, 6, 6, 6 |
0.43 |
Reject |
551 |
5.75 |
Efficiently Testing Local Optimality And Escaping Saddles For Relu Networks |
3, 6, 6, 8 |
1.79 |
Accept (Poster) |
552 |
5.75 |
Automata Guided Skill Composition |
7, 5, 6, 5 |
0.83 |
Reject |
553 |
5.67 |
A More Globally Accurate Dimensionality Reduction Method Using Triplets |
6, 5, 6 |
0.47 |
Reject |
554 |
5.67 |
Eddi: Efficient Dynamic Discovery Of High-value Information With Partial Vae |
6, 5, 6 |
0.47 |
Reject |
555 |
5.67 |
Deep Imitative Models For Flexible Inference, Planning, And Control |
5, 6, 6 |
0.47 |
Reject |
556 |
5.67 |
Set Transformer |
5, 6, 6 |
0.47 |
Reject |
557 |
5.67 |
Transfer Learning Via Unsupervised Task Discovery For Visual Question Answering |
4, 5, 8 |
1.70 |
N/A |
558 |
5.67 |
Information Regularized Neural Networks |
6, 6, 5 |
0.47 |
Reject |
559 |
5.67 |
An Information-theoretic Metric Of Transferability For Task Transfer Learning |
5, 6, 6 |
0.47 |
Reject |
560 |
5.67 |
Detecting Out-of-distribution Samples Using Low-order Deep Features Statistics |
5, 5, 7 |
0.94 |
Reject |
561 |
5.67 |
Laplacian Smoothing Gradient Descent |
6, 6, 5 |
0.47 |
Reject |
562 |
5.67 |
Cross-task Knowledge Transfer For Visually-grounded Navigation |
7, 5, 5 |
0.94 |
Reject |
563 |
5.67 |
Super-resolution Via Conditional Implicit Maximum Likelihood Estimation |
5, 6, 6 |
0.47 |
Reject |
564 |
5.67 |
Mode Normalization |
5, 6, 6 |
0.47 |
Accept (Poster) |
565 |
5.67 |
Learning Cross-lingual Sentence Representations Via A Multi-task Dual-encoder Model |
7, 4, 6 |
1.25 |
Reject |
566 |
5.67 |
Ppo-cma: Proximal Policy Optimization With Covariance Matrix Adaptation |
4, 9, 4 |
2.36 |
Reject |
567 |
5.67 |
A Resizable Mini-batch Gradient Descent Based On A Multi-armed Bandit |
6, 7, 4 |
1.25 |
Reject |
568 |
5.67 |
Understanding Gans Via Generalization Analysis For Disconnected Support |
6, 5, 6 |
0.47 |
Reject |
569 |
5.67 |
Open-ended Content-style Recombination Via Leakage Filtering |
7, 5, 5 |
0.94 |
Reject |
570 |
5.67 |
Adversarial Audio Synthesis |
5, 6, 6 |
0.47 |
Accept (Poster) |
571 |
5.67 |
A Variational Dirichlet Framework For Out-of-distribution Detection |
6, 5, 6 |
0.47 |
Reject |
572 |
5.67 |
Stochastic Adversarial Video Prediction |
6, 6, 5 |
0.47 |
Reject |
573 |
5.67 |
Transfer Learning For Sequences Via Learning To Collocate |
6, 5, 6 |
0.47 |
Accept (Poster) |
574 |
5.67 |
Talk The Walk: Navigating Grids In New York City Through Grounded Dialogue |
6, 7, 4 |
1.25 |
Reject |
575 |
5.67 |
Random Mesh Projectors For Inverse Problems |
6, 7, 4 |
1.25 |
Accept (Poster) |
576 |
5.67 |
Infobot: Transfer And Exploration Via The Information Bottleneck |
7, 7, 3 |
1.89 |
Accept (Poster) |
577 |
5.67 |
Trace-back Along Capsules And Its Application On Semantic Segmentation |
6, 6, 5 |
0.47 |
Reject |
578 |
5.67 |
Adversarially Learned Mixture Model |
6, 5, 6 |
0.47 |
Reject |
579 |
5.67 |
Unsupervised Document Representation Using Partition Word-vectors Averaging |
6, 7, 4 |
1.25 |
Reject |
580 |
5.67 |
Deep Recurrent Gaussian Process With Variational Sparse Spectrum Approximation |
5, 5, 7 |
0.94 |
Reject |
581 |
5.67 |
A Frank-wolfe Framework For Efficient And Effective Adversarial Attacks |
5, 5, 7 |
0.94 |
Reject |
582 |
5.67 |
Adaptive Gradient Methods With Dynamic Bound Of Learning Rate |
7, 4, 6 |
1.25 |
Accept (Poster) |
583 |
5.67 |
Convolutional Crfs For Semantic Segmentation |
7, 4, 6 |
1.25 |
Reject |
584 |
5.67 |
Ppd: Permutation Phase Defense Against Adversarial Examples In Deep Learning |
6, 7, 4 |
1.25 |
Reject |
585 |
5.67 |
Generalizable Adversarial Training Via Spectral Normalization |
6, 5, 6 |
0.47 |
Accept (Poster) |
586 |
5.67 |
Cbow Is Not All You Need: Combining Cbow With The Compositional Matrix Space Model |
6, 5, 6 |
0.47 |
Accept (Poster) |
587 |
5.67 |
Soseleto: A Unified Approach To Transfer Learning And Training With Noisy Labels |
7, 5, 5 |
0.94 |
Reject |
588 |
5.67 |
Revisiting Reweighted Wake-sleep |
5, 6, 6 |
0.47 |
Reject |
589 |
5.67 |
Lipschitz Regularized Deep Neural Networks Generalize |
4, 6, 7 |
1.25 |
Reject |
590 |
5.67 |
Necst: Neural Joint Source-channel Coding |
6, 4, 7 |
1.25 |
Reject |
591 |
5.67 |
Multi-objective Training Of Generative Adversarial Networks With Multiple Discriminators |
6, 5, 6 |
0.47 |
Reject |
592 |
5.67 |
Small Steps And Giant Leaps: Minimal Newton Solvers For Deep Learning |
7, 7, 3 |
1.89 |
Reject |
593 |
5.67 |
Actrce: Augmenting Experience Via Teacher’s Advice |
5, 7, 5 |
0.94 |
Reject |
594 |
5.67 |
Adaptive Sample-space & Adaptive Probability Coding: A Neural-network Based Approach For Compression |
5, 7, 5 |
0.94 |
Reject |
595 |
5.67 |
Collapse Of Deep And Narrow Neural Nets |
6, 4, 7 |
1.25 |
Reject |
596 |
5.67 |
Incremental Training Of Multi-generative Adversarial Networks |
5, 6, 6 |
0.47 |
Reject |
597 |
5.67 |
Visual Explanation By Interpretation: Improving Visual Feedback Capabilities Of Deep Neural Networks |
8, 4, 5 |
1.70 |
Accept (Poster) |
598 |
5.67 |
The Loss Landscape Of Overparameterized Neural Networks |
5, 7, 5 |
0.94 |
Reject |
599 |
5.67 |
Rotation Equivariant Networks Via Conic Convolution And The Dft |
4, 7, 6 |
1.25 |
N/A |
600 |
5.67 |
Backprop With Approximate Activations For Memory-efficient Network Training |
5, 7, 5 |
0.94 |
Reject |
601 |
5.67 |
Nested Dithered Quantization For Communication Reduction In Distributed Training |
5, 5, 7 |
0.94 |
Reject |
602 |
5.67 |
Switching Linear Dynamics For Variational Bayes Filtering |
6, 4, 7 |
1.25 |
Reject |
603 |
5.67 |
Exploring The Interpretability Of Lstm Neural Networks Over Multi-variable Data |
6, 5, 6 |
0.47 |
Reject |
604 |
5.67 |
Pix2scene: Learning Implicit 3d Representations From Images |
5, 6, 6 |
0.47 |
Reject |
605 |
5.67 |
Manifold Regularization With Gans For Semi-supervised Learning |
7, 5, 5 |
0.94 |
Reject |
606 |
5.67 |
Random Mask: Towards Robust Convolutional Neural Networks |
6, 7, 4 |
1.25 |
Reject |
607 |
5.67 |
Predict Then Propagate: Graph Neural Networks Meet Personalized Pagerank |
5, 5, 7 |
0.94 |
Accept (Poster) |
608 |
5.67 |
Pyramid Recurrent Neural Networks For Multi-scale Change-point Detection |
4, 6, 7 |
1.25 |
Reject |
609 |
5.67 |
Adversarial Attacks On Node Embeddings |
5, 6, 6 |
0.47 |
Reject |
610 |
5.67 |
Learning Exploration Policies For Navigation |
3, 7, 7 |
1.89 |
Accept (Poster) |
611 |
5.67 |
Domain Adaptation For Structured Output Via Disentangled Patch Representations |
7, 5, 5 |
0.94 |
Reject |
612 |
5.67 |
Doubly Sparse: Sparse Mixture Of Sparse Experts For Efficient Softmax Inference |
6, 4, 7 |
1.25 |
Reject |
613 |
5.67 |
Infinitely Deep Infinite-width Networks |
5, 6, 6 |
0.47 |
Reject |
614 |
5.67 |
A Model Cortical Network For Spatiotemporal Sequence Learning And Prediction |
7, 7, 3 |
1.89 |
Reject |
615 |
5.67 |
Perception-aware Point-based Value Iteration For Partially Observable Markov Decision Processes |
6, 4, 7 |
1.25 |
Reject |
616 |
5.67 |
Adversarial Exploration Strategy For Self-supervised Imitation Learning |
7, 5, 5 |
0.94 |
Reject |
617 |
5.67 |
Gradient-based Training Of Slow Feature Analysis By Differentiable Approximate Whitening |
5, 6, 6 |
0.47 |
Reject |
618 |
5.67 |
A Closer Look At Deep Learning Heuristics: Learning Rate Restarts, Warmup And Distillation |
4, 7, 6 |
1.25 |
Accept (Poster) |
619 |
5.67 |
Learning State Representations In Complex Systems With Multimodal Data |
6, 6, 5 |
0.47 |
Reject |
620 |
5.67 |
Representing Formal Languages: A Comparison Between Finite Automata And Recurrent Neural Networks |
7, 5, 5 |
0.94 |
Accept (Poster) |
621 |
5.67 |
Controlling Covariate Shift Using Equilibrium Normalization Of Weights |
4, 6, 7 |
1.25 |
Reject |
622 |
5.67 |
Accelerating Nonconvex Learning Via Replica Exchange Langevin Diffusion |
4, 7, 6 |
1.25 |
Accept (Poster) |
623 |
5.67 |
Neural Networks With Structural Resistance To Adversarial Attacks |
5, 5, 7 |
0.94 |
Reject |
624 |
5.67 |
Interactive Parallel Exploration For Reinforcement Learning In Continuous Action Spaces |
6, 4, 7 |
1.25 |
Reject |
625 |
5.67 |
Neural Separation Of Observed And Unobserved Distributions |
6, 5, 6 |
0.47 |
Reject |
626 |
5.67 |
Dynamic Early Terminating Of Multiply Accumulate Operations For Saving Computation Cost In Convolutional Neural Networks |
5, 6, 6 |
0.47 |
Reject |
627 |
5.67 |
Codraw: Collaborative Drawing As A Testbed For Grounded Goal-driven Communication |
4, 6, 7 |
1.25 |
Reject |
628 |
5.67 |
Understanding & Generalizing Alphago Zero |
5, 7, 5 |
0.94 |
Reject |
629 |
5.67 |
Learning Actionable Representations With Goal Conditioned Policies |
5, 6, 6 |
0.47 |
Accept (Poster) |
630 |
5.67 |
Deep Denoising: Rate-optimal Recovery Of Structured Signals With A Deep Prior |
6, 5, 6 |
0.47 |
Reject |
631 |
5.67 |
Attentive Task-agnostic Meta-learning For Few-shot Text Classification |
5, 5, 7 |
0.94 |
Reject |
632 |
5.67 |
Mile: A Multi-level Framework For Scalable Graph Embedding |
7, 4, 6 |
1.25 |
Reject |
633 |
5.67 |
Stochastic Gradient/mirror Descent: Minimax Optimality And Implicit Regularization |
7, 5, 5 |
0.94 |
Accept (Poster) |
634 |
5.67 |
Teacher Guided Architecture Search |
6, 5, 6 |
0.47 |
Reject |
635 |
5.67 |
Knowledge Representation For Reinforcement Learning Using General Value Functions |
6, 7, 4 |
1.25 |
N/A |
636 |
5.67 |
Learning Neural Random Fields With Inclusive Auxiliary Generators |
6, 6, 5 |
0.47 |
Reject |
637 |
5.67 |
Lit: Block-wise Intermediate Representation Training For Model Compression |
5, 6, 6 |
0.47 |
Reject |
638 |
5.67 |
Adaptive Mixture Of Low-rank Factorizations For Compact Neural Modeling |
4, 6, 7 |
1.25 |
Reject |
639 |
5.67 |
Learning Backpropagation-free Deep Architectures With Kernels |
6, 6, 5 |
0.47 |
Reject |
640 |
5.67 |
Identifying Generalization Properties In Neural Networks |
6, 5, 6 |
0.47 |
Reject |
641 |
5.67 |
Unsupervised Learning Of Sentence Representations Using Sequence Consistency |
7, 5, 5 |
0.94 |
Reject |
642 |
5.67 |
Dana: Scalable Out-of-the-box Distributed Asgd Without Retuning |
5, 7, 5 |
0.94 |
Reject |
643 |
5.67 |
Clean-label Backdoor Attacks |
6, 7, 4 |
1.25 |
Reject |
644 |
5.67 |
Meta-learning With Domain Adaptation For Few-shot Learning Under Domain Shift |
6, 5, 6 |
0.47 |
Reject |
645 |
5.67 |
Amortized Bayesian Meta-learning |
6, 5, 6 |
0.47 |
Accept (Poster) |
646 |
5.67 |
(unconstrained) Beam Search Is Sensitive To Large Search Discrepancies |
5, 5, 7 |
0.94 |
Reject |
647 |
5.67 |
Universal Successor Features For Transfer Reinforcement Learning |
4, 7, 6 |
1.25 |
Reject |
648 |
5.67 |
Learning Embeddings Into Entropic Wasserstein Spaces |
7, 7, 3 |
1.89 |
Accept (Poster) |
649 |
5.67 |
Reliable Uncertainty Estimates In Deep Neural Networks Using Noise Contrastive Priors |
7, 4, 6 |
1.25 |
Reject |
650 |
5.67 |
Can I Trust You More? Model-agnostic Hierarchical Explanations |
6, 6, 5 |
0.47 |
Reject |
651 |
5.67 |
Hallucinations In Neural Machine Translation |
6, 4, 7 |
1.25 |
Reject |
652 |
5.67 |
Learning Data-derived Privacy Preserving Representations From Information Metrics |
6, 5, 6 |
0.47 |
Reject |
653 |
5.67 |
The Expressive Power Of Deep Neural Networks With Circulant Matrices |
4, 7, 6 |
1.25 |
Reject |
654 |
5.67 |
Flow++: Improving Flow-based Generative Models With Variational Dequantization And Architecture Design |
6, 6, 5 |
0.47 |
Reject |
655 |
5.67 |
Overcoming Multi-model Forgetting |
6, 5, 6 |
0.47 |
Reject |
656 |
5.67 |
Graph Matching Networks For Learning The Similarity Of Graph Structured Objects |
5, 6, 6 |
0.47 |
Reject |
657 |
5.67 |
Aim: Adversarial Inference By Matching Priors And Conditionals |
7, 4, 6 |
1.25 |
Reject |
658 |
5.67 |
Where Off-policy Deep Reinforcement Learning Fails |
7, 5, 5 |
0.94 |
Reject |
659 |
5.67 |
Explaining Image Classifiers By Counterfactual Generation |
5, 7, 5 |
0.94 |
Accept (Poster) |
660 |
5.67 |
Adaptive Network Sparsification Via Dependent Variational Beta-bernoulli Dropout |
5, 5, 7 |
0.94 |
Reject |
661 |
5.67 |
Unsupervised Emergence Of Spatial Structure From Sensorimotor Prediction |
4, 6, 7 |
1.25 |
Reject |
662 |
5.67 |
State-regularized Recurrent Networks |
6, 6, 5 |
0.47 |
Reject |
663 |
5.67 |
Context Mover's Distance & Barycenters: Optimal Transport Of Contexts For Building Representations |
4, 6, 7 |
1.25 |
Reject |
664 |
5.67 |
Human-level Protein Localization With Convolutional Neural Networks |
4, 5, 8 |
1.70 |
Accept (Poster) |
665 |
5.67 |
Better Generalization With On-the-fly Dataset Denoising |
5, 6, 6 |
0.47 |
Reject |
666 |
5.67 |
Adaptive Pruning Of Neural Language Models For Mobile Devices |
6, 5, 6 |
0.47 |
Reject |
667 |
5.67 |
On Difficulties Of Probability Distillation |
5, 7, 5 |
0.94 |
Reject |
668 |
5.67 |
Improved Learning Of One-hidden-layer Convolutional Neural Networks With Overlaps |
6, 5, 6 |
0.47 |
Reject |
669 |
5.67 |
Fast Adversarial Training For Semi-supervised Learning |
7, 5, 5 |
0.94 |
Reject |
670 |
5.67 |
Amortized Context Vector Inference For Sequence-to-sequence Networks |
6, 6, 5 |
0.47 |
Reject |
671 |
5.67 |
Predicted Variables In Programming |
5, 5, 7 |
0.94 |
Reject |
672 |
5.67 |
Guiding Physical Intuition With Neural Stethoscopes |
6, 4, 7 |
1.25 |
Reject |
673 |
5.67 |
Optimal Transport Maps For Distribution Preserving Operations On Latent Spaces Of Generative Models |
7, 5, 5 |
0.94 |
Accept (Poster) |
674 |
5.67 |
Towards Understanding Regularization In Batch Normalization |
5, 6, 6 |
0.47 |
Accept (Poster) |
675 |
5.67 |
Hierarchically-structured Variational Autoencoders For Long Text Generation |
5, 5, 7 |
0.94 |
Reject |
676 |
5.67 |
Learning To Augment Influential Data |
6, 6, 5 |
0.47 |
Reject |
677 |
5.67 |
Actor-attention-critic For Multi-agent Reinforcement Learning |
6, 7, 4 |
1.25 |
Reject |
678 |
5.67 |
A Unified Theory Of Adaptive Stochastic Gradient Descent As Bayesian Filtering |
5, 7, 5 |
0.94 |
Reject |
679 |
5.67 |
Deep Probabilistic Video Compression |
6, 5, 6 |
0.47 |
Reject |
680 |
5.67 |
Rethinking Knowledge Graph Propagation For Zero-shot Learning |
7, 5, 5 |
0.94 |
N/A |
681 |
5.67 |
The Unusual Effectiveness Of Averaging In Gan Training |
5, 6, 6 |
0.47 |
Accept (Poster) |
682 |
5.67 |
Zero-resource Multilingual Model Transfer: Learning What To Share |
6, 5, 6 |
0.47 |
Reject |
683 |
5.67 |
The Meaning Of "most" For Visual Question Answering Models |
7, 5, 5 |
0.94 |
Reject |
684 |
5.67 |
Spectral Inference Networks: Unifying Deep And Spectral Learning |
5, 7, 5 |
0.94 |
Accept (Poster) |
685 |
5.67 |
Neural Persistence: A Complexity Measure For Deep Neural Networks Using Algebraic Topology |
6, 4, 7 |
1.25 |
Accept (Poster) |
686 |
5.50 |
Unlabeled Disentangling Of Gans With Guided Siamese Networks |
5, 6, 5, 6 |
0.50 |
Reject |
687 |
5.50 |
Are Generative Classifiers More Robust To Adversarial Attacks? |
4, 6, 4, 8 |
1.66 |
Reject |
688 |
5.50 |
Convergent Reinforcement Learning With Function Approximation: A Bilevel Optimization Perspective |
5, 6, 6, 5 |
0.50 |
Reject |
689 |
5.50 |
Multi-way Encoding For Robustness To Adversarial Attacks |
4, 6, 6, 6 |
0.87 |
Reject |
690 |
5.50 |
Music Transformer: Generating Music With Long-term Structure |
7, 6, 4, 5 |
1.12 |
Accept (Poster) |
691 |
5.50 |
Caml: Fast Context Adaptation Via Meta-learning |
4, 6, 6, 6 |
0.87 |
Reject |
692 |
5.50 |
Sample Efficient Imitation Learning For Continuous Control |
5, 7, 5, 5 |
0.87 |
Accept (Poster) |
693 |
5.33 |
Diffranet: Automatic Classification Of Serial Crystallography Diffraction Patterns |
5, 3, 8 |
2.05 |
Reject |
694 |
5.33 |
Learning To Decompose Compound Questions With Reinforcement Learning |
6, 5, 5 |
0.47 |
Reject |
695 |
5.33 |
Probabilistic Model-based Dynamic Architecture Search |
5, 6, 5 |
0.47 |
Reject |
696 |
5.33 |
Composing Entropic Policies Using Divergence Correction |
4, 7, 5 |
1.25 |
Reject |
697 |
5.33 |
Zero-shot Learning For Speech Recognition With Universal Phonetic Model |
7, 4, 5 |
1.25 |
Reject |
698 |
5.33 |
Clinical Risk: Wavelet Reconstruction Networks For Marked Point Processes |
7, 4, 5 |
1.25 |
Reject |
699 |
5.33 |
Graph2seq: Graph To Sequence Learning With Attention-based Neural Networks |
6, 6, 4 |
0.94 |
Reject |
700 |
5.33 |
Knowledge Distillation From Few Samples |
4, 6, 6 |
0.94 |
Reject |
701 |
5.33 |
Caveats For Information Bottleneck In Deterministic Scenarios |
2, 8, 6 |
2.49 |
Accept (Poster) |
702 |
5.33 |
Mimicking Actions Is A Good Strategy For Beginners: Fast Reinforcement Learning With Expert Action Sequences |
5, 5, 6 |
0.47 |
Reject |
703 |
5.33 |
Rethinking Learning Rate Schedules For Stochastic Optimization |
6, 4, 6 |
0.94 |
Reject |
704 |
5.33 |
Consistent Jumpy Predictions For Videos And Scenes |
7, 4, 5 |
1.25 |
Reject |
705 |
5.33 |
Adapting Auxiliary Losses Using Gradient Similarity |
4, 6, 6 |
0.94 |
Reject |
706 |
5.33 |
I Know The Feeling: Learning To Converse With Empathy |
4, 7, 5 |
1.25 |
Reject |
707 |
5.33 |
Learning From The Experience Of Others: Approximate Empirical Bayes In Neural Networks |
6, 3, 7 |
1.70 |
Reject |
708 |
5.33 |
Hierarchically Clustered Representation Learning |
5, 5, 6 |
0.47 |
Reject |
709 |
5.33 |
Learning To Refer To 3d Objects With Natural Language |
6, 6, 4 |
0.94 |
Reject |
710 |
5.33 |
Advocacy Learning |
4, 4, 8 |
1.89 |
Reject |
711 |
5.33 |
Area Attention |
6, 5, 5 |
0.47 |
Reject |
712 |
5.33 |
An Active Learning Framework For Efficient Robust Policy Search |
5, 6, 5 |
0.47 |
Reject |
713 |
5.33 |
Selective Convolutional Units: Improving Cnns Via Channel Selectivity |
6, 5, 5 |
0.47 |
Reject |
714 |
5.33 |
Improved Robustness To Adversarial Examples Using Lipschitz Regularization Of The Loss |
6, 6, 4 |
0.94 |
Reject |
715 |
5.33 |
Classification From Positive, Unlabeled And Biased Negative Data |
5, 6, 5 |
0.47 |
Reject |
716 |
5.33 |
Sliced Wasserstein Auto-encoders |
6, 4, 6 |
0.94 |
Accept (Poster) |
717 |
5.33 |
Learning To Encode Spatial Relations From Natural Language |
6, 5, 5 |
0.47 |
Reject |
718 |
5.33 |
Learning To Separate Domains In Generalized Zero-shot And Open Set Learning: A Probabilistic Perspective |
5, 5, 6 |
0.47 |
Reject |
719 |
5.33 |
Sorting Out Lipschitz Function Approximation |
7, 5, 4 |
1.25 |
Reject |
720 |
5.33 |
Meta-learning For Contextual Bandit Exploration |
7, 6, 3 |
1.70 |
Reject |
721 |
5.33 |
Systematic Generalization: What Is Required And Can It Be Learned? |
4, 6, 6 |
0.94 |
Accept (Poster) |
722 |
5.33 |
Local Image-to-image Translation Via Pixel-wise Highway Adaptive Instance Normalization |
6, 5, 5 |
0.47 |
Reject |
723 |
5.33 |
Negotiating Team Formation Using Deep Reinforcement Learning |
5, 6, 5 |
0.47 |
Reject |
724 |
5.33 |
Making Convolutional Networks Shift-invariant Again |
6, 5, 5 |
0.47 |
Reject |
725 |
5.33 |
Characterizing Attacks On Deep Reinforcement Learning |
5, 5, 6 |
0.47 |
Reject |
726 |
5.33 |
The Nonlinearity Coefficient - Predicting Generalization In Deep Neural Networks |
5, 7, 4 |
1.25 |
Reject |
727 |
5.33 |
Learning Global Additive Explanations For Neural Nets Using Model Distillation |
6, 4, 6 |
0.94 |
Reject |
728 |
5.33 |
Open Loop Hyperparameter Optimization And Determinantal Point Processes |
5, 6, 5 |
0.47 |
Reject |
729 |
5.33 |
Complementary-label Learning For Arbitrary Losses And Models |
5, 5, 6 |
0.47 |
Reject |
730 |
5.33 |
Mitigating Bias In Natural Language Inference Using Adversarial Learning |
4, 4, 8 |
1.89 |
N/A |
731 |
5.33 |
A Deep Learning Approach For Dynamic Survival Analysis With Competing Risks |
4, 8, 4 |
1.89 |
Reject |
732 |
5.33 |
Cdeepex: Contrastive Deep Explanations |
5, 6, 5 |
0.47 |
Reject |
733 |
5.33 |
Surprising Negative Results For Generative Adversarial Tree Search |
5, 5, 6 |
0.47 |
Reject |
734 |
5.33 |
Tangent-normal Adversarial Regularization For Semi-supervised Learning |
5, 4, 7 |
1.25 |
N/A |
735 |
5.33 |
The Expressive Power Of Gated Recurrent Units As A Continuous Dynamical System |
5, 6, 5 |
0.47 |
Reject |
736 |
5.33 |
Learning To Coordinate Multiple Reinforcement Learning Agents For Diverse Query Reformulation |
4, 7, 5 |
1.25 |
Reject |
737 |
5.33 |
Domain Adaptation Via Distribution And Representation Matching: A Case Study On Training Data Selection Via Reinforcement Learning |
4, 7, 5 |
1.25 |
Reject |
738 |
5.33 |
Optimal Margin Distribution Network |
5, 6, 5 |
0.47 |
Reject |
739 |
5.33 |
Generalization And Regularization In Dqn |
6, 5, 5 |
0.47 |
Reject |
740 |
5.33 |
Unsupervised Conditional Generation Using Noise Engineered Mode Matching Gan |
5, 5, 6 |
0.47 |
Reject |
741 |
5.33 |
Improved Language Modeling By Decoding The Past |
6, 7, 3 |
1.70 |
Reject |
742 |
5.33 |
Adversarial Sampling For Active Learning |
6, 5, 5 |
0.47 |
Reject |
743 |
5.33 |
Multi-task Learning With Gradient Communication |
5, 4, 7 |
1.25 |
Reject |
744 |
5.33 |
Quality Evaluation Of Gans Using Cross Local Intrinsic Dimensionality |
4, 6, 6 |
0.94 |
Reject |
745 |
5.33 |
Perfect Match: A Simple Method For Learning Representations For Counterfactual Inference With Neural Networks |
5, 5, 6 |
0.47 |
Reject |
746 |
5.33 |
Dynamic Planning Networks |
4, 6, 6 |
0.94 |
Reject |
747 |
5.33 |
Provable Guarantees On Learning Hierarchical Generative Models With Deep Cnns |
6, 6, 4 |
0.94 |
Reject |
748 |
5.33 |
Dataset Distillation |
5, 6, 5 |
0.47 |
Reject |
749 |
5.33 |
Simple Black-box Adversarial Attacks |
6, 6, 4 |
0.94 |
Reject |
750 |
5.33 |
Probabilistic Knowledge Graph Embeddings |
5, 6, 5 |
0.47 |
Reject |
751 |
5.33 |
Graph Neural Networks With Generated Parameters For Relation Extraction |
4, 6, 6 |
0.94 |
Reject |
752 |
5.33 |
On Learning Heteroscedastic Noise Models Within Differentiable Bayes Filters |
6, 4, 6 |
0.94 |
Reject |
753 |
5.33 |
Live Face De-identification In Video |
6, 4, 6 |
0.94 |
N/A |
754 |
5.33 |
Meta Learning With Fast/slow Learners |
5, 6, 5 |
0.47 |
Reject |
755 |
5.33 |
Curiosity-driven Experience Prioritization Via Density Estimation |
6, 4, 6 |
0.94 |
Reject |
756 |
5.33 |
Integrated Steganography And Steganalysis With Generative Adversarial Networks |
5, 6, 5 |
0.47 |
Reject |
757 |
5.33 |
Entropic Gans Meet Vaes: A Statistical Approach To Compute Sample Likelihoods In Gans |
5, 5, 6 |
0.47 |
Reject |
758 |
5.33 |
Probabilistic Federated Neural Matching |
4, 6, 6 |
0.94 |
Reject |
759 |
5.33 |
Local Binary Pattern Networks For Character Recognition |
5, 6, 5 |
0.47 |
Reject |
760 |
5.33 |
Out-of-sample Extrapolation With Neuron Editing |
5, 5, 6 |
0.47 |
Reject |
761 |
5.33 |
Large-scale Visual Speech Recognition |
4, 3, 9 |
2.62 |
Reject |
762 |
5.33 |
Knows When It Doesn’t Know: Deep Abstaining Classifiers |
6, 5, 5 |
0.47 |
Reject |
763 |
5.33 |
Connecting The Dots Between Mle And Rl For Sequence Generation |
5, 6, 5 |
0.47 |
Reject |
764 |
5.33 |
Neural Predictive Belief Representations |
4, 7, 5 |
1.25 |
Reject |
765 |
5.33 |
Hint-based Training For Non-autoregressive Translation |
6, 6, 4 |
0.94 |
Reject |
766 |
5.33 |
Neural Model-based Reinforcement Learning For Recommendation |
5, 6, 5 |
0.47 |
Reject |
767 |
5.33 |
Domain Generalization Via Invariant Representation Under Domain-class Dependency |
4, 7, 5 |
1.25 |
Reject |
768 |
5.33 |
Model Compression With Generative Adversarial Networks |
5, 6, 5 |
0.47 |
Reject |
769 |
5.33 |
Learning Partially Observed Pde Dynamics With Neural Networks |
6, 5, 5 |
0.47 |
Reject |
770 |
5.33 |
Heated-up Softmax Embedding |
8, 3, 5 |
2.05 |
Reject |
771 |
5.33 |
Transformer-xl: Language Modeling With Longer-term Dependency |
6, 6, 4 |
0.94 |
Reject |
772 |
5.33 |
Improving Composition Of Sentence Embeddings Through The Lens Of Statistical Relational Learning |
5, 5, 6 |
0.47 |
Reject |
773 |
5.33 |
Invariant-equivariant Representation Learning For Multi-class Data |
7, 5, 4 |
1.25 |
Reject |
774 |
5.33 |
Training Generative Latent Models By Variational F-divergence Minimization |
6, 5, 5 |
0.47 |
Reject |
775 |
5.33 |
Decaynet: A Study On The Cell States Of Long Short Term Memories |
8, 4, 4 |
1.89 |
Reject |
776 |
5.33 |
Label Propagation Networks |
5, 5, 6 |
0.47 |
Reject |
777 |
5.33 |
Augment Your Batch: Better Training With Larger Batches |
4, 8, 4 |
1.89 |
Reject |
778 |
5.33 |
Deep Learning Generalizes Because The Parameter-function Map Is Biased Towards Simple Functions |
7, 5, 4 |
1.25 |
Accept (Poster) |
779 |
5.33 |
Towards Gan Benchmarks Which Require Generalization |
6, 7, 3 |
1.70 |
Accept (Poster) |
780 |
5.33 |
Mahinet: A Neural Network For Many-class Few-shot Learning With Class Hierarchy |
5, 6, 5 |
0.47 |
Reject |
781 |
5.33 |
Learning To Describe Scenes With Programs |
6, 4, 6 |
0.94 |
Accept (Poster) |
782 |
5.33 |
Meta-learning Neural Bloom Filters |
3, 6, 7 |
1.70 |
Reject |
783 |
5.33 |
Cohen Welling Bases & So(2)-equivariant Classifiers Using Tensor Nonlinearity. |
3, 7, 6 |
1.70 |
Reject |
784 |
5.33 |
Antman: Sparse Low-rank Compression To Accelerate Rnn Inference |
6, 5, 5 |
0.47 |
Reject |
785 |
5.33 |
Learning Internal Dense But External Sparse Structures Of Deep Neural Network |
5, 5, 6 |
0.47 |
Reject |
786 |
5.33 |
Generative Adversarial Networks For Extreme Learned Image Compression |
6, 6, 4 |
0.94 |
Reject |
787 |
5.33 |
Adaptive Neural Trees |
4, 6, 6 |
0.94 |
Reject |
788 |
5.33 |
Graph Classification With Geometric Scattering |
5, 6, 5 |
0.47 |
Reject |
789 |
5.33 |
Point Cloud Gan |
5, 5, 6 |
0.47 |
Reject |
790 |
5.33 |
Search-guided, Lightly-supervised Training Of Structured Prediction Energy Networks |
5, 7, 4 |
1.25 |
Reject |
791 |
5.33 |
Exploring Curvature Noise In Large-batch Stochastic Optimization |
5, 6, 5 |
0.47 |
Reject |
792 |
5.33 |
Nsga-net: A Multi-objective Genetic Algorithm For Neural Architecture Search |
6, 5, 5 |
0.47 |
Reject |
793 |
5.33 |
Massively Parallel Hyperparameter Tuning |
6, 5, 5 |
0.47 |
Reject |
794 |
5.33 |
Learning And Planning With A Semantic Model |
4, 7, 5 |
1.25 |
Reject |
795 |
5.33 |
Playing The Game Of Universal Adversarial Perturbations |
6, 5, 5 |
0.47 |
Reject |
796 |
5.33 |
Learning To Adapt In Dynamic, Real-world Environments Through Meta-reinforcement Learning |
7, 2, 7 |
2.36 |
Accept (Poster) |
797 |
5.33 |
Bliss In Non-isometric Embedding Spaces |
4, 6, 6 |
0.94 |
Reject |
798 |
5.33 |
An Experimental Study Of Layer-level Training Speed And Its Impact On Generalization |
6, 5, 5 |
0.47 |
Reject |
799 |
5.33 |
Coverage And Quality Driven Training Of Generative Image Models |
5, 4, 7 |
1.25 |
Reject |
800 |
5.33 |
Multi-agent Deep Reinforcement Learning With Extremely Noisy Observations |
7, 3, 6 |
1.70 |
Reject |
801 |
5.33 |
Volumetric Convolution: Automatic Representation Learning In Unit Ball |
6, 5, 5 |
0.47 |
Reject |
802 |
5.33 |
Stackelberg Gan: Towards Provable Minimax Equilibrium Via Multi-generator Architectures |
5, 7, 4 |
1.25 |
Reject |
803 |
5.33 |
Exploring And Enhancing The Transferability Of Adversarial Examples |
4, 6, 6 |
0.94 |
Reject |
804 |
5.33 |
On The Ineffectiveness Of Variance Reduced Optimization For Deep Learning |
5, 6, 5 |
0.47 |
Reject |
805 |
5.33 |
Skip-gram Word Embeddings In Hyperbolic Space |
5, 5, 6 |
0.47 |
Reject |
806 |
5.33 |
The Case For Full-matrix Adaptive Regularization |
6, 5, 5 |
0.47 |
Reject |
807 |
5.33 |
Policy Optimization Via Stochastic Recursive Gradient Algorithm |
5, 6, 5 |
0.47 |
Reject |
808 |
5.33 |
The Universal Approximation Power Of Finite-width Deep Relu Networks |
5, 5, 6 |
0.47 |
Reject |
809 |
5.33 |
Synonymnet: Multi-context Bilateral Matching For Entity Synonyms |
5, 7, 4 |
1.25 |
Reject |
810 |
5.33 |
Convolutional Neural Networks On Non-uniform Geometrical Signals Using Euclidean Spectral Transformation |
5, 7, 4 |
1.25 |
Accept (Poster) |
811 |
5.33 |
Coco-gan: Conditional Coordinate Generative Adversarial Network |
6, 6, 4 |
0.94 |
Reject |
812 |
5.33 |
Neural Causal Discovery With Learnable Input Noise |
4, 4, 8 |
1.89 |
Reject |
813 |
5.33 |
Learning Graph Decomposition |
7, 4, 5 |
1.25 |
N/A |
814 |
5.33 |
Exploiting Cross-lingual Subword Similarities In Low-resource Document Classification |
4, 6, 6 |
0.94 |
Reject |
815 |
5.33 |
Towards Decomposed Linguistic Representation With Holographic Reduced Representation |
5, 5, 6 |
0.47 |
Reject |
816 |
5.33 |
Gaussian-gated Lstm: Improved Convergence By Reducing State Updates |
5, 5, 6 |
0.47 |
Reject |
817 |
5.33 |
Lorentzian Distance Learning |
6, 5, 5 |
0.47 |
Reject |
818 |
5.33 |
A Modern Take On The Bias-variance Tradeoff In Neural Networks |
5, 7, 4 |
1.25 |
Reject |
819 |
5.33 |
State-denoised Recurrent Neural Networks |
6, 5, 5 |
0.47 |
Reject |
820 |
5.33 |
Generative Adversarial Self-imitation Learning |
5, 6, 5 |
0.47 |
Reject |
821 |
5.33 |
Network Compression Using Correlation Analysis Of Layer Responses |
5, 6, 5 |
0.47 |
Reject |
822 |
5.33 |
Learning Generative Models For Demixing Of Structured Signals From Their Superposition Using Gans |
4, 5, 7 |
1.25 |
Reject |
823 |
5.33 |
Purchase As Reward: Session-based Recommendation By Imagination Reconstruction |
5, 6, 5 |
0.47 |
Reject |
824 |
5.33 |
Geneval: A Benchmark Suite For Evaluating Generative Models |
5, 5, 6 |
0.47 |
Reject |
825 |
5.33 |
Deep Graph Translation |
5, 5, 6 |
0.47 |
Reject |
826 |
5.33 |
Unseen Action Recognition With Unpaired Adversarial Multimodal Learning |
7, 5, 4 |
1.25 |
Reject |
827 |
5.33 |
Escaping Flat Areas Via Function-preserving Structural Network Modifications |
6, 4, 6 |
0.94 |
Reject |
828 |
5.33 |
Reducing Overconfident Errors Outside The Known Distribution |
6, 4, 6 |
0.94 |
Reject |
829 |
5.33 |
Low Latency Privacy Preserving Inference |
5, 6, 5 |
0.47 |
Reject |
830 |
5.33 |
Causal Importance Of Orientation Selectivity For Generalization In Image Recognition |
7, 5, 4 |
1.25 |
Reject |
831 |
5.33 |
What Would Pi* Do?: Imitation Learning Via Off-policy Reinforcement Learning |
5, 5, 6 |
0.47 |
Reject |
832 |
5.33 |
Engan: Latent Space Mcmc And Maximum Entropy Generators For Energy-based Models |
6, 5, 5 |
0.47 |
Reject |
833 |
5.33 |
Graph Transformation Policy Network For Chemical Reaction Prediction |
5, 6, 5 |
0.47 |
Reject |
834 |
5.33 |
Importance Resampling For Off-policy Policy Evaluation |
6, 5, 5 |
0.47 |
Reject |
835 |
5.33 |
Cnnsat: Fast, Accurate Boolean Satisfiability Using Convolutional Neural Networks |
5, 6, 5 |
0.47 |
Reject |
836 |
5.33 |
An Efficient And Margin-approaching Zero-confidence Adversarial Attack |
5, 5, 6 |
0.47 |
Reject |
837 |
5.25 |
Diverse Machine Translation With A Single Multinomial Latent Variable |
3, 6, 5, 7 |
1.48 |
Reject |
838 |
5.25 |
Optimal Attacks Against Multiple Classifiers |
5, 4, 6, 6 |
0.83 |
Reject |
839 |
5.25 |
Unified Recurrent Network For Many Feature Types |
4, 6, 4, 7 |
1.30 |
Reject |
840 |
5.25 |
An Alarm System For Segmentation Algorithm Based On Shape Model |
7, 3, 6, 5 |
1.48 |
Reject |
841 |
5.25 |
P^2ir: Universal Deep Node Representation Via Partial Permutation Invariant Set Functions |
4, 7, 5, 5 |
1.09 |
Reject |
842 |
5.25 |
Improving Generative Adversarial Imitation Learning With Non-expert Demonstrations |
5, 5, 7, 4 |
1.09 |
Reject |
843 |
5.25 |
Towards A Better Understanding Of Vector Quantized Autoencoders |
5, 7, 3, 6 |
1.48 |
Reject |
844 |
5.25 |
On The Spectral Bias Of Neural Networks |
4, 6, 5, 6 |
0.83 |
Reject |
845 |
5.20 |
Deep Neuroevolution: Genetic Algorithms Are A Competitive Alternative For Training Deep Neural Networks For Reinforcement Learning |
6, 6, 4, 3, 7 |
1.47 |
Reject |
846 |
5.00 |
Accelerated Value Iteration Via Anderson Mixing |
7, 4, 4 |
1.41 |
Reject |
847 |
5.00 |
A Case For Object Compositionality In Deep Generative Models Of Images |
5, 4, 6 |
0.82 |
Reject |
848 |
5.00 |
Guided Exploration In Deep Reinforcement Learning |
7, 5, 3 |
1.63 |
Reject |
849 |
5.00 |
Plan Online, Learn Offline: Efficient Learning And Exploration Via Model-based Control |
6, 5, 4 |
0.82 |
Accept (Poster) |
850 |
5.00 |
K-nearest Neighbors By Means Of Sequence To Sequence Deep Neural Networks And Memory Networks |
6, 5, 4 |
0.82 |
Reject |
851 |
5.00 |
Mlprune: Multi-layer Pruning For Automated Neural Network Compression |
5, 6, 4 |
0.82 |
Reject |
852 |
5.00 |
The Anisotropic Noise In Stochastic Gradient Descent: Its Behavior Of Escaping From Minima And Regularization Effects |
4, 6, 5 |
0.82 |
Reject |
853 |
5.00 |
Transfer Learning For Estimating Causal Effects Using Neural Networks |
7, 5, 3 |
1.63 |
N/A |
854 |
5.00 |
Cross-entropy Loss Leads To Poor Margins |
3, 4, 8, 5, 5 |
1.67 |
Reject |
855 |
5.00 |
Tequilagan: How To Easily Identify Gan Samples |
4, 6, 5 |
0.82 |
Reject |
856 |
5.00 |
Structured Content Preservation For Unsupervised Text Style Transfer |
5, 6, 4 |
0.82 |
N/A |
857 |
5.00 |
Correction Networks: Meta-learning For Zero-shot Learning |
4, 4, 7 |
1.41 |
Reject |
858 |
5.00 |
Nattack: A Strong And Universal Gaussian Black-box Adversarial Attack |
7, 4, 4 |
1.41 |
Reject |
859 |
5.00 |
Where And When To Look? Spatial-temporal Attention For Action Recognition In Videos |
6, 3, 6 |
1.41 |
Reject |
860 |
5.00 |
Adversarial Audio Super-resolution With Unsupervised Feature Losses |
4, 5, 6 |
0.82 |
Reject |
861 |
5.00 |
An Adversarial Learning Framework For A Persona-based Multi-turn Dialogue Model |
6, 4, 5 |
0.82 |
Reject |
862 |
5.00 |
Zero-shot Dual Machine Translation |
4, 6, 5 |
0.82 |
Reject |
863 |
5.00 |
Learning To Progressively Plan |
5, 5, 5 |
0.00 |
Reject |
864 |
5.00 |
Deep Clustering Based On A Mixture Of Autoencoders |
6, 4, 5 |
0.82 |
N/A |
865 |
5.00 |
What Is In A Translation Unit? Comparing Character And Subword Representations Beyond Translation |
5, 5, 5 |
0.00 |
N/A |
866 |
5.00 |
Learning To Remember: Dynamic Generative Memory For Continual Learning |
4, 3, 8 |
2.16 |
Reject |
867 |
5.00 |
A Recurrent Neural Cascade-based Model For Continuous-time Diffusion Process |
7, 4, 4 |
1.41 |
Reject |
868 |
5.00 |
Nesterov's Method Is The Discretization Of A Differential Equation With Hessian Damping |
4, 5, 6 |
0.82 |
N/A |
869 |
5.00 |
Representation-constrained Autoencoders And An Application To Wireless Positioning |
5, 4, 6 |
0.82 |
Reject |
870 |
5.00 |
Towards Resisting Large Data Variations Via Introspective Learning |
4, 5, 6 |
0.82 |
N/A |
871 |
5.00 |
Transferrable End-to-end Learning For Protein Interface Prediction |
5, 5, 5 |
0.00 |
Reject |
872 |
5.00 |
Learning Joint Wasserstein Auto-encoders For Joint Distribution Matching |
6, 5, 4 |
0.82 |
Reject |
873 |
5.00 |
Incremental Few-shot Learning With Attention Attractor Networks |
5, 5, 5 |
0.00 |
Reject |
874 |
5.00 |
Learning With Random Learning Rates. |
6, 4, 5 |
0.82 |
Reject |
875 |
5.00 |
Bias Also Matters: Bias Attribution For Deep Neural Network Explanation |
5, 5, 5 |
0.00 |
Reject |
876 |
5.00 |
Engaging Image Captioning Via Personality |
5, 5, 5 |
0.00 |
N/A |
877 |
5.00 |
Generative Adversarial Models For Learning Private And Fair Representations |
4, 4, 7 |
1.41 |
Reject |
878 |
5.00 |
The Gan Landscape: Losses, Architectures, Regularization, And Normalization |
4, 4, 7 |
1.41 |
Reject |
879 |
5.00 |
Generative Ensembles For Robust Anomaly Detection |
5, 4, 6 |
0.82 |
Reject |
880 |
5.00 |
Backplay: 'man Muss Immer Umkehren' |
5, 5, 5 |
0.00 |
Reject |
881 |
5.00 |
Learning Diverse Generations Using Determinantal Point Processes |
5, 5, 5 |
0.00 |
Reject |
882 |
5.00 |
Conditional Network Embeddings |
4, 6, 5 |
0.82 |
Accept (Poster) |
883 |
5.00 |
Co-manifold Learning With Missing Data |
4, 4, 7 |
1.41 |
Reject |
884 |
5.00 |
Dynamic Graph Representation Learning Via Self-attention Networks |
4, 6, 5 |
0.82 |
Reject |
885 |
5.00 |
Noisy Information Bottlenecks For Generalization |
7, 5, 3 |
1.63 |
Reject |
886 |
5.00 |
Directional Analysis Of Stochastic Gradient Descent Via Von Mises-fisher Distributions In Deep Learning |
6, 5, 4 |
0.82 |
Reject |
887 |
5.00 |
Choicenet: Robust Learning By Revealing Output Correlations |
4, 6, 5 |
0.82 |
Reject |
888 |
5.00 |
Neural Message Passing For Multi-label Classification |
4, 6, 5 |
0.82 |
Reject |
889 |
5.00 |
Riemannian Transe: Multi-relational Graph Embedding In Non-euclidean Space |
5, 5, 5 |
0.00 |
Reject |
890 |
5.00 |
Ada-boundary: Accelerating The Dnn Training Via Adaptive Boundary Batch Selection |
5, 5, 5 |
0.00 |
Reject |
891 |
5.00 |
Deep Curiosity Search: Intra-life Exploration Can Improve Performance On Challenging Deep Reinforcement Learning Problems |
5, 5, 5 |
0.00 |
Reject |
892 |
5.00 |
Intrinsic Social Motivation Via Causal Influence In Multi-agent Rl |
5, 4, 6 |
0.82 |
Reject |
893 |
5.00 |
Empirical Observations On The Instability Of Aligning Word Vector Spaces With Gans |
4, 6, 5 |
0.82 |
N/A |
894 |
5.00 |
Analyzing Federated Learning Through An Adversarial Lens |
5, 4, 6 |
0.82 |
Reject |
895 |
5.00 |
Variational Smoothing In Recurrent Neural Network Language Models |
7, 6, 2 |
2.16 |
Accept (Poster) |
896 |
5.00 |
Stop Memorizing: A Data-dependent Regularization Framework For Intrinsic Pattern Learning |
7, 4, 4 |
1.41 |
Reject |
897 |
5.00 |
Probabilistic Semantic Embedding |
7, 4, 4 |
1.41 |
Reject |
898 |
5.00 |
Globally Soft Filter Pruning For Efficient Convolutional Neural Networks |
6, 5, 4 |
0.82 |
Reject |
899 |
5.00 |
The Effectiveness Of Pre-trained Code Embeddings |
6, 4, 5 |
0.82 |
Reject |
900 |
5.00 |
Discovering Low-precision Networks Close To Full-precision Networks For Efficient Embedded Inference |
5, 4, 6 |
0.82 |
Reject |
901 |
5.00 |
On The Relationship Between Neural Machine Translation And Word Alignment |
4, 5, 6 |
0.82 |
Reject |
902 |
5.00 |
Cautious Deep Learning |
4, 7, 4 |
1.41 |
Reject |
903 |
5.00 |
End-to-end Learning Of A Convolutional Neural Network Via Deep Tensor Decomposition |
5, 5, 5 |
0.00 |
N/A |
904 |
5.00 |
Few-shot Classification On Graphs With Structural Regularized Gcns |
4, 6, 5 |
0.82 |
Reject |
905 |
5.00 |
A Better Baseline For Second Order Gradient Estimation In Stochastic Computation Graphs |
6, 5, 6, 3 |
1.22 |
Reject |
906 |
5.00 |
Unsupervised Multi-target Domain Adaptation: An Information Theoretic Approach |
6, 4, 5 |
0.82 |
Reject |
907 |
5.00 |
Improving Gaussian Mixture Latent Variable Model Convergence With Optimal Transport |
5, 5, 5 |
0.00 |
N/A |
908 |
5.00 |
Spatial-winograd Pruning Enabling Sparse Winograd Convolution |
5, 4, 6 |
0.82 |
Reject |
909 |
5.00 |
Towards Language Agnostic Universal Representations |
5, 4, 6 |
0.82 |
Reject |
910 |
5.00 |
Generative Adversarial Network Training Is A Continual Learning Problem |
5, 3, 7 |
1.63 |
Reject |
911 |
5.00 |
Local Stability And Performance Of Simple Gradient Penalty µ-wasserstein Gan |
5, 4, 6 |
0.82 |
Reject |
912 |
5.00 |
Understand The Dynamics Of Gans Via Primal-dual Optimization |
4, 5, 6 |
0.82 |
Reject |
913 |
5.00 |
A Main/subsidiary Network Framework For Simplifying Binary Neural Networks |
5 |
0.00 |
N/A |
914 |
5.00 |
Implicit Autoencoders |
3, 6, 6 |
1.41 |
Reject |
915 |
5.00 |
Excitation Dropout: Encouraging Plasticity In Deep Neural Networks |
5, 5, 5 |
0.00 |
Reject |
916 |
5.00 |
Convergence Properties Of Deep Neural Networks On Separable Data |
5, 5, 5 |
0.00 |
Reject |
917 |
5.00 |
Weakly-supervised Knowledge Graph Alignment With Adversarial Learning |
5, 5, 5 |
0.00 |
Reject |
918 |
5.00 |
Quantization For Rapid Deployment Of Deep Neural Networks |
5, 5, 5 |
0.00 |
Reject |
919 |
5.00 |
Harmonic Unpaired Image-to-image Translation |
6, 5, 4 |
0.82 |
Accept (Poster) |
920 |
5.00 |
On The Effectiveness Of Task Granularity For Transfer Learning |
5, 5, 5 |
0.00 |
Reject |
921 |
5.00 |
Relwalk -- A Latent Variable Model Approach To Knowledge Graph Embedding |
6, 5, 4 |
0.82 |
Reject |
922 |
5.00 |
Likelihood-based Permutation Invariant Loss Function For Probability Distributions |
5, 6, 4 |
0.82 |
Reject |
923 |
5.00 |
Link Prediction In Hypergraphs Using Graph Convolutional Networks |
6, 5, 4 |
0.82 |
Reject |
924 |
5.00 |
Physiological Signal Embeddings (phase) Via Interpretable Stacked Models |
6, 5, 4 |
0.82 |
Reject |
925 |
5.00 |
Transferring Slu Models In Novel Domains |
6, 5, 4 |
0.82 |
Reject |
926 |
5.00 |
Deep Reinforcement Learning Of Universal Policies With Diverse Environment Summaries |
4, 6, 5 |
0.82 |
Reject |
927 |
5.00 |
A Privacy-preserving Image Classification Framework With A Learnable Obfuscator |
5, 5, 5 |
0.00 |
N/A |
928 |
5.00 |
Optimistic Acceleration For Optimization |
5, 6, 5, 4 |
0.71 |
Reject |
929 |
5.00 |
Metric-optimized Example Weights |
4, 4, 7 |
1.41 |
Reject |
930 |
5.00 |
Novel Positional Encodings To Enable Tree-structured Transformers |
5, 4, 6 |
0.82 |
Reject |
931 |
5.00 |
The Importance Of Norm Regularization In Linear Graph Embedding: Theoretical Analysis And Empirical Demonstration |
7, 4, 4 |
1.41 |
Reject |
932 |
5.00 |
Analysis Of Memory Organization For Dynamic Neural Networks |
7, 5, 3 |
1.63 |
Reject |
933 |
5.00 |
Isa-vae: Independent Subspace Analysis With Variational Autoencoders |
4, 7, 4 |
1.41 |
Reject |
934 |
5.00 |
Inferring Reward Functions From Demonstrators With Unknown Biases |
5, 5, 5 |
0.00 |
Reject |
935 |
5.00 |
Approximation Capability Of Neural Networks On Sets Of Probability Measures And Tree-structured Data |
6, 5, 4 |
0.82 |
Reject |
936 |
5.00 |
Denoise While Aggregating: Collaborative Learning In Open-domain Question Answering |
4, 6, 5 |
0.82 |
Reject |
937 |
5.00 |
Learning Discriminators As Energy Networks In Adversarial Learning |
5, 5, 5 |
0.00 |
Reject |
938 |
5.00 |
Learning Representations Of Categorical Feature Combinations Via Self-attention |
5, 5, 5 |
0.00 |
Reject |
939 |
5.00 |
Therml: The Thermodynamics Of Machine Learning |
7, 3, 5 |
1.63 |
Reject |
940 |
5.00 |
Favae: Sequence Disentanglement Using Information Bottleneck Principle |
5, 6, 4 |
0.82 |
Reject |
941 |
5.00 |
Learning Representations Of Sets Through Optimized Permutations |
6, 3, 6 |
1.41 |
Accept (Poster) |
942 |
5.00 |
Convolutional Neural Networks Combined With Runge-kutta Methods |
4, 5, 6 |
0.82 |
Reject |
943 |
5.00 |
Supportnet: Solving Catastrophic Forgetting In Class Incremental Learning With Support Data |
6, 5, 4 |
0.82 |
Reject |
944 |
5.00 |
Experience Replay For Continual Learning |
5, 5, 5 |
0.00 |
Reject |
945 |
5.00 |
Hypergan: Exploring The Manifold Of Neural Networks |
6, 5, 4 |
0.82 |
Reject |
946 |
5.00 |
Learning Neuron Non-linearities With Kernel-based Deep Neural Networks |
5, 4, 6 |
0.82 |
Reject |
947 |
5.00 |
Pairwise Augmented Gans With Adversarial Reconstruction Loss |
4, 6, 5 |
0.82 |
Reject |
948 |
5.00 |
Cutting Down Training Memory By Re-forwarding |
6, 4, 4, 6 |
1.00 |
Reject |
949 |
5.00 |
Multi-modal Generative Adversarial Networks For Diverse Datasets |
4, 6 |
1.00 |
N/A |
950 |
5.00 |
A Variational Autoencoder For Probabilistic Non-negative Matrix Factorisation |
4, 4, 7 |
1.41 |
Reject |
951 |
5.00 |
High Resolution And Fast Face Completion Via Progressively Attentive Gans |
5, 5, 5 |
0.00 |
Reject |
952 |
5.00 |
Redsync: Reducing Synchronization Traffic For Distributed Deep Learning |
5, 5, 5 |
0.00 |
Reject |
953 |
5.00 |
What A Difference A Pixel Makes: An Empirical Examination Of Features Used By Cnns For Categorisation |
4, 4, 7 |
1.41 |
Reject |
954 |
5.00 |
Graph2seq: Scalable Learning Dynamics For Graphs |
6, 5, 4 |
0.82 |
Reject |
955 |
5.00 |
Self-binarizing Networks |
5, 5, 5 |
0.00 |
N/A |
956 |
5.00 |
Dissecting An Adversarial Framework For Information Retrieval |
6, 5, 4 |
0.82 |
Reject |
957 |
5.00 |
Capacity Of Deep Neural Networks Under Parameter Quantization |
5, 5, 5 |
0.00 |
N/A |
958 |
5.00 |
Model Comparison For Semantic Grouping |
5, 5, 5 |
0.00 |
Reject |
959 |
5.00 |
Learning Abstract Models For Long-horizon Exploration |
4, 5, 6 |
0.82 |
Reject |
960 |
5.00 |
Finding Mixed Nash Equilibria Of Generative Adversarial Networks |
4, 5, 6 |
0.82 |
Reject |
961 |
5.00 |
Pointgrow: Autoregressively Learned Point Cloud Generation With Self-attention |
3, 6, 6 |
1.41 |
N/A |
962 |
5.00 |
Information Maximization Auto-encoding |
5, 6, 4 |
0.82 |
Reject |
963 |
5.00 |
Bayesian Deep Learning Via Stochastic Gradient Mcmc With A Stochastic Approximation Adaptation |
5, 4, 6 |
0.82 |
Reject |
964 |
5.00 |
Boosting Robustness Certification Of Neural Networks |
5, 6, 4 |
0.82 |
Accept (Poster) |
965 |
5.00 |
Vhegan: Variational Hetero-encoder Randomized Gan For Zero-shot Learning |
5, 5, 5 |
0.00 |
Reject |
966 |
5.00 |
Inducing Cooperation Via Learning To Reshape Rewards In Semi-cooperative Multi-agent Reinforcement Learning |
5, 5, 5 |
0.00 |
Reject |
967 |
5.00 |
Iteratively Learning From The Best |
6, 3, 6 |
1.41 |
Reject |
968 |
5.00 |
Déjà Vu: An Empirical Evaluation Of The Memorization Properties Of Convnets |
4, 5, 6 |
0.82 |
Reject |
969 |
5.00 |
Understanding The Effectiveness Of Lipschitz-continuity In Generative Adversarial Nets |
6, 4, 5 |
0.82 |
Reject |
970 |
5.00 |
Efficient Codebook And Factorization For Second Order Representation Learning |
4, 6, 5 |
0.82 |
Reject |
971 |
5.00 |
Reinforced Imitation Learning From Observations |
6, 5, 4 |
0.82 |
Reject |
972 |
5.00 |
Safe Policy Learning From Observations |
5, 5, 5 |
0.00 |
Reject |
973 |
5.00 |
Data Interpretation And Reasoning Over Scientific Plots |
6, 6, 3 |
1.41 |
N/A |
974 |
5.00 |
An Energy-based Framework For Arbitrary Label Noise Correction |
5, 5, 5 |
0.00 |
Reject |
975 |
5.00 |
Discrete Flow Posteriors For Variational Inference In Discrete Dynamical Systems |
4, 4, 7 |
1.41 |
Reject |
976 |
5.00 |
Human-guided Column Networks: Augmenting Deep Learning With Advice |
6, 4, 5 |
0.82 |
Reject |
977 |
5.00 |
Collaborative Multiagent Reinforcement Learning In Homogeneous Swarms |
6, 4, 5 |
0.82 |
Reject |
978 |
5.00 |
An Automatic Operation Batching Strategy For The Backward Propagation Of Neural Networks Having Dynamic Computation Graphs |
5, 6, 4 |
0.82 |
Reject |
979 |
5.00 |
Unicorn: Continual Learning With A Universal, Off-policy Agent |
4, 5, 6 |
0.82 |
Reject |
980 |
5.00 |
A Theoretical Framework For Deep And Locally Connected Relu Network |
3, 7, 5 |
1.63 |
Reject |
981 |
5.00 |
On Regularization And Robustness Of Deep Neural Networks |
5, 4, 6 |
0.82 |
Reject |
982 |
5.00 |
Strength In Numbers: Trading-off Robustness And Computation Via Adversarially-trained Ensembles |
5, 6, 4 |
0.82 |
Reject |
983 |
5.00 |
Canonical Correlation Analysis With Implicit Distributions |
5, 6, 4 |
0.82 |
Reject |
984 |
5.00 |
Dense Morphological Network: An Universal Function Approximator |
5, 5, 5 |
0.00 |
Reject |
985 |
5.00 |
Ad-vat: An Asymmetric Dueling Mechanism For Learning Visual Active Tracking |
5, 4, 6 |
0.82 |
Accept (Poster) |
986 |
5.00 |
Double Neural Counterfactual Regret Minimization |
5, 6, 4 |
0.82 |
Reject |
987 |
5.00 |
Guided Evolutionary Strategies: Escaping The Curse Of Dimensionality In Random Search |
5, 4, 6 |
0.82 |
Reject |
988 |
5.00 |
Using Ontologies To Improve Performance In Massively Multi-label Prediction |
6, 5, 4 |
0.82 |
Reject |
989 |
5.00 |
Accelerated Gradient Flow For Probability Distributions |
4, 5, 6 |
0.82 |
Reject |
990 |
5.00 |
Large Batch Size Training Of Neural Networks With Adversarial Training And Second-order Information |
4, 7, 4 |
1.41 |
Reject |
991 |
5.00 |
Phrase-based Attentions |
5, 5, 5 |
0.00 |
Reject |
992 |
5.00 |
Causal Reasoning From Meta-reinforcement Learning |
5, 4, 4, 7 |
1.22 |
Reject |
993 |
5.00 |
Interpretable Continual Learning |
4, 5, 6 |
0.82 |
Reject |
994 |
5.00 |
Tensor Ring Nets Adapted Deep Multi-task Learning |
6, 5, 4 |
0.82 |
Reject |
995 |
5.00 |
Reduced-gate Convolutional Lstm Design Using Predictive Coding For Next-frame Video Prediction |
5, 3, 7 |
1.63 |
Reject |
996 |
5.00 |
S3ta: A Soft, Spatial, Sequential, Top-down Attention Model |
5, 5, 5 |
0.00 |
Reject |
997 |
5.00 |
Solar: Deep Structured Representations For Model-based Reinforcement Learning |
5, 5, 5 |
0.00 |
Reject |
998 |
5.00 |
Learning To Control Self-assembling Morphologies: A Study Of Generalization Via Modularity |
4, 7, 4 |
1.41 |
Reject |
999 |
5.00 |
Snapquant: A Probabilistic And Nested Parameterization For Binary Networks |
4, 6, 5 |
0.82 |
Reject |
1000 |
5.00 |
Spread Divergences |
5, 4, 6 |
0.82 |
Reject |
1001 |
5.00 |
Characterizing Malicious Edges Targeting On Graph Neural Networks |
5, 5, 5 |
0.00 |
Reject |
1002 |
5.00 |
N-ary Quantization For Cnn Model Compression And Inference Acceleration |
4, 4, 7 |
1.41 |
Reject |
1003 |
4.75 |
Pooling Is Neither Necessary Nor Sufficient For Appropriate Deformation Stability In Cnns |
5, 4, 5, 5 |
0.43 |
Reject |
1004 |
4.75 |
Geomstats: A Python Package For Riemannian Geometry In Machine Learning |
4, 4, 3, 8 |
1.92 |
Reject |
1005 |
4.75 |
Multi-turn Dialogue Response Generation In An Adversarial Learning Framework |
4, 4, 6, 5 |
0.83 |
Reject |
1006 |
4.75 |
Successor Options : An Option Discovery Algorithm For Reinforcement Learning |
4, 5, 6, 4 |
0.83 |
Reject |
1007 |
4.67 |
Like What You Like: Knowledge Distill Via Neuron Selectivity Transfer |
4, 4, 6 |
0.94 |
Reject |
1008 |
4.67 |
A Study Of Robustness Of Neural Nets Using Approximate Feature Collisions |
6, 4, 4 |
0.94 |
Reject |
1009 |
4.67 |
Segen: Sample-ensemble Genetic Evolutionary Network Model |
5, 5, 4 |
0.47 |
Reject |
1010 |
4.67 |
Unsupervised Disentangling Structure And Appearance |
6, 5, 3 |
1.25 |
Reject |
1011 |
4.67 |
Predictive Uncertainty Through Quantization |
5, 4, 5 |
0.47 |
Reject |
1012 |
4.67 |
Shaping Representations Through Communication |
5, 4, 5 |
0.47 |
N/A |
1013 |
4.67 |
Diagnosing Language Inconsistency In Cross-lingual Word Embeddings |
6, 4, 4 |
0.94 |
N/A |
1014 |
4.67 |
Explicit Recall For Efficient Exploration |
7, 4, 3 |
1.70 |
Reject |
1015 |
4.67 |
Generalized Adaptive Moment Estimation |
3, 4, 7 |
1.70 |
Reject |
1016 |
4.67 |
Neural Variational Inference For Embedding Knowledge Graphs |
5, 5, 4 |
0.47 |
Reject |
1017 |
4.67 |
Holographic And Other Point Set Distances For Machine Learning |
4, 3, 7 |
1.70 |
Reject |
1018 |
4.67 |
Geometry Aware Convolutional Filters For Omnidirectional Images Representation |
4, 6, 4 |
0.94 |
Reject |
1019 |
4.67 |
End-to-end Learning Of Pharmacological Assays From High-resolution Microscopy Images |
6, 3, 5 |
1.25 |
Reject |
1020 |
4.67 |
Ssoc: Learning Spontaneous And Self-organizing Communication For Multi-agent Collaboration |
4, 5, 5 |
0.47 |
Reject |
1021 |
4.67 |
Estimating Heterogeneous Treatment Effects Using Neural Networks With The Y-learner |
5, 5, 4 |
0.47 |
N/A |
1022 |
4.67 |
Na |
4, 5, 5 |
0.47 |
N/A |
1023 |
4.67 |
Computation-efficient Quantization Method For Deep Neural Networks |
4, 5, 5 |
0.47 |
Reject |
1024 |
4.67 |
Context-aware Forecasting For Multivariate Stationary Time-series |
5, 5, 4 |
0.47 |
N/A |
1025 |
4.67 |
Traditional And Heavy Tailed Self Regularization In Neural Network Models |
4, 4, 6 |
0.94 |
Reject |
1026 |
4.67 |
Efficient Dictionary Learning With Gradient Descent |
5, 4, 5 |
0.47 |
Reject |
1027 |
4.67 |
Answer-based Adversarial Training For Generating Clarification Questions |
4, 6, 4 |
0.94 |
N/A |
1028 |
4.67 |
Learning Hash Codes Via Hamming Distance Targets |
4, 6, 4 |
0.94 |
Reject |
1029 |
4.67 |
On The Convergence And Robustness Of Batch Normalization |
6, 4, 4 |
0.94 |
Reject |
1030 |
4.67 |
Zero-training Sentence Embedding Via Orthogonal Basis |
5, 4, 5 |
0.47 |
Reject |
1031 |
4.67 |
Feature Prioritization And Regularization Improve Standard Accuracy And Adversarial Robustness |
5, 4, 5 |
0.47 |
Reject |
1032 |
4.67 |
Probabilistic Binary Neural Networks |
6, 5, 3 |
1.25 |
Reject |
1033 |
4.67 |
Differential Equation Networks |
5, 4, 5 |
0.47 |
Reject |
1034 |
4.67 |
Stochastic Learning Of Additive Second-order Penalties With Applications To Fairness |
5, 5, 4 |
0.47 |
Reject |
1035 |
4.67 |
Rectified Gradient: Layer-wise Thresholding For Sharp And Coherent Attribution Maps |
5, 5, 4 |
0.47 |
Reject |
1036 |
4.67 |
When Will Gradient Methods Converge To Max-margin Classifier Under Relu Models? |
5, 4, 5 |
0.47 |
Reject |
1037 |
4.67 |
An Efficient Network For Predicting Time-varying Distributions |
5, 4, 5 |
0.47 |
N/A |
1038 |
4.67 |
Selectivity Metrics Can Overestimate The Selectivity Of Units: A Case Study On Alexnet |
5, 6, 3 |
1.25 |
Reject |
1039 |
4.67 |
Effective Path: Know The Unknowns Of Neural Network |
4, 4, 6 |
0.94 |
Reject |
1040 |
4.67 |
Exploiting Environmental Variation To Improve Policy Robustness In Reinforcement Learning |
5, 3, 6 |
1.25 |
Reject |
1041 |
4.67 |
Mean Replacement Pruning |
5, 4, 5 |
0.47 |
Reject |
1042 |
4.67 |
Inference Of Unobserved Event Streams With Neural Hawkes Particle Smoothing |
5, 4, 5 |
0.47 |
Reject |
1043 |
4.67 |
Theoretical And Empirical Study Of Adversarial Examples |
5, 5, 4 |
0.47 |
Reject |
1044 |
4.67 |
Domain Adaptive Transfer Learning |
3, 4, 7 |
1.70 |
N/A |
1045 |
4.67 |
How Training Data Affect The Accuracy And Robustness Of Neural Networks For Image Classification |
4, 5, 5 |
0.47 |
Reject |
1046 |
4.67 |
Learning Physics Priors For Deep Reinforcement Learing |
5, 4, 5 |
0.47 |
Reject |
1047 |
4.67 |
Coupled Recurrent Models For Polyphonic Music Composition |
7, 3, 4 |
1.70 |
Reject |
1048 |
4.67 |
Unifying Bilateral Filtering And Adversarial Training For Robust Neural Networks |
4, 5, 5 |
0.47 |
Reject |
1049 |
4.67 |
Effective And Efficient Batch Normalization Using Few Uncorrelated Data For Statistics' Estimation |
4, 5, 5 |
0.47 |
Reject |
1050 |
4.67 |
Visualizing And Discovering Behavioural Weaknesses In Deep Reinforcement Learning |
5, 5, 4 |
0.47 |
N/A |
1051 |
4.67 |
Improving Latent Variable Descriptiveness By Modelling Rather Than Ad-hoc Factors |
4, 4, 6 |
0.94 |
N/A |
1052 |
4.67 |
Accelerated Sparse Recovery Under Structured Measurements |
4, 5, 5 |
0.47 |
Reject |
1053 |
4.67 |
Unsupervised Expectation Learning For Multisensory Binding |
4, 5, 5 |
0.47 |
Reject |
1054 |
4.67 |
Crystalgan: Learning To Discover Crystallographic Structures With Generative Adversarial Networks |
3, 7, 4 |
1.70 |
N/A |
1055 |
4.67 |
Deep-trim: Revisiting L1 Regularization For Connection Pruning Of Deep Network |
4, 6, 4 |
0.94 |
Reject |
1056 |
4.67 |
Interpreting Adversarial Robustness: A View From Decision Surface In Input Space |
3, 6, 5 |
1.25 |
Reject |
1057 |
4.67 |
What Information Does A Resnet Compress? |
4, 4, 6 |
0.94 |
Reject |
1058 |
4.67 |
Highly Efficient 8-bit Low Precision Inference Of Convolutional Neural Networks |
6, 4, 4 |
0.94 |
Reject |
1059 |
4.67 |
Measuring Density And Similarity Of Task Relevant Information In Neural Representations |
4, 5, 5 |
0.47 |
Reject |
1060 |
4.67 |
Over-parameterization Improves Generalization In The Xor Detection Problem |
4, 5, 5 |
0.47 |
Reject |
1061 |
4.67 |
Few-shot Learning By Exploiting Object Relation |
6, 4, 4 |
0.94 |
N/A |
1062 |
4.67 |
N/a |
4, 4, 6 |
0.94 |
N/A |
1063 |
4.67 |
Improved Resistance Of Neural Networks To Adversarial Images Through Generative Pre-training |
4, 4, 6 |
0.94 |
Reject |
1064 |
4.67 |
Sparse Binary Compression: Towards Distributed Deep Learning With Minimal Communication |
6, 3, 5 |
1.25 |
Reject |
1065 |
4.67 |
Dual Skew Divergence Loss For Neural Machine Translation |
3, 6, 5 |
1.25 |
Reject |
1066 |
4.67 |
Discriminative Out-of-distribution Detection For Semantic Segmentation |
4, 7, 3 |
1.70 |
Reject |
1067 |
4.67 |
Maximum A Posteriori On A Submanifold: A General Image Restoration Method With Gan |
4, 4, 6 |
0.94 |
Reject |
1068 |
4.67 |
Robust Determinantal Generative Classifier For Noisy Labels And Adversarial Attacks |
3, 7, 4 |
1.70 |
Reject |
1069 |
4.67 |
Learned Optimizers That Outperform On Wall-clock And Validation Loss |
4, 5, 5 |
0.47 |
Reject |
1070 |
4.67 |
Pushing The Bounds Of Dropout |
5, 5, 4 |
0.47 |
Reject |
1071 |
4.67 |
Manifold Alignment Via Feature Correspondence |
5, 5, 4 |
0.47 |
Reject |
1072 |
4.67 |
Low-rank Matrix Factorization Of Lstm As Effective Model Compression |
5, 5, 4 |
0.47 |
N/A |
1073 |
4.67 |
Security Analysis Of Deep Neural Networks Operating In The Presence Of Cache Side-channel Attacks |
4, 6, 4 |
0.94 |
Reject |
1074 |
4.67 |
Cgnf: Conditional Graph Neural Fields |
5, 4, 5 |
0.47 |
Reject |
1075 |
4.67 |
Self-supervised Generalisation With Meta Auxiliary Learning |
4, 4, 6 |
0.94 |
Reject |
1076 |
4.67 |
An Investigation Of Model-free Planning |
5, 5, 4 |
0.47 |
Reject |
1077 |
4.67 |
Evolving Intrinsic Motivations For Altruistic Behavior |
5, 6, 3 |
1.25 |
N/A |
1078 |
4.67 |
Boosting Trust Region Policy Optimization By Normalizing Flows Policy |
6, 4, 4 |
0.94 |
Reject |
1079 |
4.67 |
Mixfeat: Mix Feature In Latent Space Learns Discriminative Space |
6, 4, 4 |
0.94 |
Reject |
1080 |
4.67 |
Unsupervised Image To Sequence Translation With Canvas-drawer Networks |
4, 6, 4 |
0.94 |
Reject |
1081 |
4.67 |
Chemical Names Standardization Using Neural Sequence To Sequence Model |
4, 3, 7 |
1.70 |
Reject |
1082 |
4.67 |
Tabnn: A Universal Neural Network Solution For Tabular Data |
5, 4, 5 |
0.47 |
Reject |
1083 |
4.67 |
Compound Density Networks |
4, 5, 5 |
0.47 |
Reject |
1084 |
4.67 |
Architecture Compression |
6, 4, 4 |
0.94 |
Reject |
1085 |
4.67 |
Text Infilling |
3, 5, 6 |
1.25 |
Reject |
1086 |
4.67 |
Approximation And Non-parametric Estimation Of Resnet-type Convolutional Neural Networks Via Block-sparse Fully-connected Neural Networks |
4, 6, 4 |
0.94 |
Reject |
1087 |
4.67 |
Learning Gibbs-regularized Gans With Variational Discriminator Reparameterization |
5, 5, 4 |
0.47 |
Reject |
1088 |
4.67 |
Learning To Attend On Essential Terms: An Enhanced Retriever-reader Model For Open-domain Question Answering |
4, 5, 5 |
0.47 |
N/A |
1089 |
4.67 |
Progressive Weight Pruning Of Deep Neural Networks Using Admm |
5, 5, 4 |
0.47 |
Reject |
1090 |
4.67 |
Tfgan: Improving Conditioning For Text-to-video Synthesis |
6, 3, 5 |
1.25 |
N/A |
1091 |
4.67 |
Penetrating The Fog: The Path To Efficient Cnn Models |
5, 4, 5 |
0.47 |
Reject |
1092 |
4.67 |
Pruning In Training: Learning And Ranking Sparse Connections In Deep Convolutional Networks |
5, 5, 4 |
0.47 |
Reject |
1093 |
4.67 |
Expanding The Reach Of Federated Learning By Reducing Client Resource Requirements |
4, 5, 5 |
0.47 |
Reject |
1094 |
4.67 |
Learning To Drive By Observing The Best And Synthesizing The Worst |
3, 6, 5 |
1.25 |
Reject |
1095 |
4.67 |
Multi-grained Entity Proposal Network For Named Entity Recognition |
5, 5, 4 |
0.47 |
Reject |
1096 |
4.67 |
Learning With Little Data: Evaluation Of Deep Learning Algorithms |
6, 4, 4 |
0.94 |
N/A |
1097 |
4.67 |
A Fast Quasi-newton-type Method For Large-scale Stochastic Optimisation |
5, 5, 4 |
0.47 |
Reject |
1098 |
4.67 |
Gradient Descent Happens In A Tiny Subspace |
4, 4, 6 |
0.94 |
Reject |
1099 |
4.67 |
Pumpout: A Meta Approach For Robustly Training Deep Neural Networks With Noisy Labels |
6, 5, 3 |
1.25 |
Reject |
1100 |
4.67 |
Convergence Guarantees For Rmsprop And Adam In Non-convex Optimization And An Empirical Comparison To Nesterov Acceleration |
5, 4, 5 |
0.47 |
Reject |
1101 |
4.67 |
Logically-constrained Neural Fitted Q-iteration |
5, 4, 5 |
0.47 |
N/A |
1102 |
4.67 |
Pa-gan: Improving Gan Training By Progressive Augmentation |
4, 5, 5 |
0.47 |
Reject |
1103 |
4.67 |
Learning Graph Representations By Dendrograms |
4, 5, 5 |
0.47 |
Reject |
1104 |
4.67 |
Outlier Detection From Image Data |
4, 5, 5 |
0.47 |
Reject |
1105 |
4.67 |
Count-based Exploration With The Successor Representation |
5, 5, 4 |
0.47 |
Reject |
1106 |
4.67 |
On Breiman’s Dilemma In Neural Networks: Success And Failure Of Normalized Margins |
4, 5, 5 |
0.47 |
Reject |
1107 |
4.67 |
On Accurate Evaluation Of Gans For Language Generation |
5, 3, 6 |
1.25 |
Reject |
1108 |
4.67 |
Noise-tempered Generative Adversarial Networks |
4, 5, 5 |
0.47 |
N/A |
1109 |
4.67 |
A Unified View Of Deep Metric Learning Via Gradient Analysis |
3, 6, 5 |
1.25 |
N/A |
1110 |
4.67 |
Ergodic Measure Preserving Flows |
5, 5, 4 |
0.47 |
Reject |
1111 |
4.67 |
Stability Of Stochastic Gradient Method With Momentum For Strongly Convex Loss Functions |
4, 6, 4 |
0.94 |
Reject |
1112 |
4.67 |
Siamese Capsule Networks |
5, 6, 3 |
1.25 |
Reject |
1113 |
4.67 |
Learning Shared Manifold Representation Of Images And Attributes For Generalized Zero-shot Learning |
4, 5, 5 |
0.47 |
Reject |
1114 |
4.67 |
Variational Sparse Coding |
4, 5, 5 |
0.47 |
Reject |
1115 |
4.67 |
Parameter Efficient Training Of Deep Convolutional Neural Networks By Dynamic Sparse Reparameterization |
4, 4, 6 |
0.94 |
Reject |
1116 |
4.67 |
Open Vocabulary Learning On Source Code With A Graph-structured Cache |
4, 4, 6 |
0.94 |
Reject |
1117 |
4.67 |
Simile: Introducing Sequential Information Towards More Effective Imitation Learning |
6, 4, 4 |
0.94 |
Reject |
1118 |
4.67 |
Differentiable Expected Bleu For Text Generation |
4, 4, 6 |
0.94 |
Reject |
1119 |
4.67 |
Generating Realistic Stock Market Order Streams |
5, 5, 4 |
0.47 |
Reject |
1120 |
4.67 |
Tinkering With Black Boxes: Counterfactuals Uncover Modularity In Generative Models |
6, 4, 4 |
0.94 |
Reject |
1121 |
4.67 |
Neural Malware Control With Deep Reinforcement Learning |
5, 4, 5 |
0.47 |
Reject |
1122 |
4.67 |
Three Continual Learning Scenarios And A Case For Generative Replay |
4, 4, 6 |
0.94 |
Reject |
1123 |
4.67 |
On The Geometry Of Adversarial Examples |
5, 3, 6 |
1.25 |
Reject |
1124 |
4.67 |
Investigating Cnns' Learning Representation Under Label Noise |
5, 4, 5 |
0.47 |
Reject |
1125 |
4.67 |
Integral Pruning On Activations And Weights For Efficient Neural Networks |
4, 5, 5 |
0.47 |
Reject |
1126 |
4.67 |
A Proposed Hierarchy Of Deep Learning Tasks |
6, 4, 4 |
0.94 |
Reject |
1127 |
4.67 |
3d-relnet: Joint Object And Relational Network For 3d Prediction |
6, 3, 5 |
1.25 |
Reject |
1128 |
4.67 |
Ace: Artificial Checkerboard Enhancer To Induce And Evade Adversarial Attacks |
4, 4, 6 |
0.94 |
Reject |
1129 |
4.67 |
Conscious Inference For Object Detection |
4, 6, 4 |
0.94 |
Reject |
1130 |
4.67 |
Sufficient Conditions For Robustness To Adversarial Examples: A Theoretical And Empirical Study With Bayesian Neural Networks |
5, 5, 4 |
0.47 |
Reject |
1131 |
4.67 |
Using Gans For Generation Of Realistic City-scale Ride Sharing/hailing Data Sets |
4, 5, 5 |
0.47 |
Reject |
1132 |
4.67 |
Visual Imitation With A Minimal Adversary |
5, 3, 6 |
1.25 |
Reject |
1133 |
4.67 |
Intriguing Properties Of Learned Representations |
3, 6, 5 |
1.25 |
Reject |
1134 |
4.67 |
Sampling With Probability Matching |
5, 6, 3 |
1.25 |
Reject |
1135 |
4.67 |
Meta-learning With Differentiable Closed-form Solvers |
5, 2, 7 |
2.05 |
Accept (Poster) |
1136 |
4.67 |
The Conditional Entropy Bottleneck |
6, 2, 6 |
1.89 |
Reject |
1137 |
4.67 |
Learning Information Propagation In The Dynamical Systems Via Information Bottleneck Hierarchy |
5, 4, 5 |
0.47 |
Reject |
1138 |
4.67 |
Partially Mutual Exclusive Softmax For Positive And Unlabeled Data |
5, 4, 5 |
0.47 |
Reject |
1139 |
4.67 |
Transfer Value Or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning |
5, 4, 5 |
0.47 |
Reject |
1140 |
4.67 |
Consistency-based Anomaly Detection With Adaptive Multiple-hypotheses Predictions |
4, 5, 5 |
0.47 |
Reject |
1141 |
4.67 |
Marginalized Average Attentional Network For Weakly-supervised Learning |
5, 6, 3 |
1.25 |
Accept (Poster) |
1142 |
4.67 |
Online Bellman Residue Minimization Via Saddle Point Optimization |
5, 5, 4 |
0.47 |
N/A |
1143 |
4.67 |
Object-oriented Model Learning Through Multi-level Abstraction |
4, 4, 6 |
0.94 |
Reject |
1144 |
4.67 |
Expressiveness In Deep Reinforcement Learning |
6, 4, 4 |
0.94 |
Reject |
1145 |
4.67 |
Scalable Neural Theorem Proving On Knowledge Bases And Natural Language |
4, 5, 5 |
0.47 |
Reject |
1146 |
4.50 |
Improving On-policy Learning With Statistical Reward Accumulation |
4, 5 |
0.50 |
Reject |
1147 |
4.50 |
Fast Binary Functional Search On Graph |
4, 5 |
0.50 |
Reject |
1148 |
4.50 |
One-shot High-fidelity Imitation: Training Large-scale Deep Nets With Rl |
4, 4, 5, 5 |
0.50 |
Reject |
1149 |
4.50 |
Online Abstraction With Mdp Homomorphisms For Deep Learning |
4, 5 |
0.50 |
N/A |
1150 |
4.50 |
Fast Exploration With Simplified Models And Approximately Optimistic Planning In Model Based Reinforcement Learning |
5, 4 |
0.50 |
Reject |
1151 |
4.50 |
Unification Of Recurrent Neural Network Architectures And Quantum Inspired Stable Design |
5, 4, 4, 5 |
0.50 |
Reject |
1152 |
4.40 |
Context Dependent Modulation Of Activation Function |
4, 4, 4, 4, 6 |
0.80 |
Reject |
1153 |
4.33 |
Multi-objective Value Iteration With Parameterized Threshold-based Safety Constraints |
5, 5, 3 |
0.94 |
Reject |
1154 |
4.33 |
Bridging Hmms And Rnns Through Architectural Transformations |
5, 3, 5 |
0.94 |
N/A |
1155 |
4.33 |
Learning Corresponded Rationales For Text Matching |
6, 4, 3 |
1.25 |
Reject |
1156 |
4.33 |
Modeling Dynamics Of Biological Systems With Deep Generative Neural Networks |
6, 4, 3 |
1.25 |
Reject |
1157 |
4.33 |
Learning Adversarial Examples With Riemannian Geometry |
6, 4, 3 |
1.25 |
Reject |
1158 |
4.33 |
Q-neurons: Neuron Activations Based On Stochastic Jackson's Derivative Operators |
6, 2, 5 |
1.70 |
Reject |
1159 |
4.33 |
From Nodes To Networks: Evolving Recurrent Neural Networks |
5, 4, 4 |
0.47 |
Reject |
1160 |
4.33 |
Combining Global Sparse Gradients With Local Gradients |
5, 5, 3 |
0.94 |
N/A |
1161 |
4.33 |
Efficient Sequence Labeling With Actor-critic Training |
5, 4, 4 |
0.47 |
Reject |
1162 |
4.33 |
Learning A Neural-network-based Representation For Open Set Recognition |
4, 4, 5 |
0.47 |
Reject |
1163 |
4.33 |
Mental Fatigue Monitoring Using Brain Dynamics Preferences |
7, 4, 2 |
2.05 |
Reject |
1164 |
4.33 |
Feed: Feature-level Ensemble Effect For Knowledge Distillation |
5, 4, 4 |
0.47 |
Reject |
1165 |
4.33 |
Pseudosaccades: A Simple Ensemble Scheme For Improving Classification Performance Of Deep Nets |
5, 4, 4 |
0.47 |
Reject |
1166 |
4.33 |
Unsupervised Meta-learning For Reinforcement Learning |
3, 6, 4 |
1.25 |
Reject |
1167 |
4.33 |
Total Style Transfer With A Single Feed-forward Network |
4, 5, 4 |
0.47 |
Reject |
1168 |
4.33 |
Pruning With Hints: An Efficient Framework For Model Acceleration |
4, 5, 4 |
0.47 |
Reject |
1169 |
4.33 |
Prototypical Examples In Deep Learning: Metrics, Characteristics, And Utility |
3, 5, 5 |
0.94 |
Reject |
1170 |
4.33 |
Classifier-agnostic Saliency Map Extraction |
4, 5, 4 |
0.47 |
Reject |
1171 |
4.33 |
A Guider Network For Multi-dual Learning |
4, 5, 4 |
0.47 |
Reject |
1172 |
4.33 |
Targeted Adversarial Examples For Black Box Audio Systems |
4, 6, 3 |
1.25 |
Reject |
1173 |
4.33 |
Meta-learning With Individualized Feature Space For Few-shot Classification |
5, 5, 3 |
0.94 |
Reject |
1174 |
4.33 |
Improving Sample-based Evaluation For Generative Adversarial Networks |
5, 5, 3 |
0.94 |
Reject |
1175 |
4.33 |
Variation Network: Learning High-level Attributes For Controlled Input Manipulation |
3, 6, 4 |
1.25 |
Reject |
1176 |
4.33 |
A Convergent Variant Of The Boltzmann Softmax Operator In Reinforcement Learning |
4, 4, 5 |
0.47 |
Reject |
1177 |
4.33 |
Variational Recurrent Models For Representation Learning |
5, 3, 5 |
0.94 |
Reject |
1178 |
4.33 |
The Natural Language Decathlon: Multitask Learning As Question Answering |
5, 5, 3 |
0.94 |
Reject |
1179 |
4.33 |
On The Effect Of The Activation Function On The Distribution Of Hidden Nodes In A Deep Network |
4, 5, 4 |
0.47 |
Reject |
1180 |
4.33 |
Variational Domain Adaptation |
4, 4, 5 |
0.47 |
Reject |
1181 |
4.33 |
Pixel Redrawn For A Robust Adversarial Defense |
4, 6, 3 |
1.25 |
Reject |
1182 |
4.33 |
Composition And Decomposition Of Gans |
4, 5, 4 |
0.47 |
Reject |
1183 |
4.33 |
Hiding Objects From Detectors: Exploring Transferrable Adversarial Patterns |
6, 4, 3 |
1.25 |
N/A |
1184 |
4.33 |
Representation Flow For Action Recognition |
3, 5, 5 |
0.94 |
N/A |
1185 |
4.33 |
Selective Self-training For Semi-supervised Learning |
4, 5, 4 |
0.47 |
Reject |
1186 |
4.33 |
Realistic Adversarial Examples In 3d Meshes |
5, 5, 3 |
0.94 |
N/A |
1187 |
4.33 |
Odin: Outlier Detection In Neural Networks |
5, 4, 4 |
0.47 |
N/A |
1188 |
4.33 |
Sense: Semantically Enhanced Node Sequence Embedding |
4, 4, 5 |
0.47 |
Reject |
1189 |
4.33 |
Label Smoothing And Logit Squeezing: A Replacement For Adversarial Training? |
7, 4, 2 |
2.05 |
N/A |
1190 |
4.33 |
Blackmarks: Black-box Multi-bit Watermarking For Deep Neural Networks |
5, 4, 4 |
0.47 |
Reject |
1191 |
4.33 |
Neural Probabilistic Motor Primitives For Humanoid Control |
3, 6, 4 |
1.25 |
Accept (Poster) |
1192 |
4.33 |
Compositional Gan: Learning Conditional Image Composition |
4, 4, 5 |
0.47 |
N/A |
1193 |
4.33 |
Representation Compression And Generalization In Deep Neural Networks |
6, 3, 4 |
1.25 |
Reject |
1194 |
4.33 |
Beyond Winning And Losing: Modeling Human Motivations And Behaviors With Vector-valued Inverse Reinforcement Learning |
5, 4, 4 |
0.47 |
Reject |
1195 |
4.33 |
Backdrop: Stochastic Backpropagation |
5, 3, 5 |
0.94 |
Reject |
1196 |
4.33 |
Contextualized Role Interaction For Neural Machine Translation |
4, 5, 4 |
0.47 |
Reject |
1197 |
4.33 |
Task-gan For Improved Gan Based Image Restoration |
4, 5, 4 |
0.47 |
Reject |
1198 |
4.33 |
Model-agnostic Meta-learning For Multimodal Task Distributions |
3, 5, 5 |
0.94 |
Reject |
1199 |
4.33 |
Stacked U-nets: A No-frills Approach To Natural Image Segmentation |
5, 3, 5 |
0.94 |
N/A |
1200 |
4.33 |
Unsupervised Classification Into Unknown Number Of Classes |
4, 4, 5 |
0.47 |
Reject |
1201 |
4.33 |
Fast Object Localization Via Sensitivity Analysis |
4, 6, 3 |
1.25 |
Reject |
1202 |
4.33 |
Variadic Learning By Bayesian Nonparametric Deep Embedding |
5, 4, 4 |
0.47 |
Reject |
1203 |
4.33 |
Asynchronous Sgd Without Gradient Delay For Efficient Distributed Training |
5, 4, 4 |
0.47 |
Reject |
1204 |
4.33 |
Shrinkage-based Bias-variance Trade-off For Deep Reinforcement Learning |
4, 4, 5 |
0.47 |
Reject |
1205 |
4.33 |
Unsupervised Latent Tree Induction With Deep Inside-outside Recursive Auto-encoders |
5, 6, 2 |
1.70 |
N/A |
1206 |
4.33 |
Low-cost Parameterizations Of Deep Convolutional Neural Networks |
4, 4, 5 |
0.47 |
N/A |
1207 |
4.33 |
Successor Uncertainties: Exploration And Uncertainty In Temporal Difference Learning |
4, 5, 4 |
0.47 |
Reject |
1208 |
4.33 |
End-to-end Hierarchical Text Classification With Label Assignment Policy |
5, 4, 4 |
0.47 |
Reject |
1209 |
4.33 |
Generalized Label Propagation Methods For Semi-supervised Learning |
4, 3, 6 |
1.25 |
N/A |
1210 |
4.33 |
Jumpout: Improved Dropout For Deep Neural Networks With Rectified Linear Units |
5, 4, 4 |
0.47 |
Reject |
1211 |
4.33 |
The Cakewalk Method |
5, 4, 4 |
0.47 |
Reject |
1212 |
4.33 |
W2gan: Recovering An Optimal Transport Map With A Gan |
6, 3, 4 |
1.25 |
Reject |
1213 |
4.33 |
Rating Continuous Actions In Spatial Multi-agent Problems |
5, 4, 4 |
0.47 |
Reject |
1214 |
4.33 |
Meta-learning To Guide Segmentation |
7, 3, 3 |
1.89 |
Reject |
1215 |
4.33 |
Stochastic Quantized Activation: To Prevent Overfitting In Fast Adversarial Training |
4, 5, 4 |
0.47 |
Reject |
1216 |
4.33 |
Dppnet: Approximating Determinantal Point Processes With Deep Networks |
3, 5, 5 |
0.94 |
Reject |
1217 |
4.33 |
Evolutionary-neural Hybrid Agents For Architecture Search |
5, 4, 4 |
0.47 |
Reject |
1218 |
4.33 |
Topicgan: Unsupervised Text Generation From Explainable Latent Topics |
4, 4, 5 |
0.47 |
Reject |
1219 |
4.33 |
Select Via Proxy: Efficient Data Selection For Training Deep Networks |
4, 4, 5 |
0.47 |
Reject |
1220 |
4.33 |
Deep Perm-set Net: Learn To Predict Sets With Unknown Permutation And Cardinality Using Deep Neural Networks |
7, 3, 3 |
1.89 |
Reject |
1221 |
4.33 |
In Your Pace: Learning The Right Example At The Right Time |
5, 4, 4 |
0.47 |
Reject |
1222 |
4.33 |
Modulating Transfer Between Tasks In Gradient-based Meta-learning |
5, 4, 4 |
0.47 |
Reject |
1223 |
4.33 |
A Preconditioned Accelerated Stochastic Gradient Descent Algorithm |
4, 4, 5 |
0.47 |
Reject |
1224 |
4.33 |
Adaptive Convolutional Neural Networks |
5, 4, 4 |
0.47 |
Reject |
1225 |
4.33 |
Inter-bmv: Interpolation With Block Motion Vectors For Fast Semantic Segmentation On Video |
5, 3, 5 |
0.94 |
Reject |
1226 |
4.33 |
Deeptwist: Learning Model Compression Via Occasional Weight Distortion |
5, 4, 4 |
0.47 |
Reject |
1227 |
4.33 |
Na |
3, 5, 5 |
0.94 |
N/A |
1228 |
4.33 |
N/a |
5, 4, 4 |
0.47 |
N/A |
1229 |
4.33 |
Generative Adversarial Interpolative Autoencoding: Adversarial Training On Latent Space Interpolations Encourages Convex Latent Distributions |
5, 4, 4 |
0.47 |
Reject |
1230 |
4.33 |
Aciq: Analytical Clipping For Integer Quantization Of Neural Networks |
4, 4, 5 |
0.47 |
Reject |
1231 |
4.33 |
Explainable Adversarial Learning: Implicit Generative Modeling Of Random Noise During Training For Adversarial Robustness |
3, 5, 5 |
0.94 |
N/A |
1232 |
4.33 |
Combining Learned Representations For Combinatorial Optimization |
4, 4, 5 |
0.47 |
Reject |
1233 |
4.33 |
Visual Imitation Learning With Recurrent Siamese Networks |
4, 4, 5 |
0.47 |
Reject |
1234 |
4.33 |
How To Learn (and How Not To Learn) Multi-hop Reasoning With Memory Networks |
3, 5, 5 |
0.94 |
N/A |
1235 |
4.33 |
Teaching To Teach By Structured Dark Knowledge |
4, 3, 6 |
1.25 |
Reject |
1236 |
4.33 |
Confidence Calibration In Deep Neural Networks Through Stochastic Inferences |
5, 3, 5 |
0.94 |
N/A |
1237 |
4.33 |
Isolating Effects Of Age With Fair Representation Learning When Assessing Dementia |
4, 4, 5 |
0.47 |
N/A |
1238 |
4.33 |
A Single Shot Pca-driven Analysis Of Network Structure To Remove Redundancy |
4, 4, 5 |
0.47 |
N/A |
1239 |
4.33 |
Learning To Control Visual Abstractions For Structured Exploration In Deep Reinforcement Learning |
4, 5, 4 |
0.47 |
Reject |
1240 |
4.33 |
Log Hyperbolic Cosine Loss Improves Variational Auto-encoder |
4, 4, 5 |
0.47 |
Reject |
1241 |
4.33 |
Locally Linear Unsupervised Feature Selection |
4, 6, 3 |
1.25 |
Reject |
1242 |
4.33 |
Sequence Modelling With Auto-addressing And Recurrent Memory Integrating Networks |
4, 4, 5 |
0.47 |
Reject |
1243 |
4.33 |
Learning What To Remember: Long-term Episodic Memory Networks For Learning From Streaming Data |
5, 4, 4 |
0.47 |
Reject |
1244 |
4.33 |
Universal Attacks On Equivariant Networks |
4, 4, 5 |
0.47 |
Reject |
1245 |
4.33 |
Incsql: Training Incremental Text-to-sql Parsers With Non-deterministic Oracles |
4, 6, 3 |
1.25 |
N/A |
1246 |
4.33 |
Provable Defenses Against Spatially Transformed Adversarial Inputs: Impossibility And Possibility Results |
5, 3, 5 |
0.94 |
Reject |
1247 |
4.33 |
Auto-encoding Knockoff Generator For Fdr Controlled Variable Selection |
3, 4, 6 |
1.25 |
Reject |
1248 |
4.33 |
Learning Grounded Sentence Representations By Jointly Using Video And Text Information |
4, 3, 6 |
1.25 |
N/A |
1249 |
4.33 |
Neural Rendering Model: Joint Generation And Prediction For Semi-supervised Learning |
5, 5, 3 |
0.94 |
Reject |
1250 |
4.33 |
Manifoldnet: A Deep Neural Network For Manifold-valued Data |
5, 4, 4 |
0.47 |
Reject |
1251 |
4.33 |
Unsupervised Word Discovery With Segmental Neural Language Models |
4, 3, 6 |
1.25 |
Reject |
1252 |
4.33 |
Network Reparameterization For Unseen Class Categorization |
5, 3, 5 |
0.94 |
N/A |
1253 |
4.33 |
Deep Geometrical Graph Classification |
4, 3, 6 |
1.25 |
Reject |
1254 |
4.33 |
Generative Models From The Perspective Of Continual Learning |
4, 5, 4 |
0.47 |
Reject |
1255 |
4.33 |
Adversarial Examples Are A Natural Consequence Of Test Error In Noise |
4, 5, 4 |
0.47 |
Reject |
1256 |
4.33 |
Wasserstein Proximal Of Gans |
3, 6, 4 |
1.25 |
Reject |
1257 |
4.33 |
From Adversarial Training To Generative Adversarial Networks |
3, 6, 4 |
1.25 |
N/A |
1258 |
4.33 |
Adversarial Decomposition Of Text Representation |
3, 6, 4 |
1.25 |
N/A |
1259 |
4.33 |
Neuron Hierarchical Networks |
5, 4, 4 |
0.47 |
N/A |
1260 |
4.33 |
Sample Efficient Deep Neuroevolution In Low Dimensional Latent Space |
4, 5, 4 |
0.47 |
Reject |
1261 |
4.33 |
Downsampling Leads To Image Memorization In Convolutional Autoencoders |
3, 5, 5 |
0.94 |
Reject |
1262 |
4.33 |
Online Learning For Supervised Dimension Reduction |
2, 5, 6 |
1.70 |
Reject |
1263 |
4.33 |
Nice: Noise Injection And Clamping Estimation For Neural Network Quantization |
4, 5, 4 |
0.47 |
Reject |
1264 |
4.33 |
Assessing Generalization In Deep Reinforcement Learning |
3, 5, 5 |
0.94 |
Reject |
1265 |
4.33 |
Do Language Models Have Common Sense? |
5, 4, 4 |
0.47 |
Reject |
1266 |
4.33 |
Efficient Convolutional Neural Network Training With Direct Feedback Alignment |
4, 4, 5 |
0.47 |
Reject |
1267 |
4.33 |
Q-map: A Convolutional Approach For Goal-oriented Reinforcement Learning |
5, 4, 4 |
0.47 |
Reject |
1268 |
4.33 |
Deep Ensemble Bayesian Active Learning : Adressing The Mode Collapse Issue In Monte Carlo Dropout Via Ensembles |
4, 4, 5 |
0.47 |
Reject |
1269 |
4.33 |
Salsa-text : Self Attentive Latent Space Based Adversarial Text Generation |
4, 4, 5 |
0.47 |
N/A |
1270 |
4.33 |
Pie: Pseudo-invertible Encoder |
3, 5, 5 |
0.94 |
Reject |
1271 |
4.33 |
On Inductive Biases In Deep Reinforcement Learning |
3, 3, 7 |
1.89 |
Reject |
1272 |
4.33 |
Na |
5, 4, 4 |
0.47 |
N/A |
1273 |
4.33 |
Dvolver: Efficient Pareto-optimal Neural Network Architecture Search |
4, 5, 4 |
0.47 |
Reject |
1274 |
4.33 |
Dual Learning: Theoretical Study And Algorithmic Extensions |
6, 2, 5 |
1.70 |
Reject |
1275 |
4.33 |
Gradient-based Learning For F-measure And Other Performance Metrics |
5, 3, 5 |
0.94 |
Reject |
1276 |
4.33 |
Robust Text Classifier On Test-time Budgets |
4, 4, 5 |
0.47 |
N/A |
1277 |
4.33 |
Recycling The Discriminator For Improving The Inference Mapping Of Gan |
3, 3, 7 |
1.89 |
Reject |
1278 |
4.33 |
Shamann: Shared Memory Augmented Neural Networks |
4, 5, 4 |
0.47 |
Reject |
1279 |
4.33 |
Recovering The Lowest Layer Of Deep Networks With High Threshold Activations |
4, 5, 4 |
0.47 |
Reject |
1280 |
4.33 |
Exploration By Uncertainty In Reward Space |
5, 5, 3 |
0.94 |
Reject |
1281 |
4.33 |
Modulated Variational Auto-encoders For Many-to-many Musical Timbre Transfer |
5, 5, 3 |
0.94 |
Reject |
1282 |
4.33 |
On Generalization Bounds Of A Family Of Recurrent Neural Networks |
4, 6, 3 |
1.25 |
Reject |
1283 |
4.33 |
Feature Matters: A Stage-by-stage Approach For Task Independent Knowledge Transfer |
5, 4, 4 |
0.47 |
N/A |
1284 |
4.33 |
Sentence Encoding With Tree-constrained Relation Networks |
3, 5, 5 |
0.94 |
Reject |
1285 |
4.25 |
A Priori Estimates Of The Generalization Error For Two-layer Neural Networks |
4, 4, 4, 5 |
0.43 |
Reject |
1286 |
4.25 |
Understanding The Asymptotic Performance Of Model-based Rl Methods |
5, 6, 4, 2 |
1.48 |
Reject |
1287 |
4.25 |
Discovering General-purpose Active Learning Strategies |
5, 4, 4, 4 |
0.43 |
Reject |
1288 |
4.25 |
On Meaning-preserving Adversarial Perturbations For Sequence-to-sequence Models |
4, 4, 3, 6 |
1.09 |
Reject |
1289 |
4.25 |
Characterizing The Accuracy/complexity Landscape Of Explanations Of Deep Networks Through Knowledge Extraction |
4, 4, 4, 5 |
0.43 |
Reject |
1290 |
4.25 |
Countdown Regression: Sharp And Calibrated Survival Predictions |
4, 4, 4, 5 |
0.43 |
Reject |
1291 |
4.00 |
Neural Regression Tree |
5, 3, 4 |
0.82 |
Reject |
1292 |
4.00 |
The Effectiveness Of Layer-by-layer Training Using The Information Bottleneck Principle |
5, 2, 5 |
1.41 |
Reject |
1293 |
4.00 |
S-system, Geometry, Learning, And Optimization: A Theory Of Neural Networks |
4, 4 |
0.00 |
Reject |
1294 |
4.00 |
Learning Latent Semantic Representation From Pre-defined Generative Model |
5, 3, 4 |
0.82 |
Reject |
1295 |
4.00 |
Fatty And Skinny: A Joint Training Method Of Watermark Encoder And Decoder |
4, 4, 4 |
0.00 |
N/A |
1296 |
4.00 |
Reconciling Feature-reuse And Overfitting In Densenet With Specialized Dropout |
5, 3, 4 |
0.82 |
Reject |
1297 |
4.00 |
Functional Bayesian Neural Networks For Model Uncertainty Quantification |
3, 4, 5 |
0.82 |
Reject |
1298 |
4.00 |
Empirically Characterizing Overparameterization Impact On Convergence |
5, 4, 3 |
0.82 |
Reject |
1299 |
4.00 |
Exploration Using Distributional Rl And Ucb |
4, 4, 4 |
0.00 |
N/A |
1300 |
4.00 |
Revisting Negative Transfer Using Adversarial Learning |
4, 2, 6 |
1.63 |
Reject |
1301 |
4.00 |
Distilled Agent Dqn For Provable Adversarial Robustness |
5, 3, 4 |
0.82 |
Reject |
1302 |
4.00 |
Reinforced Pipeline Optimization: Behaving Optimally With Non-differentiabilities |
4, 5, 3 |
0.82 |
Reject |
1303 |
4.00 |
Generalized Capsule Networks With Trainable Routing Procedure |
5, 3, 4 |
0.82 |
Reject |
1304 |
4.00 |
Learning From Noisy Demonstration Sets Via Meta-learned Suitability Assessor |
4, 4, 4 |
0.00 |
Reject |
1305 |
4.00 |
Latent Domain Transfer: Crossing Modalities With Bridging Autoencoders |
4, 4, 4 |
0.00 |
Reject |
1306 |
4.00 |
Complexity Of Training Relu Neural Networks |
3, 5, 4 |
0.82 |
Reject |
1307 |
4.00 |
Learning To Search Efficient Densenet With Layer-wise Pruning |
4, 4, 4 |
0.00 |
Reject |
1308 |
4.00 |
Conditional Inference In Pre-trained Variational Autoencoders Via Cross-coding |
4, 4, 4 |
0.00 |
Reject |
1309 |
4.00 |
The Wisdom Of The Crowd: Reliable Deep Reinforcement Learning Through Ensembles Of Q-functions |
4, 5, 3 |
0.82 |
Reject |
1310 |
4.00 |
Guaranteed Recovery Of One-hidden-layer Neural Networks Via Cross Entropy |
3, 4, 5 |
0.82 |
Reject |
1311 |
4.00 |
D2ke: From Distance To Kernel And Embedding Via Random Features For Structured Inputs |
4, 3, 5 |
0.82 |
N/A |
1312 |
4.00 |
Graph Generation Via Scattering |
4, 4, 4 |
0.00 |
Reject |
1313 |
4.00 |
Classification Of Building Noise Type/position Via Supervised Learning |
4, 4, 4 |
0.00 |
N/A |
1314 |
4.00 |
Explaining Neural Networks Semantically And Quantitatively |
4, 4, 4 |
0.00 |
N/A |
1315 |
4.00 |
Look Ma, No Gans! Image Transformation With Modifae |
3, 4, 5 |
0.82 |
Reject |
1316 |
4.00 |
The Forward-backward Embedding Of Directed Graphs |
5, 3, 4 |
0.82 |
Reject |
1317 |
4.00 |
Efficient Exploration Through Bayesian Deep Q-networks |
6, 4, 4, 2 |
1.41 |
Reject |
1318 |
4.00 |
Hc-net: Memory-based Incremental Dual-network System For Continual Learning |
4, 4, 4 |
0.00 |
Reject |
1319 |
4.00 |
Multi-task Learning For Semantic Parsing With Cross-domain Sketch |
3, 4, 5 |
0.82 |
Reject |
1320 |
4.00 |
Rnns With Private And Shared Representations For Semi-supervised Sequence Learning |
3, 5, 4 |
0.82 |
N/A |
1321 |
4.00 |
Na |
6, 2, 4 |
1.63 |
N/A |
1322 |
4.00 |
The Missing Ingredient In Zero-shot Neural Machine Translation |
5, 4, 3 |
0.82 |
N/A |
1323 |
4.00 |
Evading Defenses To Transferable Adversarial Examples By Mitigating Attention Shift |
4, 4, 4 |
0.00 |
N/A |
1324 |
4.00 |
Applications Of Gaussian Processes In Finance |
4, 5, 3 |
0.82 |
N/A |
1325 |
4.00 |
Evaluating Gans Via Duality |
4, 3, 5 |
0.82 |
Reject |
1326 |
4.00 |
Neural Network Cost Landscapes As Quantum States |
5, 3, 4 |
0.82 |
Reject |
1327 |
4.00 |
Deep Adversarial Forward Model |
4, 4, 4 |
0.00 |
Reject |
1328 |
4.00 |
In Search Of Theoretically Grounded Pruning |
4, 3, 5 |
0.82 |
N/A |
1329 |
4.00 |
Mol-cyclegan - A Generative Model For Molecular Optimization |
4, 4, 4 |
0.00 |
Reject |
1330 |
4.00 |
Overlapping Community Detection With Graph Neural Networks |
5, 3, 4 |
0.82 |
Reject |
1331 |
4.00 |
Chaingan: A Sequential Approach To Gans |
4, 4, 4 |
0.00 |
Reject |
1332 |
4.00 |
Constraining Action Sequences With Formal Languages For Deep Reinforcement Learning |
5, 3, 4 |
0.82 |
Reject |
1333 |
4.00 |
Trajectory Vae For Multi-modal Imitation |
4, 4, 4 |
0.00 |
Reject |
1334 |
4.00 |
Improving Machine Classification Using Human Uncertainty Measurements |
6, 3, 3 |
1.41 |
Reject |
1335 |
4.00 |
Differentially Private Federated Learning: A Client Level Perspective |
4, 4, 4 |
0.00 |
Reject |
1336 |
4.00 |
N/a |
4, 5, 3 |
0.82 |
N/A |
1337 |
4.00 |
Exploiting Invariant Structures For Compression In Neural Networks |
4, 4, 4 |
0.00 |
N/A |
1338 |
4.00 |
Continual Learning Via Explicit Structure Learning |
4, 4, 4 |
0.00 |
Reject |
1339 |
4.00 |
Difference-seeking Generative Adversarial Network |
5, 4, 3 |
0.82 |
Reject |
1340 |
4.00 |
Assumption Questioning: Latent Copying And Reward Exploitation In Question Generation |
4, 3, 5 |
0.82 |
Reject |
1341 |
4.00 |
Second-order Adversarial Attack And Certifiable Robustness |
4, 5, 3 |
0.82 |
Reject |
1342 |
4.00 |
A Multi-modal One-class Generative Adversarial Network For Anomaly Detection In Manufacturing |
3, 4, 5 |
0.82 |
Reject |
1343 |
4.00 |
Deep Generative Models For Learning Coherent Latent Representations From Multi-modal Data |
4, 4, 4 |
0.00 |
Reject |
1344 |
4.00 |
Sequenced-replacement Sampling For Deep Learning |
3, 5, 4 |
0.82 |
Reject |
1345 |
4.00 |
Sample-efficient Policy Learning In Multi-agent Reinforcement Learning Via Meta-learning |
4, 4, 4 |
0.00 |
Reject |
1346 |
4.00 |
Overfitting Detection Of Deep Neural Networks Without A Hold Out Set |
4, 5, 3 |
0.82 |
Reject |
1347 |
4.00 |
On The Selection Of Initialization And Activation Function For Deep Neural Networks |
3, 4, 5 |
0.82 |
Reject |
1348 |
4.00 |
Constrained Bayesian Optimization For Automatic Chemical Design |
3, 4, 5 |
0.82 |
Reject |
1349 |
4.00 |
Dual Importance Weight Gan |
4, 3, 5 |
0.82 |
N/A |
1350 |
4.00 |
Robustness And Equivariance Of Neural Networks |
3, 4, 5 |
0.82 |
Reject |
1351 |
4.00 |
Incremental Hierarchical Reinforcement Learning With Multitask Lmdps |
3, 4, 5 |
0.82 |
Reject |
1352 |
4.00 |
Cosine Similarity-based Adversarial Process |
4, 3, 5 |
0.82 |
N/A |
1353 |
4.00 |
Training Hard-threshold Networks With Combinatorial Search In A Discrete Target Propagation Setting |
3, 4, 5 |
0.82 |
Reject |
1354 |
4.00 |
On The Use Of Convolutional Auto-encoder For Incremental Classifier Learning In Context Aware Advertisement |
5, 4, 3 |
0.82 |
Reject |
1355 |
4.00 |
Exploration Of Efficient On-device Acoustic Modeling With Neural Networks |
4, 4, 4 |
0.00 |
Reject |
1356 |
4.00 |
Microgan: Promoting Variety Through Microbatch Discrimination |
3, 3, 6 |
1.41 |
Reject |
1357 |
4.00 |
Better Accuracy With Quantified Privacy: Representations Learned Via Reconstructive Adversarial Network |
4, 5, 3 |
0.82 |
Reject |
1358 |
4.00 |
Merci: A New Metric To Evaluate The Correlation Between Predictive Uncertainty And True Error |
4, 5, 3 |
0.82 |
Reject |
1359 |
4.00 |
Deep Processing Of Structured Data |
4, 4, 4 |
0.00 |
Reject |
1360 |
4.00 |
Iterative Binary Decisions |
4, 4, 4 |
0.00 |
N/A |
1361 |
4.00 |
Understanding Opportunities For Efficiency In Single-image Super Resolution Networks |
4, 5, 3 |
0.82 |
Reject |
1362 |
4.00 |
Unsupervised Convolutional Neural Networks For Accurate Video Frame Interpolation With Integration Of Motion Components |
3, 5, 4 |
0.82 |
N/A |
1363 |
4.00 |
Learning Representations In Model-free Hierarchical Reinforcement Learning |
5, 4, 3 |
0.82 |
Reject |
1364 |
4.00 |
Accidental Exploration Through Value Predictors |
4, 5, 3 |
0.82 |
Reject |
1365 |
4.00 |
Language Modeling With Graph Temporal Convolutional Networks |
4, 4, 4 |
0.00 |
Reject |
1366 |
4.00 |
Overcoming Catastrophic Forgetting Through Weight Consolidation And Long-term Memory |
4, 4, 4 |
0.00 |
Reject |
1367 |
4.00 |
Modular Deep Probabilistic Programming |
3, 4, 5 |
0.82 |
Reject |
1368 |
4.00 |
Relational Graph Attention Networks |
4, 4, 4 |
0.00 |
Reject |
1369 |
4.00 |
Towards More Theoretically-grounded Particle Optimization Sampling For Deep Learning |
5, 4, 3 |
0.82 |
Reject |
1370 |
4.00 |
Layerwise Recurrent Autoencoder For General Real-world Traffic Flow Forecasting |
4, 5, 3 |
0.82 |
Reject |
1371 |
4.00 |
Activity Regularization For Continual Learning |
4, 4, 4 |
0.00 |
Reject |
1372 |
4.00 |
Dynamic Pricing On E-commerce Platform With Deep Reinforcement Learning |
4, 4, 4 |
0.00 |
Reject |
1373 |
4.00 |
Defactor: Differentiable Edge Factorization-based Probabilistic Graph Generation |
3, 5, 4 |
0.82 |
Reject |
1374 |
4.00 |
Data Poisoning Attack Against Unsupervised Node Embedding Methods |
4, 4, 4 |
0.00 |
N/A |
1375 |
4.00 |
Distributionally Robust Optimization Leads To Better Generalization: On Sgd And Beyond |
3, 4, 5 |
0.82 |
Reject |
1376 |
4.00 |
Uainets: From Unsupervised To Active Deep Anomaly Detection |
4, 5, 3 |
0.82 |
Reject |
1377 |
4.00 |
Implicit Maximum Likelihood Estimation |
4, 3, 5 |
0.82 |
Reject |
1378 |
4.00 |
Prob2vec: Mathematical Semantic Embedding For Problem Retrieval In Adaptive Tutoring |
3, 5, 4 |
0.82 |
Reject |
1379 |
4.00 |
Hyper-regularization: An Adaptive Choice For The Learning Rate In Gradient Descent |
4, 4, 4 |
0.00 |
Reject |
1380 |
4.00 |
A Teacher Student Network For Faster Video Classification |
4, 4, 4 |
0.00 |
N/A |
1381 |
4.00 |
Deepström Networks |
4, 5, 3 |
0.82 |
N/A |
1382 |
4.00 |
Universal Discriminative Quantum Neural Networks |
5, 5, 2 |
1.41 |
Reject |
1383 |
4.00 |
Morpho-mnist: Quantitative Assessment And Diagnostics For Representation Learning |
3, 5, 4 |
0.82 |
Reject |
1384 |
4.00 |
Uncertainty-guided Lifelong Learning In Bayesian Networks |
4, 4, 4 |
0.00 |
Reject |
1385 |
4.00 |
Distinguishability Of Adversarial Examples |
4, 4, 4 |
0.00 |
Reject |
1386 |
4.00 |
Pearl: Prototype Learning Via Rule Lists |
5, 3, 4 |
0.82 |
Reject |
1387 |
4.00 |
Nuts: Network For Unsupervised Telegraphic Summarization |
4, 4, 4 |
0.00 |
Reject |
1388 |
4.00 |
Unsupervised Exploration With Deep Model-based Reinforcement Learning |
4, 4, 4 |
0.00 |
Reject |
1389 |
4.00 |
Adversarial Attacks For Optical Flow-based Action Recognition Classifiers |
4, 3, 5 |
0.82 |
Reject |
1390 |
4.00 |
Found By Nemo: Unsupervised Object Detection From Negative Examples And Motion |
5, 3, 4 |
0.82 |
N/A |
1391 |
4.00 |
On The Statistical And Information Theoretical Characteristics Of Dnn Representations |
5, 4, 3 |
0.82 |
Reject |
1392 |
4.00 |
Decoupling Feature Extraction From Policy Learning: Assessing Benefits Of State Representation Learning In Goal Based Robotics |
5, 3, 4 |
0.82 |
Reject |
1393 |
4.00 |
Ain't Nobody Got Time For Coding: Structure-aware Program Synthesis From Natural Language |
4, 4, 4 |
0.00 |
Reject |
1394 |
4.00 |
On The Trajectory Of Stochastic Gradient Descent In The Information Plane |
4, 6, 2 |
1.63 |
Reject |
1395 |
4.00 |
Polar Prototype Networks |
5, 3, 4 |
0.82 |
Reject |
1396 |
3.75 |
The Body Is Not A Given: Joint Agent Policy Learning And Morphology Evolution |
4, 4, 3, 4 |
0.43 |
N/A |
1397 |
3.75 |
Lsh Microbatches For Stochastic Gradients: Value In Rearrangement |
4, 4, 3, 4 |
0.43 |
Reject |
1398 |
3.67 |
Fake Sentence Detection As A Training Task For Sentence Encoding |
5, 3, 3 |
0.94 |
Reject |
1399 |
3.67 |
Inhibited Softmax For Uncertainty Estimation In Neural Networks |
4, 4, 3 |
0.47 |
N/A |
1400 |
3.67 |
Parametrizing Fully Convolutional Nets With A Single High-order Tensor |
4, 3, 4 |
0.47 |
N/A |
1401 |
3.67 |
A Walk With Sgd: How Sgd Explores Regions Of Deep Network Loss? |
4, 4, 3 |
0.47 |
Reject |
1402 |
3.67 |
Interpretable Convolutional Filter Pruning |
4, 4, 3 |
0.47 |
Reject |
1403 |
3.67 |
A Fully Automated Periodicity Detection In Time Series |
3, 5, 3 |
0.94 |
Reject |
1404 |
3.67 |
Geometric Augmentation For Robust Neural Network Classifiers |
4, 4, 3 |
0.47 |
Reject |
1405 |
3.67 |
Optimized Gated Deep Learning Architectures For Sensor Fusion |
4, 4, 3 |
0.47 |
Reject |
1406 |
3.67 |
Graph Spectral Regularization For Neural Network Interpretability |
4, 3, 4 |
0.47 |
Reject |
1407 |
3.67 |
Synthnet: Learning Synthesizers End-to-end |
4, 4, 3 |
0.47 |
Reject |
1408 |
3.67 |
Question Generation Using A Scratchpad Encoder |
4, 3, 4 |
0.47 |
Reject |
1409 |
3.67 |
Spectral Convolutional Networks On Hierarchical Multigraphs |
4, 3, 4 |
0.47 |
N/A |
1410 |
3.67 |
Feature Transformers: A Unified Representation Learning Framework For Lifelong Learning |
4, 3, 4 |
0.47 |
Reject |
1411 |
3.67 |
Learning Robust, Transferable Sentence Representations For Text Classification |
4, 3, 4 |
0.47 |
N/A |
1412 |
3.67 |
Using Deep Siamese Neural Networks To Speed Up Natural Products Research |
4, 3, 4 |
0.47 |
Reject |
1413 |
3.67 |
Unsupervised Video-to-video Translation |
3, 4, 4 |
0.47 |
Reject |
1414 |
3.67 |
Bilingual-gan: Neural Text Generation And Neural Machine Translation As Two Sides Of The Same Coin |
3, 4, 4 |
0.47 |
N/A |
1415 |
3.67 |
Optimizing For Generalization In Machine Learning With Cross-validation Gradients |
5, 2, 4 |
1.25 |
Reject |
1416 |
3.67 |
Radial Basis Feature Transformation To Arm Cnns Against Adversarial Attacks |
4, 4, 3 |
0.47 |
Reject |
1417 |
3.67 |
Feature Attribution As Feature Selection |
4, 4, 3 |
0.47 |
Reject |
1418 |
3.67 |
Filter Training And Maximum Response: Classification Via Discerning |
2, 3, 6 |
1.70 |
Reject |
1419 |
3.67 |
Unsupervised Monocular Depth Estimation With Clear Boundaries |
4, 4, 3 |
0.47 |
N/A |
1420 |
3.67 |
Beyond Games: Bringing Exploration To Robots In Real-world |
3, 3, 5 |
0.94 |
Reject |
1421 |
3.67 |
Withdrawn |
4, 4, 3 |
0.47 |
N/A |
1422 |
3.67 |
Prior Networks For Detection Of Adversarial Attacks |
3, 4, 4 |
0.47 |
Reject |
1423 |
3.67 |
Accelerating First Order Optimization Algorithms |
3, 4, 4 |
0.47 |
Reject |
1424 |
3.67 |
Mixture Of Pre-processing Experts Model For Noise Robust Deep Learning On Resource Constrained Platforms |
3, 4, 4 |
0.47 |
Reject |
1425 |
3.67 |
Text Embeddings For Retrieval From A Large Knowledge Base |
3, 3, 5 |
0.94 |
Reject |
1426 |
3.67 |
Discrete Structural Planning For Generating Diverse Translations |
2, 4, 5 |
1.25 |
Reject |
1427 |
3.67 |
Structured Prediction Using Cgans With Fusion Discriminator |
5, 3, 3 |
0.94 |
Reject |
1428 |
3.67 |
Na |
4, 4, 3 |
0.47 |
N/A |
1429 |
3.67 |
Normalization Gradients Are Least-squares Residuals |
4, 4, 3 |
0.47 |
Reject |
1430 |
3.67 |
Deep Hierarchical Model For Hierarchical Selective Classification And Zero Shot Learning |
4, 5, 2 |
1.25 |
Reject |
1431 |
3.67 |
Graph Learning Network: A Structure Learning Algorithm |
4, 3, 4 |
0.47 |
Reject |
1432 |
3.67 |
Quantile Regression Reinforcement Learning With State Aligned Vector Rewards |
4, 3, 4 |
0.47 |
N/A |
1433 |
3.67 |
Towards The Latent Transcriptome |
4, 2, 5 |
1.25 |
Reject |
1434 |
3.67 |
Explaining Alphago: Interpreting Contextual Effects In Neural Networks |
3, 4, 4 |
0.47 |
N/A |
1435 |
3.67 |
Dyncnn: An Effective Dynamic Architecture On Convolutional Neural Network For Surveillance Videos |
3, 4, 4 |
0.47 |
Reject |
1436 |
3.67 |
D-gan: Divergent Generative Adversarial Network For Positive Unlabeled Learning And Counter-examples Generation |
3, 5, 3 |
0.94 |
Reject |
1437 |
3.67 |
Using Word Embeddings To Explore The Learned Representations Of Convolutional Neural Networks |
3, 4, 4 |
0.47 |
Reject |
1438 |
3.67 |
Automatic Generation Of Object Shapes With Desired Functionalities |
5, 3, 3 |
0.94 |
Reject |
1439 |
3.67 |
Delibgan: Coarse-to-fine Text Generation Via Adversarial Network |
4, 3, 4 |
0.47 |
Reject |
1440 |
3.67 |
Rethinking Self-driving : Multi -task Knowledge For Better Generalization And Accident Explanation Ability |
4, 4, 3 |
0.47 |
Reject |
1441 |
3.67 |
Gradmix: Multi-source Transfer Across Domains And Tasks |
5, 3, 3 |
0.94 |
N/A |
1442 |
3.67 |
Polycnn: Learning Seed Convolutional Filters |
3, 4, 4 |
0.47 |
N/A |
1443 |
3.67 |
Why Do Neural Response Generation Models Prefer Universal Replies? |
3, 7, 1 |
2.49 |
Reject |
1444 |
3.67 |
Residual Networks Classify Inputs Based On Their Neural Transient Dynamics |
4, 2, 5 |
1.25 |
Reject |
1445 |
3.67 |
Object-contrastive Networks: Unsupervised Object Representations |
3, 3, 5 |
0.94 |
N/A |
1446 |
3.67 |
Latent Transformations For Object View Points Synthesis |
2, 4, 5 |
1.25 |
N/A |
1447 |
3.67 |
An Attention-based Model For Learning Dynamic Interaction Networks |
4, 3, 4 |
0.47 |
N/A |
1448 |
3.67 |
Hierarchical Attention: What Really Counts In Various Nlp Tasks |
4, 3, 4 |
0.47 |
Reject |
1449 |
3.67 |
Modeling Evolution Of Language Through Time With Neural Networks |
3, 4, 4 |
0.47 |
N/A |
1450 |
3.67 |
Contextual Recurrent Convolutional Model For Robust Visual Learning |
4, 3, 4 |
0.47 |
Reject |
1451 |
3.67 |
Generating Images From Sounds Using Multimodal Features And Gans |
3, 4, 4 |
0.47 |
Reject |
1452 |
3.67 |
Differentiable Greedy Networks |
5, 2, 4 |
1.25 |
N/A |
1453 |
3.67 |
Learning Agents With Prioritization And Parameter Noise In Continuous State And Action Space |
3, 4, 4 |
0.47 |
Reject |
1454 |
3.67 |
Optimization On Multiple Manifolds |
7, 1, 3 |
2.49 |
Reject |
1455 |
3.67 |
Unsupervised One-to-many Image Translation |
3, 4, 4 |
0.47 |
Reject |
1456 |
3.67 |
Adversarially Robust Training Through Structured Gradient Regularization |
4, 4, 3 |
0.47 |
Reject |
1457 |
3.67 |
Distributed Deep Policy Gradient For Competitive Adversarial Environment |
4, 4, 3 |
0.47 |
N/A |
1458 |
3.67 |
Diminishing Batch Normalization |
4, 3, 4 |
0.47 |
Reject |
1459 |
3.67 |
Efficient Federated Learning Via Variational Dropout |
4, 4, 3 |
0.47 |
N/A |
1460 |
3.67 |
Pcnn: Environment Adaptive Model Without Finetuning |
4, 3, 4 |
0.47 |
Reject |
1461 |
3.67 |
Localized Random Projections Challenge Benchmarks For Bio-plausible Deep Learning |
5, 3, 3 |
0.94 |
Reject |
1462 |
3.67 |
Riemannian Stochastic Gradient Descent For Tensor-train Recurrent Neural Networks |
4, 4, 3 |
0.47 |
Reject |
1463 |
3.67 |
Imposing Category Trees Onto Word-embeddings Using A Geometric Construction |
4, 4, 3 |
0.47 |
Accept (Poster) |
1464 |
3.67 |
Few-shot Intent Inference Via Meta-inverse Reinforcement Learning |
3, 4, 4 |
0.47 |
Reject |
1465 |
3.67 |
Controlling Over-generalization And Its Effect On Adversarial Examples Detection And Generation |
4, 4, 3 |
0.47 |
Reject |
1466 |
3.67 |
Image Score: How To Select Useful Samples |
4, 4, 3 |
0.47 |
Reject |
1467 |
3.50 |
Bamboo: Ball-shape Data Augmentation Against Adversarial Attacks From All Directions |
4, 3 |
0.50 |
N/A |
1468 |
3.50 |
Learning To Reinforcement Learn By Imitation |
4, 3, 2, 5 |
1.12 |
Reject |
1469 |
3.50 |
Mctsbug: Generating Adversarial Text Sequences Via Monte Carlo Tree Search And Homoglyph Attack |
3, 4 |
0.50 |
N/A |
1470 |
3.33 |
Major-minor Lstms For Word-level Language Model |
3, 4, 3 |
0.47 |
N/A |
1471 |
3.33 |
Human Action Recognition Based On Spatial-temporal Attention |
4, 3, 3 |
0.47 |
Reject |
1472 |
3.33 |
Generative Model Based On Minimizing Exact Empirical Wasserstein Distance |
5, 2, 3 |
1.25 |
Reject |
1473 |
3.33 |
Neural Network Regression With Beta, Dirichlet, And Dirichlet-multinomial Outputs |
3, 3, 4 |
0.47 |
Reject |
1474 |
3.33 |
A Quantifiable Testing Of Global Translational Invariance In Convolutional And Capsule Networks |
3, 4, 3 |
0.47 |
Reject |
1475 |
3.33 |
Na |
4, 5, 1 |
1.70 |
N/A |
1476 |
3.33 |
Interpreting Layered Neural Networks Via Hierarchical Modular Representation |
4, 3, 3 |
0.47 |
Reject |
1477 |
3.33 |
Attack Graph Convolutional Networks By Adding Fake Nodes |
4, 3, 3 |
0.47 |
Reject |
1478 |
3.33 |
Visualizing And Understanding The Semantics Of Embedding Spaces Via Algebraic Formulae |
3, 3, 4 |
0.47 |
Reject |
1479 |
3.33 |
Combining Adaptive Algorithms And Hypergradient Method: A Performance And Robustness Study |
3, 3, 4 |
0.47 |
Reject |
1480 |
3.33 |
Logit Regularization Methods For Adversarial Robustness |
3, 5, 2 |
1.25 |
N/A |
1481 |
3.33 |
Linearizing Visual Processes With Deep Generative Models |
3, 3, 4 |
0.47 |
N/A |
1482 |
3.33 |
Associate Normalization |
3, 5, 2 |
1.25 |
N/A |
1483 |
3.33 |
Understanding And Improving Sequence-labeling Ner With Self-attentive Lstms |
3, 3, 4 |
0.47 |
N/A |
1484 |
3.33 |
Step-wise Sensitivity Analysis: Identifying Partially Distributed Representations For Interpretable Deep Learning |
3, 4, 3 |
0.47 |
Reject |
1485 |
3.33 |
Bigsage: Unsupervised Inductive Representation Learning Of Graph Via Bi-attended Sampling And Global-biased Aggregating |
2, 4, 4 |
0.94 |
Reject |
1486 |
3.33 |
Learning Spatio-temporal Representations Using Spike-based Backpropagation |
3, 4, 3 |
0.47 |
N/A |
1487 |
3.33 |
Featurized Bidirectional Gan: Adversarial Defense Via Adversarially Learned Semantic Inference |
3, 4, 3 |
0.47 |
Reject |
1488 |
3.33 |
Iea: Inner Ensemble Average Within A Convolutional Neural Network |
4, 2, 4 |
0.94 |
Reject |
1489 |
3.33 |
Geometric Operator Convolutional Neural Network |
2, 5, 3 |
1.25 |
N/A |
1490 |
3.33 |
Neural Random Projections For Language Modelling |
3, 4, 3 |
0.47 |
Reject |
1491 |
3.33 |
Empirical Study Of Easy And Hard Examples In Cnn Training |
3, 4, 3 |
0.47 |
Reject |
1492 |
3.33 |
Large-scale Classification Of Structured Objects Using A Crf With Deep Class Embedding |
3, 3, 4 |
0.47 |
Reject |
1493 |
3.33 |
Non-synergistic Variational Autoencoders |
3, 4, 3 |
0.47 |
Reject |
1494 |
3.33 |
Detecting Topological Defects In 2d Active Nematics Using Convolutional Neural Networks |
4, 4, 2 |
0.94 |
Reject |
1495 |
3.33 |
Deconfounding Reinforcement Learning In Observational Settings |
4, 4, 2 |
0.94 |
Reject |
1496 |
3.33 |
Learning Powerful Policies And Better Dynamics Models By Encouraging Consistency |
3, 2, 5 |
1.25 |
Reject |
1497 |
3.33 |
Offline Deep Models Calibration With Bayesian Neural Networks |
4, 3, 3 |
0.47 |
Reject |
1498 |
3.33 |
She2: Stochastic Hamiltonian Exploration And Exploitation For Derivative-free Optimization |
4, 3, 3 |
0.47 |
Reject |
1499 |
3.33 |
Gradient Acceleration In Activation Functions |
5, 2, 3 |
1.25 |
Reject |
1500 |
3.33 |
Behavior Module In Neural Networks |
3, 3, 4 |
0.47 |
Reject |
1501 |
3.33 |
Learning And Data Selection In Big Datasets |
4, 3, 3 |
0.47 |
N/A |
1502 |
3.33 |
Multi-scale Stacked Hourglass Network For Human Pose Estimation |
3, 4, 3 |
0.47 |
Reject |
1503 |
3.33 |
Neural Distribution Learning For Generalized Time-to-event Prediction |
4, 3, 3 |
0.47 |
Reject |
1504 |
3.33 |
Encoder Discriminator Networks For Unsupervised Representation Learning |
3, 4, 3 |
0.47 |
N/A |
1505 |
3.00 |
Hr-td: A Regularized Td Method To Avoid Over-generalization |
4, 3, 2 |
0.82 |
Reject |
1506 |
3.00 |
Spamhmm: Sparse Mixture Of Hidden Markov Models For Graph Connected Entities |
3, 3, 3 |
0.00 |
N/A |
1507 |
3.00 |
Real-time Neural-based Input Method |
3, 3, 3 |
0.00 |
Reject |
1508 |
3.00 |
Learn From Neighbour: A Curriculum That Train Low Weighted Samples By Imitating |
2, 3, 4 |
0.82 |
Reject |
1509 |
3.00 |
Variational Autoencoders For Text Modeling Without Weakening The Decoder |
4, 1, 4 |
1.41 |
N/A |
1510 |
3.00 |
An Exhaustive Analysis Of Lazy Vs. Eager Learning Methods For Real-estate Property Investment |
3, 4, 2 |
0.82 |
Reject |
1511 |
3.00 |
A Non-linear Theory For Sentence Embedding |
3, 3, 3 |
0.00 |
Reject |
1512 |
3.00 |
Geometry Of Deep Convolutional Networks |
2, 4, 3 |
0.82 |
N/A |
1513 |
3.00 |
Probabilistic Program Induction For Intuitive Physics Game Play |
3, 4, 2 |
0.82 |
Reject |
1514 |
3.00 |
A Self-supervised Method For Mapping Human Instructions To Robot Policies |
4, 3, 2 |
0.82 |
Reject |
1515 |
3.00 |
Mapping The Hyponymy Relation Of Wordnet Onto Vector Spaces |
3, 3, 3 |
0.00 |
Reject |
1516 |
3.00 |
Learning With Reflective Likelihoods |
4, 2, 3 |
0.82 |
Reject |
1517 |
3.00 |
Attentive Explainability For Patient Temporal Embedding |
4, 3, 2 |
0.82 |
Reject |
1518 |
3.00 |
Generative Model For Material Irradiation Experiments Based On Prior Knowledge And Attention Mechanism |
3, 3 |
0.00 |
N/A |
1519 |
3.00 |
An Analysis Of Composite Neural Network Performance From Function Composition Perspective |
3, 3, 3 |
0.00 |
Reject |
1520 |
3.00 |
A Forensic Representation To Detect Non-trivial Image Duplicates, And How It Applies To Semantic Segmentation |
4, 3, 2 |
0.82 |
N/A |
1521 |
3.00 |
Calibration Of Neural Network Logit Vectors To Combat Adversarial Attacks |
3, 2, 4 |
0.82 |
Reject |
1522 |
3.00 |
Handling Concept Drift In Wifi-based Indoor Localization Using Representation Learning |
2, 3, 4 |
0.82 |
N/A |
1523 |
3.00 |
Classification In The Dark Using Tactile Exploration |
4, 3, 2 |
0.82 |
Reject |
1524 |
3.00 |
End-to-end Multi-lingual Multi-speaker Speech Recognition |
3, 3, 3 |
0.00 |
Reject |
1525 |
3.00 |
Nonlinear Channels Aggregation Networks For Deep Action Recognition |
3, 3, 3 |
0.00 |
N/A |
1526 |
3.00 |
From Amortised To Memoised Inference: Combining Wake-sleep And Variational-bayes For Unsupervised Few-shot Program Learning |
3, 3, 3 |
0.00 |
N/A |
1527 |
3.00 |
Dopamine: A Research Framework For Deep Reinforcement Learning |
3, 3, 3 |
0.00 |
Reject |
1528 |
3.00 |
Hybrid Policies Using Inverse Rewards For Reinforcement Learning |
3, 2, 4 |
0.82 |
Reject |
1529 |
3.00 |
From Deep Learning To Deep Deducing: Automatically Tracking Down Nash Equilibrium Through Autonomous Neural Agent, A Possible Missing Step Toward General A.i. |
3, 2, 4 |
0.82 |
Reject |
1530 |
3.00 |
Uncertainty In Multitask Transfer Learning |
3, 2, 4 |
0.82 |
Reject |
1531 |
3.00 |
Reneg And Backseat Driver: Learning From Demonstration With Continuous Human Feedback |
3, 4, 2 |
0.82 |
Reject |
1532 |
3.00 |
A Rate-distortion Theory Of Adversarial Examples |
4, 3, 2 |
0.82 |
Reject |
1533 |
3.00 |
Attention Incorporate Network: A Network Can Adapt Various Data Size |
3, 4, 2 |
0.82 |
Reject |
1534 |
3.00 |
Learning Of Sophisticated Curriculums By Viewing Them As Graphs Over Tasks |
3, 2, 4 |
0.82 |
N/A |
1535 |
3.00 |
Irda Method For Sparse Convolutional Neural Networks |
3, 3, 3 |
0.00 |
Reject |
1536 |
3.00 |
Evaluation Methodology For Attacks Against Confidence Thresholding Models |
2, 3, 4 |
0.82 |
Reject |
1537 |
3.00 |
Feature Quantization For Parsimonious And Interpretable Predictive Models |
2, 3, 4 |
0.82 |
Reject |
1538 |
3.00 |
Stacking For Transfer Learning |
3, 4, 2 |
0.82 |
Reject |
1539 |
3.00 |
One Bit Matters: Understanding Adversarial Examples As The Abuse Of Redundancy |
3, 3, 3 |
0.00 |
N/A |
1540 |
3.00 |
Knowledge Distill Via Learning Neuron Manifold |
5, 1, 3 |
1.63 |
N/A |
1541 |
2.75 |
Predictive Local Smoothness For Stochastic Gradient Methods |
2, 3, 2, 4 |
0.83 |
Reject |
1542 |
2.67 |
Variational Sgd: Dropout , Generalization And Critical Point At The End Of Convexity |
4, 2, 2 |
0.94 |
Reject |
1543 |
2.67 |
Weak Contraction Mapping And Optimization |
3, 1, 4 |
1.25 |
Reject |
1544 |
2.67 |
Learning Goal-conditioned Value Functions With One-step Path Rewards Rather Than Goal-rewards |
4, 1, 3 |
1.25 |
Reject |
1545 |
2.67 |
Multiple Encoder-decoders Net For Lane Detection |
2, 2, 4 |
0.94 |
Reject |
1546 |
2.67 |
Explaining Adversarial Examples With Knowledge Representation |
3, 3, 2 |
0.47 |
Reject |
1547 |
2.67 |
End-to-end Learning Of Video Compression Using Spatio-temporal Autoencoders |
3, 3, 2 |
0.47 |
Reject |
1548 |
2.67 |
A Case Study On Optimal Deep Learning Model For Uavs |
3, 3, 2 |
0.47 |
Reject |
1549 |
2.67 |
Decoupling Gating From Linearity |
3, 2, 3 |
0.47 |
Reject |
1550 |
2.67 |
Happier: Hierarchical Polyphonic Music Generative Rnn |
2, 3, 3 |
0.47 |
Reject |
1551 |
2.67 |
Faster Training By Selecting Samples Using Embeddings |
3, 3, 2 |
0.47 |
Reject |
1552 |
2.67 |
A Bird's Eye View On Coherence, And A Worm's Eye View On Cohesion |
2, 2, 4 |
0.94 |
Reject |
1553 |
2.67 |
Exponentially Decaying Flows For Optimization In Deep Learning |
3, 3, 2 |
0.47 |
N/A |
1554 |
2.50 |
A Solution To China Competitive Poker Using Deep Learning |
3, 2 |
0.50 |
Reject |
1555 |
2.50 |
Psychophysical Vs. Learnt Texture Representations In Novelty Detection |
3, 3, 3, 1 |
0.87 |
Reject |
1556 |
2.33 |
Hierarchical Deep Reinforcement Learning Agent With Counter Self-play On Competitive Games |
3, 2, 2 |
0.47 |
N/A |
1557 |
2.33 |
Training Variational Auto Encoders With Discrete Latent Representations Using Importance Sampling |
3, 1, 3 |
0.94 |
Reject |
1558 |
2.33 |
Vectorization Methods In Recommender System |
2, 2, 3 |
0.47 |
Reject |
1559 |
2.33 |
Generating Text Through Adversarial Training Using Skip-thought Vectors |
3, 2, 2 |
0.47 |
N/A |
1560 |
2.33 |
Deli-fisher Gan: Stable And Efficient Image Generation With Structured Latent Generative Space |
2, 2, 3 |
0.47 |
Reject |
1561 |
2.33 |
Pixel Chem: A Representation For Predicting Material Properties With Neural Network |
3, 1, 3 |
0.94 |
Reject |
1562 |
2.33 |
Advanced Neuroevolution: A Gradient-free Algorithm To Train Deep Neural Networks |
1, 1, 5 |
1.89 |
N/A |
1563 |
2.25 |
A Synaptic Neural Network And Synapse Learning |
2, 3, 2, 2 |
0.43 |
Reject |
1564 |
2.00 |
Hierarchical Bayesian Modeling For Clustering Sparse Sequences In The Context Of Group Profiling |
2, 2, 3, 1, 2 |
0.63 |
Reject |
1565 |
1.50 |
Object Detection Deep Learning Networks For Optical Character Recognition |
1, 2, 1, 2 |
0.50 |
Reject |
1566 |
N/A |
Pass: Phased Attentive State Space Modeling Of Disease Progression Trajectories |
|
N/A |
N/A |
1567 |
N/A |
Exploring Deep Learning Using Information Theory Tools And Patch Ordering |
|
N/A |
N/A |
1568 |
N/A |
Teaching Machine How To Think By Natural Language: A Study On Machine Reading Comprehension |
|
N/A |
N/A |
1569 |
N/A |
Statistical Characterization Of Deep Neural Networks And Their Sensitivity |
|
N/A |
N/A |
1570 |
N/A |
Is Pgd-adversarial Training Necessary? Alternative Training Via A Soft-quantization Network With Noisy-natural Samples Only |
|
N/A |
N/A |
1571 |
N/A |
Isonetry : Geometry Of Critical Initializations And Training |
|
N/A |
N/A |
1572 |
N/A |
Scaling Up Deep Learning For Pde-based Models |
|
N/A |
N/A |
1573 |
N/A |
Adversarial Defense Via Data Dependent Activation Function And Total Variation Minimization |
|
N/A |
N/A |
1574 |
N/A |
Program Synthesis With Learned Code Idioms |
|
N/A |
N/A |
1575 |
N/A |
Show, Attend And Translate: Unsupervised Image Translation With Self-regularization And Attention |
|
N/A |
N/A |
1576 |
N/A |
Confidence-based Graph Convolutional Networks For Semi-supervised Learning |
|
N/A |
N/A |
1577 |
N/A |
Neural Network Bandit Learning By Last Layer Marginalization |
|
N/A |
N/A |
1578 |
N/A |
Neural Collobrative Networks |
|
N/A |
N/A |
1579 |
N/A |
Exploration In Policy Mirror Descent |
|
N/A |
N/A |
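A note on the derived columns: for every entry, the average shown is the arithmetic mean of its listed review scores, and the final numeric field matches the population standard deviation of those scores. A minimal sketch of that computation, assuming this reading of the two columns (the example values are taken from entry 1007 above):

```python
import math

def mean_and_std(scores):
    """Mean and population standard deviation of a list of review scores.

    Assumed reading: these correspond to the average-score and
    deviation fields attached to each entry in the table above.
    """
    m = sum(scores) / len(scores)
    var = sum((s - m) ** 2 for s in scores) / len(scores)
    return round(m, 2), round(math.sqrt(var), 2)

# Entry 1007 lists ratings 4, 4, 6 with average 4.67 and deviation 0.94.
print(mean_and_std([4, 4, 6]))  # (4.67, 0.94)
```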