- NAACL15, Two/Too Simple Adaptations of Word2Vec for Syntax Problems.
- NIPS14, Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu. Recurrent Models of Visual Attention.
- ICLR15, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate.
- ACL15, Fandong Meng, Zhengdong Lu, Mingxuan Wang, et al. Encoding Source Language with Convolutional Neural Network for Machine Translation.
- ACL15, Jiwei Li, Minh-Thang Luong, Dan Jurafsky. A Hierarchical Neural Autoencoder for Paragraphs and Documents.
- EMNLP15, Alexander M. Rush, Sumit Chopra, Jason Weston. A Neural Attention Model for Sentence Summarization.
- EMNLP15, Wang Ling, Chu-Cheng Lin, Yulia Tsvetkov, et al. Not All Contexts Are Created Equal: Better Word Representations with Variable Attention.
- End-to-End Attention-based Large Vocabulary Speech Recognition.
- NIPS14 workshop, End-to-End Continuous Speech Recognition using Attention-based Recurrent NN: First Results. http://arxiv.org/abs/1412.1602
- NIPS15, Attention-Based Models for Speech Recognition.
- EMNLP15, Effective Approaches to Attention-based Neural Machine Translation.
- ICML15, Kelvin Xu, Jimmy Ba, Ryan Kiros, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
- Karol Gregor, Ivo Danihelka, Alex Graves, et al. DRAW: A Recurrent Neural Network For Image Generation.
- NIPS15, Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, et al. Teaching Machines to Read and Comprehend.
- NIPS15, Lei Jimmy Ba, Roger Grosse, Ruslan Salakhutdinov, Brendan Frey. Learning Wake-Sleep Recurrent Attention Models.
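As a quick reference for the mechanism these papers build on, here is a minimal NumPy sketch of Bahdanau-style additive attention; the shapes and parameter names are illustrative assumptions, not any paper's reference implementation.

```python
# Minimal sketch of additive (Bahdanau-style) attention with random parameters.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """s_prev: previous decoder state (d,); H: encoder states (T, d_enc).
    W_a: (d_att, d), U_a: (d_att, d_enc), v_a: (d_att,)."""
    # e_j = v_a^T tanh(W_a s_{i-1} + U_a h_j) for every source position j
    scores = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a   # (T,)
    alpha = softmax(scores)                               # alignment weights
    context = alpha @ H                                    # weighted sum of encoder states
    return context, alpha

# toy usage with random parameters (hypothetical sizes)
rng = np.random.default_rng(0)
d, d_enc, d_att, T = 4, 6, 5, 3
context, alpha = additive_attention(
    rng.normal(size=d), rng.normal(size=(T, d_enc)),
    rng.normal(size=(d_att, d)), rng.normal(size=(d_att, d_enc)),
    rng.normal(size=d_att))
print(alpha.sum())  # weights sum to ~1.0
```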
- DSSM: Learning Deep Structured Semantic Models for Web Search using Clickthrough Data, CIKM2013
- CDSSM: A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval, MSR, CIKM2014
- ARC-I: Convolutional Neural Network Architectures for Matching Natural Language Sentences, NIPS2014
- ARC-II: Convolutional Neural Network Architectures for Matching Natural Language Sentences, NIPS2014
- RAE: Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection, NIPS2011
- Deep Match: A Deep Architecture for Matching Short Texts, NIPS2013
- CNTN: Convolutional Neural Tensor Network Architecture for Community-based Question Answering, IJCAI2015
- CNNPI: Convolutional Neural Network for Paraphrase Identification, NAACL2015
- MultiGranCNN: An Architecture for General Matching of Text Chunks on Multiple Levels of Granularity, ACL2015
- CLSTM: Contextual LSTM (CLSTM) Models for Large Scale NLP Tasks, Google, arXiv 2016.02
- CLSM: A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval, CIKM2014
- Recurrent-DSSM: Palangi, H., Deng, L., Shen, Y., Gao, J., He, X., Chen, J., Song, X., and Ward, R. Learning Sequential Semantic Representations of Natural Language Using Recurrent Neural Networks. In ICASSP, 2015.
- LSTM-DSSM: Semantic Modelling with Long-Short-Term Memory for Information Retrieval, ICLR2016 workshop
- DCNN (dynamic convolutional neural network): A Convolutional Neural Network for Modeling Sentences, ACL2014; Convolutional Neural Network Architectures for Matching Natural Language Sentences, NIPS2014, Noah's Ark Lab
- BRAE (bilingually-constrained recursive auto-encoders): Bilingually-Constrained Phrase Embeddings for Machine Translation, ACL2014, long paper
- LSTM-RNN: Deep Sentence Embedding Using LSTM Networks: Analysis and Application to Information Retrieval, arXiv 2016.02
- SkipThought: Skip-Thought Vectors
- Bidirectional LSTM-RNN: Bi-directional LSTM Recurrent Neural Network for Chinese Word Segmentation, arXiv 2016.02
- MV-DNN: A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems, WWW2015
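Several of the matching models above share the two-tower idea introduced by DSSM: encode query and document separately into a common semantic space, then score with cosine similarity. Below is a minimal sketch under assumed shapes; the random projections stand in for letter-trigram word hashing and trained weights, so it illustrates the structure only.

```python
# Minimal sketch of DSSM-style matching: two feedforward towers + cosine
# similarity + softmax over candidate documents (toy, untrained parameters).
import numpy as np

def tower(x, weights):
    """Map a bag-of-terms vector to a low-dimensional semantic vector."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

rng = np.random.default_rng(0)
vocab, hidden, sem = 50, 16, 8
W_q = [rng.normal(size=(vocab, hidden)), rng.normal(size=(hidden, sem))]
W_d = [rng.normal(size=(vocab, hidden)), rng.normal(size=(hidden, sem))]

query = rng.integers(0, 2, size=vocab).astype(float)
docs = rng.integers(0, 2, size=(4, vocab)).astype(float)  # e.g. 1 clicked doc + 3 negatives

q_vec = tower(query, W_q)
scores = np.array([cosine(q_vec, tower(d, W_d)) for d in docs])
# training would maximize the softmax probability of the clicked document
probs = np.exp(scores * 10) / np.exp(scores * 10).sum()   # 10 = assumed smoothing factor
print(probs)
```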
- UM: Bordes et al., Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing
- LFM: A Latent Factor Model for Highly Multi-relational Data
- SE: Learning Structured Embeddings of Knowledge Bases
- SME: A Semantic Matching Energy Function for Learning with Multi-Relational Data
- RESCAL: A Three-Way Model for Collective Learning on Multi-Relational Data
- NTN: Reasoning With Neural Tensor Networks for Knowledge Base Completion
- TransE: Translating Embeddings for Modeling Multi-relational Data
- TransH: Knowledge Graph Embedding by Translating on Hyperplanes
- TransR: Learning Entity and Relation Embeddings for Knowledge Graph Completion
- TransM
- TransG
- KG2E
- PTransE
- TransA@CAS
- TransA@THU
- STransE
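The Trans* family above scores a triple (h, r, t) by treating the relation as a translation in embedding space, h + r ≈ t. A minimal sketch of the TransE score and its margin-based ranking loss; the entities, relation, and dimensions here are illustrative stand-ins for learned embeddings.

```python
# Minimal sketch of the TransE scoring idea: a triple (h, r, t) is plausible
# when h + r is close to t. Random embeddings stand in for trained ones.
import numpy as np

def transe_score(h, r, t):
    """Lower is better: distance between translated head and tail."""
    return np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "tokyo"]}
relation = rng.normal(size=dim)   # hypothetical relation, e.g. capital_of

pos = transe_score(entities["paris"], relation, entities["france"])   # true triple
neg = transe_score(entities["tokyo"], relation, entities["france"])   # corrupted triple
# margin-based ranking loss used during training: max(0, margin + pos - neg)
margin = 1.0
loss = max(0.0, margin + pos - neg)
print(pos, neg, loss)
```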