A curated list of resources dedicated to Natural Language Processing
Please read the contribution guidelines before contributing.
Please feel free to create pull requests.
- Tutorials
- Libraries
- Services
- Techniques
- Datasets
- Implementations of various models
- NLP in Korean
- NLP in Arabic
- NLP in Chinese
- NLP in Spanish
- NLP in Indic Languages
- NLP in Thai
- NLP in Vietnamese
- Other Languages
- Credits
General Machine Learning
- AI Playbook is a brief set of pieces that introduce machine learning and other advances to technical as well as non-technical audiences. Written by the people at a16z (Andreessen Horowitz), it is a great link to forward to your managers or to use as content for your presentations
- Machine Learning Blog by Brian McFee
- Ruder's Blog by Sebastian Ruder for commentary on the best of NLP Research
Introductions and Guides to NLP
- Ultimate Guide to Understand & Implement Natural Language Processing
- Introduction to NLP at Hackernoon - written, in their own words, for people who suck at math
- NLP Tutorial by Vik Paruchuri
- Natural Language Processing: An Introduction by Oxford
- Deep Learning for NLP with Pytorch
- Hands-On NLTK Tutorial - The hands-on NLTK tutorial in the form of Jupyter notebooks
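For a flavour of what the NLTK tutorials above cover, here is a minimal sketch of tokenization and part-of-speech tagging; it assumes NLTK 3.x and that the listed data packages download successfully.

```python
# Minimal NLTK example: tokenization and part-of-speech tagging.
# Assumes: pip install nltk, plus the data packages downloaded below (NLTK 3.x names).
import nltk

nltk.download("punkt")                        # sentence/word tokenizer models
nltk.download("averaged_perceptron_tagger")   # POS tagger model

text = "Natural language processing turns raw text into structured data."
tokens = nltk.word_tokenize(text)             # ['Natural', 'language', ...]
tags = nltk.pos_tag(tokens)                   # [('Natural', 'JJ'), ('language', 'NN'), ...]
print(tags)
```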
Blogs and Newsletters
- Deep Learning, NLP, and Representations
- Natural Language Processing Blog by Hal Daumé III
- Tutorials by Radim Řehůřek on using Python and gensim to process language corpora
- arXiv: Natural Language Processing (Almost) from Scratch
- Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks
- Sebastian Ruder's blog is focused on NLP Research
Word embeddings, RNNs, LSTMs and CNNs for Natural Language Processing | Back to Top
- Udacity's Intro to Artificial Intelligence course which touches upon NLP as well
- Udacity's Deep Learning using Tensorflow, which covers a section on using deep learning for NLP tasks (covering Word2Vec, RNNs and LSTMs)
- Deep Natural Language Processing at Oxford has videos, lecture slides and reading material
- Deep Learning for Natural Language Processing (cs224n) by Richard Socher and Christopher Manning at Stanford. Includes videos, assignments, syllabus, lecture slides and other detailed reading material
- Coursera's Natural Language Processing by National Research University Higher School of Economics
Bayesian, statistical and linguistic approaches for Natural Language Processing | Back to Top
- Natural Language Processing by Prof. Mike Collins at Columbia
- Statistical Machine Translation - a Machine Translation course with great assignments and slides
- NLTK with Python 3 for Natural Language Processing by Harrison Kinsley (sentdex). Good tutorials with NLTK code implementations
- Computational Linguistics I by Jordan Boyd-Graber, Lectures from University of Maryland
Node.js and Javascript - Node.js Libraries for NLP | Back to Top
- Twitter-text - A JavaScript implementation of Twitter's text processing library
- Knwl.js - A Natural Language Processor in JS
- Retext - Extensible system for analyzing and manipulating natural language
- NLP Compromise - Natural Language processing in the browser
- Natural - general natural language facilities for node
Python - Python NLP Libraries | Back to Top
- TextBlob - Providing a consistent API for diving into common natural language processing (NLP) tasks. Stands on the giant shoulders of Natural Language Toolkit (NLTK) and Pattern, and plays nicely with both 👍
- spaCy - Industrial strength NLP with Python and Cython 👍
- textacy - Higher level NLP built on spaCy
- gensim - Python library to conduct unsupervised semantic modelling from plain text 👍
- scattertext - Python library to produce d3 visualizations of how language differs between corpora
- AllenNLP - An NLP research library, built on PyTorch, for developing state-of-the-art deep learning models on a wide variety of linguistic tasks.
- PyTorch-NLP - NLP research toolkit designed to support rapid prototyping with better data loaders, word vector loaders, neural network layer representations, common NLP metrics such as BLEU
- Rosetta - Text processing tools and wrappers (e.g. Vowpal Wabbit)
- PyNLPl - Python Natural Language Processing Library. General purpose NLP library for Python. Also contains some specific modules for parsing common NLP formats, most notably for FoLiA, but also ARPA language models, Moses phrasetables, GIZA++ alignments.
- jPTDP - A toolkit for joint part-of-speech (POS) tagging and dependency parsing. jPTDP provides pre-trained models for 40+ languages.
- BigARTM - a fast library for topic modelling
- Snips NLU - A production ready library for intent parsing
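As a rough illustration of the APIs these libraries expose, here is a small sketch using spaCy and TextBlob; it assumes the `en_core_web_sm` model has been installed (`python -m spacy download en_core_web_sm`) and is only meant to show the shape of typical usage.

```python
# Quick tour of two of the libraries above: spaCy (tokens, NER) and TextBlob (sentiment).
import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_sm")            # small English pipeline
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for ent in doc.ents:                          # named entities
    print(ent.text, ent.label_)               # e.g. Apple ORG, U.K. GPE, $1 billion MONEY

for token in doc[:5]:                         # tokens with POS tags
    print(token.text, token.pos_)

blob = TextBlob("spaCy and TextBlob make NLP pleasantly easy.")
print(blob.sentiment)                         # Sentiment(polarity=..., subjectivity=...)
```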
C++ - C++ Libraries | Back to Top
- MIT Information Extraction Toolkit - C, C++, and Python tools for named entity recognition and relation extraction
- CRF++ - Open source implementation of Conditional Random Fields (CRFs) for segmenting/labeling sequential data & other Natural Language Processing tasks.
- CRFsuite - CRFsuite is an implementation of Conditional Random Fields (CRFs) for labeling sequential data.
- BLLIP Parser - BLLIP Natural Language Parser (also known as the Charniak-Johnson parser)
- colibri-core - C++ library, command line tools, and Python binding for extracting and working with basic linguistic constructions such as n-grams and skipgrams in a quick and memory-efficient way.
- ucto - Unicode-aware regular-expression based tokenizer for various languages. Tool and C++ library. Supports FoLiA format.
- libfolia - C++ library for the FoLiA format
- frog - Memory-based NLP suite developed for Dutch: PoS tagger, lemmatiser, dependency parser, NER, shallow parser, morphological analyzer.
- MeTA - MeTA : ModErn Text Analysis is a C++ Data Sciences Toolkit that facilitates mining big text data.
- Mecab (Japanese)
- Moses
- StarSpace - a library from Facebook for creating embeddings of word-level, paragraph-level, document-level and for text classification
Java - Java NLP Libraries | Back to Top
- Stanford NLP
- OpenNLP
- ClearNLP
- Word2vec in Java
- ReVerb Web-Scale Open Information Extraction
- OpenRegex - An efficient and flexible token-based regular expression language and engine.
- CogcompNLP - Core libraries developed in the U of Illinois' Cognitive Computation Group.
- MALLET - MAchine Learning for LanguagE Toolkit - package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.
- RDRPOSTagger - A robust POS tagging toolkit available (in both Java & Python) together with pre-trained models for 40+ languages.
Scala - Scala NLP Libraries | Back to Top
- Saul - Library for developing NLP systems, including built in modules like SRL, POS, etc.
- ATR4S - Toolkit with state-of-the-art automatic term recognition methods.
- tm - Implementation of topic modeling based on regularized multilingual PLSA.
- word2vec-scala - Scala interface to word2vec model; includes operations on vectors like word-distance and word-analogy.
- Epic - Epic is a high performance statistical parser written in Scala, along with a framework for building complex structured prediction models.
R - R NLP Libraries | Back to Top
- text2vec - Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
- wordVectors - An R package for creating and exploring word2vec and other word embedding models
- RMallet - R package to interface with the Java machine learning tool MALLET
- dfr-browser - Creates d3 visualizations for browsing topic models of text in a web browser.
- dfrtopics - R package for exploring topic models of text.
- sentiment_classifier - Sentiment Classification using Word Sense Disambiguation and WordNet Reader
- jProcessing - Japanese Natural Language Processing Libraries, with Japanese sentiment classification
Clojure - Clojure NLP Libraries | Back to Top
- Clojure-openNLP - Natural Language Processing in Clojure (opennlp)
- Inflections-clj - Rails-like inflection library for Clojure and ClojureScript
- postagga - A library to parse natural language in Clojure and ClojureScript
Rust - Rust NLP Libraries | Back to Top
- whatlang - Natural language recognition library based on trigrams
- snips-nlu-rs - A production ready library for intent parsing
APIs with higher level functionality such as NER, Topic tagging and so on | Back to Top
- Wit-ai - Natural Language Interface for apps and devices
- IBM Watson's Natural Language Understanding - API and Github demo
- Amazon Comprehend - NLP and ML suite covers most common tasks like NER, tagging, and sentiment analysis
- Google Cloud Natural Language API - Syntax Analysis, NER, Sentiment Analysis, and Content tagging in at least 9 languages, including English and Chinese (Simplified and Traditional).
- ParallelDots - State of the art Text Analysis API Service ranging from Sentiment Analysis to Intent Analysis
- Microsoft Cognitive Service
- TextRazor
- Rosette
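Most of these services follow the same pattern: POST a snippet of text, get back entities, topics or sentiment as JSON. The sketch below shows that pattern only; the endpoint URL, header and response fields are hypothetical placeholders, not any specific vendor's API, so consult each provider's documentation for the real interface.

```python
# Generic pattern for calling a hosted text-analysis API.
# NOTE: the URL, auth header and response schema below are hypothetical
# placeholders, not a real provider's interface.
import requests

API_URL = "https://api.example-nlp-service.com/v1/analyze"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "text": "Berlin is the capital of Germany.",
    "features": ["entities", "sentiment"],                   # hypothetical field names
}
resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

result = resp.json()
print(result.get("entities"))    # e.g. [{"text": "Berlin", "type": "LOCATION"}, ...]
print(result.get("sentiment"))
```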
Text embeddings allow deep learning to be effective on smaller datasets. They are often the first inputs to a deep learning architecture and the most popular way of doing transfer learning in NLP. Embeddings are simply vectors or, more generically, real-valued representations of strings. Word embeddings are considered a great starting point for most deep NLP tasks.
The most popular names in word embeddings are word2vec by Google (Mikolov) and GloVe by Stanford (Pennington, Socher and Manning). fastText seems to be a fairly popular choice for multilingual sub-word embeddings.
Don't use word2vec or GloVe; use fastText vectors instead, which come from the same authors and are much better. word2vec was introduced by T. Mikolov et al. while he was at Google. It performs well on word similarity and analogy tasks. | Back to Top
- Word2Vec Official Implementation
- Deep Learning, NLP, and Representations Chris Olah (2014), Beginner friendly blog explaining word2vec
- Efficient Estimation of Word Representations in Vector Space
- Distributed Representations of Words and Phrases and their Compositionality, Word2Vec tutorial in TensorFlow, gensim's Review of word2vec
- Word2Vec Resources on Github
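For a feel of how word2vec is used in practice, here is a minimal sketch using gensim on a toy corpus; the parameter name `vector_size` assumes gensim 4.x (older releases call it `size`), and real applications train on far larger corpora.

```python
# Train a toy word2vec model with gensim and query word similarities.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "cat", "sat", "on", "the", "mat"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=100)

print(model.wv["king"][:5])                   # first few dimensions of the learned vector
print(model.wv.most_similar("king", topn=3))  # nearest neighbours in the toy vector space
```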
GloVe was introduced by Pennington, Socher and Manning from Stanford in 2014 as a statistical approximation to word embeddings. The word vectors are created by matrix factorization of word-word co-occurrence matrices here | Back to Top
- GloVe: Global vectors for word representation. Creates word vectors and relates word2vec to matrix factorizations
- Glove source code and training data
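The pre-trained GloVe vectors ship as plain text, one word per line followed by its float components; a small loading sketch, assuming a file such as `glove.6B.50d.txt` has been downloaded from the project page:

```python
# Load pre-trained GloVe vectors from their plain-text format into a dict.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

glove = load_glove("glove.6B.50d.txt")   # assumes this file exists locally
v = glove["language"]
print(v.shape)                           # (50,) for the 50-dimensional vectors
```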
fastText by Mikolov (from Facebook) supports sub-word embeddings in more than 200 languages. This allows it to work with out-of-vocabulary words as well, and it captures language morphology well. It also supports a supervised classification mechanism | Back to Top
- fastText on Github - for efficient learning of word representations and sentence classification
- Pre-trained Vectors in several languages
- arXiv: Enriching Word Vectors with Subword Information, arXiv: Bag of Tricks for Efficient Text Classification, and arXiv: FastText.zip: Compressing text classification models were released as part of this project
- Unofficial Python Wrapper for fastText on Github
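Because fastText builds vectors from character n-grams, it can return a vector even for words it never saw during training. A minimal sketch using gensim's FastText implementation (gensim 4.x parameter names assumed, toy corpus only):

```python
# Train a toy fastText model with gensim and query an out-of-vocabulary word.
from gensim.models import FastText

sentences = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "meaning"],
]

model = FastText(sentences, vector_size=50, window=3, min_count=1,
                 min_n=3, max_n=6, epochs=50)

# "processinng" (misspelled) was never seen during training, but it shares
# character n-grams with "processing", so fastText can still produce a vector.
print(model.wv["processinng"][:5])
print(model.wv.similarity("processing", "processinng"))
```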
- Pre-trained word embeddings for WSJ corpus by Koc AI-Lab
- HLBL language model by Turian
- Real-valued vector "embeddings" by Dhillon
- Improving Word Representations Via Global Context And Multiple Word Prototypes by Huang
- Dependency based word embeddings
- sense2vec - on word sense disambiguation
- Infinite Dimensional Word Embeddings - new
- Skip Thought Vectors - sentence representation method
- Adaptive skip-gram - similar approach, with adaptive properties
- Sequence to Sequence Learning - word vectors for machine translation
- Improving distributional similarity with lessons learned from word embeddings
- Deep Contextualized Word Representations - PyTorch - TF Implementation
Thought vectors are numeric representations for sentences, paragraphs, and documents. The following papers are listed in order of date published, each one replaces the last as the state of the art in sentiment analysis | Back to Top
- Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank Socher et al. 2013. Introduces Recursive Neural Tensor Network. Uses a parse tree.
- Distributed Representations of Sentences and Documents Le, Mikolov. 2014. Introduces Paragraph Vector. Concatenates and averages pretrained, fixed word vectors to create vectors for sentences, paragraphs and documents. Also known as paragraph2vec. Doesn't use a parse tree. Implemented in gensim. See doc2vec tutorial
- Deep Recursive Neural Networks for Compositionality in Language Irsoy & Cardie. 2014. Uses Deep Recursive Neural Networks. Uses a parse tree.
- Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks Tai et al. 2015 Introduces Tree LSTM. Uses a parse tree.
- Semi-supervised Sequence Learning Dai, Le 2015 "With pretraining, we are able to train long short term memory recurrent networks up to a few hundred timesteps, thereby achieving strong performance in many text classification tasks, such as IMDB, DBpedia and 20 Newsgroups."
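Paragraph Vector (doc2vec) from the Le and Mikolov paper above is implemented in gensim; a minimal sketch of training on a toy corpus and inferring a vector for an unseen document (gensim 4.x API assumed):

```python
# Train a toy Paragraph Vector (doc2vec) model with gensim.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the movie was wonderful and moving",
    "the film was dull and far too long",
    "a touching story with great acting",
]
documents = [TaggedDocument(words=text.split(), tags=[i]) for i, text in enumerate(corpus)]

model = Doc2Vec(documents, vector_size=50, min_count=1, epochs=100)

# Infer a vector for a new, unseen document and find the closest training document.
vec = model.infer_vector("a wonderful touching film".split())
print(model.dv.most_similar([vec], topn=1))
```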
- Google Research's blog post for neural machine translation using encoder-decoder architecture with seq2seq models. Tensorflow Code here
- Prof Graham Neubig's Neural Machine Translation tutorial in Perl
- arXiv: Sequence to Sequence Learning with Neural Networks, Sutskever, Vinyals, Le 2014, proved the effectiveness of LSTMs for Machine Translation. Check their NIPS presentation
- arXiv: Neural Machine Translation by jointly learning to align and translate Bahdanau, Cho 2014 introduced the attention mechanism in NLP
- arXiv: A Convolutional encoder model for neural machine translation by Gehring et al, 2017. The paper is from Facebook AI research and its code is available here
- Convolutional Sequence to Sequence learning by Gehring et al, 2017. The paper is from Facebook AI research and its code is available here
- Convolutional over Recurrent Encoder for neural machine translation by Dakwale and Monz from the University of Amsterdam compares CNNs with a recurrent neural network augmented with additional convolutional layers
- Open Source code: OpenNMT is an open source initiative for neural machine translation and neural sequence modeling. PyTorch, Tensorflow and the original LuaTorch implementation
- A Neural Network Approach to Context-Sensitive Generation of Conversational Responses Sordoni 2015. Generates responses to tweets.
- Neural Responding Machine for Short-Text Conversation Shang et al. 2015 Uses Neural Responding Machine. Trained on Weibo dataset. Achieves one round conversations with 75% appropriate responses.
- arXiv: A Neural Conversation Model Vinyals, Le 2015. Uses LSTM RNNs to generate conversational responses
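The machine translation and conversation papers above share the same encoder-decoder (seq2seq) skeleton: one RNN compresses the source sequence into a hidden state, and a second RNN generates the target conditioned on it. A deliberately minimal PyTorch sketch of that skeleton follows; it uses toy token ids, no attention, and teacher forcing with the unshifted target for brevity, so it is not the architecture of any specific paper above.

```python
# Minimal encoder-decoder (seq2seq) sketch in PyTorch, illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) of token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # (1, batch, hid_dim) summary of the source

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, hidden):          # tgt: (batch, tgt_len) of token ids
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden      # logits: (batch, tgt_len, vocab)

# Toy batch of random token ids; a real setup would shift the decoder input by one step.
src = torch.randint(0, 100, (8, 10))
tgt = torch.randint(0, 100, (8, 12))
enc, dec = Encoder(100), Decoder(100)
logits, _ = dec(tgt, enc(src))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 100), tgt.reshape(-1))
loss.backward()
print(loss.item())
```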
Some entries are courtesy of andrewt3000/DL4NLP | Back to Top
- Interactive tutorial on Augmented RNNs including Attention and Memory networks
- Annotated Transformer from the Attention is All You Need work explains the Transformer implementation in line-by-line detail. Both links highly recommended.
- Memory Networks Weston et. al 2014
- End-To-End Memory Networks Sukhbaatar et. al 2015. Memory networks are implemented in MemNN. Attempts to solve the task of reasoning, attention and memory.
- Reasoning, Attention and Memory RAM workshop at NIPS 2015. Slides included
- Neural Turing Machines, Graves et al. 2014
- Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets, Joulin, Mikolov 2015
- Stack RNN source code and blog post
- Neural autoencoder for paragraphs and documents - LSTM representation
- LSTM over tree structures
- Low-Dimensional Embeddings of Logic
- Tutorial on Markov Logic Networks (based on this paper)
- Distant Supervision for Cancer Pathway Extraction From Text
- A Neural Probabilistic Language Model
- Retrofitting word vectors to semantic lexicons
- Unsupervised Learning of the Morphology of a Natural Language
- Computational Grounded Cognition: a new alliance between grounded cognition and computational modelling
- Learning the Structure of Biomedical Relation Extractions
- Statistical Language Models based on Neural Networks by T. Mikolov, 2012. Slides on the same here
- A survey of named entity recognition and classification
- Benchmarking the extraction and disambiguation of named entities on the semantic web
- Knowledge base population: Successful approaches and challenges
- SpeedRead: A fast named entity recognition Pipeline
- Markov Logic Networks for Natural Language Question Answering
- Template-Based Information Extraction without the Templates
- Relation extraction with matrix factorization and universal schemas
- Privee: An Architecture for Automatically Analyzing Web Privacy Policies
- Teaching Machines to Read and Comprehend - DeepMind paper
- DrQA: Open Domain Question Answering by facebook on Wikipedia data
- Relation Extraction with Matrix Factorization and Universal Schemas
- Towards a Formal Distributional Semantics: Simulating Logical Calculi with Tensors
- Presentation slides for MLN tutorial
- Presentation slides for QA applications of MLNs
- Presentation slides
Text Summarization | Back to Top
- awesome-text-summarization - curated list of resources in text summarization.
- Example blogpost uses Amazon food reviews for text summarization. Code on Github here.
- TextRank - Bringing Order into Text by Mihalcea and Tarau. Code on Github here
- Modelling compressions with Discourse constraints by Clarke and Zapata provides a discourse informed model for summarization and subtitle generation.
- Deep Recurrent Generative Decoder model for Abstractive Text Summarization by Li et al, 2017 uses a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder.
- A Semantic Relevance Based Neural Network for Text Summarization and Text Simplification by Ma and Sun, 2017 uses a gated attention encoder-decoder for text summarization.
- TextSum implementation from Tensorflow
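As a point of comparison with the neural approaches above, extractive summarization can be done with nothing more than word-frequency scoring. The sketch below is that simple baseline (it is not TextRank or any of the models listed) and uses plain Python only.

```python
# Naive extractive summarization: score sentences by average word frequency
# and keep the top-scoring ones. A baseline sketch, not TextRank or a neural model.
import re
from collections import Counter

def summarize(text, num_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Re-emit the chosen sentences in their original document order.
    return " ".join(s for s in sentences if s in ranked)

doc = ("Neural summarizers compress documents into short abstracts. "
       "Extractive methods instead select the most informative sentences. "
       "Frequency counts give a crude but surprisingly useful signal. "
       "This sketch keeps the two highest-scoring sentences.")
print(summarize(doc))
```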
Text Classification | Back to Top
- Brightmart/text_classification has a list of all text classification models with their respective scores, training details, explanations and their Python implementations.
- Facebook's fasttext is a library for text embeddings and text classification
- Convolutional Neural Networks for Sentence Classification by Kim Yoon is now regarded as the standard baseline for text classification architecture.
- Using a CNN for text classification in TensorFlow by Denny Britz uses the same dataset as Kim Yoon's paper (mentioned above). The code implementation can be found here.
- Character-level Convolutional Networks for Text Classification by Zhang et al uses CNNs and compares them with the traditional text classification models. Its Lua implementation can be found here.
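Before reaching for the CNN models above, a TF-IDF plus linear classifier is the usual first baseline for text classification. A small scikit-learn sketch with toy labels, for illustration only:

```python
# Classic text-classification baseline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "what a wonderful, uplifting film",
    "the acting was superb and the plot gripping",
    "a dull, lifeless and boring movie",
    "i regret wasting two hours on this",
]
train_labels = ["pos", "pos", "neg", "neg"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

print(clf.predict(["an uplifting and gripping film", "boring and lifeless"]))
```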
- nlp-datasets great collection of nlp datasets
- DeepNLP-models-Pytorch has Pytorch implementations of various deep NLP models used in CS224n (Stanford) in the form of Jupyter notebooks. The models are aimed at those who are acquainted with Pytorch.
- UDPipe : Trainable pipeline for tokenizing, tagging, lemmatizing and parsing Universal Treebanks and other CoNLL-U files. Primarily written in C++, it offers a fast and reliable solution for multilingual NLP processing.
- NLP-Cube : Natural Language Processing Pipeline - Sentence Splitting, Tokenization, Lemmatization, Part-of-speech Tagging and Dependency Parsing. New platform, written in Python with Dynet 2.0. Offers standalone (CLI/Python bindings) and server functionality (REST API).
NLP in Korean | Back to Top
- KoNLPy - Python package for Korean natural language processing.
- Mecab (Korean) - C++ library for Korean NLP
- KoalaNLP - Scala library for Korean Natural Language Processing.
- KoNLP - R package for Korean Natural language processing
- KAIST Corpus - A corpus from the Korea Advanced Institute of Science and Technology in Korean.
- Naver Sentiment Movie Corpus in Korean
- Chosun Ilbo archive - dataset in Korean from one of the major newspapers in South Korea, the Chosun Ilbo.
NLP in Arabic | Back to Top
- goarabic - Go package for Arabic text processing
- jsastem - Javascript for Arabic stemming
- PyArabic - Python libraries for Arabic
- Multidomain Datasets - Largest Available Multi-Domain Resources for Arabic Sentiment Analysis
- LABR - LArge Arabic Book Reviews dataset
- Arabic Stopwords - A list of Arabic stopwords from various resources
NLP in Chinese | Back to Top
- jieba - Python package for Words Segmentation Utilities in Chinese
- SnowNLP - Python package for Chinese NLP
- FudanNLP - Java library for Chinese text processing
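A quick illustration of Chinese word segmentation with jieba (the other libraries above expose similar segmentation and tagging calls):

```python
# Chinese word segmentation with jieba.
import jieba

# "Natural language processing is an important direction of artificial intelligence"
text = "自然语言处理是人工智能的一个重要方向"

print(jieba.lcut(text))                       # exact mode: list of segmented words
print(list(jieba.cut(text, cut_all=True)))    # full mode: all possible word candidates
```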
NLP in Spanish | Back to Top
- Colombian Political Speeches
- Copenhagen Treebank
- Reuters Corpora RCV2
- Spanish Billion words corpus with Word2Vec embeddings
NLP in Indic Languages | Back to Top
- Hindi Dependency Treebank - A multi-representational multi-layered treebank for Hindi and Urdu
- Universal Dependencies Treebank in Hindi
- Parallel Universal Dependencies Treebank in Hindi - A smaller part of the above-mentioned treebank.
NLP in Thai | Back to Top
- PyThaiNLP - Thai NLP in Python Package
- JTCC - A character cluster library in Java
- CutKum - Word segmentation with deep learning in TensorFlow
- Thai Language Toolkit - Based on a paper by Wirote Aroonmanakun in 2002 with included dataset
- SynThai - Word segmentation and POS tagging using deep learning in Python
- Inter-BEST - A text corpus with 5 million words with word segmentation
- Prime Minister 29 - Dataset containing speeches of the current Prime Minister of Thailand
NLP in Vietnamese | Back to Top
- underthesea - Vietnamese NLP Toolkit
- vn.vitk - A Vietnamese Text Processing Toolkit
- VnCoreNLP - A Vietnamese natural language processing toolkit
- Vietnamese treebank - 10,000 sentences for the constituency parsing task
- BKTreeBank - a Vietnamese Dependency Treebank
- UD_Vietnamese - Vietnamese Universal Dependency Treebank
- VIVOS - a free Vietnamese speech corpus consisting of 15 hours of recording speech by AILab
- VNTQcorpus(big).txt - 1.75 million sentences in news
Other Languages | Back to Top
- Russian: pymorphy2 - a good pos-tagger for Russian
- Asian Languages: Thai, Lao, Chinese, Japanese, and Korean ICU Tokenizer implementation in ElasticSearch
- Ancient Languages: CLTK: The Classical Language Toolkit is a Python library and collection of texts for doing NLP in ancient languages
- Dutch: python-frog - Python binding to Frog, an NLP suite for Dutch. (pos tagging, lemmatisation, dependency parsing, NER)
- Hebrew: NLPH_Resources - A collection of papers, corpora and linguistic resources for NLP in Hebrew
Awesome NLP was seeded with curated content from a number of other repositories, some of which are listed below | Back to Top