Command line tool for improving typing speed and accuracy. The main goal is to help programmers practise programming languages.
```bash
pip install --upgrade mltype
```
Make sure that Docker and Docker Compose are installed.

```bash
docker-compose run --rm mltype
```

You will get a shell in a running container and the `mlt` command should be available.
See the documentation for more information.
- Using neural networks to generate text. One can use pretrained networks (see below) or train new ones from scratch.
- Alternatively, one can read text from a file or provide it manually (see the quick example below)
- Dead simple (implemented in `curses`)
- Basic statistics - WPM and accuracy
- Setting target speed
- Playing against past performances
- Detailed documentation: https://mltype.readthedocs.io/en/latest/index.html.
- GIF examples: https://mltype.readthedocs.io/en/latest/source/examples.html.
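As a quick illustration of the file and manual modes, here is a minimal sketch; the file path and the snippet are placeholders, and the exact options accepted by each subcommand are listed under `mlt file --help` and `mlt raw --help`:

```bash
# practice typing text taken from a local file (path is a placeholder)
mlt file path/to/some_script.py

# practice typing a short snippet provided directly on the command line
mlt raw "print('hello world')"
```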
The entrypoint is `mlt`. To get information on how to use the subcommands, use the `--help` flag (e.g. `mlt file --help`).
```text
$ mlt
Usage: mlt [OPTIONS] COMMAND [ARGS]...

  Tool for improving typing speed and accuracy

Options:
  --help  Show this message and exit.

Commands:
  file    Type text from a file
  ls      List all language models
  random  Sample characters randomly from a vocabulary
  raw     Provide text manually
  replay  Compete against a past performance
  sample  Sample text from a language
  train   Train a language
```
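For instance, once at least one language model is installed (see the next section), the `ls` subcommand from the listing above shows what is available:

```bash
# list all language models found under ~/.mltype/languages
mlt ls
```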
See below for a list of pretrained models. They are stored on Google Drive and one needs to download the entire archive.

Once you download the file, you will need to place it in `~/.mltype/languages`. Note that if the folder does not exist, you will have to create it. The file name can be changed to whatever you like; this name will then be used to refer to the model.
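A minimal sketch of the placement step, assuming the archive was downloaded to `~/Downloads/some_model` (both the download path and the chosen name `my_new_model` are placeholders):

```bash
# create the languages folder if it does not exist yet
mkdir -p ~/.mltype/languages

# move the downloaded file there; whatever file name you pick becomes the model name
mv ~/Downloads/some_model ~/.mltype/languages/my_new_model
```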
To verify that the model was downloaded successfully, try to sample from it. Note that this might take 20+ seconds the first time around.

```bash
mlt sample my_new_model
```
Feel free to create an issue if you want me to train a model for you. Note that you can also do it yourself easily by reading the documentation (`mlt train`) and getting a GPU on Google Colab (click the badge below for a ready-to-use notebook).
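As a rough sketch of training your own model, the block below assumes a folder of text files at `path/to/texts` and the model name `my_model`; the argument order shown here is an assumption, so check `mlt train --help` and the documentation for the authoritative interface:

```bash
# show all training options (network size, number of epochs, etc.)
mlt train --help

# train a new language model on local text files and register it as "my_model"
# (both the input path and the model name are placeholders)
mlt train path/to/texts my_model
```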
This project is very much motivated by The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy.