add documentation about core concepts in k2. (#516)
* add documentation about core concepts in k2.

* add doc for dense FSA vector.

* fix style issues.

* fix typos.

* resolve comments.

* Resolve more comments.
csukuangfj authored Dec 17, 2020
1 parent 73756a7 commit 33e9d18
Showing 22 changed files with 1,086 additions and 196 deletions.
162 changes: 1 addition & 161 deletions INSTALL.md
@@ -1,162 +1,2 @@

# Installation

## Dependencies installation

k2 supports CUDA as well as CPU. To get the full benefit from k2,
we recommend installing CUDA on your system before installing k2.
CUDA 10.1 and 10.2 are known to work. PyTorch is also needed;
`torch 1.6.0` and `torch 1.7.0` are known to work.

If you use pip to install torch, make sure the torch version
you install is compatible with the CUDA toolkit you are using.
For example, if your local CUDA toolkit version is 10.1, and
you want to install `torch 1.6.0`, you should install it with:

```bash
pip install torch==1.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
```

If you use conda to manage packages, you can install those
dependencies with:

```bash
conda create -n k2 python=3.7
conda activate k2
conda install pytorch==1.6.0 cudatoolkit=10.1 -c pytorch
```
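
Either way, you can quickly check that the installed torch build matches your
local CUDA toolkit (a minimal sketch; the printed version strings will differ
on your system):

```bash
# Print the torch version and the CUDA version it was built against
python3 -c "import torch; print(torch.__version__, torch.version.cuda)"

# Print the version of the locally installed CUDA toolkit (requires nvcc on PATH)
nvcc --version
```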

To install other versions of torch, please check the following URLs:

- https://pytorch.org/get-started/locally/

- https://pytorch.org/get-started/previous-versions/

**HINT**: If you install k2 from pre-built wheel packages, you need neither
a GPU nor the CUDA toolkit; k2 is ready to run on a CPU. However,
you still need to install PyTorch with CUDA support.


## Install k2 from pre-built wheel packages

There are two ways to install k2 from pre-built wheel packages.

### (1) From PyPI using `pip install --pre k2`

The wheel packages on PyPI are built with torch==1.6.0+cu101 on Ubuntu 18.04.
If you are using another Linux system, the pre-built wheel packages may NOT
work on your system; in that case, please install k2 from source.
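
For example (a minimal sketch; the installed version and the printed path
depend on your environment):

```bash
# Install the latest pre-release wheel of k2 from PyPI
pip install --pre k2

# Smoke test: importing k2 should succeed and print where it was installed
python3 -c "import k2; print(k2.__file__)"
```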

**CAUTION**: k2 is still under active development and we are trying to keep
the packages on PyPI up to date. If you want to try the latest version, please
install k2 from source.

### (2) From GitHub actions

We have set up GitHub actions to build wheel packages for every pull request.
They can be downloaded from the following URL:

https://github.com/k2-fsa/k2/actions?query=workflow%3Abuild

Please click the above URL and select the pull request from which you
want to download the pre-built wheel package. The environment information
for building the wheel packages is encoded in the filename. For example,
`gcc-6-cuda-10.2-torch-1.7.0-python-3.8-ubuntu-18.04` means this wheel
package is built using `GCC 6`, `CUDA toolkit 10.2`, `torch 1.7.0`,
`Python 3.8` and `Ubuntu 18.04`.

After downloading, you will get a `zip` file, e.g.,
`gcc-6-cuda-10.2-torch-1.7.0-python-3.8-ubuntu-18.04.zip`.
After `unzip gcc-6-cuda-10.2-torch-1.7.0-python-3.8-ubuntu-18.04.zip`,
you will obtain `k2-0.1.1+cu102.dev20201124-cp38-cp38-linux_x86_64.whl`,
which can be installed using:

```bash
pip install ./k2-0.1.1+cu102.dev20201124-cp38-cp38-linux_x86_64.whl
```

**NOTE**: After `unzip`, you may get a `*.whl` with a different filename from
the above one.

The pre-built packages generated by GitHub actions are available for 90 days
after creation.


## Install k2 from source

Before compiling k2, some preparation work has to be done:

- Have a compiler supporting at least C++14, e.g., GCC >= 5.0, Clang >= 3.4.
- Install CMake. CMake 3.11.0 and 3.18.0 are known to work.
- Install Python3. Python 3.6, 3.7 and 3.8 are known to work.
- Install PyTorch. PyTorch 1.6 and 1.7 are known to work.
- Install CUDA toolkit. CUDA 10.1 and 10.2 are known to work.
- Install cuDNN. Please install a version that is compatible with the
CUDA toolkit you are using.

(Note: NVCC is needed to build k2. If you use conda to install the CUDA toolkit,
you may also need to install `nvcc_linux-64` or `cudatoolkit-dev`, since the
default conda installation of the CUDA toolkit does not include NVCC.
However, `nvcc_linux-64` and `cudatoolkit-dev` may not work well on all platforms,
so if you want to build k2 from source, it is better to install the CUDA toolkit
in the normal way rather than via conda.)
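
Before running CMake, it may help to confirm that the required tools are on
your `PATH` (a minimal sketch; the reported versions will vary):

```bash
# These commands should all succeed and report versions matching the requirements above
gcc --version
cmake --version
nvcc --version
python3 --version
```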

After setting up the environment, we are ready to build k2:

```bash
git clone https://github.com/k2-fsa/k2.git
cd k2
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make _k2
cd ..
pip3 install wheel twine
./scripts/build_pip.sh

# Have a look at the `dist/` directory.
```

You will find the wheel file in the `dist` directory, e.g.,
`dist/k2-0.1.1.dev20201125-cp38-cp38-linux_x86_64.whl`, which
can be installed with:

```bash
pip install dist/k2-0.1.1.dev20201125-cp38-cp38-linux_x86_64.whl
```

**HINT**: You may get a wheel with a different filename.

To run the tests, you have to install the following requirements first:

```bash
sudo apt-get install graphviz
cd k2
pip3 install -r ./requirements.txt
```

You can run tests with

```bash
cd build
make -j
make test
```

To run tests in parallel:

```bash
cd build
make -j
ctest --parallel <JOBNUM>
```

If `valgrind` is installed, you can check for heap corruption and memory leaks with:

```bash
cd build
make -j
ctest -R <TESTNAME> -D ExperimentalMemCheck
```

**HINT**: You can install `valgrind` with `sudo apt-get install valgrind`
on Ubuntu.
See <https://k2.readthedocs.io/en/latest/installation.html>
7 changes: 4 additions & 3 deletions docs/requirements.txt
@@ -1,7 +1,8 @@
recommonmark
graphviz
dataclasses
sphinx_rtd_theme
graphviz
recommonmark
sphinx
sphinx-autodoc-typehints
sphinx_rtd_theme
sphinxcontrib-bibtex
torch>=1.6.0
9 changes: 9 additions & 0 deletions docs/source/bibtex.json
@@ -0,0 +1,9 @@
{
    "cited": {
        "core_concepts/index": [
            "mohri1997finite",
            "mohri2002weighted",
            "mohri2008speech"
        ]
    }
}
4 changes: 4 additions & 0 deletions docs/source/conf.py
@@ -51,7 +51,9 @@ def get_version():
    'sphinx.ext.napoleon',
    'sphinx_autodoc_typehints',
    'sphinx_rtd_theme',
    'sphinxcontrib.bibtex',
]
bibtex_bibfiles = ['refs.bib']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -83,6 +85,8 @@ def get_version():

pygments_style = 'sphinx'

numfig = True

html_context = {
    'display_github': True,
    'github_user': 'k2-fsa',
60 changes: 60 additions & 0 deletions docs/source/core_concepts/images/autograd.svg
66 changes: 66 additions & 0 deletions docs/source/core_concepts/images/autograd_tropical.svg
60 changes: 60 additions & 0 deletions docs/source/core_concepts/images/dense_fsa_vec_frame_0.svg