# dt-machine-learning-libraries

A Docker environment you can use to build CUDA-compatible machine learning tools and libraries for NVidia Jetson Nano boards.

## Environment

The environment in which the libraries are built is the combination of host and container libraries. In particular:

**Host Libraries:**

| Library | Version     |
| ------- | ----------- |
| CUDA    | 10.2(.89)   |
| CuDNN   | 8.0(.0.180) |

**Container Libraries:**

Check the files `dependencies-apt.txt` and `dependencies-py3.txt` for the full lists of container libraries.

## How to use it

The Docker image (i.e., the environment) can be built on any machine (i.e., on any architecture). The libraries themselves are built when the image is run, which can only happen on a machine with an arm64v8 architecture and with the proper versions of CUDA and CuDNN installed.

**NOTE:** This Docker image DOES NOT have CUDA/CuDNN installed in it. CUDA and CuDNN are mounted into the container by the `nvidia` runtime for Docker.
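As a quick sanity check on the target board, you can verify that the `nvidia` runtime does expose the host CUDA/CuDNN inside a container. This is only a minimal sketch and not part of this repository's tooling; the base image tag and library paths are assumptions based on a standard JetPack 4.4 (L4T r32.4) install:

```sh
# Minimal check (run on the Jetson itself); image tag and paths are assumptions,
# not something provided by this repository:
docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.4 \
  ls /usr/local/cuda-10.2/version.txt /usr/lib/aarch64-linux-gnu/libcudnn.so.8
```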

### Build the image

Build the environment image using the command:

```sh
dts devel build
```
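If you are curious what this corresponds to with plain Docker (outside the Duckietown `dts` tooling), a cross-architecture build for arm64v8 can be sketched as below. The image tag is illustrative only, and `dts devel build` adds Duckietown-specific configuration that this sketch omits:

```sh
# Roughly equivalent plain-Docker cross-build for arm64v8 (requires buildx + QEMU);
# the image tag is illustrative only:
docker buildx build --platform linux/arm64 \
  -t duckietown/dt-machine-learning-libraries:arm64v8 .
```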

### Run the image (build the library)

Build a library using the command:

```sh
dts devel run -L <library_name> -- -v $(pwd)/dist:/out
```

where `<library_name>` is one of those available in the `launchers/` directory of this repository. Once the build has completed, the final Python wheel will be available in the `dist/` directory of this repository.
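For example, assuming a (hypothetical) launcher named `pytorch` exists under `launchers/`:

```sh
# Build the library defined by the hypothetical "pytorch" launcher and collect
# the resulting wheel(s) in ./dist on the host:
dts devel run -L pytorch -- -v $(pwd)/dist:/out
```

The `-v $(pwd)/dist:/out` mount is what makes the wheel(s) appear on the host; without it, the build artifacts remain inside the container.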

## Where to run

As of January 2020, it is possible to install JetPack v4.4.1 on an NVidia Jetson AGX Xavier.