
Components of Ovation

John Gamboa edited this page Sep 13, 2017 · 2 revisions

Ovation is composed of several modules that form a pipeline for building your Conversational Intelligence architecture. The pipeline spans everything from reformatting raw datasets into a convenient shape to training models and outputting their predictions.
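As a rough mental model, the pipeline can be sketched as three stages chained together. The function names below are illustrative placeholders, not the actual Ovation API:

```python
# A minimal, hypothetical sketch of the Ovation pipeline stages.
# Every name here is a placeholder standing in for the real modules.

def preprocess(raw_records):
    # tools/: normalize raw dataset records into a uniform format
    return [r.strip().lower() for r in raw_records]

def load_dataset(records):
    # datasets/: expose the data as (text, label) pairs
    # (here the "label" is just the text length, for illustration)
    return [(r, len(r)) for r in records]

def train_and_predict(pairs):
    # models/: a stand-in "model" that simply echoes the labels
    return [label for _, label in pairs]

raw = ["  Hello World ", "Ovation  "]
predictions = train_and_predict(load_dataset(preprocess(raw)))
print(predictions)  # [11, 7]
```

The real modules are described one by one below; the point of the sketch is only that each stage consumes the previous stage's output.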

  • tools: Standalone scripts used for data preprocessing, either before or alongside the code found in the datasets folder. Take a look here for details on what they do.

  • datasets: To build any Deep Learning model, you need data. Datasets found on the internet come in all sorts of formats, and it can take hours to reorganize them into a format that is convenient for you. In the datasets folder, you will find a set of utility classes that load the data for you and expose it in several ways we deemed useful for the Deep Learning tasks we had in mind. You can find a better introduction to this module here.

  • models: Now that we have access to the data, we need models that receive it (in a suitable format) and output some result. In the models folder you will find example model classes that perform tasks such as Named Entity Recognition, Sentiment Classification, and Intent Classification. You can find more details about this module here.

  • templates: Contains examples of how to use the models together with some of the datasets. The idea is that you can tweak parts of this code to produce your own model without having to write all the required boilerplate from scratch. You can find more details here.
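To make the datasets idea concrete, the utility classes typically expose the data through a batching interface so a training loop never has to touch the raw files. The class and method names below are illustrative, not the real Ovation API:

```python
# A hypothetical sketch of the access pattern a dataset utility
# class provides: load once, then iterate in shuffled mini-batches.
import random

class ToyDataset:
    def __init__(self, examples):
        # examples: list of (text, label) pairs, already preprocessed
        self.examples = list(examples)
        self._cursor = 0

    def next_batch(self, batch_size):
        # reshuffle and restart once the current epoch is exhausted
        if self._cursor + batch_size > len(self.examples):
            random.shuffle(self.examples)
            self._cursor = 0
        batch = self.examples[self._cursor:self._cursor + batch_size]
        self._cursor += batch_size
        return batch

ds = ToyDataset([("great movie", 1), ("boring plot", 0),
                 ("loved it", 1), ("awful", 0)])
batch = ds.next_batch(2)
print(len(batch))  # 2
```

A model class would then consume `next_batch` output inside its training loop, which is roughly the wiring the templates folder demonstrates.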

Additionally, the following folders have some other useful code:

  • utils: Utility functions used by the rest of the code.
  • tests: Code for testing the functionality described above.