
Classification time proportional to ? #20

Open
letronje opened this issue Mar 12, 2015 · 0 comments

Comments

@letronje

I have observed that as we train Graphify with more and more data, the size of the Neo4j database on disk keeps growing, and beyond a point each classification request takes several minutes, which makes it almost unusable.

Is there a way to train Graphify for higher accuracy while keeping classification time within usable limits (say, under 30 seconds, or under a minute)?

To understand the slowdown, could you tell me which of the following parameters affect the classification time for a given piece of text, and how? (A rough timing sketch follows the list.)

  • The number of labels/classes already known to Graphify from previous training requests
  • The total volume of text that has been given to Graphify for training
  • The amount of text given to Graphify for classification
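
For concreteness, here is a minimal sketch of how one could measure how classification latency grows with training volume. Graphify runs as a Neo4j extension, but the `/service/graphify` mount point, the endpoint paths, and the payload shapes below are assumptions made for illustration, not the confirmed API:

```python
# Latency probe: train in batches, then time a classification request after
# each batch. Endpoint paths and payload shapes are assumptions -- adapt them
# to Graphify's actual REST API before running.
import time

import requests

BASE = "http://localhost:7474/service/graphify"  # assumed extension mount point

def train(texts, label):
    # Hypothetical training call: feed a batch of texts under one label.
    requests.post(BASE + "/training", json={"text": texts, "label": [label]})

def classify_seconds(text):
    # Time a single classification request end to end.
    start = time.perf_counter()
    requests.post(BASE + "/classify", json={"text": [text]})
    return time.perf_counter() - start

sample = "A fixed sample sentence, so only the database size varies."
for batch in range(1, 11):
    train(["training text for batch %d ..." % batch] * 100, "label-a")
    print("after batch %2d: classification took %.2fs"
          % (batch, classify_seconds(sample)))
```

Keeping the classified text fixed while growing the training set (and, in a second run, the number of labels) would show directly which of the three factors above dominates the latency.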