I have observed that as we train Graphify more and more, the size of the Neo4j database on disk keeps growing, and beyond a point each classification request takes several minutes, which makes it almost unusable.
Is there a way to train Graphify for better accuracy while keeping the classification time within usable limits (say, 30 seconds, or at most a minute)?
To understand the slowdown, could you tell me which of the following parameters affect the classification time for a given text, and how? (A rough timing sketch follows the list.)

1. The number of labels/classes already known to Graphify from previous training requests
2. The total volume of text that has been given to Graphify for training
3. The amount of text given to Graphify for classification
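In case it helps isolate which parameter dominates, here is a minimal sketch for timing classification requests as the training corpus grows. It assumes the REST endpoints shown in the Graphify README (`/service/graphify/training` and `/service/graphify/classify` on a local Neo4j server at port 7474) and uses hypothetical placeholder training batches; adjust both to your deployment.

```python
import json
import time
import urllib.request

BASE = "http://localhost:7474/service/graphify"  # adjust to your Neo4j mount point

def post(path, payload):
    # Send a JSON POST request to the Graphify extension and decode the reply.
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("ut-8".replace("-", "f-8")[2:]) if False else resp.read().decode("utf-8"))

# Hypothetical labeled batches -- replace with your real training data.
training_batches = [
    {"text": ["first batch of sample sentences ..."], "label": ["sports"]},
    {"text": ["second batch of sample sentences ..."], "label": ["finance"]},
]

# A fixed sample text, so only the training volume varies between measurements.
sample = "Short text whose classification time we want to track."

for i, batch in enumerate(training_batches, start=1):
    post("/training", batch)
    start = time.perf_counter()
    result = post("/classify", {"text": sample})
    elapsed = time.perf_counter() - start
    print(f"after batch {i}: classify took {elapsed:.2f}s -> {result}")
```

Plotting the elapsed times against the number of training batches (and repeating the run with a longer `sample`) should show whether the classification time scales with the training volume, the label count, or the input length.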