We could integrate with an existing tool (one that uses a language model engine under the hood) to enable semi-automatic classification of submissions. For example, when a submission is linked, an API could scrape the abstract to help classify the paper into the buckets provided by Metriq (such as suggesting the most relevant top-level Task option among "Hardware", "Simulation", "Compiler", and "QEM/QEC"). More simply, it could also suggest tags to add in the submission modal.
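As a rough illustration of the flow described above, here is a minimal sketch. The keyword scorer stands in for a real language-model call, and the keyword lists, function name, and sample abstract are all illustrative assumptions, not part of any existing Metriq API:

```python
# Sketch: suggest a top-level Task bucket from a submission's abstract.
# A trivial keyword matcher stands in for the language-model classifier;
# the keyword lists below are hypothetical, not Metriq's actual taxonomy data.

TASK_KEYWORDS = {
    "Hardware": ["qubit", "superconducting", "trapped ion", "coherence"],
    "Simulation": ["simulator", "tensor network", "statevector"],
    "Compiler": ["transpile", "circuit optimization", "gate synthesis", "routing"],
    "QEM/QEC": ["error mitigation", "error correction", "surface code", "zero-noise"],
}


def suggest_task(abstract: str) -> str:
    """Return the bucket whose keywords appear most often in the abstract."""
    text = abstract.lower()
    scores = {
        task: sum(text.count(kw) for kw in keywords)
        for task, keywords in TASK_KEYWORDS.items()
    }
    # Pick the highest-scoring bucket; ties fall back to dict order.
    return max(scores, key=scores.get)


if __name__ == "__main__":
    abstract = (
        "We present a transpilation pass for circuit optimization and "
        "qubit routing on near-term devices."
    )
    print(suggest_task(abstract))  # Compiler
```

In a real integration, `suggest_task` would be replaced by a call to the classification tool (for example, a zero-shot classifier prompted with the four bucket names), with the result surfaced as a suggestion in the submission modal rather than applied automatically.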
I believe Argilla is used for data annotation during model training, but I wonder whether it would be a useful tool to plug in: https://argilla.io/