Day Zero bulk data-loading #66
bobmcwhirter started this conversation in Ideas
Replies: 1 comment · 4 replies
This may also apply to the dev cycle. When we examine tasks like "ingest all CVEs" or "ingest all RHSAs", they are not necessarily quick.

I'm wondering if it would be worthwhile to have CI (or some pipeline) perform those ingestions and squeeze out a bulk-loadable .sql file matching our schema. Then going from zero to a basically-full system could be much quicker than waiting for every importer to execute. Once up and running, a system would only need to perform the marginal change-loading on top of that initial .sql checkpoint.

Not a task for this week, but I wanted to open a discussion.
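To make the checkpoint idea concrete, here is a minimal sketch of what such a CI task could look like, assuming a PostgreSQL backing store. Everything named here is hypothetical for illustration: the database name `trustify`, the commented-out importer invocation, and the artifact name `day-zero-checkpoint.sql`; only `pg_dump` and `psql` are standard PostgreSQL tooling.

```sh
#!/usr/bin/env sh
# Hypothetical CI job: run the slow ingestion once, then dump a
# bulk-loadable checkpoint matching the current schema.
set -e

# 1. Run the importers against a scratch database.
#    (Placeholder -- see importer/README.md for the real invocation.)
# trustify-importer ...

# 2. Squeeze the result out as a plain-SQL checkpoint artifact.
#    --no-owner/--no-privileges keep the dump portable across roles.
pg_dump --no-owner --no-privileges trustify > day-zero-checkpoint.sql

# 3. A fresh deployment restores the checkpoint instead of
#    re-running every importer, then performs only the marginal
#    change-loading on top of it.
psql trustify < day-zero-checkpoint.sql
```

One trade-off to note: plain-SQL output restores with nothing but `psql`, while `pg_dump --format=custom` plus `pg_restore` would allow parallel restore for large checkpoints.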
Reply:

There are already some instructions for this: https://github.com/trustification/trustify/blob/main/importer/README.md

That should do the trick, and the data can be "cached" to speed things up when re-executing. I don't think this is suitable for production deployments, however. For that, I would hope we can have an API, to get something like this:

```sh
http POST localhost:8080/api/sources/csaf/redhat url=https://www.redhat.com schedule=daily
```
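A side note on the proposed call: HTTPie serializes `key=value` items into a JSON request body by default, so the same request could be issued with plain curl. The endpoint here is the proposal above, not an existing API.

```sh
# Equivalent curl request for the proposed (hypothetical) endpoint:
curl -X POST http://localhost:8080/api/sources/csaf/redhat \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://www.redhat.com", "schedule": "daily"}'
```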