Replies: 1 comment
This is looking great @wlandau! 🙌 Thanks for sharing. Let me know if you have questions as you work on this in the future.
---
@philbowsher suggested that I explore the power of `vetiver` in `targets` pipelines. (@juliasilge, thanks for the package.) The integration seems like another successful instance of pipeline-based continuous deployment, which also comes up for Shiny apps and literate programming documents. The following `_targets.R` file extends the example from https://github.com/wlandau/targets-four-minutes with a `vetiver` model and a pin:
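A minimal sketch of the idea (the `get_data()`, `fit_model()`, and `plot_model()` helpers come from the four-minutes example, and the package list, board, and pin name below are placeholders rather than the exact file):

```r
# _targets.R
library(targets)
tar_option_set(packages = c("pins", "vetiver")) # plus whatever the helpers need
tar_source() # loads get_data(), fit_model(), and plot_model() from R/
list(
  tar_target(file, "data.csv", format = "file"),
  tar_target(data, get_data(file)),
  tar_target(model, fit_model(data)),
  tar_target(plot, plot_model(model, data)),
  # Wrap the fitted model as a vetiver model object.
  tar_target(vetiver_mod, vetiver_model(model, "example-model")),
  # Pin the vetiver model to a versioned board (a local folder board here).
  tar_target(pin, vetiver_pin_write(board_folder("pins", versioned = TRUE), vetiver_mod))
)
```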
The dependency graph shows the overall flow of the pipeline.
```r
# R console
tar_mermaid()
```
And the targets in the pipeline run in topological order.
The pin only updates when the vetiver model object changes.
This cuts down on the number of superfluous versions in the model history. In production, the pipeline could run on Connect at regular intervals, maybe in a scenario where the data updates on a schedule and the model needs to be retrained.
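For example, re-running the pipeline without any upstream changes skips every target, so `vetiver_pin_write()` never runs and no new pin version appears. A hypothetical session, assuming the sketch above:

```r
library(targets)
tar_make()     # first run: builds every target and writes the first pin version
tar_make()     # nothing changed upstream: all targets skipped, no new pin version
tar_outdated() # character(0) when everything is up to date
```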
`targets` would strategically reduce both computation time and the number of unimportant saved models.

Model cards would work well with the literate programming integration at https://books.ropensci.org/targets/literate-programming.html#literate-programming-within-a-target.
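For instance, a report target could rebuild a model card only when the model changes. A sketch, assuming a hypothetical `model_card.Rmd` report that reads the pinned model (`tar_render()` is from `tarchetypes`):

```r
# Appended to the target list in _targets.R.
library(tarchetypes)
tar_render(model_card, "model_card.Rmd") # reruns only when the report or its dependencies change
```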
As for `plumber` APIs, another target could redeploy the API with a short-lived synchronous command if needed, and then downstream targets could run predictions with the updated API using `predict(vetiver_endpoint("https://..."), ...)`.
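A sketch of what those downstream targets could look like. The deployment helper, board, Connect URL, and target names are assumptions here, not a tested recipe:

```r
# Additional targets appended to the list in _targets.R.
list(
  # Redeploy the plumber API when the pinned model changes.
  tar_target(api, vetiver_deploy_rsconnect(board_connect(), "user/example-model")),
  # Downstream predictions against the updated endpoint.
  tar_target(
    predictions,
    {
      api # reference the deployment target so predictions wait for the redeploy
      predict(vetiver_endpoint("https://connect.example.com/predict"), new_data)
    }
  )
)
```

Here `new_data` stands in for another target with fresh data to score.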