In the example above, your credentials are auto-discovered from the environment in which you create the deployment; they must also be available in your runtime environment.
If you are familiar with the deployment creation mechanics of `.serve`, you will notice that `.deploy` is very similar. `.deploy` just requires a work pool name and has a number of parameters dealing with flow-code storage for Docker images. Unlike `.serve`, if you don't specify an image to use for your flow, you must specify where to pull the flow code from at runtime with the `from_source` method; `from_source` is optional with `.serve`.
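To make the distinction concrete, here is a minimal sketch of both paths; the repository URL, entrypoint, and work pool name are placeholders for your own values:

```python
from prefect import flow


@flow
def my_flow():
    print("Hello!")


if __name__ == "__main__":
    # With .serve, the flow code is already local, so from_source is optional:
    #     my_flow.serve(name="local-deployment")

    # With .deploy and no image, tell Prefect where to pull the flow code
    # from at runtime via from_source:
    flow.from_source(
        source="https://github.com/org/repo.git",  # placeholder repository
        entrypoint="flows/my_flow.py:my_flow",  # placeholder entrypoint
    ).deploy(
        name="remote-deployment",
        work_pool_name="my-work-pool",  # placeholder work pool
    )
```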
Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.
It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!
With Prefect you gain scheduling, retries, logging, caching, notifications, observability, and more.
Get up and running quickly with the quickstart guide.
Want more hands-on practice to productionize your workflows? Follow our tutorial.
For deeper dives on common use cases, explore our guides.
Take your understanding even further with Prefect's concepts and API reference.
Join Prefect's vibrant community of over 26,000 engineers to learn with others and share your knowledge!
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"Yes. Prefect Cloud for v2 is at app.prefect.cloud/ while Prefect Cloud for v1 is at cloud.prefect.io.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"The Prefect orchestration engine has three major objectives:
As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don't deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.
This means that the current generation of workflow managers is built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.
Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.
And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.
The Prefect orchestration engine represents a unified solution to these three problems.
The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe "workflows as code" or "orchestration as code," but the Prefect engine represents "code as workflows": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.
To achieve this, we've leveraged the familiar tools of native Python: first-class functions, type annotations, and `async` support. Users are free to implement as much, or as little, of the Prefect engine as is useful for their objectives.
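For example, ordinary Python control flow becomes an orchestrated workflow with nothing more than the `flow` and `task` decorators; a minimal sketch:

```python
from prefect import flow, task


@task
def fetch_number() -> int:
    return 42


@flow
def pipeline():
    # plain Python: native function calls, conditionals, and loops
    value = fetch_number()
    if value > 40:
        print(f"Got a big number: {value}")


if __name__ == "__main__":
    pipeline()
```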
### If I'm using Prefect Cloud, do I still need to run a Prefect server?

No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"Yes! For more information, see the Task.map
API reference
```python
@flow
def my_flow():

    # map over a constant
    for i in range(10):
        my_mapped_task(i)

    # map over a task's output
    l = list_task()
    for i in l.wait().result():
        my_mapped_task_2(i)
```
Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, `my_mapped_task_2` does not know that it is downstream from `list_task()`. Prefect will have convenience functions for detecting these associations, and Prefect's `.map()` operator will automatically track them.
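In the meantime, one way to make such an edge explicit is the `wait_for` argument to `task.submit`; a minimal sketch, assuming `list_task` and `my_mapped_task_2` are ordinary tasks:

```python
from prefect import flow, task


@task
def list_task() -> list:
    return [1, 2, 3]


@task
def my_mapped_task_2(i):
    return i * 2


@flow
def my_flow():
    future = list_task.submit()
    for i in future.result():
        # wait_for records an explicit upstream edge on list_task
        my_mapped_task_2.submit(i, wait_for=[future])
```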
### Can I enforce ordering between tasks that don't share data?

Yes! For more information, see the Tasks section.
### Does Prefect support proxies?

Yes! Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"Yes!
See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"Yes!
See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"Prefect does not have any additional requirements besides those installed by pip install --pre prefect
. The entire system, including the UI and services, can be run in a single process via prefect server start
and does not require Docker.
Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the `PREFECT_API_DATABASE_CONNECTION_URL` environment variable.
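For example, the setting can be supplied with `prefect config set`, or programmatically via Prefect's settings helpers; a minimal sketch, where the connection string is a placeholder for your own PostgreSQL instance:

```python
from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL, temporary_settings

# Placeholder credentials; point this at your own PostgreSQL database
url = "postgresql+asyncpg://postgres:secret@localhost:5432/prefect"

with temporary_settings({PREFECT_API_DATABASE_CONNECTION_URL: url}):
    ...  # code run here sees the PostgreSQL connection URL
```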
A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at `~/.prefect/prefect.db` on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.
SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes. See our migration guide and our Upgrade to Prefect 2 post for more information.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can a use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"Yes. Just use different virtual environments.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API Reference","text":"Prefect auto-generates reference documentation for the following components:
Self-hosted docs: When self-hosting, you can access REST API documentation at the `/docs` endpoint of your `PREFECT_API_URL`. For example, if you run `prefect server start` with no additional configuration, you can find this reference at http://localhost:4200/docs.
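The same API can also be exercised from Python with the built-in client; a minimal sketch, assuming a server is reachable at your `PREFECT_API_URL`:

```python
import asyncio

from prefect import get_client


async def check_api():
    async with get_client() as client:
        response = await client.hello()  # simple liveness check
        print(response.json())


if __name__ == "__main__":
    asyncio.run(check_api())
```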
# prefect.agent

DEPRECATION WARNING: This module is deprecated as of March 2024 and will not be available after September 2024. Agents have been replaced by workers, which offer enhanced functionality and better performance. For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent
","text":"Source code in prefect/agent.py
```python
@deprecated_class(
    start_date="Mar 2024",
    help="Use a worker instead. Refer to the upgrade guide for more information: https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.",
)
class PrefectAgent:
    def __init__(
        self,
        work_queues: List[str] = None,
        work_queue_prefix: Union[str, List[str]] = None,
        work_pool_name: str = None,
        prefetch_seconds: int = None,
        default_infrastructure: Infrastructure = None,
        default_infrastructure_document_id: UUID = None,
        limit: Optional[int] = None,
    ) -> None:
        if default_infrastructure and default_infrastructure_document_id:
            raise ValueError(
                "Provide only one of 'default_infrastructure' and"
                " 'default_infrastructure_document_id'."
            )

        self.work_queues: Set[str] = set(work_queues) if work_queues else set()
        self.work_pool_name = work_pool_name
        self.prefetch_seconds = prefetch_seconds
        self.submitting_flow_run_ids = set()
        self.cancelling_flow_run_ids = set()
        self.scheduled_task_scopes = set()
        self.started = False
        self.logger = get_logger("agent")
        self.task_group: Optional[anyio.abc.TaskGroup] = None
        self.limit: Optional[int] = limit
        self.limiter: Optional[anyio.CapacityLimiter] = None
        self.client: Optional[PrefectClient] = None

        if isinstance(work_queue_prefix, str):
            work_queue_prefix = [work_queue_prefix]
        self.work_queue_prefix = work_queue_prefix

        self._work_queue_cache_expiration: pendulum.DateTime = None
        self._work_queue_cache: List[WorkQueue] = []

        if default_infrastructure:
            self.default_infrastructure_document_id = (
                default_infrastructure._block_document_id
            )
            self.default_infrastructure = default_infrastructure
        elif default_infrastructure_document_id:
            self.default_infrastructure_document_id = default_infrastructure_document_id
            self.default_infrastructure = None
        else:
            self.default_infrastructure = Process()
            self.default_infrastructure_document_id = None

    async def update_matched_agent_work_queues(self):
        if self.work_queue_prefix:
            if self.work_pool_name:
                matched_queues = await self.client.read_work_queues(
                    work_pool_name=self.work_pool_name,
                    work_queue_filter=WorkQueueFilter(
                        name=WorkQueueFilterName(startswith_=self.work_queue_prefix)
                    ),
                )
            else:
                matched_queues = await self.client.match_work_queues(
                    self.work_queue_prefix, work_pool_name=DEFAULT_AGENT_WORK_POOL_NAME
                )

            matched_queues = set(q.name for q in matched_queues)
            if matched_queues != self.work_queues:
                new_queues = matched_queues - self.work_queues
                removed_queues = self.work_queues - matched_queues
                if new_queues:
                    self.logger.info(
                        f"Matched new work queues: {', '.join(new_queues)}"
                    )
                if removed_queues:
                    self.logger.info(
                        f"Work queues no longer matched: {', '.join(removed_queues)}"
                    )
                self.work_queues = matched_queues

    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:
        """
        Loads the work queue objects corresponding to the agent's target work
        queues. If any of them don't exist, they are created.
        """

        # if the queue cache has not expired, yield queues from the cache
        now = pendulum.now("UTC")
        if (self._work_queue_cache_expiration or now) > now:
            for queue in self._work_queue_cache:
                yield queue
            return

        # otherwise clear the cache, set the expiration for 30 seconds, and
        # reload the work queues
        self._work_queue_cache.clear()
        self._work_queue_cache_expiration = now.add(seconds=30)

        await self.update_matched_agent_work_queues()

        for name in self.work_queues:
            try:
                work_queue = await self.client.read_work_queue_by_name(
                    work_pool_name=self.work_pool_name, name=name
                )
            except (ObjectNotFound, Exception):
                work_queue = None

            # if the work queue wasn't found and the agent is NOT polling
            # for queues using a regex, try to create it
            if work_queue is None and not self.work_queue_prefix:
                try:
                    work_queue = await self.client.create_work_queue(
                        work_pool_name=self.work_pool_name, name=name
                    )
                except Exception:
                    # if creating it raises an exception, it was probably just
                    # created by some other agent; rather than entering a re-read
                    # loop with new error handling, we log the exception and
                    # continue.
                    self.logger.exception(f"Failed to create work queue {name!r}.")
                    continue
                else:
                    log_str = f"Created work queue {name!r}"
                    if self.work_pool_name:
                        log_str = (
                            f"Created work queue {name!r} in work pool"
                            f" {self.work_pool_name!r}."
                        )
                    else:
                        log_str = f"Created work queue '{name}'."
                    self.logger.info(log_str)

            if work_queue is None:
                self.logger.error(
                    f"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't"
                    " found"
                )
            else:
                self._work_queue_cache.append(work_queue)
                yield work_queue

    async def get_and_submit_flow_runs(self) -> List[FlowRun]:
        """
        The principal method on agents. Queries for scheduled flow runs and submits
        them for execution in parallel.
        """
        if not self.started:
            raise RuntimeError(
                "Agent is not started. Use `async with PrefectAgent()...`"
            )

        self.logger.debug("Checking for scheduled flow runs...")

        before = pendulum.now("utc").add(
            seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()
        )

        submittable_runs: List[FlowRun] = []

        if self.work_pool_name:
            responses = await self.client.get_scheduled_flow_runs_for_work_pool(
                work_pool_name=self.work_pool_name,
                work_queue_names=[wq.name async for wq in self.get_work_queues()],
                scheduled_before=before,
            )
            submittable_runs.extend([response.flow_run for response in responses])

        else:
            # load runs from each work queue
            async for work_queue in self.get_work_queues():
                # print a nice message if the work queue is paused
                if work_queue.is_paused:
                    self.logger.info(
                        f"Work queue {work_queue.name!r} ({work_queue.id}) is paused."
                    )

                else:
                    try:
                        queue_runs = await self.client.get_runs_in_work_queue(
                            id=work_queue.id, limit=10, scheduled_before=before
                        )
                        submittable_runs.extend(queue_runs)
                    except ObjectNotFound:
                        self.logger.error(
                            f"Work queue {work_queue.name!r} ({work_queue.id}) not"
                            " found."
                        )
                    except Exception as exc:
                        self.logger.exception(exc)

        submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)

        for flow_run in submittable_runs:
            # don't resubmit a run
            if flow_run.id in self.submitting_flow_run_ids:
                continue

            try:
                if self.limiter:
                    self.limiter.acquire_on_behalf_of_nowait(flow_run.id)
            except anyio.WouldBlock:
                self.logger.info(
                    f"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs"
                    " in progress."
                )
                break
            else:
                self.logger.info(f"Submitting flow run '{flow_run.id}'")
                self.submitting_flow_run_ids.add(flow_run.id)
                self.task_group.start_soon(
                    self.submit_run,
                    flow_run,
                )

        return list(
            filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)
        )

    async def check_for_cancelled_flow_runs(self):
        if not self.started:
            raise RuntimeError(
                "Agent is not started. Use `async with PrefectAgent()...`"
            )

        self.logger.debug("Checking for cancelled flow runs...")

        work_queue_filter = (
            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))
            if self.work_queues
            else None
        )

        work_pool_filter = (
            WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))
            if self.work_pool_name
            else WorkPoolFilter(name=WorkPoolFilterName(any_=["default-agent-pool"]))
        )
        named_cancelling_flow_runs = await self.client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state=FlowRunFilterState(
                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),
                    name=FlowRunFilterStateName(any_=["Cancelling"]),
                ),
                # Avoid duplicate cancellation calls
                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),
            ),
            work_pool_filter=work_pool_filter,
            work_queue_filter=work_queue_filter,
        )

        typed_cancelling_flow_runs = await self.client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state=FlowRunFilterState(
                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),
                ),
                # Avoid duplicate cancellation calls
                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),
            ),
            work_pool_filter=work_pool_filter,
            work_queue_filter=work_queue_filter,
        )

        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs

        if cancelling_flow_runs:
            self.logger.info(
                f"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation."
            )

        for flow_run in cancelling_flow_runs:
            self.cancelling_flow_run_ids.add(flow_run.id)
            self.task_group.start_soon(self.cancel_run, flow_run)

        return cancelling_flow_runs

    async def cancel_run(self, flow_run: FlowRun) -> None:
        """
        Cancel a flow run by killing its infrastructure
        """
        if not flow_run.infrastructure_pid:
            self.logger.error(
                f"Flow run '{flow_run.id}' does not have an infrastructure pid"
                " attached. Cancellation cannot be guaranteed."
            )
            await self._mark_flow_run_as_cancelled(
                flow_run,
                state_updates={
                    "message": (
                        "This flow run is missing infrastructure tracking information"
                        " and cancellation cannot be guaranteed."
                    )
                },
            )
            return

        try:
            infrastructure = await self.get_infrastructure(flow_run)
            if infrastructure.is_using_a_runner:
                self.logger.info(
                    f"Skipping cancellation because flow run {str(flow_run.id)!r} is"
                    " using enhanced cancellation. A dedicated runner will handle"
                    " cancellation."
                )
                return
        except Exception:
            self.logger.exception(
                f"Failed to get infrastructure for flow run '{flow_run.id}'. "
                "Flow run cannot be cancelled."
            )
            # Note: We leave this flow run in the cancelling set because it cannot be
            # cancelled and this will prevent additional attempts.
            return

        if not hasattr(infrastructure, "kill"):
            self.logger.error(
                f"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} "
                "does not support killing created infrastructure. "
                "Cancellation cannot be guaranteed."
            )
            return

        self.logger.info(
            f"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run "
            f"'{flow_run.id}'..."
        )
        try:
            await infrastructure.kill(flow_run.infrastructure_pid)
        except InfrastructureNotFound as exc:
            self.logger.warning(f"{exc} Marking flow run as cancelled.")
            await self._mark_flow_run_as_cancelled(flow_run)
        except InfrastructureNotAvailable as exc:
            self.logger.warning(f"{exc} Flow run cannot be cancelled by this agent.")
        except Exception:
            self.logger.exception(
                "Encountered exception while killing infrastructure for flow run "
                f"'{flow_run.id}'. Flow run may not be cancelled."
            )
            # We will try again on generic exceptions
            self.cancelling_flow_run_ids.remove(flow_run.id)
            return
        else:
            await self._mark_flow_run_as_cancelled(flow_run)
            self.logger.info(f"Cancelled flow run '{flow_run.id}'!")

    async def _mark_flow_run_as_cancelled(
        self, flow_run: FlowRun, state_updates: Optional[dict] = None
    ) -> None:
        state_updates = state_updates or {}
        state_updates.setdefault("name", "Cancelled")
        state_updates.setdefault("type", StateType.CANCELLED)
        state = flow_run.state.copy(update=state_updates)

        await self.client.set_flow_run_state(flow_run.id, state, force=True)

        # Do not remove the flow run from the cancelling set immediately because
        # the API caches responses for the `read_flow_runs` and we do not want to
        # duplicate cancellations.
        await self._schedule_task(
            60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id
        )

    async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:
        deployment = await self.client.read_deployment(flow_run.deployment_id)

        flow = await self.client.read_flow(deployment.flow_id)

        # overrides only apply when configuring known infra blocks
        if not deployment.infrastructure_document_id:
            if self.default_infrastructure:
                infra_block = self.default_infrastructure
            else:
                infra_document = await self.client.read_block_document(
                    self.default_infrastructure_document_id
                )
                infra_block = Block._from_block_document(infra_document)

            # Add flow run metadata to the infrastructure
            prepared_infrastructure = infra_block.prepare_for_flow_run(
                flow_run, deployment=deployment, flow=flow
            )
            return prepared_infrastructure

        ## get infra
        infra_document = await self.client.read_block_document(
            deployment.infrastructure_document_id
        )

        # this piece of logic applies any overrides that may have been set on the
        # deployment; overrides are defined as dot.delimited paths on possibly nested
        # attributes of the infrastructure block
        doc_dict = infra_document.dict()
        infra_dict = doc_dict.get("data", {})
        for override, value in (deployment.infra_overrides or {}).items():
            nested_fields = override.split(".")
            data = infra_dict
            for field in nested_fields[:-1]:
                data = data[field]

            # once we reach the end, set the value
            data[nested_fields[-1]] = value

        # reconstruct the infra block
        doc_dict["data"] = infra_dict
        infra_document = BlockDocument(**doc_dict)
        infrastructure_block = Block._from_block_document(infra_document)

        # TODO: Here the agent may update the infrastructure with agent-level settings

        # Add flow run metadata to the infrastructure
        prepared_infrastructure = infrastructure_block.prepare_for_flow_run(
            flow_run, deployment=deployment, flow=flow
        )

        return prepared_infrastructure

    async def submit_run(self, flow_run: FlowRun) -> None:
        """
        Submit a flow run to the infrastructure
        """
        ready_to_submit = await self._propose_pending_state(flow_run)

        if ready_to_submit:
            try:
                infrastructure = await self.get_infrastructure(flow_run)
            except Exception as exc:
                self.logger.exception(
                    f"Failed to get infrastructure for flow run '{flow_run.id}'."
                )
                await self._propose_failed_state(flow_run, exc)
                if self.limiter:
                    self.limiter.release_on_behalf_of(flow_run.id)
            else:
                # Wait for submission to be completed. Note that the submission function
                # may continue to run in the background after this exits.
                readiness_result = await self.task_group.start(
                    self._submit_run_and_capture_errors, flow_run, infrastructure
                )

                if readiness_result and not isinstance(readiness_result, Exception):
                    try:
                        await self.client.update_flow_run(
                            flow_run_id=flow_run.id,
                            infrastructure_pid=str(readiness_result),
                        )
                    except Exception:
                        self.logger.exception(
                            "An error occurred while setting the `infrastructure_pid`"
                            f" on flow run {flow_run.id!r}. The flow run will not be"
                            " cancellable."
                        )

                self.logger.info(f"Completed submission of flow run '{flow_run.id}'")

        else:
            # If the run is not ready to submit, release the concurrency slot
            if self.limiter:
                self.limiter.release_on_behalf_of(flow_run.id)

        self.submitting_flow_run_ids.remove(flow_run.id)

    async def _submit_run_and_capture_errors(
        self,
        flow_run: FlowRun,
        infrastructure: Infrastructure,
        task_status: anyio.abc.TaskStatus = None,
    ) -> Union[InfrastructureResult, Exception]:
        # Note: There is not a clear way to determine if task_status.started() has been
        # called without peeking at the internal `_future`. Ideally we could just
        # check if the flow run id has been removed from `submitting_flow_run_ids`
        # but it is not so simple to guarantee that this coroutine yields back
        # to `submit_run` to execute that line when exceptions are raised during
        # submission.
        try:
            result = await infrastructure.run(task_status=task_status)
        except Exception as exc:
            if not task_status._future.done():
                # This flow run was being submitted and did not start successfully
                self.logger.exception(
                    f"Failed to submit flow run '{flow_run.id}' to infrastructure."
                )
                # Mark the task as started to prevent agent crash
                task_status.started(exc)
                await self._propose_crashed_state(
                    flow_run, "Flow run could not be submitted to infrastructure"
                )
            else:
                self.logger.exception(
                    f"An error occurred while monitoring flow run '{flow_run.id}'. "
                    "The flow run will not be marked as failed, but an issue may have "
                    "occurred."
                )
            return exc
        finally:
            if self.limiter:
                self.limiter.release_on_behalf_of(flow_run.id)

        if not task_status._future.done():
            self.logger.error(
                f"Infrastructure returned without reporting flow run '{flow_run.id}' "
                "as started or raising an error. This behavior is not expected and "
                "generally indicates improper implementation of infrastructure. The "
                "flow run will not be marked as failed, but an issue may have occurred."
            )
            # Mark the task as started to prevent agent crash
            task_status.started()

        if result.status_code != 0:
            await self._propose_crashed_state(
                flow_run,
                (
                    "Flow run infrastructure exited with non-zero status code"
                    f" {result.status_code}."
                ),
            )

        return result

    async def _propose_pending_state(self, flow_run: FlowRun) -> bool:
        state = flow_run.state
        try:
            state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)
        except Abort as exc:
            self.logger.info(
                (
                    f"Aborted submission of flow run '{flow_run.id}'. "
                    f"Server sent an abort signal: {exc}"
                ),
            )
            return False
        except Exception:
            self.logger.error(
                f"Failed to update state of flow run '{flow_run.id}'",
                exc_info=True,
            )
            return False

        if not state.is_pending():
            self.logger.info(
                (
                    f"Aborted submission of flow run '{flow_run.id}': "
                    f"Server returned a non-pending state {state.type.value!r}"
                ),
            )
            return False

        return True

    async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:
        try:
            await propose_state(
                self.client,
                await exception_to_failed_state(message="Submission failed.", exc=exc),
                flow_run_id=flow_run.id,
            )
        except Abort:
            # We've already failed, no need to note the abort but we don't want it to
            # raise in the agent process
            pass
        except Exception:
            self.logger.error(
                f"Failed to update state of flow run '{flow_run.id}'",
                exc_info=True,
            )

    async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:
        try:
            state = await propose_state(
                self.client,
                Crashed(message=message),
                flow_run_id=flow_run.id,
            )
        except Abort:
            # Flow run already marked as failed
            pass
        except Exception:
            self.logger.exception(f"Failed to update state of flow run '{flow_run.id}'")
        else:
            if state.is_crashed():
                self.logger.info(
                    f"Reported flow run '{flow_run.id}' as crashed: {message}"
                )

    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):
        """
        Schedule a background task to start after some time.

        These tasks will be run immediately when the agent exits instead of waiting.

        The function may be async or sync. Async functions will be awaited.
        """

        async def wrapper(task_status):
            # If we are shutting down, do not sleep; otherwise sleep until the scheduled
            # time or shutdown
            if self.started:
                with anyio.CancelScope() as scope:
                    self.scheduled_task_scopes.add(scope)
                    task_status.started()
                    await anyio.sleep(__in_seconds)

                self.scheduled_task_scopes.remove(scope)
            else:
                task_status.started()

            result = fn(*args, **kwargs)
            if inspect.iscoroutine(result):
                await result

        await self.task_group.start(wrapper)

    # Context management ---------------------------------------------------------------

    async def start(self):
        self.started = True
        self.task_group = anyio.create_task_group()
        self.limiter = (
            anyio.CapacityLimiter(self.limit) if self.limit is not None else None
        )
        self.client = get_client()
        await self.client.__aenter__()
        await self.task_group.__aenter__()

    async def shutdown(self, *exc_info):
        self.started = False
        # We must cancel scheduled task scopes before closing the task group
        for scope in self.scheduled_task_scopes:
            scope.cancel()
        await self.task_group.__aexit__(*exc_info)
        await self.client.__aexit__(*exc_info)
        self.task_group = None
        self.client = None
        self.submitting_flow_run_ids.clear()
        self.cancelling_flow_run_ids.clear()
        self.scheduled_task_scopes.clear()
        self._work_queue_cache_expiration = None
        self._work_queue_cache = []

    async def __aenter__(self):
        await self.start()
        return self

    async def __aexit__(self, *exc_info):
        await self.shutdown(*exc_info)
```
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run
async
","text":"Cancel a flow run by killing its infrastructure
Source code inprefect/agent.py
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs
async
","text":"The principle method on agents. Queries for scheduled flow runs and submits them for execution in parallel.
Source code inprefect/agent.py
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues
async
","text":"Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.
Source code inprefect/agent.py
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run
async
","text":"Submit a flow run to the infrastructure
Source code inprefect/agent.py
","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts
","text":"Interface for creating and reading artifacts.
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact
async
","text":"Create a link artifact.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `link` | `str` | The link to create. | required |
| `link_text` | `Optional[str]` | The link text. | `None` |
| `key` | `Optional[str]` | A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. | `None` |
| `description` | `Optional[str]` | A user-specified description of the artifact. | `None` |
Returns:

| Type | Description |
| --- | --- |
| `UUID` | The link artifact ID. |

Source code in `prefect/artifacts.py`:
```python
@sync_compatible
async def create_link_artifact(
    link: str,
    link_text: Optional[str] = None,
    key: Optional[str] = None,
    description: Optional[str] = None,
) -> UUID:
    """
    Create a link artifact.

    Arguments:
        link: The link to create.
        link_text: The link text.
        key: A user-provided string identifier.
            Required for the artifact to show in the Artifacts page in the UI.
            The key must only contain lowercase letters, numbers, and dashes.
        description: A user-specified description of the artifact.

    Returns:
        The link artifact ID.
    """
    formatted_link = f"[{link_text}]({link})" if link_text else f"[{link}]({link})"
    artifact = await _create_artifact(
        key=key,
        type="markdown",
        description=description,
        data=formatted_link,
    )

    return artifact.id
```
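Because the function is `sync_compatible`, it can be called directly inside a synchronous flow; a minimal usage sketch with placeholder values:

```python
from prefect import flow
from prefect.artifacts import create_link_artifact


@flow
def report_flow():
    create_link_artifact(
        link="https://example.com/report.html",  # placeholder URL
        link_text="Nightly report",
        key="nightly-report",  # lowercase letters, numbers, and dashes only
        description="Link to the latest nightly report",
    )
```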
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact
async
","text":"Create a markdown artifact.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `markdown` | `str` | The markdown to create. | required |
| `key` | `Optional[str]` | A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. | `None` |
| `description` | `Optional[str]` | A user-specified description of the artifact. | `None` |
Returns:

| Type | Description |
| --- | --- |
| `UUID` | The markdown artifact ID. |

Source code in `prefect/artifacts.py`:
```python
@sync_compatible
async def create_markdown_artifact(
    markdown: str,
    key: Optional[str] = None,
    description: Optional[str] = None,
) -> UUID:
    """
    Create a markdown artifact.

    Arguments:
        markdown: The markdown to create.
        key: A user-provided string identifier.
            Required for the artifact to show in the Artifacts page in the UI.
            The key must only contain lowercase letters, numbers, and dashes.
        description: A user-specified description of the artifact.

    Returns:
        The markdown artifact ID.
    """
    artifact = await _create_artifact(
        key=key,
        type="markdown",
        description=description,
        data=markdown,
    )

    return artifact.id
```
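Usage mirrors `create_link_artifact`; a minimal sketch:

```python
from prefect import flow
from prefect.artifacts import create_markdown_artifact


@flow
def summary_flow():
    create_markdown_artifact(
        markdown="# Run summary\n\n- rows processed: 1200\n- errors: 0",
        key="run-summary",
        description="Human-readable summary of this run",
    )
```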
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact
async
","text":"Create a table artifact.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `table` | `Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]` | The table to create. | required |
| `key` | `Optional[str]` | A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. | `None` |
| `description` | `Optional[str]` | A user-specified description of the artifact. | `None` |
Returns:

| Type | Description |
| --- | --- |
| `UUID` | The table artifact ID. |

Source code in `prefect/artifacts.py`:
```python
@sync_compatible
async def create_table_artifact(
    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],
    key: Optional[str] = None,
    description: Optional[str] = None,
) -> UUID:
    """
    Create a table artifact.

    Arguments:
        table: The table to create.
        key: A user-provided string identifier.
            Required for the artifact to show in the Artifacts page in the UI.
            The key must only contain lowercase letters, numbers, and dashes.
        description: A user-specified description of the artifact.

    Returns:
        The table artifact ID.
    """

    def _sanitize_nan_values(item):
        """
        Sanitize NaN values in a given item. The item can be a dict, list or float.
        """

        if isinstance(item, list):
            return [_sanitize_nan_values(sub_item) for sub_item in item]

        elif isinstance(item, dict):
            return {k: _sanitize_nan_values(v) for k, v in item.items()}

        elif isinstance(item, float) and math.isnan(item):
            return None

        else:
            return item

    sanitized_table = _sanitize_nan_values(table)

    if isinstance(table, dict) and all(isinstance(v, list) for v in table.values()):
        pass
    elif isinstance(table, list) and all(isinstance(v, (list, dict)) for v in table):
        pass
    else:
        raise TypeError(INVALID_TABLE_TYPE_ERROR)

    formatted_table = json.dumps(sanitized_table)

    artifact = await _create_artifact(
        key=key,
        type="table",
        description=description,
        data=formatted_table,
    )

    return artifact.id
```
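A minimal sketch using the list-of-dicts shape; note how the `NaN` value is sanitized to `null` in the stored JSON:

```python
import math

from prefect import flow
from prefect.artifacts import create_table_artifact


@flow
def table_flow():
    create_table_artifact(
        table=[
            {"name": "a", "value": 1.0},
            {"name": "b", "value": math.nan},  # stored as null
        ],
        key="example-table",
    )
```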
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context
","text":"Async and thread safe models for passing runtime context data.
These contexts should never be directly mutated by the user.
For more user-accessible information about the current run, see prefect.runtime
.
## ContextModel

Bases: `BaseModel`

A base model for context data that forbids mutation and extra data while providing a context manager.

Source code in `prefect/context.py`:
```python
class ContextModel(BaseModel):
    """
    A base model for context data that forbids mutation and extra data while providing
    a context manager
    """

    # The context variable for storing data must be defined by the child class
    __var__: ContextVar
    _token: Token = PrivateAttr(None)

    class Config:
        allow_mutation = False
        arbitrary_types_allowed = True
        extra = "forbid"

    def __enter__(self):
        if self._token is not None:
            raise RuntimeError(
                "Context already entered. Context enter calls cannot be nested."
            )
        self._token = self.__var__.set(self)
        return self

    def __exit__(self, *_):
        if not self._token:
            raise RuntimeError(
                "Asymmetric use of context. Context exit called without an enter."
            )
        self.__var__.reset(self._token)
        self._token = None

    @classmethod
    def get(cls: Type[T]) -> Optional[T]:
        return cls.__var__.get(None)

    def copy(self, **kwargs):
        """
        Duplicate the context model, optionally choosing which fields to include,
        exclude, or change.

        Attributes:
            include: Fields to include in new model.
            exclude: Fields to exclude from new model, as with values this takes
                precedence over include.
            update: Values to change/add in the new model. Note: the data is not
                validated before creating the new model - you should trust this data.
            deep: Set to `True` to make a deep copy of the model.

        Returns:
            A new model instance.
        """
        # Remove the token on copy to avoid re-entrance errors
        new = super().copy(**kwargs)
        new._token = None
        return new
```
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy
","text":"Duplicate the context model, optionally choosing which fields to include, exclude, or change.
Attributes:

| Name | Description |
| --- | --- |
| `include` | Fields to include in new model. |
| `exclude` | Fields to exclude from new model; as with values, this takes precedence over include. |
| `update` | Values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data. |
| `deep` | Set to `True` to make a deep copy of the model. |

Returns:

A new model instance.
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.EngineContext","title":"EngineContext
","text":" Bases: RunContext
The context for a flow run. Data in this context is only available from within a flow run function.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `flow` | `Optional[Flow]` | The flow instance associated with the run |
| `flow_run` | `Optional[FlowRun]` | The API metadata for the flow run |
| `task_runner` | `BaseTaskRunner` | The task runner instance being used for the flow run |
| `task_run_futures` | `List[PrefectFuture]` | A list of futures for task runs submitted within this flow run |
| `task_run_states` | `List[State]` | A list of states for task runs created within this flow run |
| `task_run_results` | `Dict[int, State]` | A mapping of result ids to task run states for this flow run |
| `flow_run_states` | `List[State]` | A list of states for flow runs created within this flow run |
| `sync_portal` | `Optional[BlockingPortal]` | A blocking portal for sync task/flow runs in an async flow |
| `timeout_scope` | `Optional[CancelScope]` | The cancellation scope for flow-level timeouts |

Source code in `prefect/context.py`:
```python
class EngineContext(RunContext):
    """
    The context for a flow run. Data in this context is only available from within a
    flow run function.

    Attributes:
        flow: The flow instance associated with the run
        flow_run: The API metadata for the flow run
        task_runner: The task runner instance being used for the flow run
        task_run_futures: A list of futures for task runs submitted within this flow run
        task_run_states: A list of states for task runs created within this flow run
        task_run_results: A mapping of result ids to task run states for this flow run
        flow_run_states: A list of states for flow runs created within this flow run
        sync_portal: A blocking portal for sync task/flow runs in an async flow
        timeout_scope: The cancellation scope for flow level timeouts
    """

    flow: Optional["Flow"] = None
    flow_run: Optional[FlowRun] = None
    autonomous_task_run: Optional[TaskRun] = None
    task_runner: BaseTaskRunner
    log_prints: bool = False
    parameters: Optional[Dict[str, Any]] = None

    # Result handling
    result_factory: ResultFactory

    # Counter for task calls allowing unique
    task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)

    # Counter for flow pauses
    observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)

    # Tracking for objects created by this flow run
    task_run_futures: List[PrefectFuture] = Field(default_factory=list)
    task_run_states: List[State] = Field(default_factory=list)
    task_run_results: Dict[int, State] = Field(default_factory=dict)
    flow_run_states: List[State] = Field(default_factory=list)

    # The synchronous portal is only created for async flows for creating engine calls
    # from synchronous task and subflow calls
    sync_portal: Optional[anyio.abc.BlockingPortal] = None
    timeout_scope: Optional[anyio.abc.CancelScope] = None

    # Task group that can be used for background tasks during the flow run
    background_tasks: anyio.abc.TaskGroup

    # Events worker to emit events to Prefect Cloud
    events: Optional[EventsWorker] = None

    __var__ = ContextVar("flow_run")
```
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry
","text":" Bases: ContextModel
A context that acts as a registry for all Prefect objects that are registered during load and execution.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `start_time` | `DateTimeTZ` | The time the object registry was created. |
| `block_code_execution` | `bool` | If set, flow calls will be ignored. |
| `capture_failures` | `bool` | If set, failures during `__init__` will be silenced and tracked. |

Source code in `prefect/context.py`:
```python
class PrefectObjectRegistry(ContextModel):
    """
    A context that acts as a registry for all Prefect objects that are
    registered during load and execution.

    Attributes:
        start_time: The time the object registry was created.
        block_code_execution: If set, flow calls will be ignored.
        capture_failures: If set, failures during __init__ will be silenced and tracked.
    """

    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now("UTC"))

    _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(
        default_factory=lambda: defaultdict(list)
    )

    # Failures will be a tuple of (exception, instance, args, kwargs)
    _instance_init_failures: Dict[
        Type[T], List[Tuple[Exception, T, Tuple, Dict]]
    ] = PrivateAttr(default_factory=lambda: defaultdict(list))

    block_code_execution: bool = False
    capture_failures: bool = False

    __var__ = ContextVar("object_registry")

    def get_instances(self, type_: Type[T]) -> List[T]:
        instances = []
        for registered_type, type_instances in self._instance_registry.items():
            if type_ in registered_type.mro():
                instances.extend(type_instances)
        return instances

    def get_instance_failures(
        self, type_: Type[T]
    ) -> List[Tuple[Exception, T, Tuple, Dict]]:
        failures = []
        for type__ in type_.mro():
            failures.extend(self._instance_init_failures[type__])
        return failures

    def register_instance(self, object):
        # TODO: Consider using a 'Set' to avoid duplicate entries
        self._instance_registry[type(object)].append(object)

    def register_init_failure(
        self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict
    ):
        self._instance_init_failures[type(object)].append(
            (exc, object, init_args, init_kwargs)
        )

    @classmethod
    def register_instances(cls, type_: Type[T]) -> Type[T]:
        """
        Decorator for a class that adds registration to the `PrefectObjectRegistry`
        on initialization of instances.
        """
        original_init = type_.__init__

        def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:
            registry = cls.get()
            try:
                original_init(__self__, *args, **kwargs)
            except Exception as exc:
                if not registry or not registry.capture_failures:
                    raise
                else:
                    registry.register_init_failure(exc, __self__, args, kwargs)
            else:
                if registry:
                    registry.register_instance(__self__)

        update_wrapper(__register_init__, original_init)

        type_.__init__ = __register_init__
        return type_
```
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances
classmethod
","text":"Decorator for a class that adds registration to the PrefectObjectRegistry
on initialization of instances.
prefect/context.py
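A sketch of how the decorator behaves, based on the source above; `Widget` is a hypothetical class:

```python
from prefect.context import PrefectObjectRegistry


@PrefectObjectRegistry.register_instances
class Widget:
    def __init__(self, name: str):
        self.name = name


with PrefectObjectRegistry() as registry:
    Widget("a")
    Widget("b")
    # both instances were recorded on __init__
    print([w.name for w in registry.get_instances(Widget)])  # ['a', 'b']
```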
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext
","text":" Bases: ContextModel
The base context for a flow or task run. Data in this context will always be available when get_run_context
is called.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `start_time` | `DateTimeTZ` | The time the run context was entered |
| `client` | `PrefectClient` | The Prefect client instance being used for API communication |

Source code in `prefect/context.py`:
```python
class RunContext(ContextModel):
    """
    The base context for a flow or task run. Data in this context will always be
    available when `get_run_context` is called.

    Attributes:
        start_time: The time the run context was entered
        client: The Prefect client instance being used for API communication
    """

    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now("UTC"))
    input_keyset: Optional[Dict[str, Dict[str, str]]] = None
    client: PrefectClient
```
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext
","text":" Bases: ContextModel
The context for Prefect settings.
This allows for safe concurrent access and modification of settings.
Attributes:
Name Type Descriptionprofile
Profile
The profile that is in use.
settings
Settings
The complete settings model.
Source code inprefect/context.py
class SettingsContext(ContextModel):\n \"\"\"\n The context for a Prefect settings.\n\n This allows for safe concurrent access and modification of settings.\n\n Attributes:\n profile: The profile that is in use.\n settings: The complete settings model.\n \"\"\"\n\n profile: Profile\n settings: Settings\n\n __var__ = ContextVar(\"settings\")\n\n def __hash__(self) -> int:\n return hash(self.settings)\n\n def __enter__(self):\n \"\"\"\n Upon entrance, we ensure the home directory for the profile exists.\n \"\"\"\n return_value = super().__enter__()\n\n try:\n prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n prefect_home.mkdir(mode=0o0700, exist_ok=True)\n except OSError:\n warnings.warn(\n (\n \"Failed to create the Prefect home directory at \"\n f\"{self.settings.value_of(PREFECT_HOME)}\"\n ),\n stacklevel=2,\n )\n\n return return_value\n\n @classmethod\n def get(cls) -> \"SettingsContext\":\n # Return the global context instead of `None` if no context exists\n return super().get() or GLOBAL_SETTINGS_CONTEXT\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext
","text":" Bases: ContextModel
The context for prefect.tags
management.
Attributes:
Name Type Descriptioncurrent_tags
Set[str]
A set of current tags in the context
Source code inprefect/context.py
class TagsContext(ContextModel):\n \"\"\"\n The context for `prefect.tags` management.\n\n Attributes:\n current_tags: A set of current tags in the context\n \"\"\"\n\n current_tags: Set[str] = Field(default_factory=set)\n\n @classmethod\n def get(cls) -> \"TagsContext\":\n # Return an empty `TagsContext` instead of `None` if no context exists\n return cls.__var__.get(TagsContext())\n\n __var__ = ContextVar(\"tags\")\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext
","text":" Bases: RunContext
The context for a task run. Data in this context is only available from within a task run function.
Attributes:
Name Type Descriptiontask
Task
The task instance associated with the task run
task_run
TaskRun
The API metadata for this task run
Source code inprefect/context.py
class TaskRunContext(RunContext):\n \"\"\"\n The context for a task run. Data in this context is only available from within a\n task run function.\n\n Attributes:\n task: The task instance associated with the task run\n task_run: The API metadata for this task run\n \"\"\"\n\n task: \"Task\"\n task_run: TaskRun\n log_prints: bool = False\n parameters: Dict[str, Any]\n\n # Result handling\n result_factory: ResultFactory\n\n __var__ = ContextVar(\"task_run\")\n
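A short sketch of reading this context from inside a task; outside of a task run, TaskRunContext.get() returns None.
>>> from prefect import task
>>> from prefect.context import TaskRunContext
>>> @task
>>> def my_task():
>>>     ctx = TaskRunContext.get()
>>>     return ctx.task_run.id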
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context
","text":"Get the current run context from within a task or flow function.
Returns:
Type DescriptionUnion[FlowRunContext, TaskRunContext]
A FlowRunContext
or TaskRunContext
depending on the function type.
Raises: RuntimeError: If called outside of a flow or task run.
Source code inprefect/context.py
def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n \"\"\"\n Get the current run context from within a task or flow function.\n\n Returns:\n A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n Raises\n RuntimeError: If called outside of a flow or task run.\n \"\"\"\n task_run_ctx = TaskRunContext.get()\n if task_run_ctx:\n return task_run_ctx\n\n flow_run_ctx = FlowRunContext.get()\n if flow_run_ctx:\n return flow_run_ctx\n\n raise MissingContextError(\n \"No run context available. You are not in a flow or task run context.\"\n )\n
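A minimal sketch of calling this from inside a flow, where the returned context is a FlowRunContext.
>>> from prefect import flow
>>> from prefect.context import get_run_context
>>> @flow
>>> def my_flow():
>>>     context = get_run_context()
>>>     print(context.flow_run.id)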
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context
","text":"Get the current settings context which contains profile information and the settings that are being used.
Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile
for more details.
prefect/context.py
def get_settings_context() -> SettingsContext:\n \"\"\"\n Get the current settings context which contains profile information and the\n settings that are being used.\n\n Generally, the settings that are being used are a combination of values from the\n profile and environment. See `prefect.context.use_profile` for more details.\n \"\"\"\n settings_ctx = SettingsContext.get()\n\n if not settings_ctx:\n raise MissingContextError(\"No settings context found.\")\n\n return settings_ctx\n
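A quick sketch; the profile name shown is illustrative.
>>> from prefect.context import get_settings_context
>>> ctx = get_settings_context()
>>> ctx.profile.name
'default'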
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script
","text":"Return a fresh registry with instances populated from execution of a script.
Source code inprefect/context.py
def registry_from_script(\n path: str,\n block_code_execution: bool = True,\n capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n \"\"\"\n Return a fresh registry with instances populated from execution of a script.\n \"\"\"\n with PrefectObjectRegistry(\n block_code_execution=block_code_execution,\n capture_failures=capture_failures,\n ) as registry:\n load_script_as_module(path)\n\n return registry\n
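A usage sketch, assuming a hypothetical script path that defines flows at its top level.
>>> from prefect import Flow
>>> from prefect.context import registry_from_script
>>> registry = registry_from_script("./my_flows.py")
>>> registry.get_instances(Flow)  # Flow objects defined by the script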
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context
","text":"Return the settings context that will exist as the root context for the module.
The profile to use is determined with the following precedence: - Command line via 'prefect --profile <name>' - Environment variable via 'PREFECT_PROFILE' - Profiles file via the 'active' key Source code in prefect/context.py
def root_settings_context():\n \"\"\"\n Return the settings context that will exist as the root context for the module.\n\n The profile to use is determined with the following precedence\n - Command line via 'prefect --profile <name>'\n - Environment variable via 'PREFECT_PROFILE'\n - Profiles file via the 'active' key\n \"\"\"\n profiles = prefect.settings.load_profiles()\n active_name = profiles.active_name\n profile_source = \"in the profiles file\"\n\n if \"PREFECT_PROFILE\" in os.environ:\n active_name = os.environ[\"PREFECT_PROFILE\"]\n profile_source = \"by environment variable\"\n\n if (\n sys.argv[0].endswith(\"/prefect\")\n and len(sys.argv) >= 3\n and sys.argv[1] == \"--profile\"\n ):\n active_name = sys.argv[2]\n profile_source = \"by command line argument\"\n\n if active_name not in profiles.names:\n print(\n (\n f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n \"found. The default profile will be used instead. \"\n ),\n file=sys.stderr,\n )\n active_name = \"default\"\n\n with use_profile(\n profiles[active_name],\n # Override environment variables if the profile was set by the CLI\n override_environment_variables=profile_source == \"by command line argument\",\n ) as settings_context:\n return settings_context\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags
","text":"Context manager to add tags to flow and task run calls.
Tags are always combined with any existing tags.
Yields:
Type DescriptionSet[str]
The current set of tags
Examples:
>>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>> pass\n
Run a task with tags
>>> @flow\n>>> def my_flow():\n>>> with tags(\"a\", \"b\"):\n>>> my_task() # has tags: a, b\n
Run a flow with tags
>>> @flow\n>>> def my_flow():\n>>> pass\n>>> with tags(\"a\", \"b\"):\n>>> my_flow() # has tags: a, b\n
Run a task with nested tag contexts
>>> @flow\n>>> def my_flow():\n>>> with tags(\"a\", \"b\"):\n>>> with tags(\"c\", \"d\"):\n>>> my_task() # has tags: a, b, c, d\n>>> my_task() # has tags: a, b\n
Inspect the current tags
>>> @flow\n>>> def my_flow():\n>>> with tags(\"c\", \"d\"):\n>>> with tags(\"e\", \"f\") as current_tags:\n>>> print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>> my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
Source code in prefect/context.py
@contextmanager\ndef tags(*new_tags: str) -> Set[str]:\n \"\"\"\n Context manager to add tags to flow and task run calls.\n\n Tags are always combined with any existing tags.\n\n Yields:\n The current set of tags\n\n Examples:\n >>> from prefect import tags, task, flow\n >>> @task\n >>> def my_task():\n >>> pass\n\n Run a task with tags\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"a\", \"b\"):\n >>> my_task() # has tags: a, b\n\n Run a flow with tags\n\n >>> @flow\n >>> def my_flow():\n >>> pass\n >>> with tags(\"a\", \"b\"):\n >>> my_flow() # has tags: a, b\n\n Run a task with nested tag contexts\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"a\", \"b\"):\n >>> with tags(\"c\", \"d\"):\n >>> my_task() # has tags: a, b, c, d\n >>> my_task() # has tags: a, b\n\n Inspect the current tags\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"c\", \"d\"):\n >>> with tags(\"e\", \"f\") as current_tags:\n >>> print(current_tags)\n >>> with tags(\"a\", \"b\"):\n >>> my_flow()\n {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n \"\"\"\n current_tags = TagsContext.get().current_tags\n new_tags = current_tags.union(new_tags)\n with TagsContext(current_tags=new_tags):\n yield new_tags\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile
","text":"Switch to a profile for the duration of this context.
Profile contexts are confined to an async context in a single thread.
Parameters:
Name Type Description Defaultprofile
Union[Profile, str]
The name of the profile to load or an instance of a Profile.
requiredoverride_environment_variables
bool
If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.
Falseinclude_current_context
bool
If set, the new settings will be constructed with the current settings context as a base. If not set, the base settings will be loaded from the environment and defaults.
True
Yields:
Type DescriptionThe created SettingsContext
object
prefect/context.py
@contextmanager\ndef use_profile(\n profile: Union[Profile, str],\n override_environment_variables: bool = False,\n include_current_context: bool = True,\n):\n \"\"\"\n Switch to a profile for the duration of this context.\n\n Profile contexts are confined to an async context in a single thread.\n\n Args:\n profile: The name of the profile to load or an instance of a Profile.\n override_environment_variable: If set, variables in the profile will take\n precedence over current environment variables. By default, environment\n variables will override profile settings.\n include_current_context: If set, the new settings will be constructed\n with the current settings context as a base. If not set, the use_base settings\n will be loaded from the environment and defaults.\n\n Yields:\n The created `SettingsContext` object\n \"\"\"\n if isinstance(profile, str):\n profiles = prefect.settings.load_profiles()\n profile = profiles[profile]\n\n if not isinstance(profile, Profile):\n raise TypeError(\n f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n \"Expected 'str' or 'Profile'.\"\n )\n\n # Create a copy of the profiles settings as we will mutate it\n profile_settings = profile.settings.copy()\n\n existing_context = SettingsContext.get()\n if existing_context and include_current_context:\n settings = existing_context.settings\n else:\n settings = prefect.settings.get_settings_from_env()\n\n if not override_environment_variables:\n for key in os.environ:\n if key in prefect.settings.SETTING_VARIABLES:\n profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n new_settings = settings.copy_with_update(updates=profile_settings)\n\n with SettingsContext(profile=profile, settings=new_settings) as ctx:\n yield ctx\n
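A usage sketch, assuming a hypothetical profile named 'staging' exists in the profiles file.
>>> import prefect.context
>>> with prefect.context.use_profile("staging") as ctx:
>>>     assert ctx.profile.name == "staging"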
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine
","text":"Client-side execution and orchestration of flows and tasks.
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"The flow is called by the user or an existing flow run is executed in a new process.
See Flow.__call__
and prefect.engine.__main__
(python -m prefect.engine
)
A synchronous function acts as an entrypoint to the engine. The engine executes on a dedicated \"global loop\" thread. For asynchronous flow calls, we return a coroutine from the entrypoint so the user can enter the engine without blocking their event loop.
See enter_flow_run_engine_from_flow_call
, enter_flow_run_engine_from_subprocess
The thread that calls the entrypoint waits until orchestration of the flow run completes. This thread is referred to as the \"user\" thread and is usually the \"main\" thread. The thread is not blocked while waiting \u2014 it allows the engine to send work back to it. This allows us to send calls back to the user thread from the global loop thread.
See wait_for_call_in_loop_thread
and call_soon_in_waiting_thread
The asynchronous engine branches depending on whether the flow run already exists and whether there is a parent flow run in the current context.
See create_then_begin_flow_run
, create_and_begin_subflow_run
, and retrieve_flow_then_begin_flow_run
The asynchronous engine prepares for execution of the flow run. This includes starting the task runner, preparing context, etc.
See begin_flow_run
The flow run is orchestrated through states, calling the user's function as necessary. Generally, the user's function is sent for execution on the user thread. If the flow function cannot be safely executed on the user thread, e.g. it is a synchronous child in an asynchronous parent, it will be scheduled on a worker thread instead.
See orchestrate_flow_run
, call_soon_in_waiting_thread
, call_soon_in_new_thread
The task is called or submitted by the user. We require that this is always within a flow.
See Task.__call__
and Task.submit
A synchronous function acts as an entrypoint to the engine. Unlike flow calls, this will not block until completion if submit
was used.
See enter_task_run_engine
A future is created for the task call. Creation of the task run and submission to the task runner is scheduled as a background task so submission of many tasks can occur concurrently.
See create_task_run_future
and create_task_run_then_submit
The engine branches depending on whether a future, state, or result is requested. If a future is requested, it is returned immediately to the user thread. Otherwise, the engine will wait for the task run to complete and return the final state or result.
See get_task_call_return_value
An engine function is submitted to the task runner. The task runner will schedule this function for execution on a worker. When executed, it will prepare for orchestration and wait for completion of the run.
See create_task_run_then_submit
and begin_task_run
The task run is orchestrated through states, calling the user's function as necessary. The user's function is always executed in a worker thread for isolation.
See orchestrate_task_run
, call_soon_in_new_thread
Ideally, for local and sequential task runners we would send the task run to the user thread as we do for flows. See #9855.
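From user code, the calling and submitting paths described above look like this; a minimal sketch:
>>> from prefect import flow, task
>>> @task
>>> def add(x, y):
>>>     return x + y
>>> @flow
>>> def my_flow():
>>>     result = add(1, 2)         # blocks until the task run completes
>>>     future = add.submit(3, 4)  # returns a PrefectFuture immediately
>>>     return result + future.result()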
begin_flow_run
async
","text":"Begins execution of a flow run; blocks until completion of the flow run
Note that the flow_run
contains a parameters
attribute which is the serialized parameters sent to the backend while the parameters
argument here should be the deserialized and validated dictionary of python objects.
Returns:
Type DescriptionState
The final state of the run
Source code inprefect/engine.py
async def begin_flow_run(\n flow: Flow,\n flow_run: FlowRun,\n parameters: Dict[str, Any],\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Begins execution of a flow run; blocks until completion of the flow run\n\n - Starts a task runner\n - Determines the result storage block to use\n - Orchestrates the flow run (runs the user-function and generates tasks)\n - Waits for tasks to complete / shutsdown the task runner\n - Sets a terminal state for the flow run\n\n Note that the `flow_run` contains a `parameters` attribute which is the serialized\n parameters sent to the backend while the `parameters` argument here should be the\n deserialized and validated dictionary of python objects.\n\n Returns:\n The final state of the run\n \"\"\"\n logger = flow_run_logger(flow_run, flow)\n\n log_prints = should_log_prints(flow)\n flow_run_context = PartialModel(FlowRunContext, log_prints=log_prints)\n\n async with AsyncExitStack() as stack:\n await stack.enter_async_context(\n report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n )\n\n # Create a task group for background tasks\n flow_run_context.background_tasks = await stack.enter_async_context(\n anyio.create_task_group()\n )\n\n # If the flow is async, we need to provide a portal so sync tasks can run\n flow_run_context.sync_portal = (\n stack.enter_context(start_blocking_portal()) if flow.isasync else None\n )\n\n task_runner = flow.task_runner.duplicate()\n if task_runner is NotImplemented:\n # Backwards compatibility; will not support concurrent flow runs\n task_runner = flow.task_runner\n logger.warning(\n f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n \" `duplicate` method and will fail if used for concurrent execution of\"\n \" the same flow.\"\n )\n\n logger.debug(\n f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n )\n\n flow_run_context.task_runner = await stack.enter_async_context(\n task_runner.start()\n )\n\n flow_run_context.result_factory = await ResultFactory.from_flow(\n flow, client=client\n )\n\n if log_prints:\n stack.enter_context(patch_print())\n\n terminal_or_paused_state = await orchestrate_flow_run(\n flow,\n flow_run=flow_run,\n parameters=parameters,\n wait_for=None,\n client=client,\n partial_flow_run_context=flow_run_context,\n # Orchestration needs to be interruptible if it has a timeout\n interruptible=flow.timeout_seconds is not None,\n user_thread=user_thread,\n )\n\n if terminal_or_paused_state.is_paused():\n timeout = terminal_or_paused_state.state_details.pause_timeout\n msg = \"Currently paused and suspending execution.\"\n if timeout:\n msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n logger.log(level=logging.INFO, msg=msg)\n await APILogHandler.aflush()\n\n return terminal_or_paused_state\n else:\n terminal_state = terminal_or_paused_state\n\n # If debugging, use the more complete `repr` than the usual `str` description\n display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n logger.log(\n level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n\n # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n # during interpreter shutdown\n await APILogHandler.aflush()\n\n return terminal_state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map
async
","text":"Async entrypoint for task mapping
Source code inprefect/engine.py
async def begin_task_map(\n task: Task,\n flow_run_context: Optional[FlowRunContext],\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n task_runner: Optional[BaseTaskRunner],\n autonomous: bool = False,\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]]:\n \"\"\"Async entrypoint for task mapping\"\"\"\n # We need to resolve some futures to map over their data, collect the upstream\n # links beforehand to retain relationship tracking.\n task_inputs = {\n k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n }\n\n # Resolve the top-level parameters in order to get mappable data of a known length.\n # Nested parameters will be resolved in each mapped child where their relationships\n # will also be tracked.\n parameters = await resolve_inputs(parameters, max_depth=1)\n\n # Ensure that any parameters in kwargs are expanded before this check\n parameters = explode_variadic_parameter(task.fn, parameters)\n\n iterable_parameters = {}\n static_parameters = {}\n annotated_parameters = {}\n for key, val in parameters.items():\n if isinstance(val, (allow_failure, quote)):\n # Unwrap annotated parameters to determine if they are iterable\n annotated_parameters[key] = val\n val = val.unwrap()\n\n if isinstance(val, unmapped):\n static_parameters[key] = val.value\n elif isiterable(val):\n iterable_parameters[key] = list(val)\n else:\n static_parameters[key] = val\n\n if not len(iterable_parameters):\n raise MappingMissingIterable(\n \"No iterable parameters were received. Parameters for map must \"\n f\"include at least one iterable. Parameters: {parameters}\"\n )\n\n iterable_parameter_lengths = {\n key: len(val) for key, val in iterable_parameters.items()\n }\n lengths = set(iterable_parameter_lengths.values())\n if len(lengths) > 1:\n raise MappingLengthMismatch(\n \"Received iterable parameters with different lengths. Parameters for map\"\n f\" must all be the same length. Got lengths: {iterable_parameter_lengths}\"\n )\n\n map_length = list(lengths)[0]\n\n task_runs = []\n for i in range(map_length):\n call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n call_parameters.update({key: value for key, value in static_parameters.items()})\n\n # Add default values for parameters; these are skipped earlier since they should\n # not be mapped over\n for key, value in get_parameter_defaults(task.fn).items():\n call_parameters.setdefault(key, value)\n\n # Re-apply annotations to each key again\n for key, annotation in annotated_parameters.items():\n call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n # Collapse any previously exploded kwargs\n call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n if autonomous:\n task_runs.append(\n await create_autonomous_task_run(\n task=task,\n parameters=call_parameters,\n )\n )\n else:\n task_runs.append(\n partial(\n get_task_call_return_value,\n task=task,\n flow_run_context=flow_run_context,\n parameters=call_parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=task_runner,\n extra_task_inputs=task_inputs,\n )\n )\n\n if autonomous:\n return task_runs\n\n # Maintain the order of the task runs when using the sequential task runner\n runner = task_runner if task_runner else flow_run_context.task_runner\n if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n return [await task_run() for task_run in task_runs]\n\n return await gather(*task_runs)\n
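As the checks above imply, a map call needs at least one iterable parameter of a consistent length, while unmapped values are passed whole to every child run; a sketch:
>>> from prefect import flow, task, unmapped
>>> @task
>>> def scale(x, factor):
>>>     return x * factor
>>> @flow
>>> def my_flow():
>>>     return scale.map([1, 2, 3], factor=unmapped(10))  # three task runs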
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run
async
","text":"Entrypoint for task run execution.
This function is intended for submission to the task runner.
This method may be called from a worker so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:
main thread: --> Flow called with settings A --> begin_task_run
executes same event loop --> Profile A matches and is not entered again
However, with execution on a remote environment, we are going to need to ensure the settings for the task run are respected by entering the context:
main thread: --> Flow called with settings A --> begin_task_run
is scheduled on a remote worker, settings A is serialized; remote worker: --> Remote worker imports Prefect (may not occur) --> Global settings is loaded with default settings --> begin_task_run
executes on a different event loop than the flow --> Current settings is not set or does not match, settings A is entered
prefect/engine.py
async def begin_task_run(\n task: Task,\n task_run: TaskRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n result_factory: ResultFactory,\n log_prints: bool,\n settings: prefect.context.SettingsContext,\n):\n \"\"\"\n Entrypoint for task run execution.\n\n This function is intended for submission to the task runner.\n\n This method may be called from a worker so we ensure the settings context has been\n entered. For example, with a runner that is executing tasks in the same event loop,\n we will likely not enter the context again because the current context already\n matches:\n\n main thread:\n --> Flow called with settings A\n --> `begin_task_run` executes same event loop\n --> Profile A matches and is not entered again\n\n However, with execution on a remote environment, we are going to need to ensure the\n settings for the task run are respected by entering the context:\n\n main thread:\n --> Flow called with settings A\n --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n remote worker:\n --> Remote worker imports Prefect (may not occur)\n --> Global settings is loaded with default settings\n --> `begin_task_run` executes on a different event loop than the flow\n --> Current settings is not set or does not match, settings A is entered\n \"\"\"\n maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n async with AsyncExitStack() as stack:\n # The settings context may be null on a remote worker so we use the safe `.get`\n # method and compare it to the settings required for this task run\n if prefect.context.SettingsContext.get() != settings:\n stack.enter_context(settings)\n setup_logging()\n\n if maybe_flow_run_context:\n # Accessible if on a worker that is running in the same thread as the flow\n client = maybe_flow_run_context.client\n # Only run the task in an interruptible thread if it in the same thread as\n # the flow _and_ the flow run has a timeout attached. If the task is on a\n # worker, the flow run timeout will not be raised in the worker process.\n interruptible = maybe_flow_run_context.timeout_scope is not None\n else:\n # Otherwise, retrieve a new clien`t\n client = await stack.enter_async_context(get_client())\n interruptible = False\n await stack.enter_async_context(anyio.create_task_group())\n\n await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n # TODO: Use the background tasks group to manage logging for this task\n\n if log_prints:\n stack.enter_context(patch_print())\n\n await check_api_reachable(\n client, f\"Cannot orchestrate task run '{task_run.id}'\"\n )\n try:\n state = await orchestrate_task_run(\n task=task,\n task_run=task_run,\n parameters=parameters,\n wait_for=wait_for,\n result_factory=result_factory,\n log_prints=log_prints,\n interruptible=interruptible,\n client=client,\n )\n\n if not maybe_flow_run_context:\n # When a a task run finishes on a remote worker flush logs to prevent\n # loss if the process exits\n await APILogHandler.aflush()\n\n except Abort as abort:\n # Task run probably already completed, fetch its state\n task_run = await client.read_task_run(task_run.id)\n\n if task_run.state.is_final():\n task_run_logger(task_run).info(\n f\"Task run '{task_run.id}' already finished.\"\n )\n else:\n # TODO: This is a concerning case; we should determine when this occurs\n # 1. 
This can occur when the flow run is not in a running state\n task_run_logger(task_run).warning(\n f\"Task run '{task_run.id}' received abort during orchestration: \"\n f\"{abort} Task run is in {task_run.state.type.value} state.\"\n )\n state = task_run.state\n\n except Pause:\n # A pause signal here should mean the flow run suspended, so we\n # should do the same. We'll look up the flow run's pause state to\n # try and reuse it, so we capture any data like timeouts.\n flow_run = await client.read_flow_run(task_run.flow_run_id)\n if flow_run.state and flow_run.state.is_paused():\n state = flow_run.state\n else:\n state = Suspended()\n\n task_run_logger(task_run).info(\n \"Task run encountered a pause signal during orchestration.\"\n )\n\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.collect_task_run_inputs","title":"collect_task_run_inputs
async
","text":"This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found.
Examples:
>>> task_inputs = {\n>>> k: await collect_task_run_inputs(v) for k, v in parameters.items()\n>>> }\n
Source code in prefect/engine.py
async def collect_task_run_inputs(expr: Any, max_depth: int = -1) -> Set[TaskRunInput]:\n \"\"\"\n This function recurses through an expression to generate a set of any discernible\n task run inputs it finds in the data structure. It produces a set of all inputs\n found.\n\n Examples:\n >>> task_inputs = {\n >>> k: await collect_task_run_inputs(v) for k, v in parameters.items()\n >>> }\n \"\"\"\n # TODO: This function needs to be updated to detect parameters and constants\n\n inputs = set()\n futures = set()\n\n def add_futures_and_states_to_inputs(obj):\n if isinstance(obj, PrefectFuture):\n # We need to wait for futures to be submitted before we can get the task\n # run id but we want to do so asynchronously\n futures.add(obj)\n elif is_state(obj):\n if obj.state_details.task_run_id:\n inputs.add(TaskRunResult(id=obj.state_details.task_run_id))\n # Expressions inside quotes should not be traversed\n elif isinstance(obj, quote):\n raise StopVisiting\n else:\n state = get_state_for_result(obj)\n if state and state.state_details.task_run_id:\n inputs.add(TaskRunResult(id=state.state_details.task_run_id))\n\n visit_collection(\n expr,\n visit_fn=add_futures_and_states_to_inputs,\n return_data=False,\n max_depth=max_depth,\n )\n\n await asyncio.gather(*[future._wait_for_submission() for future in futures])\n for future in futures:\n inputs.add(TaskRunResult(id=future.task_run.id))\n\n return inputs\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run
async
","text":"Async entrypoint for flows calls within a flow run
Subflows differ from parent flows in that they: - Resolve futures in passed parameters into values - Create a dummy task for representation in the parent flow - Retrieve default result storage from the parent flow rather than the server
Returns:
Type DescriptionAny
The final state of the run
Source code inprefect/engine.py
@inject_client\nasync def create_and_begin_subflow_run(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> Any:\n \"\"\"\n Async entrypoint for flows calls within a flow run\n\n Subflows differ from parent flows in that they\n - Resolve futures in passed parameters into values\n - Create a dummy task for representation in the parent flow\n - Retrieve default result storage from the parent flow rather than the server\n\n Returns:\n The final state of the run\n \"\"\"\n parent_flow_run_context = FlowRunContext.get()\n parent_logger = get_run_logger(parent_flow_run_context)\n log_prints = should_log_prints(flow)\n terminal_state = None\n\n parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n if wait_for:\n task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n rerunning = parent_flow_run_context.flow_run.run_count > 1\n\n # Generate a task in the parent flow run to represent the result of the subflow run\n dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n parent_task_run = await client.create_task_run(\n task=dummy_task,\n flow_run_id=parent_flow_run_context.flow_run.id,\n dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n task_inputs=task_inputs,\n state=Pending(),\n )\n\n # Resolve any task futures in the input\n parameters = await resolve_inputs(parameters)\n\n if parent_task_run.state.is_final() and not (\n rerunning and not parent_task_run.state.is_completed()\n ):\n # Retrieve the most recent flow run from the database\n flow_runs = await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n parent_task_run_id={\"any_\": [parent_task_run.id]}\n ),\n sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n )\n flow_run = flow_runs[-1]\n\n # Set up variables required downstream\n terminal_state = flow_run.state\n logger = flow_run_logger(flow_run, flow)\n\n else:\n flow_run = await client.create_flow_run(\n flow,\n parameters=flow.serialize_parameters(parameters),\n parent_task_run_id=parent_task_run.id,\n state=parent_task_run.state if not rerunning else Pending(),\n tags=TagsContext.get().current_tags,\n )\n\n parent_logger.info(\n f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n )\n\n logger = flow_run_logger(flow_run, flow)\n ui_url = PREFECT_UI_URL.value()\n if ui_url:\n logger.info(\n f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n extra={\"send_to_api\": False},\n )\n\n result_factory = await ResultFactory.from_flow(\n flow, client=parent_flow_run_context.client\n )\n\n if flow.should_validate_parameters:\n try:\n parameters = flow.validate_parameters(parameters)\n except Exception:\n message = \"Validation of flow parameters failed with error:\"\n logger.exception(message)\n terminal_state = await propose_state(\n client,\n state=await exception_to_failed_state(\n message=message, result_factory=result_factory\n ),\n flow_run_id=flow_run.id,\n )\n\n if terminal_state is None or not terminal_state.is_final():\n async with AsyncExitStack() as stack:\n await stack.enter_async_context(\n report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n )\n\n task_runner = flow.task_runner.duplicate()\n if task_runner is NotImplemented:\n # Backwards compatibility; will not support concurrent flow runs\n task_runner = flow.task_runner\n logger.warning(\n f\"Task runner 
{type(task_runner).__name__!r} does not implement\"\n \" the `duplicate` method and will fail if used for concurrent\"\n \" execution of the same flow.\"\n )\n\n await stack.enter_async_context(task_runner.start())\n\n if log_prints:\n stack.enter_context(patch_print())\n\n terminal_state = await orchestrate_flow_run(\n flow,\n flow_run=flow_run,\n parameters=parameters,\n wait_for=wait_for,\n # If the parent flow run has a timeout, then this one needs to be\n # interruptible as well\n interruptible=parent_flow_run_context.timeout_scope is not None,\n client=client,\n partial_flow_run_context=PartialModel(\n FlowRunContext,\n sync_portal=parent_flow_run_context.sync_portal,\n task_runner=task_runner,\n background_tasks=parent_flow_run_context.background_tasks,\n result_factory=result_factory,\n log_prints=log_prints,\n ),\n user_thread=user_thread,\n )\n\n # Display the full state (including the result) if debugging\n display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n logger.log(\n level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n\n # Track the subflow state so the parent flow can use it to determine its final state\n parent_flow_run_context.flow_run_states.append(terminal_state)\n\n if return_type == \"state\":\n return terminal_state\n elif return_type == \"result\":\n return await terminal_state.result(fetch=True)\n else:\n raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
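From user code, a subflow run is produced simply by calling one flow inside another; a minimal sketch of the pattern this function orchestrates:
>>> from prefect import flow
>>> @flow
>>> def child(x):
>>>     return x + 1
>>> @flow
>>> def parent():
>>>     return child(1)  # a subflow run, represented by a task in the parent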
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_autonomous_task_run","title":"create_autonomous_task_run
async
","text":"Create a task run in the API for an autonomous task submission and store the provided parameters using the existing result storage mechanism.
Source code inprefect/engine.py
async def create_autonomous_task_run(task: Task, parameters: Dict[str, Any]) -> TaskRun:\n \"\"\"Create a task run in the API for an autonomous task submission and store\n the provided parameters using the existing result storage mechanism.\n \"\"\"\n async with get_client() as client:\n state = Scheduled()\n if parameters:\n parameters_id = uuid4()\n state.state_details.task_parameters_id = parameters_id\n\n # TODO: Improve use of result storage for parameter storage / reference\n task.persist_result = True\n\n factory = await ResultFactory.from_autonomous_task(task, client=client)\n await factory.store_parameters(parameters_id, parameters)\n\n task_run = await client.create_task_run(\n task=task,\n flow_run_id=None,\n dynamic_key=f\"{task.task_key}-{str(uuid4())[:NUM_CHARS_DYNAMIC_KEY]}\",\n state=state,\n )\n\n engine_logger.debug(f\"Submitted run of task {task.name!r} for execution\")\n\n return task_run\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run
async
","text":"Async entrypoint for flow calls
Creates the flow run in the backend, then enters the main flow run engine.
Source code inprefect/engine.py
@inject_client\nasync def create_then_begin_flow_run(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> Any:\n \"\"\"\n Async entrypoint for flow calls\n\n Creates the flow run in the backend, then enters the main flow run engine.\n \"\"\"\n # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n # the function signature to clarify this eventually.\n\n await check_api_reachable(client, \"Cannot create flow run\")\n\n state = Pending()\n if flow.should_validate_parameters:\n try:\n parameters = flow.validate_parameters(parameters)\n except Exception:\n state = await exception_to_failed_state(\n message=\"Validation of flow parameters failed with error:\"\n )\n\n flow_run = await client.create_flow_run(\n flow,\n # Send serialized parameters to the backend\n parameters=flow.serialize_parameters(parameters),\n state=state,\n tags=TagsContext.get().current_tags,\n )\n\n engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n logger = flow_run_logger(flow_run, flow)\n\n ui_url = PREFECT_UI_URL.value()\n if ui_url:\n logger.info(\n f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n extra={\"send_to_api\": False},\n )\n\n if state.is_failed():\n logger.error(state.message)\n engine_logger.info(\n f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n \" failed.\"\n )\n else:\n state = await begin_flow_run(\n flow=flow,\n flow_run=flow_run,\n parameters=parameters,\n client=client,\n user_thread=user_thread,\n )\n\n if return_type == \"state\":\n return state\n elif return_type == \"result\":\n return await state.result(fetch=True)\n else:\n raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call
","text":"Sync entrypoint for flow calls.
This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.
Source code inprefect/engine.py
def enter_flow_run_engine_from_flow_call(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n \"\"\"\n Sync entrypoint for flow calls.\n\n This function does the heavy lifting of ensuring we can get into an async context\n for flow run execution with minimal overhead.\n \"\"\"\n setup_logging()\n\n registry = PrefectObjectRegistry.get()\n if registry and registry.block_code_execution:\n engine_logger.warning(\n f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n \" Consider updating the script to only call the flow if executed\"\n f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n )\n return None\n\n if TaskRunContext.get():\n raise RuntimeError(\n \"Flows cannot be run from within tasks. Did you mean to call this \"\n \"flow in a flow?\"\n )\n\n parent_flow_run_context = FlowRunContext.get()\n is_subflow_run = parent_flow_run_context is not None\n\n if wait_for is not None and not is_subflow_run:\n raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n begin_run = create_call(\n create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n flow=flow,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n client=parent_flow_run_context.client if is_subflow_run else None,\n user_thread=threading.current_thread(),\n )\n\n # On completion of root flows, wait for the global thread to ensure that\n # any work there is complete\n done_callbacks = (\n [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n )\n\n # WARNING: You must define any context managers here to pass to our concurrency\n # api instead of entering them in here in the engine entrypoint. Otherwise, async\n # flows will not use the context as this function _exits_ to return an awaitable to\n # the user. Generally, you should enter contexts _within_ the async `begin_run`\n # instead but if you need to enter a context from the main thread you'll need to do\n # it here.\n contexts = [capture_sigterm()]\n\n if flow.isasync and (\n not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n ):\n # return a coro for the user to await if the flow is async\n # unless it is an async subflow called in a sync flow\n retval = from_async.wait_for_call_in_loop_thread(\n begin_run,\n done_callbacks=done_callbacks,\n contexts=contexts,\n )\n\n else:\n retval = from_sync.wait_for_call_in_loop_thread(\n begin_run,\n done_callbacks=done_callbacks,\n contexts=contexts,\n )\n\n return retval\n
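The warning emitted above suggests guarding direct flow calls so that loading the script does not execute the flow; a sketch of that guard:
>>> # my_script.py
>>> from prefect import flow
>>> @flow
>>> def my_flow():
>>>     ...
>>> if __name__ == "__main__":
>>>     my_flow()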
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess
","text":"Sync entrypoint for flow runs that have been submitted for execution by an agent
Differs from enter_flow_run_engine_from_flow_call
in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.
prefect/engine.py
def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n \"\"\"\n Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n but not a flow object. The flow must be retrieved before execution can begin.\n Additionally, this assumes that the caller is always in a context without an event\n loop as this should be called from a fresh process.\n \"\"\"\n\n # Ensure collections are imported and have the opportunity to register types before\n # loading the user code from the deployment\n prefect.plugins.load_prefect_collections()\n\n setup_logging()\n\n state = from_sync.wait_for_call_in_loop_thread(\n create_call(\n retrieve_flow_then_begin_flow_run,\n flow_run_id,\n user_thread=threading.current_thread(),\n ),\n contexts=[capture_sigterm()],\n )\n\n APILogHandler.flush()\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine
","text":"Sync entrypoint for task calls
Source code inprefect/engine.py
def enter_task_run_engine(\n task: Task,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n task_runner: Optional[BaseTaskRunner],\n mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]:\n \"\"\"Sync entrypoint for task calls\"\"\"\n\n flow_run_context = FlowRunContext.get()\n\n if not flow_run_context:\n if (\n not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n or return_type == \"future\"\n or mapped\n ):\n raise RuntimeError(\n \"Tasks cannot be run outside of a flow by default.\"\n \" If you meant to submit an autonomous task, you need to set\"\n \" `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true`\"\n \" and use `your_task.submit()` instead of `your_task()`.\"\n \" Mapping autonomous tasks is not yet supported.\"\n )\n from prefect.task_engine import submit_autonomous_task_run_to_engine\n\n return submit_autonomous_task_run_to_engine(\n task=task,\n task_run=None,\n parameters=parameters,\n task_runner=task_runner,\n wait_for=wait_for,\n return_type=return_type,\n client=get_client(),\n )\n\n if TaskRunContext.get():\n raise RuntimeError(\n \"Tasks cannot be run from within tasks. Did you mean to call this \"\n \"task in a flow?\"\n )\n\n if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n raise TimeoutError(\"Flow run timed out\")\n\n begin_run = create_call(\n begin_task_map if mapped else get_task_call_return_value,\n task=task,\n flow_run_context=flow_run_context,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=task_runner,\n )\n\n if task.isasync and flow_run_context.flow.isasync:\n # return a coro for the user to await if an async task in an async flow\n return from_async.wait_for_call_in_loop_thread(begin_run)\n else:\n return from_sync.wait_for_call_in_loop_thread(begin_run)\n
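Per the error message above, running a task outside of a flow requires the experimental task scheduling setting and the submit() form; a hedged sketch:
>>> # requires: prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true
>>> from prefect import task
>>> @task
>>> def standalone_task():
>>>     return 42
>>> standalone_task.submit()  # autonomous task run outside any flow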
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.get_state_for_result","title":"get_state_for_result
","text":"Get the state related to a result object.
link_state_to_result
must have been called first.
prefect/engine.py
def get_state_for_result(obj: Any) -> Optional[State]:\n \"\"\"\n Get the state related to a result object.\n\n `link_state_to_result` must have been called first.\n \"\"\"\n flow_run_context = FlowRunContext.get()\n if flow_run_context:\n return flow_run_context.task_run_results.get(id(obj))\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.link_state_to_result","title":"link_state_to_result
","text":"Caches a link between a state and a result and its components using the id
of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run.
This allows dependency tracking to occur when results are passed around. Note: Because id
is used, we cannot cache links between singleton objects.
We only cache the relationship between components 1-layer deep. Example: Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be mapped to the state: - [1, [\"a\",\"b\"], (\"c\",)] - [\"a\",\"b\"] - (\"c\",)
Note: the int `1` will not be mapped to the state because it is a singleton.\n
Other Notes: We do not hash the result because: - If changes are made to the object in the flow between task calls, we can still track that they are related. - Hashing can be expensive. - Not all objects are hashable.
We do not set an attribute, e.g. __prefect_state__
, on the result because:
prefect/engine.py
def link_state_to_result(state: State, result: Any) -> None:\n \"\"\"\n Caches a link between a state and a result and its components using\n the `id` of the components to map to the state. The cache is persisted to the\n current flow run context since task relationships are limited to within a flow run.\n\n This allows dependency tracking to occur when results are passed around.\n Note: Because `id` is used, we cannot cache links between singleton objects.\n\n We only cache the relationship between components 1-layer deep.\n Example:\n Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be\n mapped to the state:\n - [1, [\"a\",\"b\"], (\"c\",)]\n - [\"a\",\"b\"]\n - (\"c\",)\n\n Note: the int `1` will not be mapped to the state because it is a singleton.\n\n Other Notes:\n We do not hash the result because:\n - If changes are made to the object in the flow between task calls, we can still\n track that they are related.\n - Hashing can be expensive.\n - Not all objects are hashable.\n\n We do not set an attribute, e.g. `__prefect_state__`, on the result because:\n\n - Mutating user's objects is dangerous.\n - Unrelated equality comparisons can break unexpectedly.\n - The field can be preserved on copy.\n - We cannot set this attribute on Python built-ins.\n \"\"\"\n\n flow_run_context = FlowRunContext.get()\n\n def link_if_trackable(obj: Any) -> None:\n \"\"\"Track connection between a task run result and its associated state if it has a unique ID.\n\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n\n This function will mutate the State if the object is an untrackable type by setting the value\n for `State.state_details.untrackable_result` to `True`.\n\n \"\"\"\n if (type(obj) in UNTRACKABLE_TYPES) or (\n isinstance(obj, int) and (-5 <= obj <= 256)\n ):\n state.state_details.untrackable_result = True\n return\n flow_run_context.task_run_results[id(obj)] = state\n\n if flow_run_context:\n visit_collection(expr=result, visit_fn=link_if_trackable, max_depth=1)\n
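The singleton caveat above follows from CPython's object caching; a quick illustration of why id-based tracking cannot distinguish small integers:
>>> a, b = 256, 256
>>> id(a) == id(b)  # CPython interns small ints, so they share one identity
True
>>> x, y = [1, 2], [1, 2]
>>> id(x) == id(y)  # equal but distinct containers keep distinct identities
False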
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run
async
","text":"Executes a flow run.
Note on flow timeouts: Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will immediately return; otherwise, the next time it awaits it will exit. Sync flows are run in a worker thread, which cannot be interrupted. The worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called
which allows it to raise a TimeoutError
to respect the timeout.
Returns:
Type DescriptionState
The final state of the run
Source code inprefect/engine.py
async def orchestrate_flow_run(\n flow: Flow,\n flow_run: FlowRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n interruptible: bool,\n client: PrefectClient,\n partial_flow_run_context: PartialModel[FlowRunContext],\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Executes a flow run.\n\n Note on flow timeouts:\n Since async flows are run directly in the main event loop, timeout behavior will\n match that described by anyio. If the flow is awaiting something, it will\n immediately return; otherwise, the next time it awaits it will exit. Sync flows\n are being task runner in a worker thread, which cannot be interrupted. The worker\n thread will exit at the next task call. The worker thread also has access to the\n status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n which allows it to raise a `TimeoutError` to respect the timeout.\n\n Returns:\n The final state of the run\n \"\"\"\n\n logger = flow_run_logger(flow_run, flow)\n\n flow_run_context = None\n parent_flow_run_context = FlowRunContext.get()\n\n try:\n # Resolve futures in any non-data dependencies to ensure they are ready\n if wait_for is not None:\n await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n except UpstreamTaskError as upstream_exc:\n return await propose_state(\n client,\n Pending(name=\"NotReady\", message=str(upstream_exc)),\n flow_run_id=flow_run.id,\n # if orchestrating a run already in a pending state, force orchestration to\n # update the state name\n force=flow_run.state.is_pending(),\n )\n\n state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n # flag to ensure we only update the flow run name once\n run_name_set = False\n\n await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n while state.is_running():\n waited_for_task_runs = False\n\n # Update the flow run to the latest data\n flow_run = await client.read_flow_run(flow_run.id)\n try:\n with partial_flow_run_context.finalize(\n flow=flow,\n flow_run=flow_run,\n client=client,\n parameters=parameters,\n ) as flow_run_context:\n # update flow run name\n if not run_name_set and flow.flow_run_name:\n flow_run_name = _resolve_custom_flow_run_name(\n flow=flow, parameters=parameters\n )\n\n await client.update_flow_run(\n flow_run_id=flow_run.id, name=flow_run_name\n )\n logger.extra[\"flow_run_name\"] = flow_run_name\n logger.debug(\n f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n )\n flow_run.name = flow_run_name\n run_name_set = True\n\n args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n logger.debug(\n f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n )\n\n if PREFECT_DEBUG_MODE:\n logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n else:\n logger.debug(\n \"Beginning execution...\", extra={\"state_message\": True}\n )\n\n flow_call = create_call(flow.fn, *args, **kwargs)\n\n # This check for a parent call is needed for cases where the engine\n # was entered directly during testing\n parent_call = get_current_call()\n\n if parent_call and (\n not parent_flow_run_context\n or (\n parent_flow_run_context\n and parent_flow_run_context.flow.isasync == flow.isasync\n )\n ):\n from_async.call_soon_in_waiting_thread(\n flow_call, thread=user_thread, timeout=flow.timeout_seconds\n )\n else:\n from_async.call_soon_in_new_thread(\n flow_call, timeout=flow.timeout_seconds\n )\n\n result = await flow_call.aresult()\n\n waited_for_task_runs = await 
wait_for_task_runs_and_report_crashes(\n flow_run_context.task_run_futures, client=client\n )\n except PausedRun as exc:\n # could get raised either via utility or by returning Paused from a task run\n # if a task run pauses, we set its state as the flow's state\n # to preserve reschedule and timeout behavior\n paused_flow_run = await client.read_flow_run(flow_run.id)\n if paused_flow_run.state.is_running():\n state = await propose_state(\n client,\n state=exc.state,\n flow_run_id=flow_run.id,\n )\n\n return state\n paused_flow_run_state = paused_flow_run.state\n return paused_flow_run_state\n except CancelledError as exc:\n if not flow_call.timedout():\n # If the flow call was not cancelled by us; this is a crash\n raise\n # Construct a new exception as `TimeoutError`\n original = exc\n exc = TimeoutError()\n exc.__cause__ = original\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n result_factory=flow_run_context.result_factory,\n name=\"TimedOut\",\n )\n except Exception:\n # Generic exception in user code\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n message=\"Flow run encountered an exception.\",\n result_factory=flow_run_context.result_factory,\n )\n else:\n if result is None:\n # All tasks and subflows are reference tasks if there is no return value\n # If there are no tasks, use `None` instead of an empty iterable\n result = (\n flow_run_context.task_run_futures\n + flow_run_context.task_run_states\n + flow_run_context.flow_run_states\n ) or None\n\n terminal_state = await return_value_to_state(\n await resolve_futures_to_states(result),\n result_factory=flow_run_context.result_factory,\n )\n\n if not waited_for_task_runs:\n # An exception occurred that prevented us from waiting for task runs to\n # complete. Ensure that we wait for them before proposing a final state\n # for the flow run.\n await wait_for_task_runs_and_report_crashes(\n flow_run_context.task_run_futures, client=client\n )\n\n # Before setting the flow run state, store state.data using\n # block storage and send the resulting data document to the Prefect API instead.\n # This prevents the pickled return value of flow runs\n # from being sent to the Prefect API and stored in the Prefect database.\n # state.data is left as is, otherwise we would have to load\n # the data from block storage again after storing.\n state = await propose_state(\n client,\n state=terminal_state,\n flow_run_id=flow_run.id,\n )\n\n await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n logger.debug(\n (\n f\"Received new state {state} when proposing final state\"\n f\" {terminal_state}\"\n ),\n extra={\"send_to_api\": False},\n )\n\n if not state.is_final() and not state.is_paused():\n logger.info(\n (\n f\"Received non-final state {state.name!r} when proposing final\"\n f\" state {terminal_state.name!r} and will attempt to run again...\"\n ),\n )\n # Attempt to enter a running state again\n state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n return state\n
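The timeout behavior described in the note is driven by the flow's timeout_seconds setting; a minimal sketch:
>>> from prefect import flow
>>> @flow(timeout_seconds=10)
>>> def slow_flow():
>>>     ...  # exceeding 10 seconds yields a Failed state named 'TimedOut'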
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run
async
","text":"Execute a task run
This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.
Proposes a RUNNING state, then:
- if accepted, the task user function will be run
- if rejected, the received state will be returned

When the user function is run, the result will be used to determine a final state:
- if an exception is encountered, it is trapped and stored in a FAILED state
- otherwise, return_value_to_state is used to determine the state

If the final state is COMPLETED, we generate a cache key as specified by the task.

The final state is then proposed:
- if accepted, this is the final state and will be returned
- if rejected and a new final state is provided, it will be returned
- if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again

Returns:
- State: The final state of the run
Source code in prefect/engine.py
async def orchestrate_task_run(\n task: Task,\n task_run: TaskRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n result_factory: ResultFactory,\n log_prints: bool,\n interruptible: bool,\n client: PrefectClient,\n) -> State:\n \"\"\"\n Execute a task run\n\n This function should be submitted to a task runner. We must construct the context\n here instead of receiving it already populated since we may be in a new environment.\n\n Proposes a RUNNING state, then\n - if accepted, the task user function will be run\n - if rejected, the received state will be returned\n\n When the user function is run, the result will be used to determine a final state\n - if an exception is encountered, it is trapped and stored in a FAILED state\n - otherwise, `return_value_to_state` is used to determine the state\n\n If the final state is COMPLETED, we generate a cache key as specified by the task\n\n The final state is then proposed\n - if accepted, this is the final state and will be returned\n - if rejected and a new final state is provided, it will be returned\n - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n state again\n\n Returns:\n The final state of the run\n \"\"\"\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run_context.flow_run\n else:\n flow_run = await client.read_flow_run(task_run.flow_run_id)\n logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n partial_task_run_context = PartialModel(\n TaskRunContext,\n task_run=task_run,\n task=task,\n client=client,\n result_factory=result_factory,\n log_prints=log_prints,\n )\n task_introspection_start_time = time.perf_counter()\n try:\n # Resolve futures in parameters into data\n resolved_parameters = await resolve_inputs(parameters)\n # Resolve futures in any non-data dependencies to ensure they are ready\n await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n except UpstreamTaskError as upstream_exc:\n return await propose_state(\n client,\n Pending(name=\"NotReady\", message=str(upstream_exc)),\n task_run_id=task_run.id,\n # if orchestrating a run already in a pending state, force orchestration to\n # update the state name\n force=task_run.state.is_pending(),\n )\n task_introspection_end_time = time.perf_counter()\n\n introspection_time = round(\n task_introspection_end_time - task_introspection_start_time, 3\n )\n threshold = PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD.value()\n if threshold and introspection_time > threshold:\n logger.warning(\n f\"Task parameter introspection took {introspection_time} seconds \"\n f\", exceeding `PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD` of {threshold}. \"\n \"Try wrapping large task parameters with \"\n \"`prefect.utilities.annotations.quote` for increased performance, \"\n \"e.g. `my_task(quote(param))`. 
To disable this message set \"\n \"`PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD=0`.\"\n )\n\n # Generate the cache key to attach to proposed states\n # The cache key uses a TaskRunContext that does not include a `timeout_context``\n cache_key = (\n task.cache_key_fn(\n partial_task_run_context.finalize(parameters=resolved_parameters),\n resolved_parameters,\n )\n if task.cache_key_fn\n else None\n )\n\n task_run_context = partial_task_run_context.finalize(parameters=resolved_parameters)\n\n # Ignore the cached results for a cache key, default = false\n # Setting on task level overrules the Prefect setting (env var)\n refresh_cache = (\n task.refresh_cache\n if task.refresh_cache is not None\n else PREFECT_TASKS_REFRESH_CACHE.value()\n )\n\n # Emit an event to capture that the task run was in the `PENDING` state.\n last_event = _emit_task_run_state_change_event(\n task_run=task_run, initial_state=None, validated_state=task_run.state\n )\n last_state = task_run.state\n\n # Completed states with persisted results should have result data. If it's missing,\n # this could be a manual state transition, so we should use the Unknown result type\n # to represent that we know we don't know the result.\n if (\n last_state\n and last_state.is_completed()\n and result_factory.persist_result\n and not last_state.data\n ):\n state = await propose_state(\n client,\n state=Completed(data=await UnknownResult.create()),\n task_run_id=task_run.id,\n force=True,\n )\n\n # Transition from `PENDING` -> `RUNNING`\n try:\n state = await propose_state(\n client,\n Running(\n state_details=StateDetails(\n cache_key=cache_key, refresh_cache=refresh_cache\n )\n ),\n task_run_id=task_run.id,\n )\n except Pause as exc:\n # We shouldn't get a pause signal without a state, but if this happens,\n # just use a Paused state to assume an in-process pause.\n state = exc.state if exc.state else Paused()\n\n # If a flow submits tasks and then pauses, we may reach this point due\n # to concurrency timing because the tasks will try to transition after\n # the flow run has paused. Orchestration will send back a Paused state\n # for the task runs.\n if state.state_details.pause_reschedule:\n # If we're being asked to pause and reschedule, we should exit the\n # task and expect to be resumed later.\n raise\n\n if state.is_paused():\n BACKOFF_MAX = 10 # Seconds\n backoff_count = 0\n\n async def tick():\n nonlocal backoff_count\n if backoff_count < BACKOFF_MAX:\n backoff_count += 1\n interval = 1 + backoff_count + random.random() * backoff_count\n await anyio.sleep(interval)\n\n # Enter a loop to wait for the task run to be resumed, i.e.\n # become Pending, and then propose a Running state again.\n while True:\n await tick()\n\n # Propose a Running state again. We do this instead of reading the\n # task run because if the flow run times out, this lets\n # orchestration fail the task run.\n try:\n state = await propose_state(\n client,\n Running(\n state_details=StateDetails(\n cache_key=cache_key, refresh_cache=refresh_cache\n )\n ),\n task_run_id=task_run.id,\n )\n except Pause as exc:\n if not exc.state:\n continue\n\n if exc.state.state_details.pause_reschedule:\n # If the pause state includes pause_reschedule, we should exit the\n # task and expect to be resumed later. We've already checked for this\n # above, but we check again here in case the state changed; e.g. 
the\n # flow run suspended.\n raise\n else:\n # Propose a Running state again.\n continue\n else:\n break\n\n # Emit an event to capture the result of proposing a `RUNNING` state.\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n # flag to ensure we only update the task run name once\n run_name_set = False\n\n # Only run the task if we enter a `RUNNING` state\n while state.is_running():\n # Retrieve the latest metadata for the task run context\n task_run = await client.read_task_run(task_run.id)\n\n with task_run_context.copy(\n update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n ):\n try:\n args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n # update task run name\n if not run_name_set and task.task_run_name:\n task_run_name = _resolve_custom_task_run_name(\n task=task, parameters=resolved_parameters\n )\n await client.set_task_run_name(\n task_run_id=task_run.id, name=task_run_name\n )\n logger.extra[\"task_run_name\"] = task_run_name\n logger.debug(\n f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n )\n task_run.name = task_run_name\n run_name_set = True\n\n if PREFECT_DEBUG_MODE.value():\n logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n else:\n logger.debug(\n \"Beginning execution...\", extra={\"state_message\": True}\n )\n\n call = from_async.call_soon_in_new_thread(\n create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n )\n result = await call.aresult()\n\n except (CancelledError, asyncio.CancelledError) as exc:\n if not call.timedout():\n # If the task call was not cancelled by us; this is a crash\n raise\n # Construct a new exception as `TimeoutError`\n original = exc\n exc = TimeoutError()\n exc.__cause__ = original\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=(\n f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n ),\n result_factory=task_run_context.result_factory,\n name=\"TimedOut\",\n )\n except Exception as exc:\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=\"Task run encountered an exception\",\n result_factory=task_run_context.result_factory,\n )\n else:\n terminal_state = await return_value_to_state(\n result,\n result_factory=task_run_context.result_factory,\n )\n\n # for COMPLETED tasks, add the cache key and expiration\n if terminal_state.is_completed():\n terminal_state.state_details.cache_expiration = (\n (pendulum.now(\"utc\") + task.cache_expiration)\n if task.cache_expiration\n else None\n )\n terminal_state.state_details.cache_key = cache_key\n\n if terminal_state.is_failed():\n # Defer to user to decide whether failure is retriable\n terminal_state.state_details.retriable = (\n await _check_task_failure_retriable(task, task_run, terminal_state)\n )\n state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n await _run_task_hooks(\n task=task,\n task_run=task_run,\n state=state,\n )\n\n if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n logger.debug(\n (\n f\"Received new state {state} when proposing final state\"\n f\" {terminal_state}\"\n ),\n 
extra={\"send_to_api\": False},\n )\n\n if not state.is_final() and not state.is_paused():\n logger.info(\n (\n f\"Received non-final state {state.name!r} when proposing final\"\n f\" state {terminal_state.name!r} and will attempt to run\"\n \" again...\"\n ),\n )\n # Attempt to enter a running state again\n state = await propose_state(client, Running(), task_run_id=task_run.id)\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n # If debugging, use the more complete `repr` than the usual `str` description\n display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n logger.log(\n level=logging.INFO if state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run
async
","text":"Pauses the current flow run by blocking execution until resumed.
When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.
Parameters:
- flow_run_id (UUID, default: None): a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.
- timeout (int, default: 3600): the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
- poll_interval (int, default: 10): the number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.
- reschedule (bool, default: False): flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned). To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results option.
- key (str, default: None): an optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.
- wait_for_input (Optional[Type[T]], default: None): a subclass of RunInput or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.
Example:
from time import sleep\n\nfrom prefect import flow, task\nfrom prefect.client.schemas.objects import StateType\nfrom prefect.engine import pause_flow_run\n\n@task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n\n@flow\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    if terminal_state.type == StateType.COMPLETED:\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n
Source code in prefect/engine.py
@sync_compatible\n@deprecated_parameter(\n \"flow_run_id\", start_date=\"Dec 2023\", help=\"Use `suspend_flow_run` instead.\"\n)\n@deprecated_parameter(\n \"reschedule\",\n start_date=\"Dec 2023\",\n when=lambda p: p is True,\n help=\"Use `suspend_flow_run` instead.\",\n)\n@experimental_parameter(\n \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def pause_flow_run(\n wait_for_input: Optional[Type[T]] = None,\n flow_run_id: UUID = None,\n timeout: int = 3600,\n poll_interval: int = 10,\n reschedule: bool = False,\n key: str = None,\n) -> Optional[T]:\n \"\"\"\n Pauses the current flow run by blocking execution until resumed.\n\n When called within a flow run, execution will block and no downstream tasks will\n run until the flow is resumed. Task runs that have already started will continue\n running. A timeout parameter can be passed that will fail the flow run if it has not\n been resumed within the specified time.\n\n Args:\n flow_run_id: a flow run id. If supplied, this function will attempt to pause\n the specified flow run outside of the flow run process. When paused, the\n flow run will continue execution until the NEXT task is orchestrated, at\n which point the flow will exit. Any tasks that have already started will\n run until completion. When resumed, the flow run will be rescheduled to\n finish execution. In order pause a flow run in this way, the flow needs to\n have an associated deployment and results need to be configured with the\n `persist_results` option.\n timeout: the number of seconds to wait for the flow to be resumed before\n failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds\n any configured flow-level timeout, the flow might fail even after resuming.\n poll_interval: The number of seconds between checking whether the flow has been\n resumed. Defaults to 10 seconds.\n reschedule: Flag that will reschedule the flow run if resumed. Instead of\n blocking execution, the flow will gracefully exit (with no result returned)\n instead. To use this flag, a flow needs to have an associated deployment and\n results need to be configured with the `persist_results` option.\n key: An optional key to prevent calling pauses more than once. This defaults to\n the number of pauses observed by the flow so far, and prevents pauses that\n use the \"reschedule\" option from running the same pause twice. A custom key\n can be supplied for custom pausing behavior.\n wait_for_input: a subclass of `RunInput` or any type supported by\n Pydantic. If provided when the flow pauses, the flow will wait for the\n input to be provided before resuming. If the flow is resumed without\n providing the input, the flow will fail. If the flow is resumed with the\n input, the flow will resume and the input will be loaded and returned\n from this function.\n\n Example:\n ```python\n @task\n def task_one():\n for i in range(3):\n sleep(1)\n\n @flow\n def my_flow():\n terminal_state = task_one.submit(return_state=True)\n if terminal_state.type == StateType.COMPLETED:\n print(\"Task one succeeded! Pausing flow run..\")\n pause_flow_run(timeout=2)\n else:\n print(\"Task one failed. 
Skipping pause flow run..\")\n ```\n\n \"\"\"\n if flow_run_id:\n if wait_for_input is not None:\n raise RuntimeError(\"Cannot wait for input when pausing out of process.\")\n\n return await _out_of_process_pause(\n flow_run_id=flow_run_id,\n timeout=timeout,\n reschedule=reschedule,\n key=key,\n )\n else:\n return await _in_process_pause(\n timeout=timeout,\n poll_interval=poll_interval,\n reschedule=reschedule,\n key=key,\n wait_for_input=wait_for_input,\n )\n
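A sketch of the wait_for_input behavior described above, assuming the RunInput base class from prefect.input (the Approval model and its field are illustrative):

```python
from prefect import flow
from prefect.engine import pause_flow_run
from prefect.input import RunInput


class Approval(RunInput):
    approved: bool


@flow
def gated_flow():
    # Blocks here until the run is resumed with an Approval payload
    decision = pause_flow_run(wait_for_input=Approval, timeout=600)
    if decision.approved:
        print("Approved, continuing...")
```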
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.propose_state","title":"propose_state
async
","text":"Propose a new state for a flow run or task run, invoking Prefect orchestration logic.
If the proposed state is accepted, the provided state will be augmented with details and returned.
If the proposed state is rejected, a new state returned by the Prefect API will be returned.
If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again.
If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised.
Parameters:
- state (State, required): a new state for the task or flow run
- task_run_id (UUID, default: None): an optional task run id, used when proposing task run states
- flow_run_id (UUID, default: None): an optional flow run id, used when proposing flow run states

Returns:
- State: a State model representation of the flow or task run state

Raises:
- ValueError: if neither task_run_id nor flow_run_id is provided
- Abort: if an ABORT instruction is received from the Prefect API
Source code in prefect/engine.py
async def propose_state(\n client: PrefectClient,\n state: State,\n force: bool = False,\n task_run_id: UUID = None,\n flow_run_id: UUID = None,\n) -> State:\n \"\"\"\n Propose a new state for a flow run or task run, invoking Prefect orchestration logic.\n\n If the proposed state is accepted, the provided `state` will be augmented with\n details and returned.\n\n If the proposed state is rejected, a new state returned by the Prefect API will be\n returned.\n\n If the proposed state results in a WAIT instruction from the Prefect API, the\n function will sleep and attempt to propose the state again.\n\n If the proposed state results in an ABORT instruction from the Prefect API, an\n error will be raised.\n\n Args:\n state: a new state for the task or flow run\n task_run_id: an optional task run id, used when proposing task run states\n flow_run_id: an optional flow run id, used when proposing flow run states\n\n Returns:\n a [State model][prefect.client.schemas.objects.State] representation of the\n flow or task run state\n\n Raises:\n ValueError: if neither task_run_id or flow_run_id is provided\n prefect.exceptions.Abort: if an ABORT instruction is received from\n the Prefect API\n \"\"\"\n\n # Determine if working with a task run or flow run\n if not task_run_id and not flow_run_id:\n raise ValueError(\"You must provide either a `task_run_id` or `flow_run_id`\")\n\n # Handle task and sub-flow tracing\n if state.is_final():\n if isinstance(state.data, BaseResult) and state.data.has_cached_object():\n # Avoid fetching the result unless it is cached, otherwise we defeat\n # the purpose of disabling `cache_result_in_memory`\n result = await state.result(raise_on_failure=False, fetch=True)\n else:\n result = state.data\n\n link_state_to_result(state, result)\n\n # Handle repeated WAITs in a loop instead of recursively, to avoid\n # reaching max recursion depth in extreme cases.\n async def set_state_and_handle_waits(set_state_func) -> OrchestrationResult:\n response = await set_state_func()\n while response.status == SetStateStatus.WAIT:\n engine_logger.debug(\n f\"Received wait instruction for {response.details.delay_seconds}s: \"\n f\"{response.details.reason}\"\n )\n await anyio.sleep(response.details.delay_seconds)\n response = await set_state_func()\n return response\n\n # Attempt to set the state\n if task_run_id:\n set_state = partial(client.set_task_run_state, task_run_id, state, force=force)\n response = await set_state_and_handle_waits(set_state)\n elif flow_run_id:\n set_state = partial(client.set_flow_run_state, flow_run_id, state, force=force)\n response = await set_state_and_handle_waits(set_state)\n else:\n raise ValueError(\n \"Neither flow run id or task run id were provided. At least one must \"\n \"be given.\"\n )\n\n # Parse the response to return the new state\n if response.status == SetStateStatus.ACCEPT:\n # Update the state with the details if provided\n state.id = response.state.id\n state.timestamp = response.state.timestamp\n if response.state.state_details:\n state.state_details = response.state.state_details\n return state\n\n elif response.status == SetStateStatus.ABORT:\n raise prefect.exceptions.Abort(response.details.reason)\n\n elif response.status == SetStateStatus.REJECT:\n if response.state.is_paused():\n raise Pause(response.details.reason, state=response.state)\n return response.state\n\n else:\n raise ValueError(\n f\"Received unexpected `SetStateStatus` from server: {response.status!r}\"\n )\n
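For orientation, a minimal sketch of calling propose_state directly (the helper function is hypothetical; most code reaches this through higher-level engine APIs):

```python
from prefect.client.orchestration import get_client
from prefect.engine import propose_state
from prefect.states import Completed


async def force_complete(flow_run_id):
    async with get_client() as client:
        # The API may accept this state, reject it with a replacement state,
        # instruct a WAIT (retried internally), or ABORT (raises Abort).
        return await propose_state(client, Completed(), flow_run_id=flow_run_id)
```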
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes
async
","text":"Detect flow run crashes during this context and update the run to a proper final state.
This context must reraise the exception to properly exit the run.
Source code in prefect/engine.py
@asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n \"\"\"\n Detect flow run crashes during this context and update the run to a proper final\n state.\n\n This context _must_ reraise the exception to properly exit the run.\n \"\"\"\n\n try:\n yield\n except (Abort, Pause):\n # Do not capture internal signals as crashes\n raise\n except BaseException as exc:\n state = await exception_to_crashed_state(exc)\n logger = flow_run_logger(flow_run)\n with anyio.CancelScope(shield=True):\n logger.error(f\"Crash detected! {state.message}\")\n logger.debug(\"Crash details:\", exc_info=exc)\n flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n engine_logger.debug(\n f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n )\n\n # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n # We call the hooks after the state change proposal to `CRASHED` is validated\n # or rejected (if it is in a `CANCELLING` state).\n await _run_flow_hooks(\n flow=flow,\n flow_run=flow_run,\n state=flow_run_state,\n )\n\n # Reraise the exception\n raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes
async
","text":"Detect task run crashes during this context and update the run to a proper final state.
This context must reraise the exception to properly exit the run.
Source code in prefect/engine.py
@asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n \"\"\"\n Detect task run crashes during this context and update the run to a proper final\n state.\n\n This context _must_ reraise the exception to properly exit the run.\n \"\"\"\n try:\n yield\n except (Abort, Pause):\n # Do not capture internal signals as crashes\n raise\n except BaseException as exc:\n state = await exception_to_crashed_state(exc)\n logger = task_run_logger(task_run)\n with anyio.CancelScope(shield=True):\n logger.error(f\"Crash detected! {state.message}\")\n logger.debug(\"Crash details:\", exc_info=exc)\n await client.set_task_run_state(\n state=state,\n task_run_id=task_run.id,\n force=True,\n )\n engine_logger.debug(\n f\"Reported crashed task run {task_run.name!r} successfully!\"\n )\n\n # Reraise the exception\n raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resolve_inputs","title":"resolve_inputs
async
","text":"Resolve any Quote
, PrefectFuture
, or State
types nested in parameters into data.
Returns:
- Dict[str, Any]: A copy of the parameters with resolved data

Raises:
- UpstreamTaskError: If any of the upstream states are not COMPLETED

Source code in prefect/engine.py
async def resolve_inputs(\n parameters: Dict[str, Any], return_data: bool = True, max_depth: int = -1\n) -> Dict[str, Any]:\n \"\"\"\n Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into\n data.\n\n Returns:\n A copy of the parameters with resolved data\n\n Raises:\n UpstreamTaskError: If any of the upstream states are not `COMPLETED`\n \"\"\"\n\n futures = set()\n states = set()\n result_by_state = {}\n\n if not parameters:\n return {}\n\n def collect_futures_and_states(expr, context):\n # Expressions inside quotes should not be traversed\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n futures.add(expr)\n if is_state(expr):\n states.add(expr)\n\n return expr\n\n visit_collection(\n parameters,\n visit_fn=collect_futures_and_states,\n return_data=False,\n max_depth=max_depth,\n context={},\n )\n\n # Wait for all futures so we do not block when we retrieve the state in `resolve_input`\n states.update(await asyncio.gather(*[future._wait() for future in futures]))\n\n # Only retrieve the result if requested as it may be expensive\n if return_data:\n finished_states = [state for state in states if state.is_final()]\n\n state_results = await asyncio.gather(\n *[\n state.result(raise_on_failure=False, fetch=True)\n for state in finished_states\n ]\n )\n\n for state, result in zip(finished_states, state_results):\n result_by_state[state] = result\n\n def resolve_input(expr, context):\n state = None\n\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n state = expr._final_state\n elif is_state(expr):\n state = expr\n else:\n return expr\n\n # Do not allow uncompleted upstreams except failures when `allow_failure` has\n # been used\n if not state.is_completed() and not (\n # TODO: Note that the contextual annotation here is only at the current level\n # if `allow_failure` is used then another annotation is used, this will\n # incorrectly evaluate to false \u2014 to resolve this, we must track all\n # annotations wrapping the current expression but this is not yet\n # implemented.\n isinstance(context.get(\"annotation\"), allow_failure) and state.is_failed()\n ):\n raise UpstreamTaskError(\n f\"Upstream task run '{state.state_details.task_run_id}' did not reach a\"\n \" 'COMPLETED' state.\"\n )\n\n return result_by_state.get(state)\n\n resolved_parameters = {}\n for parameter, value in parameters.items():\n try:\n resolved_parameters[parameter] = visit_collection(\n value,\n visit_fn=resolve_input,\n return_data=return_data,\n # we're manually going 1 layer deeper here\n max_depth=max_depth - 1,\n remove_annotations=True,\n context={},\n )\n except UpstreamTaskError:\n raise\n except Exception as exc:\n raise PrefectException(\n f\"Failed to resolve inputs in parameter {parameter!r}. If your\"\n \" parameter type is not supported, consider using the `quote`\"\n \" annotation to skip resolution of inputs.\"\n ) from exc\n\n return resolved_parameters\n
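As the error message in the source suggests, quote can be used to opt a parameter out of resolution entirely. A small sketch (the task and payload are illustrative):

```python
from prefect import flow, task
from prefect.utilities.annotations import quote


@task
def consume(payload):
    return len(payload)


@flow
def my_flow():
    big = list(range(100_000))
    # `quote` tells resolve_inputs not to traverse this argument looking for
    # futures or states, skipping potentially expensive introspection.
    consume(quote(big))
```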
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run
async
","text":"Resumes a paused flow.
Parameters:
- flow_run_id (required): the flow_run_id to resume
- run_input (Optional[Dict], default: None): a dictionary of inputs to provide to the flow run.
Source code in prefect/engine.py
@sync_compatible\nasync def resume_flow_run(flow_run_id, run_input: Optional[Dict] = None):\n \"\"\"\n Resumes a paused flow.\n\n Args:\n flow_run_id: the flow_run_id to resume\n run_input: a dictionary of inputs to provide to the flow run.\n \"\"\"\n client = get_client()\n async with client:\n flow_run = await client.read_flow_run(flow_run_id)\n\n if not flow_run.state.is_paused():\n raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n response = await client.resume_flow_run(flow_run_id, run_input=run_input)\n\n if response.status == SetStateStatus.REJECT:\n if response.state.type == StateType.FAILED:\n raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n else:\n raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
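Because resume_flow_run is decorated with sync_compatible, it can be called from synchronous code. A usage sketch (the run id is hypothetical):

```python
from prefect.engine import resume_flow_run

# Resume a paused run, supplying the input it is waiting for (if any)
resume_flow_run(
    "d1fbbd32-0000-0000-0000-000000000000",  # hypothetical flow run id
    run_input={"approved": True},
)
```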
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run
async
","text":"Async entrypoint for flow runs that have been submitted for execution by an agent
Source code in prefect/engine.py
@inject_client\nasync def retrieve_flow_then_begin_flow_run(\n flow_run_id: UUID,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Async entrypoint for flow runs that have been submitted for execution by an agent\n\n - Retrieves the deployment information\n - Loads the flow object using deployment information\n - Updates the flow run version\n \"\"\"\n flow_run = await client.read_flow_run(flow_run_id)\n\n entrypoint = os.environ.get(\"PREFECT__FLOW_ENTRYPOINT\")\n\n try:\n flow = (\n load_flow_from_entrypoint(entrypoint)\n if entrypoint\n else await load_flow_from_flow_run(flow_run, client=client)\n )\n except Exception:\n message = (\n \"Flow could not be retrieved from\"\n f\" {'entrypoint' if entrypoint else 'deployment'}.\"\n )\n flow_run_logger(flow_run).exception(message)\n state = await exception_to_failed_state(message=message)\n await client.set_flow_run_state(\n state=state, flow_run_id=flow_run_id, force=True\n )\n return state\n\n # Update the flow run policy defaults to match settings on the flow\n # Note: Mutating the flow run object prevents us from performing another read\n # operation if these properties are used by the client downstream\n if flow_run.empirical_policy.retry_delay is None:\n flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n if flow_run.empirical_policy.retries is None:\n flow_run.empirical_policy.retries = flow.retries\n\n await client.update_flow_run(\n flow_run_id=flow_run_id,\n flow_version=flow.version,\n empirical_policy=flow_run.empirical_policy,\n )\n\n if flow.should_validate_parameters:\n failed_state = None\n try:\n parameters = flow.validate_parameters(flow_run.parameters)\n except Exception:\n message = \"Validation of flow parameters failed with error: \"\n flow_run_logger(flow_run).exception(message)\n failed_state = await exception_to_failed_state(message=message)\n\n if failed_state is not None:\n await propose_state(\n client,\n state=failed_state,\n flow_run_id=flow_run_id,\n )\n return failed_state\n else:\n parameters = flow_run.parameters\n\n # Ensure default values are populated\n parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n return await begin_flow_run(\n flow=flow,\n flow_run=flow_run,\n parameters=parameters,\n client=client,\n user_thread=user_thread,\n )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.suspend_flow_run","title":"suspend_flow_run
async
","text":"Suspends a flow run by stopping code execution until resumed.
When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.
Parameters:
- flow_run_id (Optional[UUID], default: None): a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, it will attempt to suspend the current flow run.
- timeout (Optional[int], default: 3600): the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
- key (Optional[str], default: None): an optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior.
- wait_for_input (Optional[Type[T]], default: None): a subclass of RunInput or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.
Source code in prefect/engine.py
@sync_compatible\n@inject_client\n@experimental_parameter(\n \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def suspend_flow_run(\n wait_for_input: Optional[Type[T]] = None,\n flow_run_id: Optional[UUID] = None,\n timeout: Optional[int] = 3600,\n key: Optional[str] = None,\n client: PrefectClient = None,\n) -> Optional[T]:\n \"\"\"\n Suspends a flow run by stopping code execution until resumed.\n\n When suspended, the flow run will continue execution until the NEXT task is\n orchestrated, at which point the flow will exit. Any tasks that have\n already started will run until completion. When resumed, the flow run will\n be rescheduled to finish execution. In order suspend a flow run in this\n way, the flow needs to have an associated deployment and results need to be\n configured with the `persist_results` option.\n\n Args:\n flow_run_id: a flow run id. If supplied, this function will attempt to\n suspend the specified flow run. If not supplied will attempt to\n suspend the current flow run.\n timeout: the number of seconds to wait for the flow to be resumed before\n failing. Defaults to 1 hour (3600 seconds). If the pause timeout\n exceeds any configured flow-level timeout, the flow might fail even\n after resuming.\n key: An optional key to prevent calling suspend more than once. This\n defaults to a random string and prevents suspends from running the\n same suspend twice. A custom key can be supplied for custom\n suspending behavior.\n wait_for_input: a subclass of `RunInput` or any type supported by\n Pydantic. If provided when the flow suspends, the flow will remain\n suspended until receiving the input before resuming. If the flow is\n resumed without providing the input, the flow will fail. If the flow is\n resumed with the input, the flow will resume and the input will be\n loaded and returned from this function.\n \"\"\"\n context = FlowRunContext.get()\n\n if flow_run_id is None:\n if TaskRunContext.get():\n raise RuntimeError(\"Cannot suspend task runs.\")\n\n if context is None or context.flow_run is None:\n raise RuntimeError(\n \"Flow runs can only be suspended from within a flow run.\"\n )\n\n logger = get_run_logger(context=context)\n logger.info(\n \"Suspending flow run, execution will be rescheduled when this flow run is\"\n \" resumed.\"\n )\n flow_run_id = context.flow_run.id\n suspending_current_flow_run = True\n pause_counter = _observed_flow_pauses(context)\n pause_key = key or str(pause_counter)\n else:\n # Since we're suspending another flow run we need to generate a pause\n # key that won't conflict with whatever suspends/pauses that flow may\n # have. 
Since this method won't be called during that flow run it's\n # okay that this is non-deterministic.\n suspending_current_flow_run = False\n pause_key = key or str(uuid4())\n\n proposed_state = Suspended(timeout_seconds=timeout, pause_key=pause_key)\n\n if wait_for_input:\n wait_for_input = run_input_subclass_from_type(wait_for_input)\n run_input_keyset = keyset_from_paused_state(proposed_state)\n proposed_state.state_details.run_input_keyset = run_input_keyset\n\n try:\n state = await propose_state(\n client=client,\n state=proposed_state,\n flow_run_id=flow_run_id,\n )\n except Abort as exc:\n # Aborted requests mean the suspension is not allowed\n raise RuntimeError(f\"Flow run cannot be suspended: {exc}\")\n\n if state.is_running():\n # The orchestrator rejected the suspended state which means that this\n # suspend has happened before and the flow run has been resumed.\n if wait_for_input:\n # The flow run wanted input, so we need to load it and return it\n # to the user.\n return await wait_for_input.load(run_input_keyset)\n return\n\n if not state.is_paused():\n # If we receive anything but a PAUSED state, we are unable to continue\n raise RuntimeError(\n f\"Flow run cannot be suspended. Received unexpected state from API: {state}\"\n )\n\n if wait_for_input:\n await wait_for_input.save(run_input_keyset)\n\n if suspending_current_flow_run:\n # Exit this process so the run can be resubmitted later\n raise Pause()\n
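A sketch of suspending the current run from inside a deployed flow; per the description above, this assumes the flow has an associated deployment and persisted results:

```python
from prefect import flow
from prefect.engine import suspend_flow_run


@flow(persist_result=True)
def long_running_flow():
    # Exits this process; the run is rescheduled when it is resumed
    suspend_flow_run(timeout=7200)
    print("Resumed!")
```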
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/events/","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events","title":"prefect.events
","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event","title":"Event
","text":" Bases: PrefectBaseModel
The client-side view of an event that has happened to a Resource
Source code in prefect/events/schemas.py
class Event(PrefectBaseModel):\n \"\"\"The client-side view of an event that has happened to a Resource\"\"\"\n\n occurred: DateTimeTZ = Field(\n default_factory=pendulum.now,\n description=\"When the event happened from the sender's perspective\",\n )\n event: str = Field(\n description=\"The name of the event that happened\",\n )\n resource: Resource = Field(\n description=\"The primary Resource this event concerns\",\n )\n related: List[RelatedResource] = Field(\n default_factory=list,\n description=\"A list of additional Resources involved in this event\",\n )\n payload: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"An open-ended set of data describing what happened\",\n )\n id: UUID = Field(\n default_factory=uuid4,\n description=\"The client-provided identifier of this event\",\n )\n follows: Optional[UUID] = Field(\n None,\n description=(\n \"The ID of an event that is known to have occurred prior to this \"\n \"one. If set, this may be used to establish a more precise \"\n \"ordering of causally-related events when they occur close enough \"\n \"together in time that the system may receive them out-of-order.\"\n ),\n )\n\n @property\n def involved_resources(self) -> Iterable[Resource]:\n return [self.resource] + list(self.related)\n\n @validator(\"related\")\n def enforce_maximum_related_resources(cls, value: List[RelatedResource]):\n if len(value) > MAXIMUM_RELATED_RESOURCES:\n raise ValueError(\n \"The maximum number of related resources \"\n f\"is {MAXIMUM_RELATED_RESOURCES}\"\n )\n\n return value\n
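A sketch of constructing an Event by hand (the event name, labels, and payload are illustrative). As in emit_event below, resource accepts a plain label dict that is validated into a Resource:

```python
from prefect.events import Event

event = Event(
    event="my-app.order.created",
    resource={"prefect.resource.id": "my-app.order.12345"},
    payload={"total": 19.99},
)
print(event.id, event.occurred)
```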
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RelatedResource","title":"RelatedResource
","text":" Bases: Resource
A Resource with a specific role in an Event
Source code in prefect/events/schemas.py
class RelatedResource(Resource):\n \"\"\"A Resource with a specific role in an Event\"\"\"\n\n @root_validator(pre=True)\n def requires_resource_role(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n labels = cast(Dict[str, str], labels)\n\n if \"prefect.resource.role\" not in labels:\n raise ValueError(\n \"Related Resources must include the prefect.resource.role label\"\n )\n if not labels[\"prefect.resource.role\"]:\n raise ValueError(\"The prefect.resource.role label must be non-empty\")\n\n return values\n\n @property\n def role(self) -> str:\n return self[\"prefect.resource.role\"]\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Resource","title":"Resource
","text":" Bases: Labelled
An observable business object of interest to the user
Source code in prefect/events/schemas.py
class Resource(Labelled):\n \"\"\"An observable business object of interest to the user\"\"\"\n\n @root_validator(pre=True)\n def enforce_maximum_labels(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n if len(labels) > MAXIMUM_LABELS_PER_RESOURCE:\n raise ValueError(\n \"The maximum number of labels per resource \"\n f\"is {MAXIMUM_LABELS_PER_RESOURCE}\"\n )\n\n return values\n\n @root_validator(pre=True)\n def requires_resource_id(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n labels = cast(Dict[str, str], labels)\n\n if \"prefect.resource.id\" not in labels:\n raise ValueError(\"Resources must include the prefect.resource.id label\")\n if not labels[\"prefect.resource.id\"]:\n raise ValueError(\"The prefect.resource.id label must be non-empty\")\n\n return values\n\n @property\n def id(self) -> str:\n return self[\"prefect.resource.id\"]\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.emit_event","title":"emit_event
","text":"Send an event to Prefect Cloud.
Parameters:
Name Type Description Defaultevent
str
The name of the event that happened.
requiredresource
Dict[str, str]
The primary Resource this event concerns.
requiredoccurred
Optional[DateTimeTZ]
When the event happened from the sender's perspective. Defaults to the current datetime.
None
related
Optional[Union[List[Dict[str, str]], List[RelatedResource]]]
A list of additional Resources involved in this event.
None
payload
Optional[Dict[str, Any]]
An open-ended set of data describing what happened.
None
id
Optional[UUID]
The sender-provided identifier for this event. Defaults to a random UUID.
None
follows
Optional[Event]
The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event the follows relationship will not be set.
None
Returns:
Type DescriptionOptional[Event]
The event that was emitted if worker is using a client that emit
Optional[Event]
events, otherwise None.
Source code in prefect/events/utilities.py
def emit_event(\n event: str,\n resource: Dict[str, str],\n occurred: Optional[DateTimeTZ] = None,\n related: Optional[Union[List[Dict[str, str]], List[RelatedResource]]] = None,\n payload: Optional[Dict[str, Any]] = None,\n id: Optional[UUID] = None,\n follows: Optional[Event] = None,\n) -> Optional[Event]:\n \"\"\"\n Send an event to Prefect Cloud.\n\n Args:\n event: The name of the event that happened.\n resource: The primary Resource this event concerns.\n occurred: When the event happened from the sender's perspective.\n Defaults to the current datetime.\n related: A list of additional Resources involved in this event.\n payload: An open-ended set of data describing what happened.\n id: The sender-provided identifier for this event. Defaults to a random\n UUID.\n follows: The event that preceded this one. If the preceding event\n happened more than 5 minutes prior to this event the follows\n relationship will not be set.\n\n Returns:\n The event that was emitted if worker is using a client that emit\n events, otherwise None.\n \"\"\"\n if not emit_events_to_cloud():\n return None\n\n operational_clients = [AssertingEventsClient, PrefectCloudEventsClient]\n worker_instance = EventsWorker.instance()\n\n if worker_instance.client_type not in operational_clients:\n return None\n\n event_kwargs = {\n \"event\": event,\n \"resource\": resource,\n }\n\n if occurred is None:\n occurred = pendulum.now(\"UTC\")\n event_kwargs[\"occurred\"] = occurred\n\n if related is not None:\n event_kwargs[\"related\"] = related\n\n if payload is not None:\n event_kwargs[\"payload\"] = payload\n\n if id is not None:\n event_kwargs[\"id\"] = id\n\n if follows is not None:\n if -TIGHT_TIMING < (occurred - follows.occurred) < TIGHT_TIMING:\n event_kwargs[\"follows\"] = follows.id\n\n event_obj = Event(**event_kwargs)\n worker_instance.send(event_obj)\n\n return event_obj\n
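A usage sketch (the names are illustrative); note that each entry in related must carry the prefect.resource.role label described for RelatedResource above:

```python
from prefect.events import emit_event

emit_event(
    event="my-app.model.trained",
    resource={"prefect.resource.id": "my-app.model.recommender"},
    related=[
        {
            "prefect.resource.id": "my-app.dataset.train-2024",
            "prefect.resource.role": "dataset",
        }
    ],
    payload={"accuracy": 0.93},
)
```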
","tags":["Python API","events"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions
","text":"Prefect-specific exceptions.
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort
","text":" Bases: PrefectSignal
Raised when the API sends an 'ABORT' instruction during state proposal.
Indicates that the run should exit immediately.
Source code in prefect/exceptions.py
class Abort(PrefectSignal):\n \"\"\"\n Raised when the API sends an 'ABORT' instruction during state proposal.\n\n Indicates that the run should exit immediately.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities
","text":" Bases: PrefectException
Raised when a block does not have required capabilities for a given operation.
Source code in prefect/exceptions.py
class BlockMissingCapabilities(PrefectException):\n \"\"\"\n Raised when a block does not have required capabilities for a given operation.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun
","text":" Bases: PrefectException
Raised when the result from a cancelled run is retrieved and an exception is not attached.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class CancelledRun(PrefectException):\n \"\"\"\n Raised when the result from a cancelled run is retrieved and an exception\n is not attached.\n\n This occurs when a string is attached to the state instead of an exception\n or if the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun
","text":" Bases: PrefectException
Raised when the result from a crashed run is retrieved.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class CrashedRun(PrefectException):\n \"\"\"\n Raised when the result from a crashed run is retrieved.\n\n This occurs when a string is attached to the state instead of an exception or if\n the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal
","text":" Bases: BaseException
Base type for external signal-like exceptions that should never be caught by users.
Source code inprefect/exceptions.py
class ExternalSignal(BaseException):\n \"\"\"\n Base type for external signal-like exceptions that should never be caught by users.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun
","text":" Bases: PrefectException
Raised when the result from a failed run is retrieved and an exception is not attached.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class FailedRun(PrefectException):\n \"\"\"\n Raised when the result from a failed run is retrieved and an exception is not\n attached.\n\n This occurs when a string is attached to the state instead of an exception or if\n the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout
","text":" Bases: PrefectException
Raised when a flow pause times out
Source code in prefect/exceptions.py
class FlowPauseTimeout(PrefectException):\n \"\"\"Raised when a flow pause times out\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowRunWaitTimeout","title":"FlowRunWaitTimeout
","text":" Bases: PrefectException
Raised when a flow run takes longer than a given timeout
Source code in prefect/exceptions.py
class FlowRunWaitTimeout(PrefectException):\n \"\"\"Raised when a flow run takes longer than a given timeout\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError
","text":" Bases: PrefectException
Raised when a script errors during evaluation while attempting to load a flow.
Source code in prefect/exceptions.py
class FlowScriptError(PrefectException):\n \"\"\"\n Raised when a script errors during evaluation while attempting to load a flow.\n \"\"\"\n\n def __init__(\n self,\n user_exc: Exception,\n script_path: str,\n ) -> None:\n message = f\"Flow script at {script_path!r} encountered an exception\"\n super().__init__(message)\n\n self.user_exc = user_exc\n\n def rich_user_traceback(self, **kwargs):\n trace = Traceback.extract(\n type(self.user_exc),\n self.user_exc,\n self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n )\n return Traceback(trace, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError
","text":" Bases: PrefectException
A base class for exceptions related to infrastructure blocks
Source code in prefect/exceptions.py
class InfrastructureError(PrefectException):\n \"\"\"\n A base class for exceptions related to infrastructure blocks\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable
","text":" Bases: PrefectException
Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed.
Source code in prefect/exceptions.py
class InfrastructureNotAvailable(PrefectException):\n \"\"\"\n Raised when infrastructure is not accessible from the current machine. For example,\n if a process was spawned on another machine it cannot be managed.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound
","text":" Bases: PrefectException
Raised when infrastructure is missing, likely because it has exited or been deleted.
Source code in prefect/exceptions.py
class InfrastructureNotFound(PrefectException):\n \"\"\"\n Raised when infrastructure is missing, likely because it has exited or been\n deleted.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError
","text":" Bases: PrefectException
, ValueError
Raised when a name contains characters that are not permitted.
Source code in prefect/exceptions.py
class InvalidNameError(PrefectException, ValueError):\n \"\"\"\n Raised when a name contains characters that are not permitted.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError
","text":" Bases: PrefectException
Raised when an incorrect URL is provided to a GitHub filesystem block.
Source code in prefect/exceptions.py
class InvalidRepositoryURLError(PrefectException):\n \"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch
","text":" Bases: PrefectException
Raised when attempting to call Task.map with arguments of different lengths.
Source code in prefect/exceptions.py
class MappingLengthMismatch(PrefectException):\n \"\"\"\n Raised when attempting to call Task.map with arguments of different lengths.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable
","text":" Bases: PrefectException
Raised when attempting to call Task.map with all static arguments
Source code in prefect/exceptions.py
class MappingMissingIterable(PrefectException):\n \"\"\"\n Raised when attempting to call Task.map with all static arguments\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError
","text":" Bases: PrefectException
, RuntimeError
Raised when a method is called that requires a task or flow run context to be active but one cannot be found.
Source code in prefect/exceptions.py
class MissingContextError(PrefectException, RuntimeError):\n \"\"\"\n Raised when a method is called that requires a task or flow run context to be\n active but one cannot be found.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError
","text":" Bases: PrefectException
Raised when a given flow name is not found in the expected script.
Source code in prefect/exceptions.py
class MissingFlowError(PrefectException):\n \"\"\"\n Raised when a given flow name is not found in the expected script.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError
","text":" Bases: PrefectException
, ValueError
Raised when a profile name does not exist.
Source code in prefect/exceptions.py
class MissingProfileError(PrefectException, ValueError):\n \"\"\"\n Raised when a profile name does not exist.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult
","text":" Bases: PrefectException
Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.
Source code inprefect/exceptions.py
class MissingResult(PrefectException):\n \"\"\"\n Raised when a result is missing from a state; often when result persistence is\n disabled and the state is retrieved from the API.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError
","text":" Bases: PrefectException
Raised when attempting to unpause a run that isn't paused.
Source code inprefect/exceptions.py
class NotPausedError(PrefectException):\n \"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists
","text":" Bases: PrefectException
Raised when the client receives a 409 (conflict) from the API.
Source code inprefect/exceptions.py
class ObjectAlreadyExists(PrefectException):\n \"\"\"\n Raised when the client receives a 409 (conflict) from the API.\n \"\"\"\n\n def __init__(self, http_exc: Exception, *args, **kwargs):\n self.http_exc = http_exc\n super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound
","text":" Bases: PrefectException
Raised when the client receives a 404 (not found) from the API.
Source code inprefect/exceptions.py
class ObjectNotFound(PrefectException):\n \"\"\"\n Raised when the client receives a 404 (not found) from the API.\n \"\"\"\n\n def __init__(self, http_exc: Exception, *args, **kwargs):\n self.http_exc = http_exc\n super().__init__(*args, **kwargs)\n
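A typical pattern is to catch this error when looking up an object that may not exist; a minimal sketch (the deployment ID is a hypothetical placeholder supplied by the caller):
from prefect import get_client\nfrom prefect.exceptions import ObjectNotFound\n\nasync def deployment_exists(deployment_id) -> bool:\n async with get_client() as client:\n try:\n await client.read_deployment(deployment_id)\n except ObjectNotFound:\n return False\n return True\n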
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError
","text":" Bases: TypeError
, PrefectException
Raised when args and kwargs cannot be converted to parameters.
Source code inprefect/exceptions.py
class ParameterBindError(TypeError, PrefectException):\n \"\"\"\n Raised when args and kwargs cannot be converted to parameters.\n \"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_bind_failure(\n cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n ) -> Self:\n fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n msg = f\"{base}.\\n{signature} but {received}.\"\n return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError
","text":" Bases: PrefectException
Raised when a parameter does not pass Pydantic type validation.
Source code inprefect/exceptions.py
class ParameterTypeError(PrefectException):\n \"\"\"\n Raised when a parameter does not pass Pydantic type validation.\n \"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_validation_error(cls, exc: pydantic.ValidationError) -> Self:\n bad_params = [f'{err[\"loc\"][0]}: {err[\"msg\"]}' for err in exc.errors()]\n msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n return cls(msg)\n
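For illustration, a minimal sketch: calling a flow with a value that cannot be coerced to the annotated parameter type should surface this error.
from prefect import flow\nfrom prefect.exceptions import ParameterTypeError\n\n@flow\ndef add_numbers(x: int, y: int) -> int:\n return x + y\n\ntry:\n add_numbers(x=1, y=\"not-an-int\") # fails Pydantic validation\nexcept ParameterTypeError as exc:\n print(exc)\n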
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause
","text":" Bases: PrefectSignal
Raised when a flow run is PAUSED and needs to exit for resubmission.
Source code inprefect/exceptions.py
class Pause(PrefectSignal):\n \"\"\"\n Raised when a flow run is PAUSED and needs to exit for resubmission.\n \"\"\"\n\n def __init__(self, *args, state=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun
","text":" Bases: PrefectException
Raised when the result from a paused run is retrieved.
Source code inprefect/exceptions.py
class PausedRun(PrefectException):\n \"\"\"\n Raised when the result from a paused run is retrieved.\n \"\"\"\n\n def __init__(self, *args, state=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException
","text":" Bases: Exception
Base exception type for Prefect errors.
Source code inprefect/exceptions.py
class PrefectException(Exception):\n \"\"\"\n Base exception type for Prefect errors.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError
","text":" Bases: HTTPStatusError
Raised when the client receives a Response
that contains an HTTPStatusError.
Used to include API error details in the error messages that the client provides users.
Source code inprefect/exceptions.py
class PrefectHTTPStatusError(HTTPStatusError):\n \"\"\"\n Raised when client receives a `Response` that contains an HTTPStatusError.\n\n Used to include API error details in the error messages that the client provides users.\n \"\"\"\n\n @classmethod\n def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n \"\"\"\n Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n \"\"\"\n try:\n details = httpx_error.response.json()\n except Exception:\n details = None\n\n error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n if details:\n message_components = [error_message, f\"Response: {details}\", *more_info]\n else:\n message_components = [error_message, *more_info]\n\n new_message = \"\\n\".join(message_components)\n\n return cls(\n new_message, request=httpx_error.request, response=httpx_error.response\n )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error
classmethod
","text":"Generate a PrefectHTTPStatusError
from an httpx.HTTPStatusError
.
prefect/exceptions.py
@classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n \"\"\"\n Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n \"\"\"\n try:\n details = httpx_error.response.json()\n except Exception:\n details = None\n\n error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n if details:\n message_components = [error_message, f\"Response: {details}\", *more_info]\n else:\n message_components = [error_message, *more_info]\n\n new_message = \"\\n\".join(message_components)\n\n return cls(\n new_message, request=httpx_error.request, response=httpx_error.response\n )\n
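A minimal sketch of converting an httpx error (the URL is a hypothetical placeholder):
import httpx\nfrom prefect.exceptions import PrefectHTTPStatusError\n\ntry:\n response = httpx.get(\"https://api.example.com/missing\")\n response.raise_for_status()\nexcept httpx.HTTPStatusError as exc:\n # re-raise with the response body included in the message\n raise PrefectHTTPStatusError.from_httpx_error(exc) from exc\n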
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal
","text":" Bases: BaseException
Base type for signal-like exceptions that should never be caught by users.
Source code inprefect/exceptions.py
class PrefectSignal(BaseException):\n \"\"\"\n Base type for signal-like exceptions that should never be caught by users.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError
","text":" Bases: PrefectException
Raised when an operation is prevented due to block protection.
Source code inprefect/exceptions.py
class ProtectedBlockError(PrefectException):\n \"\"\"\n Raised when an operation is prevented due to block protection.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError
","text":" Bases: PrefectException
, TypeError
Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature.
Source code inprefect/exceptions.py
class ReservedArgumentError(PrefectException, TypeError):\n \"\"\"\n Raised when a function used with Prefect has an argument with a name that is\n reserved for a Prefect feature\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError
","text":" Bases: PrefectException
Raised when a script errors during evaluation while attempting to load data.
Source code inprefect/exceptions.py
class ScriptError(PrefectException):\n \"\"\"\n Raised when a script errors during evaluation while attempting to load data\n \"\"\"\n\n def __init__(\n self,\n user_exc: Exception,\n path: str,\n ) -> None:\n message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n super().__init__(message)\n self.user_exc = user_exc\n\n # Strip script run information from the traceback\n self.user_exc.__traceback__ = _trim_traceback(\n self.user_exc.__traceback__,\n remove_modules=[prefect.utilities.importtools],\n )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError
","text":" Bases: PrefectException
, TypeError
Raised when parameters passed to a function do not match its signature.
Source code inprefect/exceptions.py
class SignatureMismatchError(PrefectException, TypeError):\n \"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n msg = (\n f\"Function expects parameters {expected_params} but was provided with\"\n f\" parameters {provided_params}\"\n )\n return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal
","text":" Bases: ExternalSignal
Raised when a flow run receives a termination signal.
Source code inprefect/exceptions.py
class TerminationSignal(ExternalSignal):\n \"\"\"\n Raised when a flow run receives a termination signal.\n \"\"\"\n\n def __init__(self, signal: int):\n self.signal = signal\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun
","text":" Bases: PrefectException
Raised when the result from a run that is not finished is retrieved.
For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.
Source code inprefect/exceptions.py
class UnfinishedRun(PrefectException):\n \"\"\"\n Raised when the result from a run that is not finished is retrieved.\n\n For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError
","text":" Bases: PrefectException
Raised when multiple flows are found in the expected script and no name is given.
Source code inprefect/exceptions.py
class UnspecifiedFlowError(PrefectException):\n \"\"\"\n Raised when multiple flows are found in the expected script and no name is given.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError
","text":" Bases: PrefectException
Raised when a task relies on the result of another task but that task is not 'COMPLETE'.
Source code inprefect/exceptions.py
class UpstreamTaskError(PrefectException):\n \"\"\"\n Raised when a task relies on the result of another task but that task is not\n 'COMPLETE'\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback
","text":"Convert an exception to a printable string with a traceback
Source code inprefect/exceptions.py
def exception_traceback(exc: Exception) -> str:\n \"\"\"\n Convert an exception to a printable string with a traceback\n \"\"\"\n tb = traceback.TracebackException.from_exception(exc)\n return \"\".join(list(tb.format()))\n
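For example, to print a formatted traceback for a caught exception:
from prefect.exceptions import exception_traceback\n\ntry:\n 1 / 0\nexcept Exception as exc:\n print(exception_traceback(exc))\n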
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems
","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on Azure Datalake and Azure Blob Storage.
ExampleLoad stored Azure config:
from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class Azure(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on Azure Datalake and Azure Blob Storage.\n\n Example:\n Load stored Azure config:\n ```python\n from prefect.filesystems import Azure\n\n az_block = Azure.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Azure\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n bucket_path: str = Field(\n default=...,\n description=\"An Azure storage bucket path.\",\n example=\"my-bucket/a-directory-within\",\n )\n azure_storage_connection_string: Optional[SecretStr] = Field(\n default=None,\n title=\"Azure storage connection string\",\n description=(\n \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n ),\n )\n azure_storage_account_name: Optional[SecretStr] = Field(\n default=None,\n title=\"Azure storage account name\",\n description=(\n \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n ),\n )\n azure_storage_account_key: Optional[SecretStr] = Field(\n default=None,\n title=\"Azure storage account key\",\n description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n )\n azure_storage_tenant_id: Optional[SecretStr] = Field(\n None,\n title=\"Azure storage tenant ID\",\n description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n )\n azure_storage_client_id: Optional[SecretStr] = Field(\n None,\n title=\"Azure storage client ID\",\n description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n )\n azure_storage_client_secret: Optional[SecretStr] = Field(\n None,\n title=\"Azure storage client secret\",\n description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n )\n azure_storage_anon: bool = Field(\n default=True,\n title=\"Azure storage anonymous connection\",\n description=(\n \"Set the 'anon' flag for ADLFS. This should be False for systems that\"\n \" require ADLFS to use DefaultAzureCredentials.\"\n ),\n )\n azure_storage_container: Optional[SecretStr] = Field(\n default=None,\n title=\"Azure storage container\",\n description=(\n \"Blob Container in Azure Storage Account. If set the 'bucket_path' will\"\n \" be interpreted using the following URL format:\"\n \"'az://<container>@<storage_account>.dfs.core.windows.net/<bucket_path>'.\"\n ),\n )\n _remote_file_system: RemoteFileSystem = None\n\n @property\n def basepath(self) -> str:\n if self.azure_storage_container:\n return (\n f\"az://{self.azure_storage_container.get_secret_value()}\"\n f\"@{self.azure_storage_account_name.get_secret_value()}\"\n f\".dfs.core.windows.net/{self.bucket_path}\"\n )\n else:\n return f\"az://{self.bucket_path}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.azure_storage_connection_string:\n settings[\n \"connection_string\"\n ] = self.azure_storage_connection_string.get_secret_value()\n if self.azure_storage_account_name:\n settings[\n \"account_name\"\n ] = self.azure_storage_account_name.get_secret_value()\n if self.azure_storage_account_key:\n settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n if self.azure_storage_tenant_id:\n settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n if self.azure_storage_client_id:\n settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n if self.azure_storage_client_secret:\n settings[\n \"client_secret\"\n ] = self.azure_storage_client_secret.get_secret_value()\n settings[\"anon\"] = self.azure_storage_anon\n self._remote_file_system = RemoteFileSystem(\n basepath=self.basepath, settings=settings\n )\n return self._remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
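Beyond loading a saved block, an unsaved block can be configured inline; a minimal sketch (the bucket path and file name are hypothetical placeholders, adlfs must be installed, and credentials are discovered from the runtime environment):
from prefect.filesystems import Azure\n\naz_block = Azure(bucket_path=\"my-bucket/a-directory-within\")\naz_block.write_path(\"data/hello.txt\", b\"hello\")\nassert az_block.read_path(\"data/hello.txt\") == b\"hello\"\n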
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code inprefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code inprefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on Google Cloud Storage.
ExampleLoad stored GCS config:
from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class GCS(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on Google Cloud Storage.\n\n Example:\n Load stored GCS config:\n ```python\n from prefect.filesystems import GCS\n\n gcs_block = GCS.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/422d13bb838cf247eb2b2cf229ce6a2e717d601b-256x256.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n bucket_path: str = Field(\n default=...,\n description=\"A GCS bucket path.\",\n example=\"my-bucket/a-directory-within\",\n )\n service_account_info: Optional[SecretStr] = Field(\n default=None,\n description=\"The contents of a service account keyfile as a JSON string.\",\n )\n project: Optional[str] = Field(\n default=None,\n description=(\n \"The project the GCS bucket resides in. If not provided, the project will\"\n \" be inferred from the credentials or environment.\"\n ),\n )\n\n @property\n def basepath(self) -> str:\n return f\"gcs://{self.bucket_path}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.service_account_info:\n try:\n settings[\"token\"] = json.loads(\n self.service_account_info.get_secret_value()\n )\n except json.JSONDecodeError:\n raise ValueError(\n \"Unable to load provided service_account_info. Please make sure\"\n \" that the provided value is a valid JSON string.\"\n )\n remote_file_system = RemoteFileSystem(\n basepath=f\"gcs://{self.bucket_path}\", settings=settings\n )\n return remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
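A minimal sketch of uploading the current directory with an inline block (the bucket path is a hypothetical placeholder; gcsfs must be installed, and credentials come from the environment unless service_account_info is set):
from prefect.filesystems import GCS\n\ngcs_block = GCS(bucket_path=\"my-bucket/a-directory-within\")\ngcs_block.put_directory(local_path=\".\", to_path=\"flows\")\n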
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code inprefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code inprefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub
","text":" Bases: ReadableDeploymentStorage
Interact with files stored on GitHub repositories.
Source code inprefect/filesystems.py
class GitHub(ReadableDeploymentStorage):\n \"\"\"\n Interact with files stored on GitHub repositories.\n \"\"\"\n\n _block_type_name = \"GitHub\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n repository: str = Field(\n default=...,\n description=(\n \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n \" format.\"\n ),\n )\n reference: Optional[str] = Field(\n default=None,\n description=\"An optional reference to pin to; can be a branch name or tag.\",\n )\n access_token: Optional[SecretStr] = Field(\n name=\"Personal Access Token\",\n default=None,\n description=(\n \"A GitHub Personal Access Token (PAT) with repo scope.\"\n \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n ),\n )\n include_git_objects: bool = Field(\n default=True,\n description=(\n \"Whether to include git objects when copying the repo contents to a\"\n \" directory.\"\n ),\n )\n\n @validator(\"access_token\")\n def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\n\n Note: validates `access_token` specifically so that it only fires when\n private repositories are used.\n \"\"\"\n if v is not None:\n if urllib.parse.urlparse(values[\"repository\"]).scheme != \"https\":\n raise InvalidRepositoryURLError(\n \"Credentials can only be used with GitHub repositories \"\n \"using the 'HTTPS' format. You must either remove the \"\n \"credential if you wish to use the 'SSH' format and are not \"\n \"using a private repository, or you must change the repository \"\n \"URL to the 'HTTPS' format. \"\n )\n\n return v\n\n def _create_repo_url(self) -> str:\n \"\"\"Format the URL provided to the `git clone` command.\n\n For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n All other repos should be the same as `self.repository`.\n \"\"\"\n url_components = urllib.parse.urlparse(self.repository)\n if url_components.scheme == \"https\" and self.access_token is not None:\n updated_components = url_components._replace(\n netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n )\n full_url = urllib.parse.urlunparse(updated_components)\n else:\n full_url = self.repository\n\n return full_url\n\n @staticmethod\n def _get_paths(\n dst_dir: Union[str, None], src_dir: str, sub_directory: str\n ) -> Tuple[str, str]:\n \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n (content_source, content_destination).\n \"\"\"\n if dst_dir is None:\n content_destination = Path(\".\").absolute()\n else:\n content_destination = Path(dst_dir)\n\n content_source = Path(src_dir)\n\n if sub_directory:\n content_destination = content_destination.joinpath(sub_directory)\n content_source = content_source.joinpath(sub_directory)\n\n return str(content_source), str(content_destination)\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> None:\n \"\"\"\n Clones a GitHub project specified in `from_path` to the provided `local_path`;\n defaults to cloning the repository reference configured on the Block to the\n present working directory.\n\n Args:\n from_path: If provided, interpreted as a subdirectory of the underlying\n repository that will be copied to the provided local path.\n local_path: A local path to clone to; defaults to present working directory.\n \"\"\"\n # CONSTRUCT COMMAND\n cmd = [\"git\", \"clone\", self._create_repo_url()]\n if self.reference:\n cmd += [\"-b\", self.reference]\n\n # Limit git history\n cmd += [\"--depth\", \"1\"]\n\n # Clone to a temporary directory and move the subdirectory over\n with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n cmd.append(tmp_dir)\n\n err_stream = io.StringIO()\n out_stream = io.StringIO()\n process = await run_process(cmd, stream_output=(out_stream, err_stream))\n if process.returncode != 0:\n err_stream.seek(0)\n raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n content_source, content_destination = self._get_paths(\n dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n )\n\n ignore_func = None\n if not self.include_git_objects:\n ignore_func = ignore_patterns(\".git\")\n\n copytree(\n src=content_source,\n dst=content_destination,\n dirs_exist_ok=True,\n ignore=ignore_func,\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory
async
","text":"Clones a GitHub project specified in from_path
to the provided local_path
; defaults to cloning the repository reference configured on the Block to the present working directory.
Parameters:
Name Type Description Defaultfrom_path
Optional[str]
If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.
None
local_path
Optional[str]
A local path to clone to; defaults to present working directory.
None
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n \"\"\"\n Clones a GitHub project specified in `from_path` to the provided `local_path`;\n defaults to cloning the repository reference configured on the Block to the\n present working directory.\n\n Args:\n from_path: If provided, interpreted as a subdirectory of the underlying\n repository that will be copied to the provided local path.\n local_path: A local path to clone to; defaults to present working directory.\n \"\"\"\n # CONSTRUCT COMMAND\n cmd = [\"git\", \"clone\", self._create_repo_url()]\n if self.reference:\n cmd += [\"-b\", self.reference]\n\n # Limit git history\n cmd += [\"--depth\", \"1\"]\n\n # Clone to a temporary directory and move the subdirectory over\n with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n cmd.append(tmp_dir)\n\n err_stream = io.StringIO()\n out_stream = io.StringIO()\n process = await run_process(cmd, stream_output=(out_stream, err_stream))\n if process.returncode != 0:\n err_stream.seek(0)\n raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n content_source, content_destination = self._get_paths(\n dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n )\n\n ignore_func = None\n if not self.include_git_objects:\n ignore_func = ignore_patterns(\".git\")\n\n copytree(\n src=content_source,\n dst=content_destination,\n dirs_exist_ok=True,\n ignore=ignore_func,\n )\n
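A minimal sketch (the repository URL and destination are hypothetical placeholders):
from prefect.filesystems import GitHub\n\ngh_block = GitHub(repository=\"https://github.com/org/repo\", reference=\"main\")\ngh_block.get_directory(local_path=\"./repo-copy\") # shallow clone into ./repo-copy\n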
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on a local file system.
ExampleLoad stored local file system config:
from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on a local file system.\n\n Example:\n Load stored local file system config:\n ```python\n from prefect.filesystems import LocalFileSystem\n\n local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Local File System\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/ad39089fa66d273b943394a68f003f7a19aa850e-48x48.png\"\n _documentation_url = (\n \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n )\n\n basepath: Optional[str] = Field(\n default=None, description=\"Default local path for this block to write to.\"\n )\n\n @validator(\"basepath\", pre=True)\n def cast_pathlib(cls, value):\n if isinstance(value, Path):\n return str(value)\n return value\n\n def _resolve_path(self, path: str) -> Path:\n # Only resolve the base path at runtime, default to the current directory\n basepath = (\n Path(self.basepath).expanduser().resolve()\n if self.basepath\n else Path(\".\").resolve()\n )\n\n # Determine the path to access relative to the base path, ensuring that paths\n # outside of the base path are off limits\n if path is None:\n return basepath\n\n path: Path = Path(path).expanduser()\n\n if not path.is_absolute():\n path = basepath / path\n else:\n path = path.resolve()\n if basepath not in path.parents and (basepath != path):\n raise ValueError(\n f\"Provided path {path} is outside of the base path {basepath}.\"\n )\n\n return path\n\n @sync_compatible\n async def get_directory(\n self, from_path: str = None, local_path: str = None\n ) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if not from_path:\n from_path = Path(self.basepath).expanduser().resolve()\n else:\n from_path = self._resolve_path(from_path)\n\n if not local_path:\n local_path = Path(\".\").resolve()\n else:\n local_path = Path(local_path).resolve()\n\n if from_path == local_path:\n # If the paths are the same there is no need to copy\n # and we avoid shutil.copytree raising an error\n return\n\n # .prefectignore exists in the original location, not the current location which\n # is most likely temporary\n if (from_path / Path(\".prefectignore\")).exists():\n ignore_func = await self._get_ignore_func(\n local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n )\n else:\n ignore_func = None\n\n copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n async def _get_ignore_func(self, local_path: str, ignore_file: str):\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n def ignore_func(directory, files):\n relative_path = Path(directory).relative_to(local_path)\n\n files_to_ignore = [\n f for f in files if str(relative_path / f) not in included_files\n ]\n return files_to_ignore\n\n return ignore_func\n\n @sync_compatible\n async def put_directory(\n self, local_path: str = None, to_path: str = None, ignore_file: str = None\n ) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the current working directory to the block's basepath.\n An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n \"\"\"\n destination_path = self._resolve_path(to_path)\n\n if not local_path:\n local_path = Path(\".\").absolute()\n\n if ignore_file:\n ignore_func = await self._get_ignore_func(\n local_path=local_path, ignore_file=ignore_file\n )\n else:\n ignore_func = None\n\n if local_path == destination_path:\n pass\n else:\n copytree(\n src=local_path,\n dst=destination_path,\n ignore=ignore_func,\n dirs_exist_ok=True,\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n path: Path = self._resolve_path(path)\n\n # Check if the path exists\n if not path.exists():\n raise ValueError(f\"Path {path} does not exist.\")\n\n # Validate that it's a file\n if not path.is_file():\n raise ValueError(f\"Path {path} is not a file.\")\n\n async with await anyio.open_file(str(path), mode=\"rb\") as f:\n content = await f.read()\n\n return content\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n path: Path = self._resolve_path(path)\n\n # Construct the path if it does not exist\n path.parent.mkdir(exist_ok=True, parents=True)\n\n # Check if the file already exists\n if path.exists() and not path.is_file():\n raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n async with await anyio.open_file(path, mode=\"wb\") as f:\n await f.write(content)\n # Leave path stringify to the OS\n return str(path)\n
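A minimal sketch of writing and reading within the block's basepath (the paths are hypothetical placeholders):
from prefect.filesystems import LocalFileSystem\n\nfs_block = LocalFileSystem(basepath=\"/tmp/prefect-data\")\nfs_block.write_path(\"results/run-1.json\", b\"{}\")\nprint(fs_block.read_path(\"results/run-1.json\"))\n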
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory
async
","text":"Copies a directory from one place to another on the local filesystem.
Defaults to copying the entire contents of the block's basepath to the current working directory.
Source code inprefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: str = None, local_path: str = None\n) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if not from_path:\n from_path = Path(self.basepath).expanduser().resolve()\n else:\n from_path = self._resolve_path(from_path)\n\n if not local_path:\n local_path = Path(\".\").resolve()\n else:\n local_path = Path(local_path).resolve()\n\n if from_path == local_path:\n # If the paths are the same there is no need to copy\n # and we avoid shutil.copytree raising an error\n return\n\n # .prefectignore exists in the original location, not the current location which\n # is most likely temporary\n if (from_path / Path(\".prefectignore\")).exists():\n ignore_func = await self._get_ignore_func(\n local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n )\n else:\n ignore_func = None\n\n copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory
async
","text":"Copies a directory from one place to another on the local filesystem.
Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file
path may be provided that can include gitignore style expressions for filepaths to ignore.
prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the current working directory to the block's basepath.\n An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n \"\"\"\n destination_path = self._resolve_path(to_path)\n\n if not local_path:\n local_path = Path(\".\").absolute()\n\n if ignore_file:\n ignore_func = await self._get_ignore_func(\n local_path=local_path, ignore_file=ignore_file\n )\n else:\n ignore_func = None\n\n if local_path == destination_path:\n pass\n else:\n copytree(\n src=local_path,\n dst=destination_path,\n ignore=ignore_func,\n dirs_exist_ok=True,\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on a remote file system.
Supports any remote file system supported by fsspec
. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.
Load stored remote file system config:
from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on a remote file system.\n\n Supports any remote file system supported by `fsspec`. The file system is specified\n using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n Example:\n Load stored remote file system config:\n ```python\n from prefect.filesystems import RemoteFileSystem\n\n remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Remote File System\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/e86b41bc0f9c99ba9489abeee83433b43d5c9365-48x48.png\"\n _documentation_url = (\n \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n )\n\n basepath: str = Field(\n default=...,\n description=\"Default path for this block to write to.\",\n example=\"s3://my-bucket/my-folder/\",\n )\n settings: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Additional settings to pass through to fsspec.\",\n )\n\n # Cache for the configured fsspec file system used for access\n _filesystem: fsspec.AbstractFileSystem = None\n\n @validator(\"basepath\")\n def check_basepath(cls, value):\n scheme, netloc, _, _, _ = urllib.parse.urlsplit(value)\n\n if not scheme:\n raise ValueError(f\"Base path must start with a scheme. Got {value!r}.\")\n\n if not netloc:\n raise ValueError(\n f\"Base path must include a location after the scheme. Got {value!r}.\"\n )\n\n if scheme == \"file\":\n raise ValueError(\n \"Base path scheme cannot be 'file'. Use `LocalFileSystem` instead for\"\n \" local file access.\"\n )\n\n return value\n\n def _resolve_path(self, path: str) -> str:\n base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n self.basepath\n )\n scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n # Confirm that absolute paths are valid\n if scheme:\n if scheme != base_scheme:\n raise ValueError(\n f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n f\" the base path {base_scheme!r}.\"\n )\n\n if netloc:\n if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n raise ValueError(\n f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n )\n\n return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> None:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if from_path is None:\n from_path = str(self.basepath)\n else:\n from_path = self._resolve_path(from_path)\n\n if local_path is None:\n local_path = Path(\".\").absolute()\n\n # validate that from_path has a trailing slash for proper fsspec behavior across versions\n if not from_path.endswith(\"/\"):\n from_path += \"/\"\n\n return self.filesystem.get(from_path, local_path, recursive=True)\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n overwrite: bool = True,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n if to_path is None:\n to_path = str(self.basepath)\n else:\n to_path = self._resolve_path(to_path)\n\n if local_path is None:\n local_path = \".\"\n\n included_files = None\n if ignore_file:\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n\n included_files = filter_files(\n local_path, ignore_patterns, include_dirs=True\n )\n\n counter = 0\n for f in Path(local_path).rglob(\"*\"):\n relative_path = f.relative_to(local_path)\n if included_files and str(relative_path) not in included_files:\n continue\n\n if to_path.endswith(\"/\"):\n fpath = to_path + relative_path.as_posix()\n else:\n fpath = to_path + \"/\" + relative_path.as_posix()\n\n if f.is_dir():\n pass\n else:\n f = f.as_posix()\n if overwrite:\n self.filesystem.put_file(f, fpath, overwrite=True)\n else:\n self.filesystem.put_file(f, fpath)\n\n counter += 1\n\n return counter\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n path = self._resolve_path(path)\n\n with self.filesystem.open(path, \"rb\") as file:\n content = await run_sync_in_worker_thread(file.read)\n\n return content\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n path = self._resolve_path(path)\n dirpath = path[: path.rindex(\"/\")]\n\n self.filesystem.makedirs(dirpath, exist_ok=True)\n\n with self.filesystem.open(path, \"wb\") as file:\n await run_sync_in_worker_thread(file.write, content)\n return path\n\n @property\n def filesystem(self) -> fsspec.AbstractFileSystem:\n if not self._filesystem:\n scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n try:\n self._filesystem = fsspec.filesystem(scheme, **self.settings)\n except ImportError as exc:\n # The path is a remote file system that uses a lib that is not installed\n raise RuntimeError(\n f\"File system created with scheme {scheme!r} from base path \"\n f\"{self.basepath!r} could not be created. \"\n \"You are likely missing a Python module required to use the given \"\n \"storage protocol.\"\n ) from exc\n\n return self._filesystem\n
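A minimal sketch using an S3 basepath (a hypothetical placeholder; the matching fsspec implementation, here s3fs, must be installed):
from prefect.filesystems import RemoteFileSystem\n\nremote_block = RemoteFileSystem(basepath=\"s3://my-bucket/my-folder/\")\nremote_block.write_path(\"data.txt\", b\"hello\")\nprint(remote_block.read_path(\"data.txt\"))\n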
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code inprefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if from_path is None:\n from_path = str(self.basepath)\n else:\n from_path = self._resolve_path(from_path)\n\n if local_path is None:\n local_path = Path(\".\").absolute()\n\n # validate that from_path has a trailing slash for proper fsspec behavior across versions\n if not from_path.endswith(\"/\"):\n from_path += \"/\"\n\n return self.filesystem.get(from_path, local_path, recursive=True)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code inprefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n overwrite: bool = True,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n if to_path is None:\n to_path = str(self.basepath)\n else:\n to_path = self._resolve_path(to_path)\n\n if local_path is None:\n local_path = \".\"\n\n included_files = None\n if ignore_file:\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n\n included_files = filter_files(\n local_path, ignore_patterns, include_dirs=True\n )\n\n counter = 0\n for f in Path(local_path).rglob(\"*\"):\n relative_path = f.relative_to(local_path)\n if included_files and str(relative_path) not in included_files:\n continue\n\n if to_path.endswith(\"/\"):\n fpath = to_path + relative_path.as_posix()\n else:\n fpath = to_path + \"/\" + relative_path.as_posix()\n\n if f.is_dir():\n pass\n else:\n f = f.as_posix()\n if overwrite:\n self.filesystem.put_file(f, fpath, overwrite=True)\n else:\n self.filesystem.put_file(f, fpath)\n\n counter += 1\n\n return counter\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on AWS S3.
ExampleLoad stored S3 config:
from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class S3(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on AWS S3.\n\n Example:\n Load stored S3 config:\n ```python\n from prefect.filesystems import S3\n\n s3_block = S3.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"S3\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n bucket_path: str = Field(\n default=...,\n description=\"An S3 bucket path.\",\n example=\"my-bucket/a-directory-within\",\n )\n aws_access_key_id: Optional[SecretStr] = Field(\n default=None,\n title=\"AWS Access Key ID\",\n description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n example=\"AKIAIOSFODNN7EXAMPLE\",\n )\n aws_secret_access_key: Optional[SecretStr] = Field(\n default=None,\n title=\"AWS Secret Access Key\",\n description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n example=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n )\n\n _remote_file_system: RemoteFileSystem = None\n\n @property\n def basepath(self) -> str:\n return f\"s3://{self.bucket_path}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.aws_access_key_id:\n settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n if self.aws_secret_access_key:\n settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n self._remote_file_system = RemoteFileSystem(\n basepath=f\"s3://{self.bucket_path}\", settings=settings\n )\n return self._remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
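A minimal sketch of uploading the current directory (the bucket path is a hypothetical placeholder; credentials fall back to the AWS environment when not set on the block):
from prefect.filesystems import S3\n\ns3_block = S3(bucket_path=\"my-bucket/a-directory-within\")\ns3_block.put_directory(local_path=\".\", to_path=\"flows\", ignore_file=\".prefectignore\")\n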
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code inprefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code inprefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on an SMB share.
ExampleLoad stored SMB config:
from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class SMB(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on an SMB share.\n\n Example:\n Load stored SMB config:\n\n ```python\n from prefect.filesystems import SMB\n smb_block = SMB.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"SMB\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3f624663f7beb97d011d011bffd51ecf6c499efc-195x195.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n share_path: str = Field(\n default=...,\n description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n example=\"/SHARE/dir/subdir\",\n )\n smb_username: Optional[SecretStr] = Field(\n default=None,\n title=\"SMB Username\",\n description=\"Username with access to the target SMB SHARE.\",\n )\n smb_password: Optional[SecretStr] = Field(\n default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n )\n smb_host: str = Field(\n default=..., title=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n )\n smb_port: Optional[int] = Field(\n default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n )\n\n _remote_file_system: RemoteFileSystem = None\n\n @property\n def basepath(self) -> str:\n return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.smb_username:\n settings[\"username\"] = self.smb_username.get_secret_value()\n if self.smb_password:\n settings[\"password\"] = self.smb_password.get_secret_value()\n if self.smb_host:\n settings[\"host\"] = self.smb_host\n if self.smb_port:\n settings[\"port\"] = self.smb_port\n self._remote_file_system = RemoteFileSystem(\n basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n settings=settings,\n )\n return self._remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path,\n to_path=to_path,\n ignore_file=ignore_file,\n overwrite=False,\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
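A minimal sketch with inline configuration (host, share path, and credentials are hypothetical placeholders; an fsspec SMB implementation such as smbprotocol must be installed):
from prefect.filesystems import SMB\n\nsmb_block = SMB(\n share_path=\"/SHARE/dir/subdir\",\n smb_host=\"fileserver.example.internal\",\n smb_username=\"user\",\n smb_password=\"change-me\",\n)\nsmb_block.put_directory(local_path=\".\", to_path=\"flows\")\n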
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path,\n to_path=to_path,\n ignore_file=ignore_file,\n overwrite=False,\n )\n
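Assuming a block has already been saved as "BLOCK_NAME", a hedged sketch of round-tripping a directory through the share. Note that this put_directory implementation passes overwrite=False to the underlying filesystem, so existing remote files are left in place:

```python
from prefect.filesystems import SMB

smb_block = SMB.load("BLOCK_NAME")

# Upload the current working directory to the block's basepath,
# skipping anything matched by a (hypothetical) ignore file.
smb_block.put_directory(local_path=".", ignore_file=".prefectignore")

# Download the basepath's contents into a local directory.
smb_block.get_directory(local_path="./downloaded")
```

Both methods are decorated with @sync_compatible, so they can be called synchronously as shown or awaited inside async code.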
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flow_runs/","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs","title":"prefect.flow_runs
","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs.wait_for_flow_run","title":"wait_for_flow_run
async
","text":"Waits for the prefect flow run to finish and returns the FlowRun
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run ID for the flow run to wait for.
requiredtimeout
Optional[int]
The wait timeout in seconds. Defaults to 10800 (3 hours).
10800
poll_interval
int
The poll interval in seconds. Defaults to 5.
5
Returns:
Name Type DescriptionFlowRun
FlowRun
The finished flow run.
Raises:
Type DescriptionFlowRunWaitTimeout
If the flow run goes over the timeout.
Examples:
Create a flow run for a deployment and wait for it to finish:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
Trigger multiple flow runs and wait for them to finish:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\nif __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n
Source code in prefect/flow_runs.py
@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n
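The source above also accepts a log_states flag that does not appear in the parameter table. A sketch (the deployment ID is a placeholder) that logs each observed state while polling on a shorter timeout:

```python
import asyncio

from prefect import get_client
from prefect.flow_runs import wait_for_flow_run

async def main():
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id="my-deployment-id"  # placeholder
        )
        # Log each observed state; give up after 10 minutes, polling every 10s.
        flow_run = await wait_for_flow_run(
            flow_run_id=flow_run.id,
            timeout=600,
            poll_interval=10,
            log_states=True,
        )
        print(flow_run.state)

if __name__ == "__main__":
    asyncio.run(main())
```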
","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows
","text":"Module containing the base workflow class and decorator - for most use cases, using the @flow
decorator is preferred.
Flow
","text":" Bases: Generic[P, R]
A Prefect workflow definition.
Note
We recommend using the @flow
decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P
and R
for \"Parameters\" and \"Returns\" respectively.
Parameters:
Name Type Description Defaultfn
Callable[P, R]
The function defining the workflow.
requiredname
Optional[str]
An optional name for the flow; if not provided, the name will be inferred from the given function.
None
version
Optional[str]
An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
None
flow_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
None
task_runner
Union[Type[BaseTaskRunner], BaseTaskRunner]
An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner
will be used.
ConcurrentTaskRunner
description
str
An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
None
timeout_seconds
Union[int, float]
An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
None
validate_parameters
bool
By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int
and \"5\" is passed, it will be resolved to 5
. If set to False
, no validation will be performed on flow parameters.
True
retries
Optional[int]
An optional number of times to retry on flow run failure.
None
retry_delay_seconds
Optional[Union[int, float]]
An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero.
None
persist_result
Optional[bool]
An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None
, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
None
result_storage
Optional[ResultStorage]
An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
result_serializer
Optional[ResultSerializer]
An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER
will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
on_failure
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a failed state.
None
on_completion
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a completed state.
None
on_cancellation
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a cancelling state.
None
on_crashed
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a crashed state.
None
on_running
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a running state.
None
Source code in prefect/flows.py
@PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n \"\"\"\n A Prefect workflow definition.\n\n !!! note\n We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n \"Returns\" respectively.\n\n Args:\n fn: The function defining the workflow.\n name: An optional name for the flow; if not provided, the name will be inferred\n from the given function.\n version: An optional version string for the flow; if not provided, we will\n attempt to create a version string as a hash of the file containing the\n wrapped function; if the file cannot be located, the version will be null.\n flow_run_name: An optional name to distinguish runs of this flow; this name can\n be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n task_runner: An optional task runner to use for task execution within the flow;\n if not provided, a `ConcurrentTaskRunner` will be used.\n description: An optional string description for the flow; if not provided, the\n description will be pulled from the docstring for the decorated function.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the flow. If the flow exceeds this runtime, it will be marked as failed.\n Flow execution may continue until the next task is called.\n validate_parameters: By default, parameters passed to flows are validated by\n Pydantic. This will check that input values conform to the annotated types\n on the function. Where possible, values will be coerced into the correct\n type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n it will be resolved to `5`. If set to `False`, no validation will be\n performed on flow parameters.\n retries: An optional number of times to retry on flow run failure.\n retry_delay_seconds: An optional number of seconds to wait before retrying the\n flow after failure. This is only applicable if `retries` is nonzero.\n persist_result: An optional toggle indicating whether the result of this flow\n should be persisted to result storage. Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this flow.\n This value will be used as the default for any tasks in this flow.\n If not provided, the local file system will be used unless called as\n a subflow, at which point the default will be loaded from the parent flow.\n result_serializer: An optional serializer to use to serialize the result of this\n flow for persistence. This value will be used as the default for any tasks\n in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n will be used unless called as a subflow, at which point the default will be\n loaded from the parent flow.\n on_failure: An optional list of callables to run when the flow enters a failed state.\n on_completion: An optional list of callables to run when the flow enters a completed state.\n on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n on_crashed: An optional list of callables to run when the flow enters a crashed state.\n on_running: An optional list of callables to run when the flow enters a running state.\n \"\"\"\n\n # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n # exactly in the @flow decorator\n def __init__(\n self,\n fn: Callable[P, R],\n name: Optional[str] = None,\n version: Optional[str] = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[Union[int, float]] = None,\n task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n description: str = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = True,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_serializer: Optional[ResultSerializer] = None,\n cache_result_in_memory: bool = True,\n log_prints: Optional[bool] = None,\n on_completion: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n ):\n if name is not None and not isinstance(name, str):\n raise TypeError(\n \"Expected string for flow parameter 'name'; got {} instead. {}\".format(\n type(name).__name__,\n (\n \"Perhaps you meant to call it? e.g.\"\n \" '@flow(name=get_flow_run_name())'\"\n if callable(name)\n else \"\"\n ),\n )\n )\n\n # Validate if hook passed is list and contains callables\n hook_categories = [\n on_completion,\n on_failure,\n on_cancellation,\n on_crashed,\n on_running,\n ]\n hook_names = [\n \"on_completion\",\n \"on_failure\",\n \"on_cancellation\",\n \"on_crashed\",\n \"on_running\",\n ]\n for hooks, hook_name in zip(hook_categories, hook_names):\n if hooks is not None:\n if not hooks:\n raise ValueError(f\"Empty list passed for '{hook_name}'\")\n try:\n hooks = list(hooks)\n except TypeError:\n raise TypeError(\n f\"Expected iterable for '{hook_name}'; got\"\n f\" {type(hooks).__name__} instead. Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n for hook in hooks:\n if not callable(hook):\n raise TypeError(\n f\"Expected callables in '{hook_name}'; got\"\n f\" {type(hook).__name__} instead. 
Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n if not callable(fn):\n raise TypeError(\"'fn' must be callable\")\n\n # Validate name if given\n if name:\n raise_on_name_with_banned_characters(name)\n\n self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n if flow_run_name is not None:\n if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n raise TypeError(\n \"Expected string or callable for 'flow_run_name'; got\"\n f\" {type(flow_run_name).__name__} instead.\"\n )\n self.flow_run_name = flow_run_name\n\n task_runner = task_runner or ConcurrentTaskRunner()\n self.task_runner = (\n task_runner() if isinstance(task_runner, type) else task_runner\n )\n\n self.log_prints = log_prints\n\n self.description = description or inspect.getdoc(fn)\n update_wrapper(self, fn)\n self.fn = fn\n self.isasync = is_async_fn(self.fn)\n\n raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n # Version defaults to a hash of the function's file\n flow_file = inspect.getsourcefile(self.fn)\n if not version:\n try:\n version = file_hash(flow_file)\n except (FileNotFoundError, TypeError, OSError):\n pass # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n self.version = version\n\n self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n # FlowRunPolicy settings\n # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic bound checks to\n # validate that the user passes positive numbers here\n self.retries = (\n retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n )\n\n self.retry_delay_seconds = (\n retry_delay_seconds\n if retry_delay_seconds is not None\n else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n )\n\n self.parameters = parameter_schema(self.fn)\n self.should_validate_parameters = validate_parameters\n\n if self.should_validate_parameters:\n # Try to create the validated function now so that incompatibility can be\n # raised at declaration time rather than at runtime\n # We cannot, however, store the validated function on the flow because it\n # is not picklable in some environments\n try:\n ValidatedFunction(self.fn, config={\"arbitrary_types_allowed\": True})\n except pydantic.ConfigError as exc:\n raise ValueError(\n \"Flow function is not compatible with `validate_parameters`. \"\n \"Disable validation or change the argument names.\"\n ) from exc\n\n self.persist_result = persist_result\n self.result_storage = result_storage\n self.result_serializer = result_serializer\n self.cache_result_in_memory = cache_result_in_memory\n\n # Check for collision in the registry\n registry = PrefectObjectRegistry.get()\n\n if registry and any(\n other\n for other in registry.get_instances(Flow)\n if other.name == self.name and id(other.fn) != id(self.fn)\n ):\n file = inspect.getsourcefile(self.fn)\n line_number = inspect.getsourcelines(self.fn)[1]\n warnings.warn(\n f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n \"conflicts with another flow. 
Consider specifying a unique `name` \"\n \"parameter in the flow definition:\\n\\n \"\n \"`@flow(name='my_unique_name', ...)`\"\n )\n self.on_completion = on_completion\n self.on_failure = on_failure\n self.on_cancellation = on_cancellation\n self.on_crashed = on_crashed\n self.on_running = on_running\n\n # Used for flows loaded from remote storage\n self._storage: Optional[RunnerStorage] = None\n self._entrypoint: Optional[str] = None\n\n module = fn.__module__\n if module in (\"__main__\", \"__prefect_loader__\"):\n module_name = inspect.getfile(fn)\n module = module_name if module_name != \"__main__\" else module\n\n self._entrypoint = f\"{module}:{fn.__name__}\"\n\n def with_options(\n self,\n *,\n name: str = None,\n version: str = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[Union[int, float]] = None,\n description: str = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = None,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n cache_result_in_memory: bool = None,\n log_prints: Optional[bool] = NotSet,\n on_completion: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n ) -> Self:\n \"\"\"\n Create a new flow from the current object, updating provided options.\n\n Args:\n name: A new name for the flow.\n version: A new version for the flow.\n description: A new description for the flow.\n flow_run_name: An optional name to distinguish runs of this flow; this name\n can be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n task_runner: A new task runner for the flow.\n timeout_seconds: A new number of seconds to fail the flow after if still\n running.\n validate_parameters: A new value indicating if flow calls should validate\n given parameters.\n retries: A new number of times to retry on flow run failure.\n retry_delay_seconds: A new number of seconds to wait before retrying the\n flow after failure. 
This is only applicable if `retries` is nonzero.\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n cache_result_in_memory: A new value indicating if the flow's result should\n be cached in memory.\n on_failure: A new list of callables to run when the flow enters a failed state.\n on_completion: A new list of callables to run when the flow enters a completed state.\n on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n on_crashed: A new list of callables to run when the flow enters a crashed state.\n on_running: A new list of callables to run when the flow enters a running state.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n\n Create a new flow from an existing flow and update the name:\n\n >>> @flow(name=\"My flow\")\n >>> def my_flow():\n >>> return 1\n >>>\n >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n Create a new flow from an existing flow, update the task runner, and call\n it without an intermediate variable:\n\n >>> from prefect.task_runners import SequentialTaskRunner\n >>>\n >>> @flow\n >>> def my_flow(x, y):\n >>> return x + y\n >>>\n >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n >>> assert state.result() == 4\n\n \"\"\"\n new_flow = Flow(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n flow_run_name=flow_run_name or self.flow_run_name,\n version=version or self.version,\n task_runner=task_runner or self.task_runner,\n retries=retries if retries is not None else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not None\n else self.retry_delay_seconds\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n validate_parameters=(\n validate_parameters\n if validate_parameters is not None\n else self.should_validate_parameters\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n on_cancellation=on_cancellation or self.on_cancellation,\n on_crashed=on_crashed or self.on_crashed,\n on_running=on_running or self.on_running,\n )\n new_flow._storage = self._storage\n new_flow._entrypoint = self._entrypoint\n return new_flow\n\n def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n associated types specified by the function's type annotations.\n\n Returns:\n A new dict of parameters that have been cast to the appropriate types\n\n Raises:\n ParameterTypeError: if the provided parameters are not valid\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n if HAS_PYDANTIC_V2:\n has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n isinstance(o, V1BaseModel) for o in kwargs.values()\n )\n has_v2_types = 
any(is_v2_type(o) for o in args) or any(\n is_v2_type(o) for o in kwargs.values()\n )\n\n if has_v1_models and has_v2_types:\n raise ParameterTypeError(\n \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n )\n\n if has_v1_models:\n validated_fn = V1ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n else:\n validated_fn = V2ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n else:\n validated_fn = ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n try:\n model = validated_fn.init_model_instance(*args, **kwargs)\n except pydantic.ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n except V2ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n\n # Get the updated parameter dict with cast values from the model\n cast_parameters = {\n k: v\n for k, v in model._iter()\n if k in model.__fields_set__ or model.__fields__[k].default_factory\n }\n return cast_parameters\n\n def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert parameters to a serializable form.\n\n Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n converting everything directly to a string. This maintains basic types like\n integers during API roundtrips.\n \"\"\"\n serialized_parameters = {}\n for key, value in parameters.items():\n try:\n serialized_parameters[key] = jsonable_encoder(value)\n except (TypeError, ValueError):\n logger.debug(\n f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n f\"type {type(value).__name__!r} and will not be stored \"\n \"in the backend.\"\n )\n serialized_parameters[key] = f\"<{type(value).__name__}>\"\n return serialized_parameters\n\n @sync_compatible\n @deprecated_parameter(\n \"schedule\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `schedules` instead.\",\n )\n @deprecated_parameter(\n \"is_schedule_active\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `paused` instead.\",\n )\n async def to_deployment(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Creates a runner deployment object for this flow.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the new 
deployment. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this deployment.\n rrule: An rrule schedule of when to execute runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n triggers: A list of triggers that will kick off runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n from prefect.deployments.runner import RunnerDeployment\n\n if not name.endswith(\".py\"):\n raise_on_name_with_banned_characters(name)\n if self._storage and self._entrypoint:\n return await RunnerDeployment.from_storage(\n storage=self._storage,\n entrypoint=self._entrypoint,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n else:\n return RunnerDeployment.from_flow(\n self,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n @sync_compatible\n async def serve(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n webserver: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ):\n \"\"\"\n Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options such as `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n If False, the schedules will continue running.\n print_starting_message: Whether or not to print the starting message when flow is served.\n limit: The maximum number of runs that can be executed concurrently.\n webserver: Whether or not to start a monitoring webserver for this flow.\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Serve a flow:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\")\n ```\n\n Serve a flow and run it every hour:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\", interval=3600)\n ```\n \"\"\"\n from prefect.runner import Runner\n\n # Handling for my_flow.serve(__file__)\n # Will set name to name of file where my_flow.serve() without the extension\n # Non filepath strings will pass through unchanged\n name = Path(name).stem\n\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n deployment_id = await runner.add_flow(\n self,\n name=name,\n triggers=triggers,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n if print_starting_message:\n help_message = (\n f\"[green]Your flow {self.name!r} is being served and polling for\"\n \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{self.name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n console = Console()\n console.print(help_message, soft_wrap=True)\n await runner.start(webserver=webserver)\n\n @classmethod\n @sync_compatible\n async def from_source(\n cls: Type[F],\n source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n entrypoint: str,\n ) -> F:\n \"\"\"\n Loads a flow from a remote source.\n\n Args:\n source: Either a URL to a git repository or a storage object.\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n Load a flow from a public git repository:\n\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n\n Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"access_token\": Secret.load(\"github-access-token\")}\n ),\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n \"\"\"\n if isinstance(source, str):\n storage = create_storage_from_url(source)\n elif isinstance(source, RunnerStorage):\n storage = source\n elif hasattr(source, \"get_directory\"):\n storage = BlockStorageAdapter(source)\n else:\n raise TypeError(\n f\"Unsupported source type {type(source).__name__!r}. 
Please provide a\"\n \" URL to remote storage or a storage object.\"\n )\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n flow._storage = storage\n flow._entrypoint = entrypoint\n\n return flow\n\n @sync_compatible\n async def deploy(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[dict] = None,\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n print_next_steps: bool = True,\n ) -> UUID:\n \"\"\"\n Deploys a flow to run on dynamic infrastructure via a work pool.\n\n By default, calling this method will build a Docker image for the flow, push it to a registry,\n and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n name: The name to give the created deployment.\n work_pool_name: The name of the work pool to use for this deployment. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n The ID of the created/updated deployment.\n\n Examples:\n Deploy a local flow to a work pool:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n\n Deploy a remotely stored flow to a work pool:\n\n ```python\n from prefect import flow\n\n if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n deployment = await self.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n paused=paused,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n deployment_ids = await deploy(\n deployment,\n work_pool_name=work_pool_name,\n image=image,\n build=build,\n push=push,\n print_next_steps_message=False,\n )\n\n if print_next_steps:\n console = Console()\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from this deployment, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo schedule a run for this deployment, use the following command:\"\n )\n console.print(\n f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n style=\"blue\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n )\n\n return deployment_ids[0]\n\n @overload\n def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n ) -> Awaitable[T]:\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> T:\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def __call__(\n self,\n *args: \"P.args\",\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: \"P.kwargs\",\n ):\n \"\"\"\n Run the flow and return its result.\n\n\n Flow parameter values must be serializable by Pydantic.\n\n If writing an async flow, this call must be awaited.\n\n This will create a new flow run in the API.\n\n Args:\n *args: Arguments to run the flow with.\n return_state: Return a Prefect State containing the result of the\n flow run.\n wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n **kwargs: Keyword arguments to run the flow with.\n\n Returns:\n If `return_state` is False, returns the result of the flow run.\n If `return_state` is True, returns the result of the flow run\n wrapped in a Prefect State which provides error handling.\n\n Examples:\n\n Define a flow\n\n >>> @flow\n >>> def my_flow(name):\n >>> print(f\"hello {name}\")\n >>> return f\"goodbye {name}\"\n\n Run a flow\n\n >>> my_flow(\"marvin\")\n hello marvin\n \"goodbye marvin\"\n\n Run a flow with additional tags\n\n >>> from prefect import tags\n >>> with tags(\"db\", \"blue\"):\n >>> my_flow(\"foo\")\n \"\"\"\n from prefect.engine import enter_flow_run_engine_from_flow_call\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return_type = 
\"state\" if return_state else \"result\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n # this is a subflow, for now return a single task and do not go further\n # we can add support for exploring subflows for tasks in the future.\n return track_viz_task(self.isasync, self.name, parameters)\n\n return enter_flow_run_engine_from_flow_call(\n self,\n parameters,\n wait_for=wait_for,\n return_type=return_type,\n )\n\n @overload\n def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def _run(\n self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n ) -> Awaitable[T]:\n ...\n\n @overload\n def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n ...\n\n def _run(\n self,\n *args: \"P.args\",\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: \"P.kwargs\",\n ):\n \"\"\"\n Run the flow and return its final state.\n\n Examples:\n\n Run a flow and get the returned result\n\n >>> state = my_flow._run(\"marvin\")\n >>> state.result()\n \"goodbye marvin\"\n \"\"\"\n from prefect.engine import enter_flow_run_engine_from_flow_call\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return enter_flow_run_engine_from_flow_call(\n self,\n parameters,\n wait_for=wait_for,\n return_type=\"state\",\n )\n\n @sync_compatible\n async def visualize(self, *args, **kwargs):\n \"\"\"\n Generates a graphviz object representing the current flow. In IPython notebooks,\n it's rendered inline, otherwise in a new window as a PNG.\n\n Raises:\n - ImportError: If `graphviz` isn't installed.\n - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n - FlowVisualizationError: If the flow can't be visualized for any other reason.\n \"\"\"\n if not PREFECT_UNIT_TEST_MODE:\n warnings.warn(\n \"`flow.visualize()` will execute code inside of your flow that is not\"\n \" decorated with `@task` or `@flow`.\"\n )\n\n try:\n with TaskVizTracker() as tracker:\n if self.isasync:\n await self.fn(*args, **kwargs)\n else:\n self.fn(*args, **kwargs)\n\n graph = build_task_dependencies(tracker)\n\n visualize_task_dependencies(graph, self.name)\n\n except GraphvizImportError:\n raise\n except GraphvizExecutableNotFoundError:\n raise\n except VisualizationUnsupportedError:\n raise\n except FlowVisualizationError:\n raise\n except Exception as e:\n msg = (\n \"It's possible you are trying to visualize a flow that contains \"\n \"code that directly interacts with the result of a task\"\n \" inside of the flow. \\nTry passing a `viz_return_value` \"\n \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n )\n\n new_exception = type(e)(str(e) + \"\\n\" + msg)\n # Copy traceback information from the original exception\n new_exception.__traceback__ = e.__traceback__\n raise new_exception\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.deploy","title":"deploy
async
","text":"Deploys a flow to run on dynamic infrastructure via a work pool.
By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.
If you want to use an existing image, you can pass build=False
to skip building and pushing an image.
Parameters:
Name Type Description Defaultname
str
The name to give the created deployment.
requiredwork_pool_name
Optional[str]
The name of the work pool to use for this deployment. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME
.
None
image
Optional[Union[str, DeploymentImage]]
The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
None
build
bool
Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
True
push
bool
Whether or not to push the built image to a registry.
True
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
None
job_variables
Optional[dict]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
interval
Optional[Union[int, float, timedelta]]
An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
None
cron
Optional[str]
A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
None
rrule
Optional[str]
An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that will kick off runs of this deployment.
None
paused
Optional[bool]
Whether or not to set this deployment as paused.
None
schedules
Optional[List[MinimalDeploymentSchedule]]
A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone
.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options like timezone
.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this deployment.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for the created deployment.
False
entrypoint_type
EntrypointType
Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
FILE_PATH
print_next_steps
bool
Whether or not to print a message with next steps after deploying the deployment.
True
Type DescriptionUUID
The ID of the created/updated deployment.
Examples:
Deploy a local flow to a work pool:
from prefect import flow\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n
Deploy a remotely stored flow to a work pool:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n
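Complementing the examples above, a hedged sketch of reusing an existing image by passing build=False, combined with a cron schedule; the image, pool, and schedule values are placeholders:

```python
from prefect import flow

@flow
def my_flow(name):
    print(f"hello {name}")

if __name__ == "__main__":
    my_flow.deploy(
        "example-deployment",
        work_pool_name="my-work-pool",
        image="my-repository/my-image:existing-tag",  # used as-is, pulled at runtime
        build=False,       # skip building a new image
        push=False,        # nothing was built, so nothing to push
        cron="0 6 * * *",  # run daily at 06:00
    )
```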
Source code in prefect/flows.py
@sync_compatible\nasync def deploy(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[dict] = None,\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n print_next_steps: bool = True,\n) -> UUID:\n \"\"\"\n Deploys a flow to run on dynamic infrastructure via a work pool.\n\n By default, calling this method will build a Docker image for the flow, push it to a registry,\n and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n name: The name to give the created deployment.\n work_pool_name: The name of the work pool to use for this deployment. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. 
If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n The ID of the created/updated deployment.\n\n Examples:\n Deploy a local flow to a work pool:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n\n Deploy a remotely stored flow to a work pool:\n\n ```python\n from prefect import flow\n\n if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n deployment = await self.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n paused=paused,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n deployment_ids = await deploy(\n deployment,\n work_pool_name=work_pool_name,\n image=image,\n build=build,\n push=push,\n print_next_steps_message=False,\n )\n\n if print_next_steps:\n console = Console()\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from this deployment, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo schedule a run for this deployment, use the following command:\"\n )\n console.print(\n f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n style=\"blue\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n )\n\n return deployment_ids[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.from_source","title":"from_source
async
classmethod
","text":"Loads a flow from a remote source.
Parameters:

- source (Union[str, RunnerStorage, ReadableDeploymentStorage], required): Either a URL to a git repository or a storage object.
- entrypoint (str, required): The path to a file containing a flow and the name of the flow function, in the format ./path/to/file.py:flow_func_name.
Returns:

- F: A new Flow instance.
Examples:
Load a flow from a public git repository:
from prefect import flow

my_flow = flow.from_source(
    source="https://github.com/org/repo.git",
    entrypoint="flows.py:my_flow",
)

my_flow()
Load a flow from a private git repository using an access token stored in a Secret block:
from prefect import flow
from prefect.runner.storage import GitRepository
from prefect.blocks.system import Secret

my_flow = flow.from_source(
    source=GitRepository(
        url="https://github.com/org/repo.git",
        credentials={"access_token": Secret.load("github-access-token")}
    ),
    entrypoint="flows.py:my_flow",
)

my_flow()
Source code in prefect/flows.py
@classmethod\n@sync_compatible\nasync def from_source(\n cls: Type[F],\n source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n entrypoint: str,\n) -> F:\n \"\"\"\n Loads a flow from a remote source.\n\n Args:\n source: Either a URL to a git repository or a storage object.\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n Load a flow from a public git repository:\n\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n\n Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"access_token\": Secret.load(\"github-access-token\")}\n ),\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n \"\"\"\n if isinstance(source, str):\n storage = create_storage_from_url(source)\n elif isinstance(source, RunnerStorage):\n storage = source\n elif hasattr(source, \"get_directory\"):\n storage = BlockStorageAdapter(source)\n else:\n raise TypeError(\n f\"Unsupported source type {type(source).__name__!r}. Please provide a\"\n \" URL to remote storage or a storage object.\"\n )\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n flow._storage = storage\n flow._entrypoint = entrypoint\n\n return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters
","text":"Convert parameters to a serializable form.
Uses FastAPI's jsonable_encoder to convert to JSON-compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.
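As a minimal sketch of that behavior (the flow and parameter values are illustrative): basic types survive unchanged, while rich types such as datetimes are converted to JSON-compatible forms.

from datetime import datetime

from prefect import flow

@flow
def my_flow(count: int, when: datetime):
    ...

# The integer passes through intact; the datetime becomes an ISO 8601 string
serialized = my_flow.serialize_parameters({"count": 5, "when": datetime(2024, 1, 1)})
# e.g. {"count": 5, "when": "2024-01-01T00:00:00"}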
Source code in prefect/flows.py
def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert parameters to a serializable form.\n\n Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n converting everything directly to a string. This maintains basic types like\n integers during API roundtrips.\n \"\"\"\n serialized_parameters = {}\n for key, value in parameters.items():\n try:\n serialized_parameters[key] = jsonable_encoder(value)\n except (TypeError, ValueError):\n logger.debug(\n f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n f\"type {type(value).__name__!r} and will not be stored \"\n \"in the backend.\"\n )\n serialized_parameters[key] = f\"<{type(value).__name__}>\"\n return serialized_parameters\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serve","title":"serve
async
","text":"Creates a deployment for this flow and starts a runner to monitor for scheduled work.
Parameters:

- name (str, required): The name to give the created deployment.
- interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default None): An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
- cron (Optional[Union[Iterable[str], str]], default None): A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
- rrule (Optional[Union[Iterable[str], str]], default None): An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
- triggers (Optional[List[DeploymentTrigger]], default None): A list of triggers that will kick off runs of this deployment.
- paused (Optional[bool], default None): Whether or not to set this deployment as paused.
- schedules (Optional[List[FlexibleScheduleList]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
- schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options such as timezone.
- is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
- parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
- description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
- tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
- version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
- enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
- pause_on_shutdown (bool, default True): If True, the provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running.
- print_starting_message (bool, default True): Whether or not to print the starting message when the flow is served.
- limit (Optional[int], default None): The maximum number of runs that can be executed concurrently.
- webserver (bool, default False): Whether or not to start a monitoring webserver for this flow.
- entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
Examples:
Serve a flow:
from prefect import flow

@flow
def my_flow(name):
    print(f"hello {name}")

if __name__ == "__main__":
    my_flow.serve("example-deployment")
Serve a flow and run it every hour:
from prefect import flow

@flow
def my_flow(name):
    print(f"hello {name}")

if __name__ == "__main__":
    my_flow.serve("example-deployment", interval=3600)
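Building on the parameters above, a sketch that combines a cron schedule with a concurrency limit (the schedule and names are illustrative):

from prefect import flow

@flow
def my_flow(name: str = "world"):
    print(f"hello {name}")

if __name__ == "__main__":
    # Run every day at 9am, allowing at most two concurrent runs
    my_flow.serve("example-deployment", cron="0 9 * * *", limit=2)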
Source code in prefect/flows.py
@sync_compatible\nasync def serve(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n webserver: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n):\n \"\"\"\n Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options such as `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n If False, the schedules will continue running.\n print_starting_message: Whether or not to print the starting message when flow is served.\n limit: The maximum number of runs that can be executed concurrently.\n webserver: Whether or not to start a monitoring webserver for this flow.\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Serve a flow:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\")\n ```\n\n Serve a flow and run it every hour:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\", interval=3600)\n ```\n \"\"\"\n from prefect.runner import Runner\n\n # Handling for my_flow.serve(__file__)\n # Will set name to name of file where my_flow.serve() without the extension\n # Non filepath strings will pass through unchanged\n name = Path(name).stem\n\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n deployment_id = await runner.add_flow(\n self,\n name=name,\n triggers=triggers,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n if print_starting_message:\n help_message = (\n f\"[green]Your flow {self.name!r} is being served and polling for\"\n \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{self.name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n console = Console()\n console.print(help_message, soft_wrap=True)\n await runner.start(webserver=webserver)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.to_deployment","title":"to_deployment
async
","text":"Creates a runner deployment object for this flow.
Parameters:

- name (str, required): The name to give the created deployment.
- interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default None): An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
- cron (Optional[Union[Iterable[str], str]], default None): A cron schedule of when to execute runs of this deployment.
- rrule (Optional[Union[Iterable[str], str]], default None): An rrule schedule of when to execute runs of this deployment.
- paused (Optional[bool], default None): Whether or not to set this deployment as paused.
- schedules (Optional[List[FlexibleScheduleList]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.
- schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment.
- is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
- parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
- triggers (Optional[List[DeploymentTrigger]], default None): A list of triggers that will kick off runs of this deployment.
- description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
- tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
- version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
- enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
- work_pool_name (Optional[str], default None): The name of the work pool to use for this deployment.
- work_queue_name (Optional[str], default None): The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
- job_variables (Optional[Dict[str, Any]], default None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
- entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
Examples:
Prepare two deployments and serve them:
from prefect import flow, serve

@flow
def my_flow(name):
    print(f"hello {name}")

@flow
def my_other_flow(name):
    print(f"goodbye {name}")

if __name__ == "__main__":
    hello_deploy = my_flow.to_deployment("hello", tags=["dev"])
    bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"])
    serve(hello_deploy, bye_deploy)
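A variation on the example above that gives each deployment its own schedule (intervals are in seconds; the values are illustrative):

from prefect import flow, serve

@flow
def my_flow(name: str = "world"):
    print(f"hello {name}")

if __name__ == "__main__":
    # One deployment runs every ten minutes, the other on a daily cron schedule
    frequent = my_flow.to_deployment("every-ten-minutes", interval=600)
    daily = my_flow.to_deployment("daily", cron="0 9 * * *")
    serve(frequent, daily)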
Source code in prefect/flows.py
@sync_compatible\n@deprecated_parameter(\n \"schedule\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `schedules` instead.\",\n)\n@deprecated_parameter(\n \"is_schedule_active\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `paused` instead.\",\n)\nasync def to_deployment(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n \"\"\"\n Creates a runner deployment object for this flow.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the new deployment. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this deployment.\n rrule: An rrule schedule of when to execute runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n triggers: A list of triggers that will kick off runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n from prefect.deployments.runner import RunnerDeployment\n\n if not name.endswith(\".py\"):\n raise_on_name_with_banned_characters(name)\n if self._storage and self._entrypoint:\n return await RunnerDeployment.from_storage(\n storage=self._storage,\n entrypoint=self._entrypoint,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n else:\n return RunnerDeployment.from_flow(\n self,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters
","text":"Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.
Returns:

- Dict[str, Any]: A new dict of parameters that have been cast to the appropriate types.

Raises:

- ParameterTypeError: If the provided parameters are not valid.
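A minimal sketch of the coercion described above, assuming Pydantic can cast the given values:

from prefect import flow

@flow
def add(x: int, y: int = 1):
    return x + y

# "5" is coerced to the annotated int type
assert add.validate_parameters({"x": "5"}) == {"x": 5}

# A value that cannot be cast raises ParameterTypeError:
# add.validate_parameters({"x": "not-a-number"})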
Source code in prefect/flows.py
def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n associated types specified by the function's type annotations.\n\n Returns:\n A new dict of parameters that have been cast to the appropriate types\n\n Raises:\n ParameterTypeError: if the provided parameters are not valid\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n if HAS_PYDANTIC_V2:\n has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n isinstance(o, V1BaseModel) for o in kwargs.values()\n )\n has_v2_types = any(is_v2_type(o) for o in args) or any(\n is_v2_type(o) for o in kwargs.values()\n )\n\n if has_v1_models and has_v2_types:\n raise ParameterTypeError(\n \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n )\n\n if has_v1_models:\n validated_fn = V1ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n else:\n validated_fn = V2ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n else:\n validated_fn = ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n try:\n model = validated_fn.init_model_instance(*args, **kwargs)\n except pydantic.ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n except V2ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n\n # Get the updated parameter dict with cast values from the model\n cast_parameters = {\n k: v\n for k, v in model._iter()\n if k in model.__fields_set__ or model.__fields__[k].default_factory\n }\n return cast_parameters\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.visualize","title":"visualize
async
","text":"Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.
Raises:

- ImportError: If graphviz isn't installed.
- GraphvizExecutableNotFoundError: If the dot executable isn't found.
- FlowVisualizationError: If the flow can't be visualized for any other reason.
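The source below suggests viz_return_value for tasks whose results are consumed directly inside the flow; a minimal sketch under that assumption (requires the graphviz package and the dot executable):

from prefect import flow, task

@task(viz_return_value=[1, 2, 3])
def get_data():
    return [1, 2, 3]

@task
def summarize(data):
    return len(data)

@flow
def my_flow():
    summarize(get_data())

if __name__ == "__main__":
    # Rendered inline in IPython notebooks, otherwise opened as a PNG
    my_flow.visualize()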
Source code in prefect/flows.py
@sync_compatible\nasync def visualize(self, *args, **kwargs):\n \"\"\"\n Generates a graphviz object representing the current flow. In IPython notebooks,\n it's rendered inline, otherwise in a new window as a PNG.\n\n Raises:\n - ImportError: If `graphviz` isn't installed.\n - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n - FlowVisualizationError: If the flow can't be visualized for any other reason.\n \"\"\"\n if not PREFECT_UNIT_TEST_MODE:\n warnings.warn(\n \"`flow.visualize()` will execute code inside of your flow that is not\"\n \" decorated with `@task` or `@flow`.\"\n )\n\n try:\n with TaskVizTracker() as tracker:\n if self.isasync:\n await self.fn(*args, **kwargs)\n else:\n self.fn(*args, **kwargs)\n\n graph = build_task_dependencies(tracker)\n\n visualize_task_dependencies(graph, self.name)\n\n except GraphvizImportError:\n raise\n except GraphvizExecutableNotFoundError:\n raise\n except VisualizationUnsupportedError:\n raise\n except FlowVisualizationError:\n raise\n except Exception as e:\n msg = (\n \"It's possible you are trying to visualize a flow that contains \"\n \"code that directly interacts with the result of a task\"\n \" inside of the flow. \\nTry passing a `viz_return_value` \"\n \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n )\n\n new_exception = type(e)(str(e) + \"\\n\" + msg)\n # Copy traceback information from the original exception\n new_exception.__traceback__ = e.__traceback__\n raise new_exception\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options
","text":"Create a new flow from the current object, updating provided options.
Parameters:

- name (str, default None): A new name for the flow.
- version (str, default None): A new version for the flow.
- description (str, default None): A new description for the flow.
- flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
- task_runner (Union[Type[BaseTaskRunner], BaseTaskRunner], default None): A new task runner for the flow.
- timeout_seconds (Union[int, float], default None): A new number of seconds after which to fail the flow if it is still running.
- validate_parameters (bool, default None): A new value indicating if flow calls should validate given parameters.
- retries (Optional[int], default None): A new number of times to retry on flow run failure.
- retry_delay_seconds (Optional[Union[int, float]], default None): A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
- persist_result (Optional[bool], default NotSet): A new option for enabling or disabling result persistence.
- result_storage (Optional[ResultStorage], default NotSet): A new storage type to use for results.
- result_serializer (Optional[ResultSerializer], default NotSet): A new serializer to use for results.
- cache_result_in_memory (bool, default None): A new value indicating if the flow's result should be cached in memory.
- on_failure (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a failed state.
- on_completion (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a completed state.
- on_cancellation (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a cancelling state.
- on_crashed (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a crashed state.
- on_running (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a running state.

Returns:

- Self: A new Flow instance.
Create a new flow from an existing flow and update the name:

>>> @flow(name="My flow")
>>> def my_flow():
>>>     return 1
>>>
>>> new_flow = my_flow.with_options(name="My new flow")

Create a new flow from an existing flow, update the task runner, and call
it without an intermediate variable:

>>> from prefect.task_runners import SequentialTaskRunner
>>>
>>> @flow
>>> def my_flow(x, y):
>>>     return x + y
>>>
>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)
>>> assert state.result() == 4
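One more illustrative sketch: deriving a hardened variant of a flow without modifying the original (the option values are arbitrary):

from prefect import flow

@flow
def fetch_data():
    ...

# Only the copy carries the new options; fetch_data itself is unchanged
resilient_fetch = fetch_data.with_options(
    retries=3,
    retry_delay_seconds=10,
    timeout_seconds=60,
)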
Source code in prefect/flows.py
def with_options(\n self,\n *,\n name: str = None,\n version: str = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[Union[int, float]] = None,\n description: str = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = None,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n cache_result_in_memory: bool = None,\n log_prints: Optional[bool] = NotSet,\n on_completion: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n) -> Self:\n \"\"\"\n Create a new flow from the current object, updating provided options.\n\n Args:\n name: A new name for the flow.\n version: A new version for the flow.\n description: A new description for the flow.\n flow_run_name: An optional name to distinguish runs of this flow; this name\n can be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n task_runner: A new task runner for the flow.\n timeout_seconds: A new number of seconds to fail the flow after if still\n running.\n validate_parameters: A new value indicating if flow calls should validate\n given parameters.\n retries: A new number of times to retry on flow run failure.\n retry_delay_seconds: A new number of seconds to wait before retrying the\n flow after failure. 
This is only applicable if `retries` is nonzero.\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n cache_result_in_memory: A new value indicating if the flow's result should\n be cached in memory.\n on_failure: A new list of callables to run when the flow enters a failed state.\n on_completion: A new list of callables to run when the flow enters a completed state.\n on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n on_crashed: A new list of callables to run when the flow enters a crashed state.\n on_running: A new list of callables to run when the flow enters a running state.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n\n Create a new flow from an existing flow and update the name:\n\n >>> @flow(name=\"My flow\")\n >>> def my_flow():\n >>> return 1\n >>>\n >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n Create a new flow from an existing flow, update the task runner, and call\n it without an intermediate variable:\n\n >>> from prefect.task_runners import SequentialTaskRunner\n >>>\n >>> @flow\n >>> def my_flow(x, y):\n >>> return x + y\n >>>\n >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n >>> assert state.result() == 4\n\n \"\"\"\n new_flow = Flow(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n flow_run_name=flow_run_name or self.flow_run_name,\n version=version or self.version,\n task_runner=task_runner or self.task_runner,\n retries=retries if retries is not None else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not None\n else self.retry_delay_seconds\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n validate_parameters=(\n validate_parameters\n if validate_parameters is not None\n else self.should_validate_parameters\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n on_cancellation=on_cancellation or self.on_cancellation,\n on_crashed=on_crashed or self.on_crashed,\n on_running=on_running or self.on_running,\n )\n new_flow._storage = self._storage\n new_flow._entrypoint = self._entrypoint\n return new_flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow
","text":"Decorator to designate a function as a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Flow parameters must be serializable by Pydantic.
Parameters:

- name (Optional[str], default None): An optional name for the flow; if not provided, the name will be inferred from the given function.
- version (Optional[str], default None): An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
- flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
- retries (int, default None): An optional number of times to retry on flow run failure.
- retry_delay_seconds (Union[int, float], default None): An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
- task_runner (BaseTaskRunner, default ConcurrentTaskRunner): An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be instantiated.
- description (str, default None): An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
- timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
- validate_parameters (bool, default True): By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and "5" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
- persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
- result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
- result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
- cache_result_in_memory (bool, default True): An optional toggle indicating whether the cached result of running the flow should be stored in memory. Defaults to True.
- log_prints (Optional[bool], default None): If set, print statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS setting.
- on_completion (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
- on_failure (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
- on_cancellation (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.
- on_crashed (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
- on_running (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of functions to call when the flow run is started. Each function should accept three arguments: the flow, the flow run, and the current state.

Returns:

- A callable Flow object which, when called, will run the flow and return its final state.
Examples:
Define a simple flow
>>> from prefect import flow
>>> @flow
>>> def add(x, y):
>>>     return x + y
Define an async flow
>>> @flow
>>> async def add(x, y):
>>>     return x + y
Define a flow with a version and description
>>> @flow(version="first-flow", description="This flow is empty!")
>>> def my_flow():
>>>     pass
Define a flow with a custom name
>>> @flow(name="The Ultimate Flow")
>>> def my_flow():
>>>     pass
Define a flow that submits its tasks to dask
>>> from prefect_dask.task_runners import DaskTaskRunner
>>>
>>> @flow(task_runner=DaskTaskRunner)
>>> def my_flow():
>>>     pass
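A further sketch combining several of the options above; the hook signature follows the parameter descriptions, and the names are illustrative:

from prefect import flow

def notify(flow, flow_run, state):
    # Hooks receive the flow, the flow run, and the state
    print(f"{flow.name} finished in state {state.name}")

@flow(log_prints=True, retries=2, retry_delay_seconds=5, on_completion=[notify])
def my_flow():
    print("hello")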
Source code in prefect/flows.py
def flow(\n __fn=None,\n *,\n name: Optional[str] = None,\n version: Optional[str] = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: int = None,\n retry_delay_seconds: Union[int, float] = None,\n task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n description: str = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = True,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_serializer: Optional[ResultSerializer] = None,\n cache_result_in_memory: bool = True,\n log_prints: Optional[bool] = None,\n on_completion: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n \"\"\"\n Decorator to designate a function as a Prefect workflow.\n\n This decorator may be used for asynchronous or synchronous functions.\n\n Flow parameters must be serializable by Pydantic.\n\n Args:\n name: An optional name for the flow; if not provided, the name will be inferred\n from the given function.\n version: An optional version string for the flow; if not provided, we will\n attempt to create a version string as a hash of the file containing the\n wrapped function; if the file cannot be located, the version will be null.\n flow_run_name: An optional name to distinguish runs of this flow; this name can\n be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n retries: An optional number of times to retry on flow run failure.\n retry_delay_seconds: An optional number of seconds to wait before retrying the\n flow after failure. This is only applicable if `retries` is nonzero.\n task_runner: An optional task runner to use for task execution within the flow; if\n not provided, a `ConcurrentTaskRunner` will be instantiated.\n description: An optional string description for the flow; if not provided, the\n description will be pulled from the docstring for the decorated function.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the flow. If the flow exceeds this runtime, it will be marked as failed.\n Flow execution may continue until the next task is called.\n validate_parameters: By default, parameters passed to flows are validated by\n Pydantic. This will check that input values conform to the annotated types\n on the function. Where possible, values will be coerced into the correct\n type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n it will be resolved to `5`. If set to `False`, no validation will be\n performed on flow parameters.\n persist_result: An optional toggle indicating whether the result of this flow\n should be persisted to result storage. 
Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this flow.\n This value will be used as the default for any tasks in this flow.\n If not provided, the local file system will be used unless called as\n a subflow, at which point the default will be loaded from the parent flow.\n result_serializer: An optional serializer to use to serialize the result of this\n flow for persistence. This value will be used as the default for any tasks\n in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n will be used unless called as a subflow, at which point the default will be\n loaded from the parent flow.\n cache_result_in_memory: An optional toggle indicating whether the cached result of\n a running the flow should be stored in memory. Defaults to `True`.\n log_prints: If set, `print` statements in the flow will be redirected to the\n Prefect logger for the flow run. Defaults to `None`, which indicates that\n the value from the parent flow should be used. If this is a parent flow,\n the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n on_completion: An optional list of functions to call when the flow run is\n completed. Each function should accept three arguments: the flow, the flow\n run, and the final state of the flow run.\n on_failure: An optional list of functions to call when the flow run fails. Each\n function should accept three arguments: the flow, the flow run, and the\n final state of the flow run.\n on_cancellation: An optional list of functions to call when the flow run is\n cancelled. These functions will be passed the flow, flow run, and final state.\n on_crashed: An optional list of functions to call when the flow run crashes. Each\n function should accept three arguments: the flow, the flow run, and the\n final state of the flow run.\n on_running: An optional list of functions to call when the flow run is started. 
Each\n function should accept three arguments: the flow, the flow run, and the current state\n\n Returns:\n A callable `Flow` object which, when called, will run the flow and return its\n final state.\n\n Examples:\n Define a simple flow\n\n >>> from prefect import flow\n >>> @flow\n >>> def add(x, y):\n >>> return x + y\n\n Define an async flow\n\n >>> @flow\n >>> async def add(x, y):\n >>> return x + y\n\n Define a flow with a version and description\n\n >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n >>> def my_flow():\n >>> pass\n\n Define a flow with a custom name\n\n >>> @flow(name=\"The Ultimate Flow\")\n >>> def my_flow():\n >>> pass\n\n Define a flow that submits its tasks to dask\n\n >>> from prefect_dask.task_runners import DaskTaskRunner\n >>>\n >>> @flow(task_runner=DaskTaskRunner)\n >>> def my_flow():\n >>> pass\n \"\"\"\n if __fn:\n return cast(\n Flow[P, R],\n Flow(\n fn=__fn,\n name=name,\n version=version,\n flow_run_name=flow_run_name,\n task_runner=task_runner,\n description=description,\n timeout_seconds=timeout_seconds,\n validate_parameters=validate_parameters,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n persist_result=persist_result,\n result_storage=result_storage,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n log_prints=log_prints,\n on_completion=on_completion,\n on_failure=on_failure,\n on_cancellation=on_cancellation,\n on_crashed=on_crashed,\n on_running=on_running,\n ),\n )\n else:\n return cast(\n Callable[[Callable[P, R]], Flow[P, R]],\n partial(\n flow,\n name=name,\n version=version,\n flow_run_name=flow_run_name,\n task_runner=task_runner,\n description=description,\n timeout_seconds=timeout_seconds,\n validate_parameters=validate_parameters,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n persist_result=persist_result,\n result_storage=result_storage,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n log_prints=log_prints,\n on_completion=on_completion,\n on_failure=on_failure,\n on_cancellation=on_cancellation,\n on_crashed=on_crashed,\n on_running=on_running,\n ),\n )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint
","text":"Extract a flow object from a script at an entrypoint by running all of the code in the file.
Parameters:

- entrypoint (str, required): A string in the format <path_to_script>:<flow_func_name>, or a module path to a flow function.

Returns:

- Flow: The flow object from the script.

Raises:

- FlowScriptError: If an exception is encountered while running the script.
- MissingFlowError: If the flow function specified in the entrypoint does not exist.
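A minimal sketch of both entrypoint forms (the paths and names are hypothetical):

from prefect.flows import load_flow_from_entrypoint

# Path-based entrypoint: <path_to_script>:<flow_func_name>
my_flow = load_flow_from_entrypoint("path/to/flows.py:my_flow")

# Module-path entrypoint, assuming the module is importable
my_flow = load_flow_from_entrypoint("my_package.flows.my_flow")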
Source code in prefect/flows.py
def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n \"\"\"\n Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n Args:\n entrypoint: a string in the format `<path_to_script>:<flow_func_name>` or a module path\n to a flow function\n\n Returns:\n The flow object from the script\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n MissingFlowError: If the flow function specified in the entrypoint does not exist\n \"\"\"\n with PrefectObjectRegistry(\n block_code_execution=True,\n capture_failures=True,\n ):\n if \":\" in entrypoint:\n # split by the last colon once to handle Windows paths with drive letters i.e C:\\path\\to\\file.py:do_stuff\n path, func_name = entrypoint.rsplit(\":\", maxsplit=1)\n else:\n path, func_name = entrypoint.rsplit(\".\", maxsplit=1)\n try:\n flow = import_object(entrypoint)\n except AttributeError as exc:\n raise MissingFlowError(\n f\"Flow function with name {func_name!r} not found in {path!r}. \"\n ) from exc\n\n if not isinstance(flow, Flow):\n raise MissingFlowError(\n f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n \"decorated with '@flow'.\"\n )\n\n return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script
","text":"Extract a flow object from a script by running all of the code in the file.
If the script has multiple flows in it, a flow name must be provided to specify the flow to return.
Parameters:

- path (str, required): A path to a Python script containing flows.
- flow_name (str, default None): An optional flow name to look for in the script.

Returns:

- Flow: The flow object from the script.

Raises:

- FlowScriptError: If an exception is encountered while running the script.
- MissingFlowError: If no flows exist in the iterable.
- MissingFlowError: If a flow name is provided and that flow does not exist.
- UnspecifiedFlowError: If multiple flows exist but no flow name was provided.
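A minimal usage sketch (the path and flow name are hypothetical):

from prefect.flows import load_flow_from_script

# flow_name is only needed when the script defines more than one flow
my_flow = load_flow_from_script("path/to/flows.py", flow_name="my-flow")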
Source code in prefect/flows.py
def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n \"\"\"\n Extract a flow object from a script by running all of the code in the file.\n\n If the script has multiple flows in it, a flow name must be provided to specify\n the flow to return.\n\n Args:\n path: A path to a Python script containing flows\n flow_name: An optional flow name to look for in the script\n\n Returns:\n The flow object from the script\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n MissingFlowError: If no flows exist in the iterable\n MissingFlowError: If a flow name is provided and that flow does not exist\n UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n \"\"\"\n return select_flow(\n load_flows_from_script(path),\n flow_name=flow_name,\n from_message=f\"in script '{path}'\",\n )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text
","text":"Load a flow from a text script.
The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
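A minimal sketch (the script contents are illustrative):

from prefect.flows import load_flow_from_text

script = '''
from prefect import flow

@flow(name="inline-flow")
def inline_flow():
    return 1
'''

my_flow = load_flow_from_text(script, flow_name="inline-flow")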
Source code in prefect/flows.py
def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n \"\"\"\n Load a flow from a text script.\n\n The script will be written to a temporary local file path so errors can refer\n to line numbers and contextual tracebacks can be provided.\n \"\"\"\n with NamedTemporaryFile(\n mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n prefix=f\"flow-script-{flow_name}\",\n suffix=\".py\",\n delete=False,\n ) as tmpfile:\n tmpfile.write(script_contents)\n tmpfile.flush()\n try:\n flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n finally:\n # windows compat\n tmpfile.close()\n os.remove(tmpfile.name)\n return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script
","text":"Load all flow objects from the given python script. All of the code in the file will be executed.
Returns:

- List[Flow]: A list of flows.

Raises:

- FlowScriptError: If an exception is encountered while running the script.
Source code in prefect/flows.py
def load_flows_from_script(path: str) -> List[Flow]:\n \"\"\"\n Load all flow objects from the given python script. All of the code in the file\n will be executed.\n\n Returns:\n A list of flows\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n \"\"\"\n return registry_from_script(path).get_instances(Flow)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow
","text":"Select the only flow in an iterable or a flow specified by name.
Returns:

- Flow: A single flow object.

Raises:

- MissingFlowError: If no flows exist in the iterable.
- MissingFlowError: If a flow name is provided and that flow does not exist.
- UnspecifiedFlowError: If multiple flows exist but no flow name was provided.
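A minimal sketch pairing select_flow with load_flows_from_script (the path and name are hypothetical):

from prefect.flows import load_flows_from_script, select_flow

# Narrow a script with several flows down to one by name
flows = load_flows_from_script("path/to/flows.py")
my_flow = select_flow(flows, flow_name="my-flow")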
Source code in prefect/flows.py
def select_flow(\n flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n \"\"\"\n Select the only flow in an iterable or a flow specified by name.\n\n Returns\n A single flow object\n\n Raises:\n MissingFlowError: If no flows exist in the iterable\n MissingFlowError: If a flow name is provided and that flow does not exist\n UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n \"\"\"\n # Convert to flows by name\n flows = {f.name: f for f in flows}\n\n # Add a leading space if given, otherwise use an empty string\n from_message = (\" \" + from_message) if from_message else \"\"\n if not flows:\n raise MissingFlowError(f\"No flows found{from_message}.\")\n\n elif flow_name and flow_name not in flows:\n raise MissingFlowError(\n f\"Flow {flow_name!r} not found{from_message}. \"\n f\"Found the following flows: {listrepr(flows.keys())}. \"\n \"Check to make sure that your flow function is decorated with `@flow`.\"\n )\n\n elif not flow_name and len(flows) > 1:\n raise UnspecifiedFlowError(\n (\n f\"Found {len(flows)} flows{from_message}:\"\n f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n \" flow.\"\n ),\n )\n\n if flow_name:\n return flows[flow_name]\n else:\n return list(flows.values())[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures
","text":"Futures represent the execution of a task and allow retrieval of the task run's state.
This module contains the definition for futures as well as utilities for resolving futures in nested data structures.
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture
","text":" Bases: Generic[R, A]
Represents the result of a computation happening in a task runner.
When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.
Examples:
Define a task that returns a string
>>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>> return \"hello\"\n
Calls of this task in a flow will return a future
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit() # PrefectFuture[str, Sync] includes result type\n>>> future.task_run.id # UUID for the task run\n
Wait for the task to complete
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> final_state = future.wait()\n
Wait N seconds for the task to complete
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> final_state = future.wait(0.1)\n>>> if final_state:\n>>> ... # Task done\n>>> else:\n>>> ... # Task not done yet\n
Wait for a task to complete and retrieve its result
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> result = future.result()\n>>> assert result == \"hello\"\n
Wait N seconds for a task to complete and retrieve its result
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> result = future.result(timeout=5)\n>>> assert result == \"hello\"\n
Retrieve the state of a task without waiting for completion
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> state = future.get_state()\n
Source code in prefect/futures.py
class PrefectFuture(Generic[R, A]):\n \"\"\"\n Represents the result of a computation happening in a task runner.\n\n When tasks are called, they are submitted to a task runner which creates a future\n for access to the state and result of the task.\n\n Examples:\n Define a task that returns a string\n\n >>> from prefect import flow, task\n >>> @task\n >>> def my_task() -> str:\n >>> return \"hello\"\n\n Calls of this task in a flow will return a future\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit() # PrefectFuture[str, Sync] includes result type\n >>> future.task_run.id # UUID for the task run\n\n Wait for the task to complete\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit()\n >>> final_state = future.wait()\n\n Wait N seconds for the task to complete\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit()\n >>> final_state = future.wait(0.1)\n >>> if final_state:\n >>> ... # Task done\n >>> else:\n >>> ... # Task not done yet\n\n Wait for a task to complete and retrieve its result\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit()\n >>> result = future.result()\n >>> assert result == \"hello\"\n\n Wait N seconds for a task to complete and retrieve its result\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit()\n >>> result = future.result(timeout=5)\n >>> assert result == \"hello\"\n\n Retrieve the state of a task without waiting for completion\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task.submit()\n >>> state = future.get_state()\n \"\"\"\n\n def __init__(\n self,\n name: str,\n key: UUID,\n task_runner: \"BaseTaskRunner\",\n asynchronous: A = True,\n _final_state: State[R] = None, # Exposed for testing\n ) -> None:\n self.key = key\n self.name = name\n self.asynchronous = asynchronous\n self.task_run = None\n self._final_state = _final_state\n self._exception: Optional[Exception] = None\n self._task_runner = task_runner\n self._submitted = anyio.Event()\n\n self._loop = asyncio.get_running_loop()\n\n @overload\n def wait(\n self: \"PrefectFuture[R, Async]\", timeout: None = None\n ) -> Awaitable[State[R]]:\n ...\n\n @overload\n def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n ...\n\n @overload\n def wait(\n self: \"PrefectFuture[R, Async]\", timeout: float\n ) -> Awaitable[Optional[State[R]]]:\n ...\n\n @overload\n def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n ...\n\n def wait(self, timeout=None):\n \"\"\"\n Wait for the run to finish and return the final state\n\n If the timeout is reached before the run reaches a final state,\n `None` is returned.\n \"\"\"\n wait = create_call(self._wait, timeout=timeout)\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(wait).aresult()\n else:\n # type checking cannot handle the overloaded timeout passing\n return from_sync.call_soon_in_loop_thread(wait).result() # type: ignore\n\n @overload\n async def _wait(self, timeout: None = None) -> State[R]:\n ...\n\n @overload\n async def _wait(self, timeout: float) -> Optional[State[R]]:\n ...\n\n async def _wait(self, timeout=None):\n \"\"\"\n Async implementation for `wait`\n \"\"\"\n await self._wait_for_submission()\n\n if self._final_state:\n return self._final_state\n\n self._final_state = await self._task_runner.wait(self.key, timeout)\n return self._final_state\n\n @overload\n def result(\n self: \"PrefectFuture[R, Sync]\",\n timeout: float = None,\n raise_on_failure: bool = True,\n ) -> R:\n ...\n\n @overload\n def 
result(\n self: \"PrefectFuture[R, Sync]\",\n timeout: float = None,\n raise_on_failure: bool = False,\n ) -> Union[R, Exception]:\n ...\n\n @overload\n def result(\n self: \"PrefectFuture[R, Async]\",\n timeout: float = None,\n raise_on_failure: bool = True,\n ) -> Awaitable[R]:\n ...\n\n @overload\n def result(\n self: \"PrefectFuture[R, Async]\",\n timeout: float = None,\n raise_on_failure: bool = False,\n ) -> Awaitable[Union[R, Exception]]:\n ...\n\n def result(self, timeout: float = None, raise_on_failure: bool = True):\n \"\"\"\n Wait for the run to finish and return the final state.\n\n If the timeout is reached before the run reaches a final state, a `TimeoutError`\n will be raised.\n\n If `raise_on_failure` is `True` and the task run failed, the task run's\n exception will be raised.\n \"\"\"\n result = create_call(\n self._result, timeout=timeout, raise_on_failure=raise_on_failure\n )\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(result).aresult()\n else:\n return from_sync.call_soon_in_loop_thread(result).result()\n\n async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n \"\"\"\n Async implementation of `result`\n \"\"\"\n final_state = await self._wait(timeout=timeout)\n if not final_state:\n raise TimeoutError(\"Call timed out before task finished.\")\n return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n @overload\n def get_state(\n self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n ) -> Awaitable[State[R]]:\n ...\n\n @overload\n def get_state(\n self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n ) -> State[R]:\n ...\n\n def get_state(self, client: PrefectClient = None):\n \"\"\"\n Get the current state of the task run.\n \"\"\"\n if self.asynchronous:\n return cast(Awaitable[State[R]], self._get_state(client=client))\n else:\n return cast(State[R], sync(self._get_state, client=client))\n\n @inject_client\n async def _get_state(self, client: PrefectClient = None) -> State[R]:\n assert client is not None # always injected\n\n # We must wait for the task run id to be populated\n await self._wait_for_submission()\n\n task_run = await client.read_task_run(self.task_run.id)\n\n if not task_run:\n raise RuntimeError(\"Future has no associated task run in the server.\")\n\n # Update the task run reference\n self.task_run = task_run\n return task_run.state\n\n async def _wait_for_submission(self):\n await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n def __hash__(self) -> int:\n return hash(self.key)\n\n def __repr__(self) -> str:\n return f\"PrefectFuture({self.name!r})\"\n\n def __bool__(self) -> bool:\n warnings.warn(\n (\n \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n \"did you mean to check the result of the task instead? \"\n \"e.g. `if my_task().result(): ...`\"\n ),\n stacklevel=2,\n )\n return True\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state
","text":"Get the current state of the task run.
Source code inprefect/futures.py
def get_state(self, client: PrefectClient = None):\n \"\"\"\n Get the current state of the task run.\n \"\"\"\n if self.asynchronous:\n return cast(Awaitable[State[R]], self._get_state(client=client))\n else:\n return cast(State[R], sync(self._get_state, client=client))\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result
","text":"Wait for the run to finish and return the final state.
If the timeout is reached before the run reaches a final state, a TimeoutError
will be raised.
If raise_on_failure
is True
and the task run failed, the task run's exception will be raised.
prefect/futures.py
def result(self, timeout: float = None, raise_on_failure: bool = True):\n \"\"\"\n Wait for the run to finish and return the final state.\n\n If the timeout is reached before the run reaches a final state, a `TimeoutError`\n will be raised.\n\n If `raise_on_failure` is `True` and the task run failed, the task run's\n exception will be raised.\n \"\"\"\n result = create_call(\n self._result, timeout=timeout, raise_on_failure=raise_on_failure\n )\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(result).aresult()\n else:\n return from_sync.call_soon_in_loop_thread(result).result()\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait
","text":"Wait for the run to finish and return the final state
If the timeout is reached before the run reaches a final state, None
is returned.
prefect/futures.py
def wait(self, timeout=None):\n \"\"\"\n Wait for the run to finish and return the final state\n\n If the timeout is reached before the run reaches a final state,\n `None` is returned.\n \"\"\"\n wait = create_call(self._wait, timeout=timeout)\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(wait).aresult()\n else:\n # type checking cannot handle the overloaded timeout passing\n return from_sync.call_soon_in_loop_thread(wait).result() # type: ignore\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr
","text":"Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
Source code inprefect/futures.py
def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n \"\"\"\n Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n \"\"\"\n\n name = __fn.__name__\n\n # TODO: If this computation is concerningly expensive, we can iterate checking the\n # length at each arg or avoid calling `repr` on args with large amounts of\n # data\n call_args = \", \".join(\n [repr(arg) for arg in args]\n + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n )\n\n # Enforce a maximum length\n if len(call_args) > 100:\n call_args = call_args[:100] + \"...\"\n\n return f\"{name}({call_args})\"\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data
async
","text":"Given a Python built-in collection, recursively find PrefectFutures
and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.
Unsupported object types will be returned without modification.
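Example: A sketch resolving a dictionary of futures inside a flow; the task and flow are illustrative:
from prefect import flow, task\nfrom prefect.futures import resolve_futures_to_data\n\n@task\ndef double(x):\n    return x * 2\n\n@flow\nasync def my_flow():\n    futures = {\"a\": double.submit(1), \"b\": double.submit(2)}\n    return await resolve_futures_to_data(futures)  # {\"a\": 2, \"b\": 4}\n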
Source code inprefect/futures.py
async def resolve_futures_to_data(\n expr: Union[PrefectFuture[R, Any], Any],\n raise_on_failure: bool = True,\n) -> Union[R, Any]:\n \"\"\"\n Given a Python built-in collection, recursively find `PrefectFutures` and build a\n new collection with the same structure with futures resolved to their results.\n Resolving futures to their results may wait for execution to complete and require\n communication with the API.\n\n Unsupported object types will be returned without modification.\n \"\"\"\n futures: Set[PrefectFuture] = set()\n\n maybe_expr = visit_collection(\n expr,\n visit_fn=partial(_collect_futures, futures),\n return_data=False,\n context={},\n )\n if maybe_expr is not None:\n expr = maybe_expr\n\n # Get results\n results = await asyncio.gather(\n *[\n # We must wait for the future in the thread it was created in\n from_async.call_soon_in_loop_thread(\n create_call(future._result, raise_on_failure=raise_on_failure)\n ).aresult()\n for future in futures\n ]\n )\n\n results_by_future = dict(zip(futures, results))\n\n def replace_futures_with_results(expr, context):\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n return results_by_future[expr]\n else:\n return expr\n\n return visit_collection(\n expr,\n visit_fn=replace_futures_with_results,\n return_data=True,\n context={},\n )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states
async
","text":"Given a Python built-in collection, recursively find PrefectFutures
and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.
Unsupported object types will be returned without modification.
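Example: A similar sketch resolving futures to their final states; the task and flow are illustrative:
from prefect import flow, task\nfrom prefect.futures import resolve_futures_to_states\n\n@task\ndef double(x):\n    return x * 2\n\n@flow\nasync def my_flow():\n    states = await resolve_futures_to_states([double.submit(1), double.submit(2)])\n    return [state.is_completed() for state in states]\n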
Source code inprefect/futures.py
async def resolve_futures_to_states(\n expr: Union[PrefectFuture[R, Any], Any],\n) -> Union[State[R], Any]:\n \"\"\"\n Given a Python built-in collection, recursively find `PrefectFutures` and build a\n new collection with the same structure with futures resolved to their final states.\n Resolving futures to their final states may wait for execution to complete.\n\n Unsupported object types will be returned without modification.\n \"\"\"\n futures: Set[PrefectFuture] = set()\n\n visit_collection(\n expr,\n visit_fn=partial(_collect_futures, futures),\n return_data=False,\n context={},\n )\n\n # Get final states for each future\n states = await asyncio.gather(\n *[\n # We must wait for the future in the thread it was created in\n from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n for future in futures\n ]\n )\n\n states_by_future = dict(zip(futures, states))\n\n def replace_futures_with_states(expr, context):\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n return states_by_future[expr]\n else:\n return expr\n\n return visit_collection(\n expr,\n visit_fn=replace_futures_with_states,\n return_data=True,\n context={},\n )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure
","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer
","text":" Bases: Infrastructure
Runs a command in a container.
Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.
Click here to see a tutorial.
Attributes:
Name Type Descriptionauto_remove
bool
If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.
command
bool
A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
env
bool
Environment variables to set for the container.
image
str
An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.
image_pull_policy
Optional[ImagePullPolicy]
Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.
image_registry
Optional[DockerRegistry]
A DockerRegistry
block containing credentials to use if image
is stored in a private image registry.
labels
Dict[str, str]
An optional dictionary of labels, mapping name to value.
name
Optional[str]
An optional name for the container.
network_mode
Optional[str]
Set the network mode for the created container. Defaults to 'host' if a local API URL is detected; otherwise, the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.
networks
List[str]
An optional list of strings specifying Docker networks to connect the container to.
stream_output
bool
If set, stream output from the container to local standard output.
volumes
List[str]
An optional list of volume mount strings in the format of \"local_path:container_path\".
memswap_limit
Union[int, str]
Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit
is also set. If mem_limit
is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit
is 300m and memswap_limit
is not set, the container can use 600m in total of memory and swap.
mem_limit
Union[float, str]
Memory limit of the created container. Accepts float values to enforce a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.
privileged
bool
Give extended privileges to this container.
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"If using a local API URL on Linux, we will update the network mode default to 'host' to enable connectivity. If using another OS or an alternative network mode is used, we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally, this will enable connectivity, but the API URL can be provided as an environment variable to override inference in more complex use-cases.
Note that if using 'host.docker.internal' in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not necessary, and the API is connectable while bound to localhost.
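Example: A sketch overriding API URL inference explicitly; the image tag and URL are illustrative, and this block is deprecated in favor of the Docker worker:
from prefect.infrastructure import DockerContainer\n\ncontainer = DockerContainer(\n    image=\"prefecthq/prefect:2-latest\",\n    env={\"PREFECT_API_URL\": \"http://host.docker.internal:4200/api\"},\n    stream_output=True,\n)\nprint(container.preview())\n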
Source code inprefect/infrastructure/container.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the Docker worker from prefect-docker instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass DockerContainer(Infrastructure):\n \"\"\"\n Runs a command in a container.\n\n Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n the environment.\n\n Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n Attributes:\n auto_remove: If set, the container will be removed on completion. Otherwise,\n the container will remain after exit for inspection.\n command: A list of strings specifying the command to run in the container to\n start the flow run. In most cases you should not override this.\n env: Environment variables to set for the container.\n image: An optional string specifying the tag of a Docker image to use.\n Defaults to the Prefect image.\n image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n 'NEVER', 'IF_NOT_PRESENT'.\n image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n image registry.\n labels: An optional dictionary of labels, mapping name to value.\n name: An optional name for the container.\n network_mode: Set the network mode for the created container. Defaults to 'host'\n if a local API url is detected, otherwise the Docker default of 'bridge' is\n used. If 'networks' is set, this cannot be set.\n networks: An optional list of strings specifying Docker networks to connect the\n container to.\n stream_output: If set, stream output from the container to local standard output.\n volumes: An optional list of volume mount strings in the format of\n \"local_path:container_path\".\n memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n allowing the container to use as much swap as memory. For example, if\n `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n 600m in total of memory and swap.\n mem_limit: Memory limit of the created container. Accepts float values to enforce\n a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n If a string is given without a unit, bytes are assumed.\n privileged: Give extended privileges to this container.\n\n ## Connecting to a locally hosted Prefect API\n\n If using a local API URL on Linux, we will update the network mode default to 'host'\n to enable connectivity. If using another OS or an alternative network mode is used,\n we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n this will enable connectivity, but the API URL can be provided as an environment\n variable to override inference in more complex use-cases.\n\n Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not\n necessary and the API is connectable while bound to localhost.\n \"\"\"\n\n type: Literal[\"docker-container\"] = Field(\n default=\"docker-container\", description=\"The type of infrastructure.\"\n )\n image: str = Field(\n default_factory=get_prefect_image_name,\n description=\"Tag of a Docker image to use. 
Defaults to the Prefect image.\",\n )\n image_pull_policy: Optional[ImagePullPolicy] = Field(\n default=None, description=\"Specifies if the image should be pulled.\"\n )\n image_registry: Optional[DockerRegistry] = None\n networks: List[str] = Field(\n default_factory=list,\n description=(\n \"A list of strings specifying Docker networks to connect the container to.\"\n ),\n )\n network_mode: Optional[str] = Field(\n default=None,\n description=(\n \"The network mode for the created container (e.g. host, bridge). If\"\n \" 'networks' is set, this cannot be set.\"\n ),\n )\n auto_remove: bool = Field(\n default=False,\n description=\"If set, the container will be removed on completion.\",\n )\n volumes: List[str] = Field(\n default_factory=list,\n description=(\n \"A list of volume mount strings in the format of\"\n ' \"local_path:container_path\".'\n ),\n )\n stream_output: bool = Field(\n default=True,\n description=(\n \"If set, the output will be streamed from the container to local standard\"\n \" output.\"\n ),\n )\n memswap_limit: Union[int, str] = Field(\n default=None,\n description=(\n \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n \"allowing the container to use as much swap as memory. For example, if \"\n \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n \"600m in total of memory and swap.\"\n ),\n )\n mem_limit: Union[float, str] = Field(\n default=None,\n description=(\n \"Memory limit of the created container. Accepts float values to enforce \"\n \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. \"\n \"If a string is given without a unit, bytes are assumed.\"\n ),\n )\n privileged: bool = Field(\n default=False,\n description=\"Give extended privileges to this container.\",\n )\n\n _block_type_name = \"Docker Container\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n @validator(\"labels\")\n def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n labels = labels or {}\n new_labels = {}\n for name, value in labels.items():\n if \"/\" in name:\n namespace, key = name.split(\"/\", maxsplit=1)\n new_namespace = \".\".join(reversed(namespace.split(\".\")))\n new_labels[f\"{new_namespace}.{key}\"] = value\n else:\n new_labels[name] = value\n return new_labels\n\n @validator(\"volumes\")\n def check_volume_format(cls, volumes):\n for volume in volumes:\n if \":\" not in volume:\n raise ValueError(\n \"Invalid volume specification. 
\"\n f\"Expected format 'path:container_path', but got {volume!r}\"\n )\n\n return volumes\n\n @sync_compatible\n async def run(\n self,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> Optional[bool]:\n if not self.command:\n raise ValueError(\"Docker container cannot be run with empty command.\")\n\n # The `docker` library uses requests instead of an async http library so it must\n # be run in a thread to avoid blocking the event loop.\n container = await run_sync_in_worker_thread(self._create_and_start_container)\n container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n # Mark as started and return the infrastructure id\n if task_status:\n task_status.started(container_pid)\n\n # Monitor the container\n container = await run_sync_in_worker_thread(\n self._watch_container_safe, container\n )\n\n exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n return DockerContainerResult(\n status_code=exit_code if exit_code is not None else -1,\n identifier=container_pid,\n )\n\n async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n docker_client = self._get_client()\n base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n if docker_client.api.base_url != base_url:\n raise InfrastructureNotAvailable(\n \"\".join(\n [\n (\n f\"Unable to stop container {container_id!r}: the current\"\n \" Docker API \"\n ),\n (\n f\"URL {docker_client.api.base_url!r} does not match the\"\n \" expected \"\n ),\n f\"API base URL {base_url}.\",\n ]\n )\n )\n try:\n container = docker_client.containers.get(container_id=container_id)\n except docker.errors.NotFound:\n raise InfrastructureNotFound(\n f\"Unable to stop container {container_id!r}: The container was not\"\n \" found.\"\n )\n\n try:\n container.stop(timeout=grace_seconds)\n except Exception:\n raise\n\n def preview(self):\n # TODO: build and document a more sophisticated preview\n docker_client = self._get_client()\n try:\n return json.dumps(self._build_container_settings(docker_client))\n finally:\n docker_client.close()\n\n async def generate_work_pool_base_job_template(self):\n from prefect.workers.utilities import (\n get_default_base_job_template_for_infrastructure_type,\n )\n\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n self.get_corresponding_worker_type()\n )\n if base_job_template is None:\n return await super().generate_work_pool_base_job_template()\n for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n if key == \"command\":\n base_job_template[\"variables\"][\"properties\"][\"command\"][\n \"default\"\n ] = shlex.join(value)\n elif key == \"image_registry\":\n self.logger.warning(\n \"Image registry blocks are not supported by Docker\"\n \" work pools. 
Please authenticate to your registry using\"\n \" the `docker login` command on your worker instances.\"\n )\n elif key in [\n \"type\",\n \"block_type_slug\",\n \"_block_document_id\",\n \"_block_document_name\",\n \"_is_anonymous\",\n ]:\n continue\n elif key == \"image_pull_policy\":\n new_value = None\n if value == ImagePullPolicy.ALWAYS:\n new_value = \"Always\"\n elif value == ImagePullPolicy.NEVER:\n new_value = \"Never\"\n elif value == ImagePullPolicy.IF_NOT_PRESENT:\n new_value = \"IfNotPresent\"\n\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = new_value\n elif key in base_job_template[\"variables\"][\"properties\"]:\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n else:\n self.logger.warning(\n f\"Variable {key!r} is not supported by Docker work pools. Skipping.\"\n )\n\n return base_job_template\n\n def get_corresponding_worker_type(self):\n return \"docker\"\n\n def _get_infrastructure_pid(self, container_id: str) -> str:\n \"\"\"Generates a Docker infrastructure_pid string in the form of\n `<docker_host_base_url>:<container_id>`.\n \"\"\"\n docker_client = self._get_client()\n base_url = docker_client.api.base_url\n docker_client.close()\n return f\"{base_url}:{container_id}\"\n\n def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n # base_url can contain `:` so we only want the last item of the split\n base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n return base_url, str(container_id)\n\n def _build_container_settings(\n self,\n docker_client: \"DockerClient\",\n ) -> Dict:\n network_mode = self._get_network_mode()\n return dict(\n image=self.image,\n network=self.networks[0] if self.networks else None,\n network_mode=network_mode,\n command=self.command,\n environment=self._get_environment_variables(network_mode),\n auto_remove=self.auto_remove,\n labels={**CONTAINER_LABELS, **self.labels},\n extra_hosts=self._get_extra_hosts(docker_client),\n name=self._get_container_name(),\n volumes=self.volumes,\n mem_limit=self.mem_limit,\n memswap_limit=self.memswap_limit,\n privileged=self.privileged,\n )\n\n def _create_and_start_container(self) -> \"Container\":\n if self.image_registry:\n # If an image registry block was supplied, load an authenticated Docker\n # client from the block. Otherwise, use an unauthenticated client to\n # pull images from public registries.\n docker_client = self.image_registry.get_docker_client()\n else:\n docker_client = self._get_client()\n container_settings = self._build_container_settings(docker_client)\n\n if self._should_pull_image(docker_client):\n self.logger.info(f\"Pulling image {self.image!r}...\")\n self._pull_image(docker_client)\n\n container = self._create_container(docker_client, **container_settings)\n\n # Add additional networks after the container is created; only one network can\n # be attached at creation time\n if len(self.networks) > 1:\n for network_name in self.networks[1:]:\n network = docker_client.networks.get(network_name)\n network.connect(container)\n\n # Start the container\n container.start()\n\n docker_client.close()\n\n return container\n\n def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n return parse_image_tag(self.image)\n\n def _determine_image_pull_policy(self) -> ImagePullPolicy:\n \"\"\"\n Determine the appropriate image pull policy.\n\n 1. If they specified an image pull policy, use that.\n\n 2. 
If they did not specify an image pull policy and gave us\n the \"latest\" tag, use ImagePullPolicy.always.\n\n 3. If they did not specify an image pull policy and did not\n specify a tag, use ImagePullPolicy.always.\n\n 4. If they did not specify an image pull policy and gave us\n a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n This logic matches the behavior of Kubernetes.\n See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n \"\"\"\n if not self.image_pull_policy:\n _, tag = self._get_image_and_tag()\n if tag == \"latest\" or not tag:\n return ImagePullPolicy.ALWAYS\n return ImagePullPolicy.IF_NOT_PRESENT\n return self.image_pull_policy\n\n def _get_network_mode(self) -> Optional[str]:\n # User's value takes precedence; this may collide with the incompatible options\n # mentioned below.\n if self.network_mode:\n if sys.platform != \"linux\" and self.network_mode == \"host\":\n warnings.warn(\n f\"{self.network_mode!r} network mode is not supported on platform \"\n f\"{sys.platform!r} and may not work as intended.\"\n )\n return self.network_mode\n\n # Network mode is not compatible with networks or ports (we do not support ports\n # yet though)\n if self.networks:\n return None\n\n # Check for a local API connection\n api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n if api_url:\n try:\n _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n except Exception as exc:\n warnings.warn(\n f\"Failed to parse host from API URL {api_url!r} with exception: \"\n f\"{exc}\\nThe network mode will not be inferred.\"\n )\n return None\n\n host = netloc.split(\":\")[0]\n\n # If using a locally hosted API, use a host network on linux\n if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n return \"host\"\n\n # Default to unset\n return None\n\n def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n \"\"\"\n Decide whether we need to pull the Docker image.\n \"\"\"\n image_pull_policy = self._determine_image_pull_policy()\n\n if image_pull_policy is ImagePullPolicy.ALWAYS:\n return True\n elif image_pull_policy is ImagePullPolicy.NEVER:\n return False\n elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n try:\n # NOTE: images.get() wants the tag included with the image\n # name, while images.pull() wants them split.\n docker_client.images.get(self.image)\n except docker.errors.ImageNotFound:\n self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n return True\n return False\n\n def _pull_image(self, docker_client: \"DockerClient\"):\n \"\"\"\n Pull the image we're going to use to create the container.\n \"\"\"\n image, tag = self._get_image_and_tag()\n\n return docker_client.images.pull(image, tag)\n\n def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n \"\"\"\n Create a docker container with retries on name conflicts.\n\n If the container already exists with the given name, an incremented index is\n added.\n \"\"\"\n # Create the container with retries on name conflicts (with an incremented idx)\n index = 0\n container = None\n name = original_name = kwargs.pop(\"name\")\n\n while not container:\n from docker.errors import APIError\n\n try:\n display_name = repr(name) if name else \"with auto-generated name\"\n self.logger.info(f\"Creating Docker container {display_name}...\")\n container = docker_client.containers.create(name=name, **kwargs)\n except APIError as exc:\n if \"Conflict\" in str(exc) and 
\"container name\" in str(exc):\n self.logger.info(\n f\"Docker container name {display_name} already exists; \"\n \"retrying...\"\n )\n index += 1\n name = f\"{original_name}-{index}\"\n else:\n raise\n\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n return container\n\n def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n # Monitor the container capturing the latest snapshot while capturing\n # not found errors\n docker_client = self._get_client()\n\n try:\n for latest_container in self._watch_container(docker_client, container.id):\n container = latest_container\n except docker.errors.NotFound:\n # The container was removed during watching\n self.logger.warning(\n f\"Docker container {container.name} was removed before we could wait \"\n \"for its completion.\"\n )\n finally:\n docker_client.close()\n\n return container\n\n def _watch_container(\n self, docker_client: \"DockerClient\", container_id: str\n ) -> Generator[None, None, \"Container\"]:\n container: \"Container\" = docker_client.containers.get(container_id)\n\n status = container.status\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n yield container\n\n if self.stream_output:\n try:\n for log in container.logs(stream=True):\n log: bytes\n print(log.decode().rstrip())\n except docker.errors.APIError as exc:\n if \"marked for removal\" in str(exc):\n self.logger.warning(\n f\"Docker container {container.name} was marked for removal\"\n \" before logs could be retrieved. Output will not be\"\n \" streamed. \"\n )\n else:\n self.logger.exception(\n \"An unexpected Docker API error occurred while streaming\"\n f\" output from container {container.name}.\"\n )\n\n container.reload()\n if container.status != status:\n self.logger.info(\n f\"Docker container {container.name!r} has status\"\n f\" {container.status!r}\"\n )\n yield container\n\n container.wait()\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n yield container\n\n def _get_client(self):\n try:\n with warnings.catch_warnings():\n # Silence warnings due to use of deprecated methods within dockerpy\n # See https://github.com/docker/docker-py/pull/2931\n warnings.filterwarnings(\n \"ignore\",\n message=\"distutils Version classes are deprecated.*\",\n category=DeprecationWarning,\n )\n\n docker_client = docker.from_env()\n\n except docker.errors.DockerException as exc:\n raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n return docker_client\n\n def _get_container_name(self) -> Optional[str]:\n \"\"\"\n Generates a container name to match the configured name, ensuring it is Docker\n compatible.\n \"\"\"\n # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n if not self.name:\n return None\n\n return (\n slugify(\n self.name,\n lowercase=False,\n # Docker does not limit length but URL limits apply eventually so\n # limit the length for safety\n max_length=250,\n # Docker allows these characters for container names\n regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n ).lstrip(\n # Docker does not allow leading underscore, dash, or period\n \"_-.\"\n )\n # Docker does not allow 0 character names so cast to null if the name is\n # empty after slufification\n or None\n )\n\n def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n \"\"\"\n A host.docker.internal -> host-gateway mapping is necessary for communicating\n with the API on Linux machines. 
Docker Desktop on macOS will automatically\n already have this mapping.\n \"\"\"\n if sys.platform == \"linux\" and (\n # Do not warn if the user has specified a host manually that does not use\n # a local address\n \"PREFECT_API_URL\" not in self.env\n or re.search(\n \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n self.env[\"PREFECT_API_URL\"],\n )\n ):\n user_version = packaging.version.parse(\n format_outlier_version_name(docker_client.version()[\"Version\"])\n )\n required_version = packaging.version.parse(\"20.10.0\")\n\n if user_version < required_version:\n warnings.warn(\n \"`host.docker.internal` could not be automatically resolved to\"\n \" your local ip address. This feature is not supported on Docker\"\n f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n \" encounter issues.\"\n )\n return {}\n else:\n # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n # Only supported by Docker v20.10.0+ which is our minimum recommend version\n return {\"host.docker.internal\": \"host-gateway\"}\n\n def _get_environment_variables(self, network_mode):\n # If the API URL has been set by the base environment rather than the by the\n # user, update the value to ensure connectivity when using a bridge network by\n # updating local connections to use the docker internal host unless the\n # network mode is \"host\" where localhost is available already.\n env = {**self._base_environment(), **self.env}\n\n if (\n \"PREFECT_API_URL\" in env\n and \"PREFECT_API_URL\" not in self.env\n and network_mode != \"host\"\n ):\n env[\"PREFECT_API_URL\"] = (\n env[\"PREFECT_API_URL\"]\n .replace(\"localhost\", \"host.docker.internal\")\n .replace(\"127.0.0.1\", \"host.docker.internal\")\n )\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult
","text":" Bases: InfrastructureResult
Contains information about a completed Docker container
Source code inprefect/infrastructure/container.py
class DockerContainerResult(InfrastructureResult):\n \"\"\"Contains information about a completed Docker container\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure
","text":" Bases: Block
, ABC
prefect/infrastructure/base.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the `BaseWorker` class to create custom infrastructure integrations instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Infrastructure(Block, abc.ABC):\n _block_schema_capabilities = [\"run-infrastructure\"]\n\n type: str\n\n env: Dict[str, Optional[str]] = pydantic.Field(\n default_factory=dict,\n title=\"Environment\",\n description=\"Environment variables to set in the configured infrastructure.\",\n )\n labels: Dict[str, str] = pydantic.Field(\n default_factory=dict,\n description=\"Labels applied to the infrastructure for metadata purposes.\",\n )\n name: Optional[str] = pydantic.Field(\n default=None,\n description=\"Name applied to the infrastructure for identification.\",\n )\n command: Optional[List[str]] = pydantic.Field(\n default=None,\n description=\"The command to run in the infrastructure.\",\n )\n\n async def generate_work_pool_base_job_template(self):\n if self._block_document_id is None:\n raise BlockNotSavedError(\n \"Cannot publish as work pool, block has not been saved. Please call\"\n \" `.save()` on your block before publishing.\"\n )\n\n block_schema = self.__class__.schema()\n return {\n \"job_configuration\": {\"block\": \"{{ block }}\"},\n \"variables\": {\n \"type\": \"object\",\n \"properties\": {\n \"block\": {\n \"title\": \"Block\",\n \"description\": (\n \"The infrastructure block to use for job creation.\"\n ),\n \"allOf\": [{\"$ref\": f\"#/definitions/{self.__class__.__name__}\"}],\n \"default\": {\n \"$ref\": {\"block_document_id\": str(self._block_document_id)}\n },\n }\n },\n \"required\": [\"block\"],\n \"definitions\": {self.__class__.__name__: block_schema},\n },\n }\n\n def get_corresponding_worker_type(self):\n return \"block\"\n\n @sync_compatible\n async def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n \"\"\"\n Creates a work pool configured to use the given block as the job creator.\n\n Used to migrate from a agents setup to a worker setup.\n\n Args:\n work_pool_name: The name to give to the created work pool. 
If not provided, the name of the current\n block will be used.\n \"\"\"\n\n base_job_template = await self.generate_work_pool_base_job_template()\n work_pool_name = work_pool_name or self._block_document_name\n\n if work_pool_name is None:\n raise ValueError(\n \"`work_pool_name` must be provided if the block has not been saved.\"\n )\n\n console = Console()\n\n try:\n async with prefect.get_client() as client:\n work_pool = await client.create_work_pool(\n work_pool=WorkPoolCreate(\n name=work_pool_name,\n type=self.get_corresponding_worker_type(),\n base_job_template=base_job_template,\n )\n )\n except ObjectAlreadyExists:\n console.print(\n (\n f\"Work pool with name {work_pool_name!r} already exists, please use\"\n \" a different name.\"\n ),\n style=\"red\",\n )\n return\n\n console.print(\n f\"Work pool {work_pool.name} created!\",\n style=\"green\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"You see your new work pool in the UI at\"\n f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n )\n\n deploy_script = (\n \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n )\n if not hasattr(self, \"image\"):\n deploy_script = (\n \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n )\n console.print(\n \"\\nYou can deploy a flow to this work pool by calling\"\n f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n )\n console.print(\n \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n )\n console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n\n @abc.abstractmethod\n async def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n ) -> InfrastructureResult:\n \"\"\"\n Run the infrastructure.\n\n If provided a `task_status`, the status will be reported as started when the\n infrastructure is successfully created. 
The status return value will be an\n identifier for the infrastructure.\n\n The call will then monitor the created infrastructure, returning a result at\n the end containing a status code indicating if the infrastructure exited cleanly\n or encountered an error.\n \"\"\"\n # Note: implementations should include `sync_compatible`\n\n @abc.abstractmethod\n def preview(self) -> str:\n \"\"\"\n View a preview of the infrastructure that would be run.\n \"\"\"\n\n @property\n def logger(self):\n return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n @property\n def is_using_a_runner(self):\n return self.command is not None and \"prefect flow-run execute\" in shlex.join(\n self.command\n )\n\n @classmethod\n def _base_environment(cls) -> Dict[str, str]:\n \"\"\"\n Environment variables that should be passed to all created infrastructure.\n\n These values should be overridable with the `env` field.\n \"\"\"\n return get_current_settings().to_environment_variables(exclude_unset=True)\n\n def prepare_for_flow_run(\n self: Self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"Deployment\"] = None,\n flow: Optional[\"Flow\"] = None,\n ) -> Self:\n \"\"\"\n Return an infrastructure block that is prepared to execute a flow run.\n \"\"\"\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n return self.copy(\n update={\n \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n \"labels\": {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n },\n \"name\": self.name or flow_run.name,\n \"command\": self.command or self._base_flow_run_command(),\n }\n )\n\n @staticmethod\n def _base_flow_run_command() -> List[str]:\n \"\"\"\n Generate a command for a flow run job.\n \"\"\"\n if experiment_enabled(\"enhanced_cancellation\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Enhanced flow run cancellation\",\n group=\"enhanced_cancellation\",\n help=\"\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n return [\"prefect\", \"flow-run\", \"execute\"]\n\n return [\"python\", \"-m\", \"prefect.engine\"]\n\n @staticmethod\n def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of labels for a flow run job.\n \"\"\"\n return {\n \"prefect.io/flow-run-id\": str(flow_run.id),\n \"prefect.io/flow-run-name\": flow_run.name,\n \"prefect.io/version\": prefect.__version__,\n }\n\n @staticmethod\n def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of environment variables for a flow run job.\n \"\"\"\n environment = {}\n environment[\"PREFECT__FLOW_RUN_ID\"] = str(flow_run.id)\n return environment\n\n @staticmethod\n def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n labels = {\n \"prefect.io/deployment-name\": deployment.name,\n }\n if deployment.updated is not None:\n labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n \"utc\"\n ).to_iso8601_string()\n return labels\n\n @staticmethod\n def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n return {\n \"prefect.io/flow-name\": flow.name,\n }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run
","text":"Return an infrastructure block that is prepared to execute a flow run.
Source code inprefect/infrastructure/base.py
def prepare_for_flow_run(\n self: Self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"Deployment\"] = None,\n flow: Optional[\"Flow\"] = None,\n) -> Self:\n \"\"\"\n Return an infrastructure block that is prepared to execute a flow run.\n \"\"\"\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n return self.copy(\n update={\n \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n \"labels\": {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n },\n \"name\": self.name or flow_run.name,\n \"command\": self.command or self._base_flow_run_command(),\n }\n )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.preview","title":"preview
abstractmethod
","text":"View a preview of the infrastructure that would be run.
Source code inprefect/infrastructure/base.py
@abc.abstractmethod\ndef preview(self) -> str:\n \"\"\"\n View a preview of the infrastructure that would be run.\n \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.publish_as_work_pool","title":"publish_as_work_pool
async
","text":"Creates a work pool configured to use the given block as the job creator.
Used to migrate from an agent setup to a worker setup.
Parameters:
Name Type Description Defaultwork_pool_name
Optional[str]
The name to give to the created work pool. If not provided, the name of the current block will be used.
None
Source code in prefect/infrastructure/base.py
@sync_compatible\nasync def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n \"\"\"\n Creates a work pool configured to use the given block as the job creator.\n\n Used to migrate from a agents setup to a worker setup.\n\n Args:\n work_pool_name: The name to give to the created work pool. If not provided, the name of the current\n block will be used.\n \"\"\"\n\n base_job_template = await self.generate_work_pool_base_job_template()\n work_pool_name = work_pool_name or self._block_document_name\n\n if work_pool_name is None:\n raise ValueError(\n \"`work_pool_name` must be provided if the block has not been saved.\"\n )\n\n console = Console()\n\n try:\n async with prefect.get_client() as client:\n work_pool = await client.create_work_pool(\n work_pool=WorkPoolCreate(\n name=work_pool_name,\n type=self.get_corresponding_worker_type(),\n base_job_template=base_job_template,\n )\n )\n except ObjectAlreadyExists:\n console.print(\n (\n f\"Work pool with name {work_pool_name!r} already exists, please use\"\n \" a different name.\"\n ),\n style=\"red\",\n )\n return\n\n console.print(\n f\"Work pool {work_pool.name} created!\",\n style=\"green\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"You see your new work pool in the UI at\"\n f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n )\n\n deploy_script = (\n \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n )\n if not hasattr(self, \"image\"):\n deploy_script = (\n \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n )\n console.print(\n \"\\nYou can deploy a flow to this work pool by calling\"\n f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n )\n console.print(\n \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n )\n console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n
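Example: A sketch migrating a saved infrastructure block to a work pool; the block and pool names are illustrative:
from prefect.infrastructure import DockerContainer\n\nblock = DockerContainer.load(\"my-docker-block\")\nblock.publish_as_work_pool(work_pool_name=\"my-docker-pool\")\n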
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.run","title":"run
abstractmethod
async
","text":"Run the infrastructure.
If provided a task_status
, the status will be reported as started when the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.
The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.
Source code inprefect/infrastructure/base.py
@abc.abstractmethod\nasync def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n \"\"\"\n Run the infrastructure.\n\n If provided a `task_status`, the status will be reported as started when the\n infrastructure is successfully created. The status return value will be an\n identifier for the infrastructure.\n\n The call will then monitor the created infrastructure, returning a result at\n the end containing a status code indicating if the infrastructure exited cleanly\n or encountered an error.\n \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig
","text":" Bases: Block
Stores configuration for interaction with Kubernetes clusters.
See from_file
for creation.
Attributes:
Name Type Descriptionconfig
Dict
The entire loaded YAML contents of a kubectl config file
context_name
str
The name of the kubectl context to use
ExampleLoad a saved Kubernetes cluster config:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n \"\"\"\n Stores configuration for interaction with Kubernetes clusters.\n\n See `from_file` for creation.\n\n Attributes:\n config: The entire loaded YAML contents of a kubectl config file\n context_name: The name of the kubectl context to use\n\n Example:\n Load a saved Kubernetes cluster config:\n ```python\n from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Kubernetes Cluster Config\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n config: Dict = Field(\n default=..., description=\"The entire contents of a kubectl config file.\"\n )\n context_name: str = Field(\n default=..., description=\"The name of the kubectl context to use.\"\n )\n\n @validator(\"config\", pre=True)\n def parse_yaml_config(cls, value):\n if isinstance(value, str):\n return yaml.safe_load(value)\n return value\n\n @classmethod\n def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n\n def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n\n def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.configure_client","title":"configure_client
","text":"Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
Source code inprefect/blocks/kubernetes.py
def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.from_file","title":"from_file
classmethod
","text":"Create a cluster config from the a Kubernetes config file.
By default, the current context in the default Kubernetes config file will be used.
An alternative file or context may be specified.
The entire config file will be loaded and stored.
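A minimal sketch of both call styles; the path and context name here are illustrative:

from pathlib import Path

from prefect.blocks.kubernetes import KubernetesClusterConfig

# Default kubectl config file and its current context
cluster_config = KubernetesClusterConfig.from_file()

# Or an explicit file and context
cluster_config = KubernetesClusterConfig.from_file(
    path=Path("~/.kube/config"), context_name="my-context"
)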
Source code inprefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.get_api_client","title":"get_api_client
","text":"Returns a Kubernetes API client for this cluster config.
Source code inprefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob
","text":" Bases: Infrastructure
Runs a command as a Kubernetes Job.
For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob
infrastructure concepts.
Attributes:
Name Type Descriptioncluster_config
Optional[KubernetesClusterConfig]
An optional Kubernetes cluster config to use for this job.
command
Optional[List[str]]
A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
customizations
JsonPatch
A list of JSON 6902 patches to apply to the base Job manifest.
env
Dict[str, Optional[str]]
Environment variables to set for the container.
finished_job_ttl
Optional[int]
The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.
image
Optional[str]
An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.
image_pull_policy
Optional[KubernetesImagePullPolicy]
The Kubernetes image pull policy to use for job containers.
job
KubernetesManifest
The base manifest for the Kubernetes Job.
job_watch_timeout_seconds
Optional[int]
Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None
, which means no timeout will be enforced.
labels
Dict[str, str]
An optional dictionary of labels to add to the job.
name
Optional[str]
An optional name for the job.
namespace
Optional[str]
An optional string signifying the Kubernetes namespace to use.
pod_watch_timeout_seconds
int
Number of seconds to watch for pod creation before timing out (default 60).
service_account_name
Optional[str]
An optional string specifying which Kubernetes service account to use.
stream_output
bool
If set, stream output from the job to local standard output.
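As a minimal sketch (all values illustrative; note the deprecation notice in the source below, which points to the Kubernetes worker from prefect-kubernetes instead):

from prefect.infrastructure import KubernetesJob

k8s_job = KubernetesJob(
    image="docker.io/prefecthq/prefect:2-latest",
    namespace="prefect",
    finished_job_ttl=300,
    customizations=[
        {
            "op": "add",
            "path": "/spec/template/spec/containers/0/resources",
            "value": {"limits": {"memory": "1Gi"}},
        }
    ],
)
print(k8s_job.preview())  # render the manifest that would be submitted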
Source code inprefect/infrastructure/kubernetes.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the Kubernetes worker from prefect-kubernetes instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass KubernetesJob(Infrastructure):\n \"\"\"\n Runs a command as a Kubernetes Job.\n\n For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n Attributes:\n cluster_config: An optional Kubernetes cluster config to use for this job.\n command: A list of strings specifying the command to run in the container to\n start the flow run. In most cases you should not override this.\n customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n env: Environment variables to set for the container.\n finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n manually removed.\n image: An optional string specifying the image reference of a container image\n to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n Defaults to the Prefect image.\n image_pull_policy: The Kubernetes image pull policy to use for job containers.\n job: The base manifest for the Kubernetes Job.\n job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n before marking it as crashed. Defaults to `None`, which means no timeout will be enforced.\n labels: An optional dictionary of labels to add to the job.\n name: An optional name for the job.\n namespace: An optional string signifying the Kubernetes namespace to use.\n pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n service_account_name: An optional string specifying which Kubernetes service account to use.\n stream_output: If set, stream output from the job to local standard output.\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n type: Literal[\"kubernetes-job\"] = Field(\n default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n )\n # shortcuts for the most common user-serviceable settings\n image: Optional[str] = Field(\n default=None,\n description=(\n \"The image reference of a container image to use for the job, for example,\"\n \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n \" the Kubernetes documentation and uses the latest version of Prefect by\"\n \" default, unless an image is already present in a provided job manifest.\"\n ),\n )\n namespace: Optional[str] = Field(\n default=None,\n description=(\n \"The Kubernetes namespace to use for this job. 
Defaults to 'default' \"\n \"unless a namespace is already present in a provided job manifest.\"\n ),\n )\n service_account_name: Optional[str] = Field(\n default=None, description=\"The Kubernetes service account to use for this job.\"\n )\n image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n default=None,\n description=\"The Kubernetes image pull policy to use for job containers.\",\n )\n\n # connection to a cluster\n cluster_config: Optional[KubernetesClusterConfig] = Field(\n default=None, description=\"The Kubernetes cluster config to use for this job.\"\n )\n\n # settings allowing full customization of the Job\n job: KubernetesManifest = Field(\n default_factory=lambda: KubernetesJob.base_job_manifest(),\n description=\"The base manifest for the Kubernetes Job.\",\n title=\"Base Job Manifest\",\n )\n customizations: JsonPatch = Field(\n default_factory=lambda: JsonPatch([]),\n description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n )\n\n # controls the behavior of execution\n job_watch_timeout_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"Number of seconds to wait for the job to complete before marking it as\"\n \" crashed. Defaults to `None`, which means no timeout will be enforced.\"\n ),\n )\n pod_watch_timeout_seconds: int = Field(\n default=60,\n description=\"Number of seconds to watch for pod creation before timing out.\",\n )\n stream_output: bool = Field(\n default=True,\n description=(\n \"If set, output will be streamed from the job to local standard output.\"\n ),\n )\n finished_job_ttl: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to retain jobs after completion. If set, finished\"\n \" jobs will be cleaned up by Kubernetes after the given delay. If None\"\n \" (default), jobs will need to be manually removed.\"\n ),\n )\n\n # internal-use only right now\n _api_dns_name: Optional[str] = None # Replaces 'localhost' in API URL\n\n _block_type_name = \"Kubernetes Job\"\n\n @validator(\"job\")\n def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n if missing_paths:\n raise ValueError(\n \"Job is missing required attributes at the following paths: \"\n f\"{', '.join(missing_paths)}\"\n )\n return value\n\n @validator(\"job\")\n def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n incompatible = sorted(\n [\n f\"{op['path']} must have value {op['value']!r}\"\n for op in patch\n if op[\"op\"] == \"replace\"\n ]\n )\n if incompatible:\n raise ValueError(\n \"Job has incompatible values for the following attributes: \"\n f\"{', '.join(incompatible)}\"\n )\n return value\n\n @validator(\"customizations\", pre=True)\n def cast_customizations_to_a_json_patch(\n cls, value: Union[List[Dict], JsonPatch, str]\n ) -> JsonPatch:\n if isinstance(value, list):\n return JsonPatch(value)\n elif isinstance(value, str):\n try:\n return JsonPatch(json.loads(value))\n except json.JSONDecodeError as exc:\n raise ValueError(\n f\"Unable to parse customizations as JSON: {value}. 
Please make sure\"\n \" that the provided value is a valid JSON string.\"\n ) from exc\n return value\n\n @root_validator\n def default_namespace(cls, values):\n job = values.get(\"job\")\n\n namespace = values.get(\"namespace\")\n job_namespace = job[\"metadata\"].get(\"namespace\") if job else None\n\n if not namespace and not job_namespace:\n values[\"namespace\"] = \"default\"\n\n return values\n\n @root_validator\n def default_image(cls, values):\n job = values.get(\"job\")\n image = values.get(\"image\")\n job_image = (\n job[\"spec\"][\"template\"][\"spec\"][\"containers\"][0].get(\"image\")\n if job\n else None\n )\n\n if not image and not job_image:\n values[\"image\"] = get_prefect_image_name()\n\n return values\n\n # Support serialization of the 'JsonPatch' type\n class Config:\n arbitrary_types_allowed = True\n json_encoders = {JsonPatch: lambda p: p.patch}\n\n def dict(self, *args, **kwargs) -> Dict:\n d = super().dict(*args, **kwargs)\n d[\"customizations\"] = self.customizations.patch\n return d\n\n @classmethod\n def base_job_manifest(cls) -> KubernetesManifest:\n \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n return {\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"metadata\": {\"labels\": {}},\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"parallelism\": 1,\n \"completions\": 1,\n \"restartPolicy\": \"Never\",\n \"containers\": [\n {\n \"name\": \"prefect-job\",\n \"env\": [],\n }\n ],\n }\n }\n },\n }\n\n # Note that we're using the yaml package to load both YAML and JSON files below.\n # This works because YAML is a strict superset of JSON:\n #\n # > The YAML 1.23 specification was published in 2009. Its primary focus was\n # > making YAML a strict superset of JSON. It also removed many of the problematic\n # > implicit typing recommendations.\n #\n # https://yaml.org/spec/1.2.2/#12-yaml-history\n\n @classmethod\n def job_from_file(cls, filename: str) -> KubernetesManifest:\n \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return yaml.load(f, yaml.SafeLoader)\n\n @classmethod\n def customize_from_file(cls, filename: str) -> JsonPatch:\n \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n @sync_compatible\n async def run(\n self,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> KubernetesJobResult:\n if not self.command:\n raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n self._configure_kubernetes_library_client()\n manifest = self.build_job()\n job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n # Indicate that the job has started\n if task_status is not None:\n task_status.started(pid)\n\n # Monitor the job until completion\n status_code = await run_sync_in_worker_thread(\n self._watch_job, job.metadata.name\n )\n return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n self._configure_kubernetes_library_client()\n job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n infrastructure_pid\n )\n\n if not job_namespace == self.namespace:\n raise InfrastructureNotAvailable(\n f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n f\"{job_namespace!r} but this block is configured to use \"\n 
f\"{self.namespace!r}.\"\n )\n\n current_cluster_uid = self._get_cluster_uid()\n if job_cluster_uid != current_cluster_uid:\n raise InfrastructureNotAvailable(\n f\"Unable to kill job {job_name!r}: The job is running on another \"\n \"cluster.\"\n )\n\n with self.get_batch_client() as batch_client:\n try:\n batch_client.delete_namespaced_job(\n name=job_name,\n namespace=job_namespace,\n grace_period_seconds=grace_seconds,\n # Foreground propagation deletes dependent objects before deleting owner objects.\n # This ensures that the pods are cleaned up before the job is marked as deleted.\n # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n propagation_policy=\"Foreground\",\n )\n except kubernetes.client.exceptions.ApiException as exc:\n if exc.status == 404:\n raise InfrastructureNotFound(\n f\"Unable to kill job {job_name!r}: The job was not found.\"\n ) from exc\n else:\n raise\n\n def preview(self):\n return yaml.dump(self.build_job())\n\n def get_corresponding_worker_type(self):\n return \"kubernetes\"\n\n async def generate_work_pool_base_job_template(self):\n from prefect.workers.utilities import (\n get_default_base_job_template_for_infrastructure_type,\n )\n\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n self.get_corresponding_worker_type()\n )\n assert (\n base_job_template is not None\n ), \"Failed to retrieve default base job template.\"\n for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n if key == \"command\":\n base_job_template[\"variables\"][\"properties\"][\"command\"][\n \"default\"\n ] = shlex.join(value)\n elif key in [\n \"type\",\n \"block_type_slug\",\n \"_block_document_id\",\n \"_block_document_name\",\n \"_is_anonymous\",\n \"job\",\n \"customizations\",\n ]:\n continue\n elif key == \"image_pull_policy\":\n base_job_template[\"variables\"][\"properties\"][\"image_pull_policy\"][\n \"default\"\n ] = value.value\n elif key == \"cluster_config\":\n base_job_template[\"variables\"][\"properties\"][\"cluster_config\"][\n \"default\"\n ] = {\n \"$ref\": {\n \"block_document_id\": str(self.cluster_config._block_document_id)\n }\n }\n elif key in base_job_template[\"variables\"][\"properties\"]:\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n else:\n self.logger.warning(\n f\"Variable {key!r} is not supported by Kubernetes work pools.\"\n \" Skipping.\"\n )\n\n custom_job_manifest = self.dict(exclude_unset=True, exclude_defaults=True).get(\n \"job\"\n )\n if custom_job_manifest:\n job_manifest = self.build_job()\n else:\n job_manifest = copy.deepcopy(\n base_job_template[\"job_configuration\"][\"job_manifest\"]\n )\n job_manifest = self.customizations.apply(job_manifest)\n base_job_template[\"job_configuration\"][\"job_manifest\"] = job_manifest\n\n return base_job_template\n\n def build_job(self) -> KubernetesManifest:\n \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n job_manifest = copy.copy(self.job)\n job_manifest = self._shortcut_customizations().apply(job_manifest)\n job_manifest = self.customizations.apply(job_manifest)\n return job_manifest\n\n @contextmanager\n def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n with kubernetes.client.ApiClient() as client:\n try:\n yield kubernetes.client.BatchV1Api(api_client=client)\n finally:\n client.rest_client.pool_manager.clear()\n\n @contextmanager\n def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n with kubernetes.client.ApiClient() as 
client:\n try:\n yield kubernetes.client.CoreV1Api(api_client=client)\n finally:\n client.rest_client.pool_manager.clear()\n\n def _get_infrastructure_pid(self, job: \"V1Job\") -> str:\n \"\"\"\n Generates a Kubernetes infrastructure PID.\n\n The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n \"\"\"\n cluster_uid = self._get_cluster_uid()\n pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n return pid\n\n def _parse_infrastructure_pid(\n self, infrastructure_pid: str\n ) -> Tuple[str, str, str]:\n \"\"\"\n Parse a Kubernetes infrastructure PID into its component parts.\n\n Returns a cluster UID, namespace, and job name.\n \"\"\"\n cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n return cluster_uid, namespace, job_name\n\n def _get_cluster_uid(self) -> str:\n \"\"\"\n Gets a unique id for the current cluster being used.\n\n There is no real unique identifier for a cluster. However, the `kube-system`\n namespace is immutable and has a persistence UID that we use instead.\n\n PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n namespace cannot be read e.g. when a cluster role cannot be created. If set,\n this variable will be used and we will not attempt to read the `kube-system`\n namespace.\n\n See https://github.com/kubernetes/kubernetes/issues/44954\n \"\"\"\n # Default to an environment variable\n env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n if env_cluster_uid:\n return env_cluster_uid\n\n # Read the UID from the cluster namespace\n with self.get_client() as client:\n namespace = client.read_namespace(\"kube-system\")\n cluster_uid = namespace.metadata.uid\n\n return cluster_uid\n\n def _configure_kubernetes_library_client(self) -> None:\n \"\"\"\n Set the correct kubernetes client configuration.\n\n WARNING: This action is not threadsafe and may override the configuration\n specified by another `KubernetesJob` instance.\n \"\"\"\n # TODO: Investigate returning a configured client so calls on other threads\n # will not invalidate the config needed here\n\n # if a k8s cluster block is provided to the flow runner, use that\n if self.cluster_config:\n self.cluster_config.configure_client()\n else:\n # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n # work, try to load the configuration from the local environment, allowing\n # any further ConfigExceptions to bubble up.\n try:\n kubernetes.config.load_incluster_config()\n except kubernetes.config.ConfigException:\n kubernetes.config.load_kube_config()\n\n def _shortcut_customizations(self) -> JsonPatch:\n \"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n image and namespace, which we offer as top-level parameters (with sensible\n default values)\"\"\"\n shortcuts = []\n\n if self.namespace:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/namespace\",\n \"value\": self.namespace,\n }\n )\n\n if self.image:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/image\",\n \"value\": self.image,\n }\n )\n\n shortcuts += [\n {\n \"op\": \"add\",\n \"path\": (\n f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n ),\n \"value\": self._slugify_label_value(value),\n }\n for key, value in self.labels.items()\n ]\n\n shortcuts += [\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/env/-\",\n \"value\": {\"name\": key, \"value\": value},\n }\n for key, value in self._get_environment_variables().items()\n ]\n\n if self.image_pull_policy:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n \"value\": self.image_pull_policy.value,\n }\n )\n\n if self.service_account_name:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/serviceAccountName\",\n \"value\": self.service_account_name,\n }\n )\n\n if self.finished_job_ttl is not None:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/ttlSecondsAfterFinished\",\n \"value\": self.finished_job_ttl,\n }\n )\n\n if self.command:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/args\",\n \"value\": self.command,\n }\n )\n\n if self.name:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/generateName\",\n \"value\": self._slugify_name(self.name) + \"-\",\n }\n )\n else:\n # Generate name is required\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/generateName\",\n \"value\": (\n \"prefect-job-\"\n # We generate a name using a hash of the primary job settings\n + stable_hash(\n *self.command,\n *self.env.keys(),\n *[v for v in self.env.values() if v is not None],\n )\n + \"-\"\n ),\n }\n )\n\n return JsonPatch(shortcuts)\n\n def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n with self.get_batch_client() as batch_client:\n try:\n job = batch_client.read_namespaced_job(job_id, self.namespace)\n except kubernetes.client.exceptions.ApiException:\n self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n return None\n return job\n\n def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n \"\"\"Get the first running pod for a job.\"\"\"\n\n # Wait until we find a running pod for the job\n # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n watch = kubernetes.watch.Watch()\n self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n last_phase = None\n with self.get_client() as client:\n for event in watch.stream(\n func=client.list_namespaced_pod,\n namespace=self.namespace,\n label_selector=f\"job-name={job_name}\",\n timeout_seconds=self.pod_watch_timeout_seconds,\n ):\n phase = event[\"object\"].status.phase\n if phase != last_phase:\n self.logger.info(f\"Job {job_name!r}: Pod has status 
{phase!r}.\")\n\n if phase != \"Pending\":\n watch.stop()\n return event[\"object\"]\n\n last_phase = phase\n\n self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n def _watch_job(self, job_name: str) -> int:\n \"\"\"\n Watch a job.\n\n Return the final status code of the first container.\n \"\"\"\n self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n job = self._get_job(job_name)\n if not job:\n return -1\n\n pod = self._get_job_pod(job_name)\n if not pod:\n return -1\n\n # Calculate the deadline before streaming output\n deadline = (\n (time.monotonic() + self.job_watch_timeout_seconds)\n if self.job_watch_timeout_seconds is not None\n else None\n )\n\n if self.stream_output:\n with self.get_client() as client:\n logs = client.read_namespaced_pod_log(\n pod.metadata.name,\n self.namespace,\n follow=True,\n _preload_content=False,\n container=\"prefect-job\",\n )\n try:\n for log in logs.stream():\n print(log.decode().rstrip())\n\n # Check if we have passed the deadline and should stop streaming\n # logs\n remaining_time = (\n deadline - time.monotonic() if deadline else None\n )\n if deadline and remaining_time <= 0:\n break\n\n except Exception:\n self.logger.warning(\n (\n \"Error occurred while streaming logs - \"\n \"Job will continue to run but logs will \"\n \"no longer be streamed to stdout.\"\n ),\n exc_info=True,\n )\n\n with self.get_batch_client() as batch_client:\n # Check if the job is completed before beginning a watch\n job = batch_client.read_namespaced_job(\n name=job_name, namespace=self.namespace\n )\n completed = job.status.completion_time is not None\n\n while not completed:\n remaining_time = (\n math.ceil(deadline - time.monotonic()) if deadline else None\n )\n if deadline and remaining_time <= 0:\n self.logger.error(\n f\"Job {job_name!r}: Job did not complete within \"\n f\"timeout of {self.job_watch_timeout_seconds}s.\"\n )\n return -1\n\n watch = kubernetes.watch.Watch()\n # The kubernetes library will disable retries if the timeout kwarg is\n # present regardless of the value so we do not pass it unless given\n # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n timeout_seconds = (\n {\"timeout_seconds\": remaining_time} if deadline else {}\n )\n\n for event in watch.stream(\n func=batch_client.list_namespaced_job,\n field_selector=f\"metadata.name={job_name}\",\n namespace=self.namespace,\n **timeout_seconds,\n ):\n if event[\"type\"] == \"DELETED\":\n self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n completed = True\n elif event[\"object\"].status.completion_time:\n if not event[\"object\"].status.succeeded:\n # Job failed, exit while loop and return pod exit code\n self.logger.error(f\"Job {job_name!r}: Job failed.\")\n completed = True\n # Check if the job has reached its backoff limit\n # and stop watching if it has\n elif (\n event[\"object\"].spec.backoff_limit is not None\n and event[\"object\"].status.failed is not None\n and event[\"object\"].status.failed\n > event[\"object\"].spec.backoff_limit\n ):\n self.logger.error(\n f\"Job {job_name!r}: Job reached backoff limit.\"\n )\n completed = True\n # If the job has no backoff limit, check if it has failed\n # and stop watching if it has\n elif (\n not event[\"object\"].spec.backoff_limit\n and event[\"object\"].status.failed\n ):\n completed = True\n\n if completed:\n watch.stop()\n break\n\n with self.get_client() as core_client:\n # Get all pods for the job\n pods = 
core_client.list_namespaced_pod(\n namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n )\n # Get the status for only the most recently used pod\n pods.items.sort(\n key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n )\n most_recent_pod = pods.items[0] if pods.items else None\n first_container_status = (\n most_recent_pod.status.container_statuses[0]\n if most_recent_pod\n else None\n )\n if not first_container_status:\n self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n return -1\n\n # In some cases, such as spot instance evictions, the pod will be forcibly\n # terminated and not report a status correctly.\n elif (\n first_container_status.state is None\n or first_container_status.state.terminated is None\n or first_container_status.state.terminated.exit_code is None\n ):\n self.logger.error(\n f\"Could not determine exit code for {job_name!r}.\"\n \"Exit code will be reported as -1.\"\n \"First container status info did not report an exit code.\"\n f\"First container info: {first_container_status}.\"\n )\n return -1\n\n return first_container_status.state.terminated.exit_code\n\n def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n \"\"\"\n Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n cluster and return its name.\n \"\"\"\n with self.get_batch_client() as batch_client:\n job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n return job\n\n def _slugify_name(self, name: str) -> str:\n \"\"\"\n Slugify text for use as a name.\n\n Keeps only alphanumeric characters and dashes, and caps the length\n of the slug at 45 chars.\n\n The 45 character length allows room for the k8s utility\n \"generateName\" to generate a unique name from the slug while\n keeping the total length of a name below 63 characters, which is\n the limit for e.g. 
label names that follow RFC 1123 (hostnames) and\n RFC 1035 (domain names).\n\n Args:\n name: The name of the job\n\n Returns:\n the slugified job name\n \"\"\"\n slug = slugify(\n name,\n max_length=45, # Leave enough space for generateName\n regex_pattern=r\"[^a-zA-Z0-9-]+\",\n )\n\n # TODO: Handle the case that the name is an empty string after being\n # slugified.\n\n return slug\n\n def _slugify_label_key(self, key: str) -> str:\n \"\"\"\n Slugify text for use as a label key.\n\n Keys are composed of an optional prefix and name, separated by a slash (/).\n\n Keeps only alphanumeric characters, dashes, underscores, and periods.\n Limits the length of the label prefix to 253 characters.\n Limits the length of the label name to 63 characters.\n\n See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n Args:\n key: The label key\n\n Returns:\n The slugified label key\n \"\"\"\n if \"/\" in key:\n prefix, name = key.split(\"/\", maxsplit=1)\n else:\n prefix = None\n name = key\n\n name_slug = (\n slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n \"_-.\" # Must start or end with alphanumeric characters\n )\n or name\n )\n # Fallback to the original if we end up with an empty slug, this will allow\n # Kubernetes to throw the validation error\n\n if prefix:\n prefix_slug = (\n slugify(\n prefix,\n max_length=253,\n regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n ).strip(\"_-.\") # Must start or end with alphanumeric characters\n or prefix\n )\n\n return f\"{prefix_slug}/{name_slug}\"\n\n return name_slug\n\n def _slugify_label_value(self, value: str) -> str:\n \"\"\"\n Slugify text for use as a label value.\n\n Keeps only alphanumeric characters, dashes, underscores, and periods.\n Limits the total length of label text to below 63 characters.\n\n See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n Args:\n value: The text for the label\n\n Returns:\n The slugified value\n \"\"\"\n slug = (\n slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n \"_-.\" # Must start or end with alphanumeric characters\n )\n or value\n )\n # Fallback to the original if we end up with an empty slug, this will allow\n # Kubernetes to throw the validation error\n\n return slug\n\n def _get_environment_variables(self):\n # If the API URL has been set by the base environment rather than the by the\n # user, update the value to ensure connectivity when using a bridge network by\n # updating local connections to use the internal host\n env = {**self._base_environment(), **self.env}\n\n if (\n \"PREFECT_API_URL\" in env\n and \"PREFECT_API_URL\" not in self.env\n and self._api_dns_name\n ):\n env[\"PREFECT_API_URL\"] = (\n env[\"PREFECT_API_URL\"]\n .replace(\"localhost\", self._api_dns_name)\n .replace(\"127.0.0.1\", self._api_dns_name)\n )\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.base_job_manifest","title":"base_job_manifest
classmethod
","text":"Produces the bare minimum allowed Job manifest
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n return {\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"metadata\": {\"labels\": {}},\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"parallelism\": 1,\n \"completions\": 1,\n \"restartPolicy\": \"Never\",\n \"containers\": [\n {\n \"name\": \"prefect-job\",\n \"env\": [],\n }\n ],\n }\n }\n },\n }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.build_job","title":"build_job
","text":"Builds the Kubernetes Job Manifest
Source code inprefect/infrastructure/kubernetes.py
def build_job(self) -> KubernetesManifest:\n \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n job_manifest = copy.copy(self.job)\n job_manifest = self._shortcut_customizations().apply(job_manifest)\n job_manifest = self.customizations.apply(job_manifest)\n return job_manifest\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.customize_from_file","title":"customize_from_file
classmethod
","text":"Load an RFC 6902 JSON patch from a YAML or JSON file.
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.job_from_file","title":"job_from_file
classmethod
","text":"Load a Kubernetes Job manifest from a YAML or JSON file.
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return yaml.load(f, yaml.SafeLoader)\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult
","text":" Bases: InfrastructureResult
Contains information about the final state of a completed Kubernetes Job
Source code inprefect/infrastructure/kubernetes.py
class KubernetesJobResult(InfrastructureResult):\n \"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process
","text":" Bases: Infrastructure
Run a command in a new process.
Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
Attributes:
Name Type Descriptioncommand
A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
env
Environment variables to set for the new process.
labels
Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.
name
A name for the process. For display purposes only.
stream_output
bool
Whether to stream output to local stdout.
working_dir
Union[str, Path, None]
Working directory where the process should be opened. If not set, a temporary directory will be used.
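A minimal sketch (command and variable illustrative; the deprecation notice in the source below points to the process worker instead):

from prefect.infrastructure import Process

process = Process(
    command=["python", "-c", "print('hello from a subprocess')"],
    env={"GREETING": "hi"},
    stream_output=True,
)
result = process.run()  # sync-compatible; blocks until the process exits
print(result.status_code)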
Source code inprefect/infrastructure/process.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the process worker instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Process(Infrastructure):\n \"\"\"\n Run a command in a new process.\n\n Current environment variables and Prefect settings will be included in the created\n process. Configured environment variables will override any current environment\n variables.\n\n Attributes:\n command: A list of strings specifying the command to run in the container to\n start the flow run. In most cases you should not override this.\n env: Environment variables to set for the new process.\n labels: Labels for the process. Labels are for metadata purposes only and\n cannot be attached to the process itself.\n name: A name for the process. For display purposes only.\n stream_output: Whether to stream output to local stdout.\n working_dir: Working directory where the process should be opened. If not set,\n a tmp directory will be used.\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/356e6766a91baf20e1d08bbe16e8b5aaef4d8643-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n type: Literal[\"process\"] = Field(\n default=\"process\", description=\"The type of infrastructure.\"\n )\n stream_output: bool = Field(\n default=True,\n description=(\n \"If set, output will be streamed from the process to local standard output.\"\n ),\n )\n working_dir: Union[str, Path, None] = Field(\n default=None,\n description=(\n \"If set, the process will open within the specified path as the working\"\n \" directory. Otherwise, a temporary directory will be created.\"\n ),\n ) # Underlying accepted types are str, bytes, PathLike[str], None\n\n @sync_compatible\n async def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n ) -> \"ProcessResult\":\n if not self.command:\n raise ValueError(\"Process cannot be run with empty command.\")\n\n _use_threaded_child_watcher()\n display_name = f\" {self.name!r}\" if self.name else \"\"\n\n # Open a subprocess to execute the flow run\n self.logger.info(f\"Opening process{display_name}...\")\n working_dir_ctx = (\n tempfile.TemporaryDirectory(suffix=\"prefect\")\n if not self.working_dir\n else contextlib.nullcontext(self.working_dir)\n )\n with working_dir_ctx as working_dir:\n self.logger.debug(\n f\"Process{display_name} running command: {' '.join(self.command)} in\"\n f\" {working_dir}\"\n )\n\n # We must add creationflags to a dict so it is only passed as a function\n # parameter on Windows, because the presence of creationflags causes\n # errors on Unix even if set to None\n kwargs: Dict[str, object] = {}\n if sys.platform == \"win32\":\n kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n process = await run_process(\n self.command,\n stream_output=self.stream_output,\n task_status=task_status,\n task_status_handler=_infrastructure_pid_from_process,\n env=self._get_environment_variables(),\n cwd=working_dir,\n **kwargs,\n )\n\n # Use the pid for display if no name was given\n display_name = display_name or f\" {process.pid}\"\n\n if process.returncode:\n help_message = None\n if process.returncode == -9:\n help_message = (\n \"This indicates that the process exited due to a SIGKILL signal. 
\"\n \"Typically, this is either caused by manual cancellation or \"\n \"high memory usage causing the operating system to \"\n \"terminate the process.\"\n )\n if process.returncode == -15:\n help_message = (\n \"This indicates that the process exited due to a SIGTERM signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n elif process.returncode == 247:\n help_message = (\n \"This indicates that the process was terminated due to high \"\n \"memory usage.\"\n )\n elif (\n sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n ):\n help_message = (\n \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n\n self.logger.error(\n f\"Process{display_name} exited with status code: {process.returncode}\"\n + (f\"; {help_message}\" if help_message else \"\")\n )\n else:\n self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n return ProcessResult(\n status_code=process.returncode, identifier=str(process.pid)\n )\n\n async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n if hostname != socket.gethostname():\n raise InfrastructureNotAvailable(\n f\"Unable to kill process {pid!r}: The process is running on a different\"\n f\" host {hostname!r}.\"\n )\n\n # In a non-windows environment first send a SIGTERM, then, after\n # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n if sys.platform == \"win32\":\n try:\n os.kill(pid, signal.CTRL_BREAK_EVENT)\n except (ProcessLookupError, WindowsError):\n raise InfrastructureNotFound(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n else:\n try:\n os.kill(pid, signal.SIGTERM)\n except ProcessLookupError:\n raise InfrastructureNotFound(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n\n # Throttle how often we check if the process is still alive to keep\n # from making too many system calls in a short period of time.\n check_interval = max(grace_seconds / 10, 1)\n\n with anyio.move_on_after(grace_seconds):\n while True:\n await anyio.sleep(check_interval)\n\n # Detect if the process is still alive. 
If not do an early\n # return as the process respected the SIGTERM from above.\n try:\n os.kill(pid, 0)\n except ProcessLookupError:\n return\n\n try:\n os.kill(pid, signal.SIGKILL)\n except OSError:\n # We shouldn't ever end up here, but it's possible that the\n # process ended right after the check above.\n return\n\n def preview(self):\n environment = self._get_environment_variables(include_os_environ=False)\n return \" \\\\\\n\".join(\n [f\"{key}={value}\" for key, value in environment.items()]\n + [\" \".join(self.command)]\n )\n\n def _get_environment_variables(self, include_os_environ: bool = True):\n os_environ = os.environ if include_os_environ else {}\n # The base environment must override the current environment or\n # the Prefect settings context may not be respected\n env = {**os_environ, **self._base_environment(), **self.env}\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n\n def _base_flow_run_command(self):\n return [get_sys_executable(), \"-m\", \"prefect.engine\"]\n\n def get_corresponding_worker_type(self):\n return \"process\"\n\n async def generate_work_pool_base_job_template(self):\n from prefect.workers.utilities import (\n get_default_base_job_template_for_infrastructure_type,\n )\n\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n self.get_corresponding_worker_type(),\n )\n assert (\n base_job_template is not None\n ), \"Failed to generate default base job template for Process worker.\"\n for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n if key == \"command\":\n base_job_template[\"variables\"][\"properties\"][\"command\"][\n \"default\"\n ] = shlex.join(value)\n elif key in [\n \"type\",\n \"block_type_slug\",\n \"_block_document_id\",\n \"_block_document_name\",\n \"_is_anonymous\",\n ]:\n continue\n elif key in base_job_template[\"variables\"][\"properties\"]:\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n else:\n self.logger.warning(\n f\"Variable {key!r} is not supported by Process work pools.\"\n \" Skipping.\"\n )\n\n return base_job_template\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult
","text":" Bases: InfrastructureResult
Contains information about the final state of a completed process
Source code inprefect/infrastructure/process.py
class ProcessResult(InfrastructureResult):\n \"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/logging/","title":"Logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging","title":"prefect.logging
","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
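For example — a small sketch, with the logger name illustrative:

from prefect.logging import get_logger

logger = get_logger("my_module")  # becomes a child of the "prefect" logger
logger.debug("Internal detail")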
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Defaultcontext
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type DescriptionRuntimeError
If no context can be found
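For example — a minimal sketch of use inside flow and task runs:

from prefect import flow, task, get_run_logger

@task
def my_task():
    get_run_logger().info("Logged with task run metadata attached")

@flow
def my_flow():
    get_run_logger().info("Logged with flow run metadata attached")
    my_task()

# Calling my_flow() emits both records to the console and the API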
Source code inprefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
","tags":["Python API","logging"]},{"location":"api-ref/prefect/manifests/","title":"prefect.manifests","text":"","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests","title":"prefect.manifests
","text":"Manifests are portable descriptions of one or more workflows within a given directory structure.
They are the foundational building blocks for defining Flow Deployments.
","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests.Manifest","title":"Manifest
","text":" Bases: BaseModel
A JSON representation of a flow.
Source code inprefect/manifests.py
class Manifest(BaseModel):\n \"\"\"A JSON representation of a flow.\"\"\"\n\n flow_name: str = Field(default=..., description=\"The name of the flow.\")\n import_path: str = Field(\n default=..., description=\"The relative import path for the flow.\"\n )\n parameter_openapi_schema: ParameterSchema = Field(\n default=..., description=\"The OpenAPI schema of the flow's parameters.\"\n )\n
","tags":["Python API","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers
","text":"Serializer implementations for converting objects to bytes and bytes to objects.
All serializers are based on the Serializer
class and include a type
string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer
with the string \"json\". Some serializers support additional settings for configuration of serialization. These are stored on the instance so the same settings can be used to load saved objects.
All serializers must implement dumps
and loads
which convert objects to bytes and bytes to an object respectively.
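For example, a round trip with the JSON serializer — a minimal sketch:

from prefect.serializers import JSONSerializer

serializer = JSONSerializer()
blob = serializer.dumps({"answer": 42})  # UTF-8 encoded bytes
assert serializer.loads(blob) == {"answer": 42}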
CompressedJSONSerializer
","text":" Bases: CompressedSerializer
A compressed serializer preconfigured to use the json serializer.
Source code inprefect/serializers.py
class CompressedJSONSerializer(CompressedSerializer):\n \"\"\"\n A compressed serializer preconfigured to use the json serializer.\n \"\"\"\n\n type: Literal[\"compressed/json\"] = \"compressed/json\"\n serializer: Serializer = pydantic.Field(default_factory=JSONSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer
","text":" Bases: CompressedSerializer
A compressed serializer preconfigured to use the pickle serializer.
Source code inprefect/serializers.py
class CompressedPickleSerializer(CompressedSerializer):\n \"\"\"\n A compressed serializer preconfigured to use the pickle serializer.\n \"\"\"\n\n type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n serializer: Serializer = pydantic.Field(default_factory=PickleSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer
","text":" Bases: Serializer
Wraps another serializer, compressing its output. Uses lzma
by default. See compressionlib
for using alternative libraries.
Attributes:
Name Type Descriptionserializer
Serializer
The serializer to use before compression.
compressionlib
str
The import path of a compression module to use. Must have methods compress(bytes) -> bytes
and decompress(bytes) -> bytes
.
level
str
If not null, the level of compression to pass to compress
.
prefect/serializers.py
class CompressedSerializer(Serializer):\n \"\"\"\n Wraps another serializer, compressing its output.\n Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n Attributes:\n serializer: The serializer to use before compression.\n compressionlib: The import path of a compression module to use.\n Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n level: If not null, the level of compression to pass to `compress`.\n \"\"\"\n\n type: Literal[\"compressed\"] = \"compressed\"\n\n serializer: Serializer\n compressionlib: str = \"lzma\"\n\n @pydantic.validator(\"serializer\", pre=True)\n def cast_type_names_to_serializers(cls, value):\n if isinstance(value, str):\n return Serializer(type=value)\n return value\n\n @pydantic.validator(\"compressionlib\")\n def check_compressionlib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has compress/decompress\n methods.\n \"\"\"\n try:\n compressor = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested compression library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(compressor, \"compress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'compress' method.\"\n )\n\n if not callable(getattr(compressor, \"decompress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'decompress' method.\"\n )\n\n return value\n\n def dumps(self, obj: Any) -> bytes:\n blob = self.serializer.dumps(obj)\n compressor = from_qualified_name(self.compressionlib)\n return base64.encodebytes(compressor.compress(blob))\n\n def loads(self, blob: bytes) -> Any:\n compressor = from_qualified_name(self.compressionlib)\n uncompressed = compressor.decompress(base64.decodebytes(blob))\n return self.serializer.loads(uncompressed)\n
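For example — a sketch using the stdlib zlib module, which provides the required compress/decompress methods:

from prefect.serializers import CompressedSerializer, JSONSerializer

serializer = CompressedSerializer(
    serializer=JSONSerializer(), compressionlib="zlib"
)
blob = serializer.dumps({"numbers": list(range(100))})
assert serializer.loads(blob) == {"numbers": list(range(100))}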
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer.check_compressionlib","title":"check_compressionlib
","text":"Check that the given pickle library is importable and has compress/decompress methods.
Source code inprefect/serializers.py
@pydantic.validator(\"compressionlib\")\ndef check_compressionlib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has compress/decompress\n methods.\n \"\"\"\n try:\n compressor = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested compression library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(compressor, \"compress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'compress' method.\"\n )\n\n if not callable(getattr(compressor, \"decompress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'decompress' method.\"\n )\n\n return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer
","text":" Bases: Serializer
Serializes data to JSON.
Input types must be compatible with the stdlib json library.
Wraps the json
library to serialize to UTF-8 bytes instead of string types.
prefect/serializers.py
class JSONSerializer(Serializer):\n \"\"\"\n Serializes data to JSON.\n\n Input types must be compatible with the stdlib json library.\n\n Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n \"\"\"\n\n type: Literal[\"json\"] = \"json\"\n jsonlib: str = \"json\"\n object_encoder: Optional[str] = pydantic.Field(\n default=\"prefect.serializers.prefect_json_object_encoder\",\n description=(\n \"An optional callable to use when serializing objects that are not \"\n \"supported by the JSON encoder. By default, this is set to a callable that \"\n \"adds support for all types supported by Pydantic.\"\n ),\n )\n object_decoder: Optional[str] = pydantic.Field(\n default=\"prefect.serializers.prefect_json_object_decoder\",\n description=(\n \"An optional callable to use when deserializing objects. This callable \"\n \"is passed each dictionary encountered during JSON deserialization. \"\n \"By default, this is set to a callable that deserializes content created \"\n \"by our default `object_encoder`.\"\n ),\n )\n dumps_kwargs: dict = pydantic.Field(default_factory=dict)\n loads_kwargs: dict = pydantic.Field(default_factory=dict)\n\n @pydantic.validator(\"dumps_kwargs\")\n def dumps_kwargs_cannot_contain_default(cls, value):\n # `default` is set by `object_encoder`. A user provided callable would make this\n # class unserializable anyway.\n if \"default\" in value:\n raise ValueError(\n \"`default` cannot be provided. Use `object_encoder` instead.\"\n )\n return value\n\n @pydantic.validator(\"loads_kwargs\")\n def loads_kwargs_cannot_contain_object_hook(cls, value):\n # `object_hook` is set by `object_decoder`. A user provided callable would make\n # this class unserializable anyway.\n if \"object_hook\" in value:\n raise ValueError(\n \"`object_hook` cannot be provided. Use `object_decoder` instead.\"\n )\n return value\n\n def dumps(self, data: Any) -> bytes:\n json = from_qualified_name(self.jsonlib)\n kwargs = self.dumps_kwargs.copy()\n if self.object_encoder:\n kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n result = json.dumps(data, **kwargs)\n if isinstance(result, str):\n # The standard library returns str but others may return bytes directly\n result = result.encode()\n return result\n\n def loads(self, blob: bytes) -> Any:\n json = from_qualified_name(self.jsonlib)\n kwargs = self.loads_kwargs.copy()\n if self.object_decoder:\n kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n return json.loads(blob.decode(), **kwargs)\n
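A brief illustrative round trip (not part of the quoted source), showing that dumps returns UTF-8 bytes rather than a str:
from prefect.serializers import JSONSerializer

serializer = JSONSerializer()
blob = serializer.dumps({"x": 1})
assert isinstance(blob, bytes)  # UTF-8 bytes, not str
assert serializer.loads(blob) == {"x": 1}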
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer
","text":" Bases: Serializer
Serializes objects using the pickle protocol.
Uses cloudpickle by default. See picklelib for using alternative libraries.
Stores the version of the pickle library to check for compatibility during deserialization.
Wraps pickles in base64 for safe transmission.
prefect/serializers.py
class PickleSerializer(Serializer):\n \"\"\"\n Serializes objects using the pickle protocol.\n\n - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n - Stores the version of the pickle library to check for compatibility during\n deserialization.\n - Wraps pickles in base64 for safe transmission.\n \"\"\"\n\n type: Literal[\"pickle\"] = \"pickle\"\n\n picklelib: str = \"cloudpickle\"\n picklelib_version: str = None\n\n @pydantic.validator(\"picklelib\")\n def check_picklelib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has dumps/loads methods.\n \"\"\"\n try:\n pickler = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested pickle library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(pickler, \"dumps\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n )\n\n if not callable(getattr(pickler, \"loads\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'loads' method.\"\n )\n\n return value\n\n @pydantic.root_validator\n def check_picklelib_version(cls, values):\n \"\"\"\n Infers a default value for `picklelib_version` if null or ensures it matches\n the version retrieved from the `pickelib`.\n \"\"\"\n picklelib = values.get(\"picklelib\")\n picklelib_version = values.get(\"picklelib_version\")\n\n if not picklelib:\n raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n pickler = from_qualified_name(picklelib)\n pickler_version = getattr(pickler, \"__version__\", None)\n\n if not picklelib_version:\n values[\"picklelib_version\"] = pickler_version\n elif picklelib_version != pickler_version:\n warnings.warn(\n (\n f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n f\" environment but {picklelib_version} was requested. This may\"\n \" cause the serializer to fail.\"\n ),\n RuntimeWarning,\n stacklevel=3,\n )\n\n return values\n\n def dumps(self, obj: Any) -> bytes:\n pickler = from_qualified_name(self.picklelib)\n blob = pickler.dumps(obj)\n return base64.encodebytes(blob)\n\n def loads(self, blob: bytes) -> Any:\n pickler = from_qualified_name(self.picklelib)\n return pickler.loads(base64.decodebytes(blob))\n
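An illustrative round trip (not part of the quoted source); this assumes cloudpickle, a Prefect dependency, is installed:
from prefect.serializers import PickleSerializer

serializer = PickleSerializer()  # uses cloudpickle by default
blob = serializer.dumps({"answer": 42})  # base64-encoded pickle bytes
assert serializer.loads(blob) == {"answer": 42}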
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib","title":"check_picklelib
","text":"Check that the given pickle library is importable and has dumps/loads methods.
Source code in prefect/serializers.py
@pydantic.validator(\"picklelib\")\ndef check_picklelib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has dumps/loads methods.\n \"\"\"\n try:\n pickler = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested pickle library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(pickler, \"dumps\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n )\n\n if not callable(getattr(pickler, \"loads\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'loads' method.\"\n )\n\n return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib_version","title":"check_picklelib_version
","text":"Infers a default value for picklelib_version
if null or ensures it matches the version retrieved from the picklelib
.
prefect/serializers.py
@pydantic.root_validator\ndef check_picklelib_version(cls, values):\n \"\"\"\n Infers a default value for `picklelib_version` if null or ensures it matches\n the version retrieved from the `picklelib`.\n \"\"\"\n picklelib = values.get(\"picklelib\")\n picklelib_version = values.get(\"picklelib_version\")\n\n if not picklelib:\n raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n pickler = from_qualified_name(picklelib)\n pickler_version = getattr(pickler, \"__version__\", None)\n\n if not picklelib_version:\n values[\"picklelib_version\"] = pickler_version\n elif picklelib_version != pickler_version:\n warnings.warn(\n (\n f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n f\" environment but {picklelib_version} was requested. This may\"\n \" cause the serializer to fail.\"\n ),\n RuntimeWarning,\n stacklevel=3,\n )\n\n return values\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer
","text":" Bases: BaseModel
, Generic[D]
, ABC
A serializer that can encode objects of type 'D' into bytes.
Source code in prefect/serializers.py
@add_type_dispatch\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n \"\"\"\n A serializer that can encode objects of type 'D' into bytes.\n \"\"\"\n\n type: str\n\n @abc.abstractmethod\n def dumps(self, obj: D) -> bytes:\n \"\"\"Encode the object into a blob of bytes.\"\"\"\n\n @abc.abstractmethod\n def loads(self, blob: bytes) -> D:\n \"\"\"Decode the blob of bytes into an object.\"\"\"\n\n class Config:\n extra = \"forbid\"\n
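As a sketch of how a custom serializer could plug into this interface (a hypothetical subclass, not part of the library), the type field identifies the subclass to the type-dispatch machinery while dumps and loads implement the byte round trip:
from typing import Any

from typing_extensions import Literal

from prefect.serializers import Serializer


class ReprSerializer(Serializer):
    """Hypothetical example: serialize objects via their repr."""

    type: Literal["repr"] = "repr"

    def dumps(self, obj: Any) -> bytes:
        # Encode the object's repr as UTF-8 bytes
        return repr(obj).encode()

    def loads(self, blob: bytes) -> Any:
        # Decode back to the string form; a real serializer would reconstruct the object
        return blob.decode()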
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps
abstractmethod
","text":"Encode the object into a blob of bytes.
Source code in prefect/serializers.py
@abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n \"\"\"Encode the object into a blob of bytes.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads
abstractmethod
","text":"Decode the blob of bytes into an object.
Source code in prefect/serializers.py
@abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n \"\"\"Decode the blob of bytes into an object.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder
","text":"JSONDecoder.object_hook
for decoding objects from JSON when previously encoded with prefect_json_object_encoder.
prefect/serializers.py
def prefect_json_object_decoder(result: dict):\n \"\"\"\n `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n with `prefect_json_object_encoder`\n \"\"\"\n if \"__class__\" in result:\n return pydantic.parse_obj_as(\n from_qualified_name(result[\"__class__\"]), result[\"data\"]\n )\n elif \"__exc_type__\" in result:\n return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n else:\n return result\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder
","text":"JSONEncoder.default
for encoding objects into JSON with extended type support.
Raises a TypeError
to fall back to other encoders on failure.
prefect/serializers.py
def prefect_json_object_encoder(obj: Any) -> Any:\n \"\"\"\n `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n Raises a `TypeError` to fallback on other encoders on failure.\n \"\"\"\n if isinstance(obj, BaseException):\n return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n else:\n return {\n \"__class__\": to_qualified_name(obj.__class__),\n \"data\": pydantic_encoder(obj),\n }\n
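An illustrative pairing of the two hooks through the stdlib json module (not part of the quoted source):
import json

from prefect.serializers import (
    prefect_json_object_decoder,
    prefect_json_object_encoder,
)

# Exceptions are not natively JSON-serializable, so the encoder is invoked
blob = json.dumps(ValueError("boom"), default=prefect_json_object_encoder)
exc = json.loads(blob, object_hook=prefect_json_object_decoder)
assert isinstance(exc, ValueError) and str(exc) == "boom"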
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings
","text":"Prefect settings management.
Each setting is defined as a Setting
type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.
All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings
. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.
The current instance of Settings
being used by the application is stored in a SettingsContext
model which allows each instance of the Settings
class to be accessed in an async-safe manner.
Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution where differing settings may be used concurrently in the same process and during testing where we need to override settings to ensure their value is respected as intended.
The SettingsContext
is set when the prefect
module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings
object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context
for details on determining the active profile.
Another SettingsContext
may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.
Generally, settings should be accessed with SETTING_VARIABLE.value()
which will pull the current Settings
instance from the current SettingsContext
and retrieve the value of the relevant setting.
Accessing a setting's value will also call the Setting.value_callback
which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
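For example, a minimal sketch of the access pattern described above:
from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_LEVEL

# Values are resolved against the Settings instance in the current SettingsContext
api_url = PREFECT_API_URL.value()
log_level = PREFECT_LOGGING_LEVEL.value()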
PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path)
module-attribute
","text":"Prefect's home directory. Defaults to ~/.prefect
. This directory may be created automatically when required.
PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='')
module-attribute
","text":"Modules for Prefect to import when Prefect is imported.
Values should be separated by commas, e.g. my_module,my_other_module
. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object
. If a callable object is provided, it will be called with no arguments on import.
PREFECT_DEBUG_MODE = Setting(bool, default=False)
module-attribute
","text":"If True
, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False
.
PREFECT_CLI_COLORS = Setting(bool, default=True)
module-attribute
","text":"If True
, use colors in CLI output. If False
, output will not include color codes. Defaults to True
.
PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None)
module-attribute
","text":"If True
, use interactive prompts in CLI commands. If False
, no interactive prompts will be used. If None
, the value will be dynamically determined based on the presence of an interactive-enabled terminal.
PREFECT_CLI_WRAP_LINES = Setting(bool, default=True)
module-attribute
","text":"If True
, wrap text by inserting new lines in long lines in CLI output. If False
, output will not be wrapped. Defaults to True
.
PREFECT_TEST_MODE = Setting(bool, default=False)
module-attribute
","text":"If True
, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False
.
PREFECT_UNIT_TEST_MODE = Setting(bool, default=False)
module-attribute
","text":"This variable only exists to facilitate unit testing. If True
, code is executing in a unit test context. Defaults to False
.
PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode)
module-attribute
","text":"This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE
is not set, None
is returned.
PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False)
module-attribute
","text":"If True
, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.
PREFECT_API_URL = Setting(str, default=None)
module-attribute
","text":"If provided, the URL of a hosted Prefect API. Defaults to None
.
When using Prefect Cloud, this will include an account and workspace.
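As an illustration of overriding this setting in-process (a sketch using the temporary_settings helper and a hypothetical local server URL):
from prefect.settings import PREFECT_API_URL, temporary_settings

with temporary_settings(updates={PREFECT_API_URL: "http://127.0.0.1:4200/api"}):
    assert PREFECT_API_URL.value() == "http://127.0.0.1:4200/api"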
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SILENCE_API_URL_MISCONFIGURATION","title":"PREFECT_SILENCE_API_URL_MISCONFIGURATION = Setting(bool, default=False)
module-attribute
","text":"If True
, disables the warning raised when a user accidentally misconfigures PREFECT_API_URL
. For example, when PREFECT_API_URL
is intentionally set to a custom URL (such as one behind a reverse proxy), set this to True
to silence the warning.
PREFECT_API_KEY = Setting(str, default=None, is_secret=True)
module-attribute
","text":"API key used to authenticate with a the Prefect API. Defaults to None
.
PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True)
module-attribute
","text":"If true, enable support for HTTP/2 for communicating with an API.
If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5)
module-attribute
","text":"The maximum number of retries to perform on failed HTTP requests.
Defaults to 5. Set to 0 to disable retries.
See PREFECT_CLIENT_RETRY_EXTRA_CODES
for details on which HTTP status codes are retried.
PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2)
module-attribute
","text":"A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.
Set to 0 to disable jitter. See clamped_poisson_interval
for details on how jitter can affect retry lengths.
PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range)
module-attribute
","text":"A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429, 502 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url)
module-attribute
","text":"API URL for Prefect Cloud. Used for authentication.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.')
module-attribute
","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url)
module-attribute
","text":"The URL for the UI. By default, this is inferred from the PREFECT_API_URL.
When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None
.
PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url)
module-attribute
","text":"The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.
PREFECT_UI_URL will be workspace-specific and is usable with the open source server too. In contrast, this value is only valid for Cloud and will not include the workspace.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=60.0)
module-attribute
","text":"The default timeout for requests to the API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True)
module-attribute
","text":"If enabled, warn on usage of experimental features.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a profiles configuration files.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle')
module-attribute
","text":"The default serializer to use when not otherwise specified.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False)
module-attribute
","text":"The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False)
module-attribute
","text":"If True
, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False
.
PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0)
module-attribute
","text":"This value sets the default number of retries for all tasks. This value does not overwrite individually set retries values on tasks
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0)
module-attribute
","text":"This value sets the default number of retries for all flows. This value does not overwrite individually set retries values on a flow
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0)
module-attribute
","text":"This value sets the retry delay seconds for all flows. This value does not overwrite individually set retry delay seconds
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0)
module-attribute
","text":"This value sets the default retry delay seconds for all tasks. This value does not overwrite individually set retry delay seconds
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS","title":"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS = Setting(int, default=30)
module-attribute
","text":"The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a block storage directory to store things in.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to the memo store file.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True)
module-attribute
","text":"Controls whether or not block auto-registration on start up should be memoized. Setting to False may result in slower server start up times.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level)
module-attribute
","text":"The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level)
module-attribute
","text":"The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING')
module-attribute
","text":"The default logging level for the Prefect API server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a custom YAML logging configuration file. If no file is found, the default logging.yml
is used. Defaults to a logging.yml in the Prefect home directory.
PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers)
module-attribute
","text":"Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False)
module-attribute
","text":"If set, print
statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.
PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Toggles sending logs to the API. If False
, logs sent to the API log handler will not be sent to the API.
PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0)
module-attribute
","text":"The number of seconds between batched writes of logs to the API.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000)
module-attribute
","text":"The maximum size in bytes for a batch of logs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000)
module-attribute
","text":"The maximum size in bytes for a single log.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn')
module-attribute
","text":"Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.
All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.
The following options are available:
warn: Log a warning message.
error: Raise an error.
ignore: Ignore the message.
PREFECT_SQLALCHEMY_POOL_SIZE = Setting(int, default=None)
module-attribute
","text":"Controls connection pool size when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy pool size will be used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_MAX_OVERFLOW","title":"PREFECT_SQLALCHEMY_MAX_OVERFLOW = Setting(int, default=None)
module-attribute
","text":"Controls maximum overflow of the connection pool when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy maximum overflow value will be used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True)
module-attribute
","text":"Whether to style console logs with color.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False)
module-attribute
","text":"Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]
. However, if enabled, strings that contain square brackets may be misinterpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable];
outputs DROP TABLE .[SomeTable];
.
PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD = Setting(float, default=10.0)
module-attribute
","text":"Threshold time in seconds for logging a warning if task parameter introspection exceeds this duration. Parameter introspection can be a significant performance hit when the parameter is a large collection object, e.g. a large dictionary or DataFrame, and each element needs to be inspected. See prefect.utilities.annotations.quote
for more details. Defaults to 10.0
. Set to 0
to disable logging the warning.
PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15)
module-attribute
","text":"The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15
.
PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15)
module-attribute
","text":"Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15
.
PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False)
module-attribute
","text":"Determines whether State.result()
fetches results automatically or not. In Prefect 2.6.0, the State.result()
method was updated to be async to facilitate automatic retrieval of results from storage which means when writing async code you must await
the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt into this per call by passing fetch=True
or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows. This setting does not affect retrieval of results when using Future.result()
.
PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True)
module-attribute
","text":"If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True)
module-attribute
","text":"Password to template into the PREFECT_API_DATABASE_CONNECTION_URL
. This is useful if the password must be provided separately from the connection URL. To use this setting, you must include it in your connection URL.
PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True)
module-attribute
","text":"A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite
and for Postgres use postgresql+asyncpg
.
SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false
, which will allow the database to be accessed by multiple threads. Note that in-memory databases cannot be accessed from multiple processes and should only be used for simple tests.
Defaults to a sqlite database stored in the Prefect home directory.
If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD
setting. For example:
PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False)
module-attribute
","text":"If True
, SQLAlchemy will log all SQL issued to the database. Defaults to False
.
PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True)
module-attribute
","text":"If True
, the database will be upgraded on application creation. If False
, the database will need to be upgraded manually.
PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0)
module-attribute
","text":"A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5)
module-attribute
","text":"A connection timeout, in seconds, applied to database connections. Defaults to 5
.
PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60)
module-attribute
","text":"The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60
.
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100)
module-attribute
","text":"The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds
until it has visited every deployment once. Defaults to 100
.
PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100)
module-attribute
","text":"The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that runs may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time
. Defaults to 100
.
PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3)
module-attribute
","text":"The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that runs may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time
. Defaults to 3
.
PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100))
module-attribute
","text":"The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs
: if a flow runs once a month and scheduler_max_scheduled_time
is three months, then only three runs will be scheduled. Defaults to 100 days (8640000
seconds).
PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1))
module-attribute
","text":"The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs
: if a flow runs every hour and scheduler_min_scheduled_time
is three hours, then three runs will be scheduled even if scheduler_min_runs
is 1. Defaults to 1 hour (3600
seconds).
PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500)
module-attribute
","text":"The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500
.
PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5)
module-attribute
","text":"The late runs service will look for runs to mark as late this often. Defaults to 5
.
PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=5))
module-attribute
","text":"The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 5
seconds.
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5)
module-attribute
","text":"The pause expiration service will look for runs to mark as failed this often. Defaults to 5
.
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20)
module-attribute
","text":"The cancellation cleanup service will look non-terminal tasks and subflows this often. Defaults to 20
.
PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200)
module-attribute
","text":"The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter
.
PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1')
module-attribute
","text":"The API's host address (defaults to 127.0.0.1
).
PREFECT_SERVER_API_PORT = Setting(int, default=4200)
module-attribute
","text":"The API's port address (defaults to 4200
).
PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5)
module-attribute
","text":"The API's keep alive timeout (defaults to 5
). Refer to https://www.uvicorn.org/settings/#timeouts for details.
When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.
Note this setting only applies when calling prefect server start
; if hosting the API with another tool you will need to configure this there instead.
PREFECT_UI_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to serve the Prefect UI.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url)
module-attribute
","text":"The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL
if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST
and PREFECT_SERVER_API_PORT
. If providing a custom value, the aforementioned settings may be templated into the given string.
PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000)
module-attribute
","text":"The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side, it must be set on the server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES = Setting(int, default=10000)
module-attribute
","text":"The maximum size of a flow run graph on the v2 API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS = Setting(int, default=10000)
module-attribute
","text":"The maximum number of artifacts to show on a flow run graph on the v2 API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable artifacts on the flow run graph.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable flow run states on the flow run graph.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect work pools.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect work pools are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect work pools.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect work pools are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect workers.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect workers are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_VISUALIZE","title":"PREFECT_EXPERIMENTAL_WARN_VISUALIZE = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect visualize is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental enhanced flow run cancellation.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_DEPLOYMENT_PARAMETERS","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_DEPLOYMENT_PARAMETERS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable enhanced deployment parameters.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental enhanced flow run cancellation is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable deployment status in the UI
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when deployment status is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable flow run input.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable flow run input.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_PROCESS_LIMIT","title":"PREFECT_RUNNER_PROCESS_LIMIT = Setting(int, default=5)
module-attribute
","text":"Maximum number of processes a runner will execute in parallel.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_POLL_FREQUENCY","title":"PREFECT_RUNNER_POLL_FREQUENCY = Setting(int, default=10)
module-attribute
","text":"Number of seconds a runner should wait between queries for scheduled work.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE","title":"PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE = Setting(int, default=2)
module-attribute
","text":"Number of missed polls before a runner is considered unhealthy by its webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_HOST","title":"PREFECT_RUNNER_SERVER_HOST = Setting(str, default='localhost')
module-attribute
","text":"The host address the runner's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_PORT","title":"PREFECT_RUNNER_SERVER_PORT = Setting(int, default=8080)
module-attribute
","text":"The port the runner's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_LOG_LEVEL","title":"PREFECT_RUNNER_SERVER_LOG_LEVEL = Setting(str, default='error')
module-attribute
","text":"The log level of the runner's webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_ENABLE","title":"PREFECT_RUNNER_SERVER_ENABLE = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable the runner's webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30)
module-attribute
","text":"Number of seconds a worker should wait between sending a heartbeat.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10)
module-attribute
","text":"Number of seconds a worker should wait between queries for scheduled flow runs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10)
module-attribute
","text":"The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_HOST","title":"PREFECT_WORKER_WEBSERVER_HOST = Setting(str, default='0.0.0.0')
module-attribute
","text":"The host address the worker's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_PORT","title":"PREFECT_WORKER_WEBSERVER_PORT = Setting(int, default=8080)
module-attribute
","text":"The port the worker's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK","title":"PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK = Setting(str, default='local-file-system/prefect-task-scheduling')
module-attribute
","text":"The block-type/block-document
slug of a block to use as the default storage for autonomous tasks.
PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to delete failed task submissions from the database.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE = Setting(int, default=1000)
module-attribute
","text":"The maximum number of scheduled tasks to queue for submission.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE = Setting(int, default=100)
module-attribute
","text":"The maximum number of retries to queue for submission.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT","title":"PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT = Setting(timedelta, default=timedelta(seconds=30))
module-attribute
","text":"How long before a PENDING task are made available to another task server. In practice, a task server should move a task from PENDING to RUNNING very quickly, so runs stuck in PENDING for a while is a sign that the task server may have crashed.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES","title":"PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable infrastructure overrides made on flow runs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES = Setting(bool, default=True)
module-attribute
","text":"Whether or not to warn infrastructure when experimental flow runs overrides are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS","title":"PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable experimental worker webserver endpoints.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect artifacts.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect artifacts are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable the experimental workspace dashboard.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when the experimental workspace dashboard is enabled.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING","title":"PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable experimental task scheduling.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental work queue status in-place of work queue health.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_RESULT_STORAGE_BLOCK","title":"PREFECT_DEFAULT_RESULT_STORAGE_BLOCK = Setting(str, default=None)
module-attribute
","text":"The block-type/block-document
slug of a block to use as the default result storage.
PREFECT_DEFAULT_WORK_POOL_NAME = Setting(str, default=None)
module-attribute
","text":"The default work pool to deploy to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE","title":"PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE = Setting(str, default=None)
module-attribute
","text":"The default Docker namespace to use when building images.
Can be either an organization/username or a registry URL with an organization/username.
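For illustration, a sketch of the two accepted forms (my-org and registry.example.com are hypothetical names):
from prefect.settings import PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE, temporary_settings\n\n# A plain organization/username...\nwith temporary_settings(updates={PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE: \"my-org\"}):\n    ...\n\n# ...or a registry URL with an organization/username\nwith temporary_settings(updates={PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE: \"registry.example.com/my-org\"}):\n    ...\n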
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_SERVE_BASE","title":"PREFECT_UI_SERVE_BASE = Setting(str, default='/')
module-attribute
","text":"The base URL path to serve the Prefect UI from.
Defaults to the root path.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_STATIC_DIRECTORY","title":"PREFECT_UI_STATIC_DIRECTORY = Setting(str, default=None)
module-attribute
","text":"The directory to serve static files from. This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example when running in a Docker container)
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting
","text":" Bases: Generic[T]
Setting definition type.
Source code inprefect/settings.py
class Setting(Generic[T]):\n \"\"\"\n Setting definition type.\n \"\"\"\n\n def __init__(\n self,\n type: Type[T],\n *,\n deprecated: bool = False,\n deprecated_start_date: Optional[str] = None,\n deprecated_end_date: Optional[str] = None,\n deprecated_help: str = \"\",\n deprecated_when_message: str = \"\",\n deprecated_when: Optional[Callable[[Any], bool]] = None,\n deprecated_renamed_to: Optional[\"Setting[T]\"] = None,\n value_callback: Optional[Callable[[\"Settings\", T], T]] = None,\n is_secret: bool = False,\n **kwargs: Any,\n ) -> None:\n self.field: fields.FieldInfo = Field(**kwargs)\n self.type = type\n self.value_callback = value_callback\n self._name = None\n self.is_secret = is_secret\n self.deprecated = deprecated\n self.deprecated_start_date = deprecated_start_date\n self.deprecated_end_date = deprecated_end_date\n self.deprecated_help = deprecated_help\n self.deprecated_when = deprecated_when or (lambda _: True)\n self.deprecated_when_message = deprecated_when_message\n self.deprecated_renamed_to = deprecated_renamed_to\n self.deprecated_renamed_from = None\n self.__doc__ = self.field.description\n\n # Validate the deprecation settings, will throw an error at setting definition\n # time if the developer has not configured it correctly\n if deprecated:\n generate_deprecation_message(\n name=\"...\", # setting names not populated until after init\n start_date=self.deprecated_start_date,\n end_date=self.deprecated_end_date,\n help=self.deprecated_help,\n when=self.deprecated_when_message,\n )\n\n if deprecated_renamed_to is not None:\n # Track the deprecation both ways\n deprecated_renamed_to.deprecated_renamed_from = self\n\n def value(self, bypass_callback: bool = False) -> T:\n \"\"\"\n Get the current value of a setting.\n\n Example:\n ```python\n from prefect.settings import PREFECT_API_URL\n PREFECT_API_URL.value()\n ```\n \"\"\"\n return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n \"\"\"\n Get the value of a setting from a settings object\n\n Example:\n ```python\n from prefect.settings import get_default_settings\n PREFECT_API_URL.value_from(get_default_settings())\n ```\n \"\"\"\n value = settings.value_of(self, bypass_callback=bypass_callback)\n\n if not bypass_callback and self.deprecated and self.deprecated_when(value):\n # Check if this setting is deprecated and someone is accessing the value\n # via the old name\n warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n # If the the value is empty, return the new setting's value for compat\n if value is None and self.deprecated_renamed_to is not None:\n return self.deprecated_renamed_to.value_from(settings)\n\n if not bypass_callback and self.deprecated_renamed_from is not None:\n # Check if this setting is a rename of a deprecated setting and the\n # deprecated setting is set and should be used for compatibility\n deprecated_value = self.deprecated_renamed_from.value_from(\n settings, bypass_callback=True\n )\n if deprecated_value is not None:\n warnings.warn(\n (\n f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n f\" instead of {self.name!r} for backwards compatibility.\"\n ),\n DeprecationWarning,\n stacklevel=3,\n )\n return deprecated_value or value\n\n return value\n\n @property\n def name(self):\n if self._name:\n return self._name\n\n # Lookup the name on first access\n for name, val in 
tuple(globals().items()):\n if val == self:\n self._name = name\n return name\n\n raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n @name.setter\n def name(self, value: str):\n self._name = value\n\n @property\n def deprecated_message(self):\n return generate_deprecation_message(\n name=f\"Setting {self.name!r}\",\n start_date=self.deprecated_start_date,\n end_date=self.deprecated_end_date,\n help=self.deprecated_help,\n when=self.deprecated_when_message,\n )\n\n def __repr__(self) -> str:\n return f\"<{self.name}: {self.type.__name__}>\"\n\n def __bool__(self) -> bool:\n \"\"\"\n Returns a truthy check of the current value.\n \"\"\"\n return bool(self.value())\n\n def __eq__(self, __o: object) -> bool:\n return __o.__eq__(self.value())\n\n def __hash__(self) -> int:\n return hash((type(self), self.name))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value
","text":"Get the current value of a setting.
Example:
from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n
Source code in prefect/settings.py
def value(self, bypass_callback: bool = False) -> T:\n \"\"\"\n Get the current value of a setting.\n\n Example:\n ```python\n from prefect.settings import PREFECT_API_URL\n PREFECT_API_URL.value()\n ```\n \"\"\"\n return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from
","text":"Get the value of a setting from a settings object
Example:
from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n
Source code in prefect/settings.py
def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n \"\"\"\n Get the value of a setting from a settings object\n\n Example:\n ```python\n from prefect.settings import get_default_settings\n PREFECT_API_URL.value_from(get_default_settings())\n ```\n \"\"\"\n value = settings.value_of(self, bypass_callback=bypass_callback)\n\n if not bypass_callback and self.deprecated and self.deprecated_when(value):\n # Check if this setting is deprecated and someone is accessing the value\n # via the old name\n warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n # If the value is empty, return the new setting's value for compat\n if value is None and self.deprecated_renamed_to is not None:\n return self.deprecated_renamed_to.value_from(settings)\n\n if not bypass_callback and self.deprecated_renamed_from is not None:\n # Check if this setting is a rename of a deprecated setting and the\n # deprecated setting is set and should be used for compatibility\n deprecated_value = self.deprecated_renamed_from.value_from(\n settings, bypass_callback=True\n )\n if deprecated_value is not None:\n warnings.warn(\n (\n f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n f\" instead of {self.name!r} for backwards compatibility.\"\n ),\n DeprecationWarning,\n stacklevel=3,\n )\n return deprecated_value or value\n\n return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings
","text":" Bases: SettingsFieldsMixin
Contains validated Prefect settings.
Settings should be accessed using the relevant Setting
object. For example:
from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n
Accessing a setting attribute directly will ignore any value_callback
mutations. This is not recommended:
from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH # PosixPath('${PREFECT_HOME}/profiles.toml')\n
Source code in prefect/settings.py
@add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n \"\"\"\n Contains validated Prefect settings.\n\n Settings should be accessed using the relevant `Setting` object. For example:\n ```python\n from prefect.settings import PREFECT_HOME\n PREFECT_HOME.value()\n ```\n\n Accessing a setting attribute directly will ignore any `value_callback` mutations.\n This is not recommended:\n ```python\n from prefect.settings import Settings\n Settings().PREFECT_PROFILES_PATH # PosixPath('${PREFECT_HOME}/profiles.toml')\n ```\n \"\"\"\n\n def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n \"\"\"\n Retrieve a setting's value.\n \"\"\"\n value = getattr(self, setting.name)\n if setting.value_callback and not bypass_callback:\n value = setting.value_callback(self, value)\n return value\n\n @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n def check_valid_log_level(cls, value):\n if isinstance(value, str):\n value = value.upper()\n logging._checkLevel(value)\n return value\n\n @root_validator\n def post_root_validators(cls, values):\n \"\"\"\n Add root validation functions for settings here.\n \"\"\"\n # TODO: We could probably register these dynamically but this is the simpler\n # approach for now. We can explore more interesting validation features\n # in the future.\n values = max_log_size_smaller_than_batch_size(values)\n values = warn_on_database_password_value_without_usage(values)\n if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n values = warn_on_misconfigured_api_url(values)\n return values\n\n def copy_with_update(\n self,\n updates: Mapping[Setting, Any] = None,\n set_defaults: Mapping[Setting, Any] = None,\n restore_defaults: Iterable[Setting] = None,\n ) -> \"Settings\":\n \"\"\"\n Create a new `Settings` object with validation.\n\n Arguments:\n updates: A mapping of settings to new values. Existing values for the\n given settings will be overridden.\n set_defaults: A mapping of settings to new default values. Existing values for\n the given settings will only be overridden if they were not set.\n restore_defaults: An iterable of settings to restore to their default values.\n\n Returns:\n A new `Settings` object.\n \"\"\"\n updates = updates or {}\n set_defaults = set_defaults or {}\n restore_defaults = restore_defaults or set()\n restore_defaults_names = {setting.name for setting in restore_defaults}\n\n return self.__class__(\n **{\n **{setting.name: value for setting, value in set_defaults.items()},\n **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n **{setting.name: value for setting, value in updates.items()},\n }\n )\n\n def with_obfuscated_secrets(self):\n \"\"\"\n Returns a copy of this settings object with secret setting values obfuscated.\n \"\"\"\n settings = self.copy(\n update={\n setting.name: obfuscate(self.value_of(setting))\n for setting in SETTING_VARIABLES.values()\n if setting.is_secret\n # Exclude deprecated settings with null values to avoid warnings\n and not (setting.deprecated and self.value_of(setting) is None)\n }\n )\n # Ensure that settings that have not been marked as \"set\" before are still so\n # after we have updated their value above\n settings.__fields_set__.intersection_update(self.__fields_set__)\n return settings\n\n def hash_key(self) -> str:\n \"\"\"\n Return a hash key for the settings object. This is needed since some\n settings may be unhashable. 
An example is lists.\n \"\"\"\n env_variables = self.to_environment_variables()\n return str(hash(tuple((key, value) for key, value in env_variables.items())))\n\n def to_environment_variables(\n self, include: Iterable[Setting] = None, exclude_unset: bool = False\n ) -> Dict[str, str]:\n \"\"\"\n Convert the settings object to environment variables.\n\n Note that setting values will not be run through their `value_callback` allowing\n dynamic resolution to occur when loaded from the returned environment.\n\n Args:\n include: An iterable of settings to include in the return value.\n If not set, all settings are used.\n exclude_unset: Only include settings that have been set (i.e. the value is\n not from the default). If set, unset keys will be dropped even if they\n are set in `include`.\n\n Returns:\n A dictionary of settings with values cast to strings\n \"\"\"\n include = set(include or SETTING_VARIABLES.values())\n\n if exclude_unset:\n set_keys = {\n # Collect all of the \"set\" keys and cast to `Setting` objects\n SETTING_VARIABLES[key]\n for key in self.dict(exclude_unset=True)\n }\n include.intersection_update(set_keys)\n\n # Validate the types of items in `include` to prevent exclusion bugs\n for key in include:\n if not isinstance(key, Setting):\n raise TypeError(\n f\"Invalid type {type(key).__name__!r} for key in `include`.\"\n )\n\n env = {\n # Use `getattr` instead of `value_of` to avoid value callback resolution\n key: getattr(self, key)\n for key, setting in SETTING_VARIABLES.items()\n if setting in include\n }\n\n # Cast to strings and drop null values\n return {key: str(value) for key, value in env.items() if value is not None}\n\n class Config:\n frozen = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of
","text":"Retrieve a setting's value.
Source code inprefect/settings.py
def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n \"\"\"\n Retrieve a setting's value.\n \"\"\"\n value = getattr(self, setting.name)\n if setting.value_callback and not bypass_callback:\n value = setting.value_callback(self, value)\n return value\n
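A minimal usage sketch, mirroring the value() and value_from() examples above:
from prefect.settings import PREFECT_API_URL, get_current_settings\n\n# Resolve a single setting from a settings object, applying its value_callback\nget_current_settings().value_of(PREFECT_API_URL)\n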
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators
","text":"Add root validation functions for settings here.
Source code inprefect/settings.py
@root_validator\ndef post_root_validators(cls, values):\n \"\"\"\n Add root validation functions for settings here.\n \"\"\"\n # TODO: We could probably register these dynamically but this is the simpler\n # approach for now. We can explore more interesting validation features\n # in the future.\n values = max_log_size_smaller_than_batch_size(values)\n values = warn_on_database_password_value_without_usage(values)\n if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n values = warn_on_misconfigured_api_url(values)\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets
","text":"Returns a copy of this settings object with secret setting values obfuscated.
Source code inprefect/settings.py
def with_obfuscated_secrets(self):\n \"\"\"\n Returns a copy of this settings object with secret setting values obfuscated.\n \"\"\"\n settings = self.copy(\n update={\n setting.name: obfuscate(self.value_of(setting))\n for setting in SETTING_VARIABLES.values()\n if setting.is_secret\n # Exclude deprecated settings with null values to avoid warnings\n and not (setting.deprecated and self.value_of(setting) is None)\n }\n )\n # Ensure that settings that have not been marked as \"set\" before are still so\n # after we have updated their value above\n settings.__fields_set__.intersection_update(self.__fields_set__)\n return settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.hash_key","title":"hash_key
","text":"Return a hash key for the settings object. This is needed since some settings may be unhashable. An example is lists.
Source code inprefect/settings.py
def hash_key(self) -> str:\n \"\"\"\n Return a hash key for the settings object. This is needed since some\n settings may be unhashable. An example is lists.\n \"\"\"\n env_variables = self.to_environment_variables()\n return str(hash(tuple((key, value) for key, value in env_variables.items())))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables
","text":"Convert the settings object to environment variables.
Note that setting values will not be run through their value_callback
allowing dynamic resolution to occur when loaded from the returned environment.
Parameters:
Name Type Description Defaultinclude
Iterable[Setting]
An iterable of settings to include in the return value. If not set, all settings are used.
Noneexclude_unset
bool
Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are set in include
.
False
Returns:
Type DescriptionDict[str, str]
A dictionary of settings with values cast to strings
Source code inprefect/settings.py
def to_environment_variables(\n self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n \"\"\"\n Convert the settings object to environment variables.\n\n Note that setting values will not be run through their `value_callback` allowing\n dynamic resolution to occur when loaded from the returned environment.\n\n Args:\n include: An iterable of settings to include in the return value.\n If not set, all settings are used.\n exclude_unset: Only include settings that have been set (i.e. the value is\n not from the default). If set, unset keys will be dropped even if they\n are set in `include`.\n\n Returns:\n A dictionary of settings with values cast to strings\n \"\"\"\n include = set(include or SETTING_VARIABLES.values())\n\n if exclude_unset:\n set_keys = {\n # Collect all of the \"set\" keys and cast to `Setting` objects\n SETTING_VARIABLES[key]\n for key in self.dict(exclude_unset=True)\n }\n include.intersection_update(set_keys)\n\n # Validate the types of items in `include` to prevent exclusion bugs\n for key in include:\n if not isinstance(key, Setting):\n raise TypeError(\n f\"Invalid type {type(key).__name__!r} for key in `include`.\"\n )\n\n env = {\n # Use `getattr` instead of `value_of` to avoid value callback resolution\n key: getattr(self, key)\n for key, setting in SETTING_VARIABLES.items()\n if setting in include\n }\n\n # Cast to strings and drop null values\n return {key: str(value) for key, value in env.items() if value is not None}\n
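For example, to export only explicitly set values as environment-variable strings (a sketch; the output depends on your active profile and environment):
from prefect.settings import get_current_settings\n\n# Only settings whose values were explicitly set, as {\"PREFECT_...\": \"<string>\"} pairs\nenv = get_current_settings().to_environment_variables(exclude_unset=True)\n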
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile
","text":" Bases: BaseModel
A user profile containing settings.
Source code inprefect/settings.py
class Profile(BaseModel):\n \"\"\"\n A user profile containing settings.\n \"\"\"\n\n name: str\n settings: Dict[Setting, Any] = Field(default_factory=dict)\n source: Optional[Path]\n\n @validator(\"settings\", pre=True)\n def map_names_to_settings(cls, value):\n if value is None:\n return value\n\n # Cast string setting names to variables\n validated = {}\n for setting, val in value.items():\n if isinstance(setting, str) and setting in SETTING_VARIABLES:\n validated[SETTING_VARIABLES[setting]] = val\n elif isinstance(setting, Setting):\n validated[setting] = val\n else:\n raise ValueError(f\"Unknown setting {setting!r}.\")\n\n return validated\n\n def validate_settings(self) -> None:\n \"\"\"\n Validate the settings contained in this profile.\n\n Raises:\n pydantic.ValidationError: When settings do not have valid values.\n \"\"\"\n # Create a new `Settings` instance with the settings from this profile relying\n # on Pydantic validation to raise an error.\n # We do not return the `Settings` object because this is not the recommended\n # path for constructing settings with a profile. See `use_profile` instead.\n Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n \"\"\"\n Update settings in place to replace deprecated settings with new settings when\n renamed.\n\n Returns a list of tuples with the old and new setting.\n \"\"\"\n changed = []\n for setting in tuple(self.settings):\n if (\n setting.deprecated\n and setting.deprecated_renamed_to\n and setting.deprecated_renamed_to not in self.settings\n ):\n self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n setting\n )\n changed.append((setting, setting.deprecated_renamed_to))\n return changed\n\n class Config:\n arbitrary_types_allowed = True\n
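A minimal construction sketch (the profile name and URL are hypothetical); note that string keys are cast to Setting objects by the map_names_to_settings validator:
from prefect.settings import Profile\n\nprofile = Profile(name=\"dev\", settings={\"PREFECT_API_URL\": \"http://127.0.0.1:4200/api\"})\nprofile.validate_settings()  # raises pydantic.ValidationError on invalid values\n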
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings
","text":"Validate the settings contained in this profile.
Raises:
Type DescriptionValidationError
When settings do not have valid values.
Source code inprefect/settings.py
def validate_settings(self) -> None:\n \"\"\"\n Validate the settings contained in this profile.\n\n Raises:\n pydantic.ValidationError: When settings do not have valid values.\n \"\"\"\n # Create a new `Settings` instance with the settings from this profile relying\n # on Pydantic validation to raise an error.\n # We do not return the `Settings` object because this is not the recommended\n # path for constructing settings with a profile. See `use_profile` instead.\n Settings(**{setting.name: value for setting, value in self.settings.items()})\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings
","text":"Update settings in place to replace deprecated settings with new settings when renamed.
Returns a list of tuples with the old and new setting.
Source code inprefect/settings.py
def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n \"\"\"\n Update settings in place to replace deprecated settings with new settings when\n renamed.\n\n Returns a list of tuples with the old and new setting.\n \"\"\"\n changed = []\n for setting in tuple(self.settings):\n if (\n setting.deprecated\n and setting.deprecated_renamed_to\n and setting.deprecated_renamed_to not in self.settings\n ):\n self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n setting\n )\n changed.append((setting, setting.deprecated_renamed_to))\n return changed\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection
","text":"\" A utility class for working with a collection of profiles.
Profiles in the collection must have unique names.
The collection may store the name of the active profile.
Source code inprefect/settings.py
class ProfilesCollection:\n \"\"\" \"\n A utility class for working with a collection of profiles.\n\n Profiles in the collection must have unique names.\n\n The collection may store the name of the active profile.\n \"\"\"\n\n def __init__(\n self, profiles: Iterable[Profile], active: Optional[str] = None\n ) -> None:\n self.profiles_by_name = {profile.name: profile for profile in profiles}\n self.active_name = active\n\n @property\n def names(self) -> Set[str]:\n \"\"\"\n Return a set of profile names in this collection.\n \"\"\"\n return set(self.profiles_by_name.keys())\n\n @property\n def active_profile(self) -> Optional[Profile]:\n \"\"\"\n Retrieve the active profile in this collection.\n \"\"\"\n if self.active_name is None:\n return None\n return self[self.active_name]\n\n def set_active(self, name: Optional[str], check: bool = True):\n \"\"\"\n Set the active profile name in the collection.\n\n A null value may be passed to indicate that this collection does not determine\n the active profile.\n \"\"\"\n if check and name is not None and name not in self.names:\n raise ValueError(f\"Unknown profile name {name!r}.\")\n self.active_name = name\n\n def update_profile(\n self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n ) -> Profile:\n \"\"\"\n Add a profile to the collection or update the existing on if the name is already\n present in this collection.\n\n If updating an existing profile, the settings will be merged. Settings can\n be dropped from the existing profile by setting them to `None` in the new\n profile.\n\n Returns the new profile object.\n \"\"\"\n existing = self.profiles_by_name.get(name)\n\n # Convert the input to a `Profile` to cast settings to the correct type\n profile = Profile(name=name, settings=settings, source=source)\n\n if existing:\n new_settings = {**existing.settings, **profile.settings}\n\n # Drop null keys to restore to default\n for key, value in tuple(new_settings.items()):\n if value is None:\n new_settings.pop(key)\n\n new_profile = Profile(\n name=profile.name,\n settings=new_settings,\n source=source or profile.source,\n )\n else:\n new_profile = profile\n\n self.profiles_by_name[new_profile.name] = new_profile\n\n return new_profile\n\n def add_profile(self, profile: Profile) -> None:\n \"\"\"\n Add a profile to the collection.\n\n If the profile name already exists, an exception will be raised.\n \"\"\"\n if profile.name in self.profiles_by_name:\n raise ValueError(\n f\"Profile name {profile.name!r} already exists in collection.\"\n )\n\n self.profiles_by_name[profile.name] = profile\n\n def remove_profile(self, name: str) -> None:\n \"\"\"\n Remove a profile from the collection.\n \"\"\"\n self.profiles_by_name.pop(name)\n\n def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n \"\"\"\n Remove profiles that were loaded from a given path.\n\n Returns a new collection.\n \"\"\"\n return ProfilesCollection(\n [\n profile\n for profile in self.profiles_by_name.values()\n if profile.source != path\n ],\n active=self.active_name,\n )\n\n def to_dict(self):\n \"\"\"\n Convert to a dictionary suitable for writing to disk.\n \"\"\"\n return {\n \"active\": self.active_name,\n \"profiles\": {\n profile.name: {\n setting.name: value for setting, value in profile.settings.items()\n }\n for profile in self.profiles_by_name.values()\n },\n }\n\n def __getitem__(self, name: str) -> Profile:\n return self.profiles_by_name[name]\n\n def __iter__(self):\n return self.profiles_by_name.__iter__()\n\n 
def items(self):\n return self.profiles_by_name.items()\n\n def __eq__(self, __o: object) -> bool:\n if not isinstance(__o, ProfilesCollection):\n return False\n\n return (\n self.profiles_by_name == __o.profiles_by_name\n and self.active_name == __o.active_name\n )\n\n def __repr__(self) -> str:\n return (\n f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n f\" active={self.active_name!r})\"\n )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str]
property
","text":"Return a set of profile names in this collection.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile]
property
","text":"Retrieve the active profile in this collection.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active
","text":"Set the active profile name in the collection.
A null value may be passed to indicate that this collection does not determine the active profile.
Source code inprefect/settings.py
def set_active(self, name: Optional[str], check: bool = True):\n \"\"\"\n Set the active profile name in the collection.\n\n A null value may be passed to indicate that this collection does not determine\n the active profile.\n \"\"\"\n if check and name is not None and name not in self.names:\n raise ValueError(f\"Unknown profile name {name!r}.\")\n self.active_name = name\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile
","text":"Add a profile to the collection or update the existing on if the name is already present in this collection.
If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None
in the new profile.
Returns the new profile object.
Source code inprefect/settings.py
def update_profile(\n self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n) -> Profile:\n \"\"\"\n Add a profile to the collection or update the existing one if the name is already\n present in this collection.\n\n If updating an existing profile, the settings will be merged. Settings can\n be dropped from the existing profile by setting them to `None` in the new\n profile.\n\n Returns the new profile object.\n \"\"\"\n existing = self.profiles_by_name.get(name)\n\n # Convert the input to a `Profile` to cast settings to the correct type\n profile = Profile(name=name, settings=settings, source=source)\n\n if existing:\n new_settings = {**existing.settings, **profile.settings}\n\n # Drop null keys to restore to default\n for key, value in tuple(new_settings.items()):\n if value is None:\n new_settings.pop(key)\n\n new_profile = Profile(\n name=profile.name,\n settings=new_settings,\n source=source or profile.source,\n )\n else:\n new_profile = profile\n\n self.profiles_by_name[new_profile.name] = new_profile\n\n return new_profile\n
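A sketch of the merge-and-drop-null behavior described above (the profile name and value are hypothetical):
from prefect.settings import Profile, ProfilesCollection\n\nprofiles = ProfilesCollection([Profile(name=\"dev\", settings={\"PREFECT_API_URL\": \"http://127.0.0.1:4200/api\"})])\n# Passing None for a key drops it from the merged profile, restoring the default\nprofiles.update_profile(\"dev\", {\"PREFECT_API_URL\": None})\n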
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile
","text":"Add a profile to the collection.
If the profile name already exists, an exception will be raised.
Source code inprefect/settings.py
def add_profile(self, profile: Profile) -> None:\n \"\"\"\n Add a profile to the collection.\n\n If the profile name already exists, an exception will be raised.\n \"\"\"\n if profile.name in self.profiles_by_name:\n raise ValueError(\n f\"Profile name {profile.name!r} already exists in collection.\"\n )\n\n self.profiles_by_name[profile.name] = profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile
","text":"Remove a profile from the collection.
Source code inprefect/settings.py
def remove_profile(self, name: str) -> None:\n \"\"\"\n Remove a profile from the collection.\n \"\"\"\n self.profiles_by_name.pop(name)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source
","text":"Remove profiles that were loaded from a given path.
Returns a new collection.
Source code inprefect/settings.py
def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n \"\"\"\n Remove profiles that were loaded from a given path.\n\n Returns a new collection.\n \"\"\"\n return ProfilesCollection(\n [\n profile\n for profile in self.profiles_by_name.values()\n if profile.source != path\n ],\n active=self.active_name,\n )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers
","text":"value_callback
for PREFECT_LOGGING_EXTRA_LOGGERS
that parses the CSV string into a list and trims whitespace from logger names.
prefect/settings.py
def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n \"\"\"\n `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS` that parses the CSV string into a\n list and trims whitespace from logger names.\n \"\"\"\n return [name.strip() for name in value.split(\",\")] if value else []\n
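For example, a comma-separated value parses to a trimmed list (the first argument is unused by this callback):
from prefect.settings import get_extra_loggers\n\nget_extra_loggers(None, \"uvicorn, gunicorn\")  # ['uvicorn', 'gunicorn']\nget_extra_loggers(None, \"\")  # []\n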
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level
","text":"value_callback
for PREFECT_LOGGING_LEVEL
that overrides the log level to DEBUG when debug mode is enabled.
prefect/settings.py
def debug_mode_log_level(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n when debug mode is enabled.\n \"\"\"\n if PREFECT_DEBUG_MODE.value_from(settings):\n return \"DEBUG\"\n else:\n return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode
","text":"value_callback
for PREFECT_TEST_SETTING
that only allows access during test mode.
prefect/settings.py
def only_return_value_in_test_mode(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n \"\"\"\n if PREFECT_TEST_MODE.value_from(settings):\n return value\n else:\n return None\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url
","text":"value_callback
for PREFECT_UI_API_URL
that sets the default value to relative path '/api', otherwise it constructs an API URL from the API settings.
prefect/settings.py
def default_ui_api_url(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n relative path '/api', otherwise it constructs an API URL from the API settings.\n \"\"\"\n if value is None:\n # Set a default value\n value = \"/api\"\n\n return template_with_settings(\n PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n )(settings, value)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range
","text":"value_callback
for PREFECT_CLIENT_RETRY_EXTRA_CODES
that ensures status codes are integers in the range 100-599.
prefect/settings.py
def status_codes_as_integers_in_range(_, value):\n \"\"\"\n `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n are integers in the range 100-599.\n \"\"\"\n if value == \"\":\n return set()\n\n values = {v.strip() for v in value.split(\",\")}\n\n if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n raise ValueError(\n \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n \"integers between 100 and 599.\"\n )\n\n values = {int(v) for v in values}\n return values\n
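A quick sketch of valid and invalid inputs:
from prefect.settings import status_codes_as_integers_in_range\n\nstatus_codes_as_integers_in_range(None, \"409, 502\")  # {409, 502}\nstatus_codes_as_integers_in_range(None, \"\")  # set()\n# status_codes_as_integers_in_range(None, \"42\")  # raises ValueError: outside 100-599\n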
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings
","text":"Returns a value_callback
that will template the given settings into the runtime value for the setting.
prefect/settings.py
def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n \"\"\"\n Returns a `value_callback` that will template the given settings into the runtime\n value for the setting.\n \"\"\"\n\n def templater(settings, value):\n if value is None:\n return value # Do not attempt to template a null string\n\n original_type = type(value)\n template_values = {\n setting.name: setting.value_from(settings) for setting in upstream_settings\n }\n template = string.Template(str(value))\n return original_type(template.substitute(template_values))\n\n return templater\n
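A sketch of how the returned callback substitutes upstream settings at runtime (the template string is illustrative):
from prefect.settings import (\n    PREFECT_SERVER_API_HOST,\n    PREFECT_SERVER_API_PORT,\n    get_current_settings,\n    template_with_settings,\n)\n\ntemplater = template_with_settings(PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT)\n# ${...} placeholders are replaced with the upstream settings' current values\ntemplater(get_current_settings(), \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\")\n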
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size
","text":"Validator for settings asserting the batch size and match log size are compatible
Source code inprefect/settings.py
def max_log_size_smaller_than_batch_size(values):\n \"\"\"\n Validator for settings asserting the batch size and max log size are compatible\n \"\"\"\n if (\n values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n ):\n raise ValueError(\n \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n )\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage
","text":"Validator for settings warning if the database password is set but not used.
Source code inprefect/settings.py
def warn_on_database_password_value_without_usage(values):\n \"\"\"\n Validator for settings warning if the database password is set but not used.\n \"\"\"\n value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n if (\n value\n and not value.startswith(OBFUSCATED_PREFIX)\n and (\n \"PREFECT_API_DATABASE_PASSWORD\"\n not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n )\n ):\n warnings.warn(\n \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n \"The provided password will be ignored.\"\n )\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_misconfigured_api_url","title":"warn_on_misconfigured_api_url
","text":"Validator for settings warning if the API URL is misconfigured.
Source code inprefect/settings.py
def warn_on_misconfigured_api_url(values):\n \"\"\"\n Validator for settings warning if the API URL is misconfigured.\n \"\"\"\n api_url = values[\"PREFECT_API_URL\"]\n if api_url is not None:\n misconfigured_mappings = {\n \"app.prefect.cloud\": (\n \"`PREFECT_API_URL` points to `app.prefect.cloud`. Did you\"\n \" mean `api.prefect.cloud`?\"\n ),\n \"account/\": (\n \"`PREFECT_API_URL` uses `/account/` but should use `/accounts/`.\"\n ),\n \"workspace/\": (\n \"`PREFECT_API_URL` uses `/workspace/` but should use `/workspaces/`.\"\n ),\n }\n warnings_list = []\n\n for misconfig, warning in misconfigured_mappings.items():\n if misconfig in api_url:\n warnings_list.append(warning)\n\n parsed_url = urlparse(api_url)\n if parsed_url.path and not parsed_url.path.startswith(\"/api\"):\n warnings_list.append(\n \"`PREFECT_API_URL` should have `/api` after the base URL.\"\n )\n\n if warnings_list:\n example = 'e.g. PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"'\n warnings_list.append(example)\n\n warnings.warn(\"\\n\".join(warnings_list), stacklevel=2)\n\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings
","text":"Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.
Source code inprefect/settings.py
def get_current_settings() -> Settings:\n \"\"\"\n Returns a settings object populated with values from the current settings context\n or, if no settings context is active, the environment.\n \"\"\"\n from prefect.context import SettingsContext\n\n settings_context = SettingsContext.get()\n if settings_context is not None:\n return settings_context.settings\n\n return get_settings_from_env()\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env
","text":"Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.
Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.
Source code inprefect/settings.py
def get_settings_from_env() -> Settings:\n \"\"\"\n Returns a settings object populated with default values and overrides from\n environment variables, ignoring any values in profiles.\n\n Calls with the same environment return a cached object instead of reconstructing\n to avoid validation overhead.\n \"\"\"\n # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n # must be careful to avoid hashing a generator instead of a tuple\n cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n if cache_key not in _FROM_ENV_CACHE:\n _FROM_ENV_CACHE[cache_key] = Settings()\n\n return _FROM_ENV_CACHE[cache_key]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings
","text":"Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.
This is cached since the defaults should not change during the lifetime of the module.
Source code inprefect/settings.py
def get_default_settings() -> Settings:\n \"\"\"\n Returns a settings object populated with default values, ignoring any overrides\n from environment variables or profiles.\n\n This is cached since the defaults should not change during the lifetime of the\n module.\n \"\"\"\n global _DEFAULTS_CACHE\n\n if not _DEFAULTS_CACHE:\n old = os.environ\n try:\n os.environ = {}\n settings = get_settings_from_env()\n finally:\n os.environ = old\n\n _DEFAULTS_CACHE = settings\n\n return _DEFAULTS_CACHE\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings
","text":"Temporarily override the current settings by entering a new profile.
See Settings.copy_with_update
for details on different argument behavior.
Examples:
>>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>> assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>> with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>> assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>> with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>> assert PREFECT_API_URL.value() is None\n>>>\n>>> with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n>>> assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
Source code in prefect/settings.py
@contextmanager\ndef temporary_settings(\n updates: Mapping[Setting, Any] = None,\n set_defaults: Mapping[Setting, Any] = None,\n restore_defaults: Iterable[Setting] = None,\n) -> Settings:\n \"\"\"\n Temporarily override the current settings by entering a new profile.\n\n See `Settings.copy_with_update` for details on different argument behavior.\n\n Examples:\n >>> from prefect.settings import PREFECT_API_URL\n >>>\n >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n >>>     assert PREFECT_API_URL.value() == \"foo\"\n >>>\n >>>     with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n >>>         assert PREFECT_API_URL.value() == \"foo\"\n >>>\n >>>     with temporary_settings(restore_defaults={PREFECT_API_URL}):\n >>>         assert PREFECT_API_URL.value() is None\n >>>\n >>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n >>>             assert PREFECT_API_URL.value() == \"bar\"\n >>> assert PREFECT_API_URL.value() is None\n \"\"\"\n import prefect.context\n\n context = prefect.context.get_settings_context()\n\n new_settings = context.settings.copy_with_update(\n updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n )\n\n with prefect.context.SettingsContext(\n profile=context.profile, settings=new_settings\n ):\n yield new_settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles
","text":"Load all profiles from the default and current profile paths.
Source code inprefect/settings.py
def load_profiles() -> ProfilesCollection:\n \"\"\"\n Load all profiles from the default and current profile paths.\n \"\"\"\n profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n user_profiles_path = PREFECT_PROFILES_PATH.value()\n if user_profiles_path.exists():\n user_profiles = _read_profiles_from(user_profiles_path)\n\n # Merge all of the user profiles with the defaults\n for name in user_profiles:\n profiles.update_profile(\n name,\n settings=user_profiles[name].settings,\n source=user_profiles[name].source,\n )\n\n if user_profiles.active_name:\n profiles.set_active(user_profiles.active_name, check=False)\n\n return profiles\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile
","text":"Load the current profile from the default and current profile paths.
This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be saved.
Source code inprefect/settings.py
def load_current_profile():\n \"\"\"\n Load the current profile from the default and current profile paths.\n\n This will _not_ include settings from the current settings context. Only settings\n that have been persisted to the profiles file will be loaded.\n \"\"\"\n from prefect.context import SettingsContext\n\n profiles = load_profiles()\n context = SettingsContext.get()\n\n if context:\n profiles.set_active(context.profile.name)\n\n return profiles.active_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles
","text":"Writes all non-default profiles to the current profiles path.
Source code inprefect/settings.py
def save_profiles(profiles: ProfilesCollection) -> None:\n \"\"\"\n Writes all non-default profiles to the current profiles path.\n \"\"\"\n profiles_path = PREFECT_PROFILES_PATH.value()\n profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n return _write_profiles_to(profiles_path, profiles)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile
","text":"Load a single profile by name.
Source code inprefect/settings.py
def load_profile(name: str) -> Profile:\n \"\"\"\n Load a single profile by name.\n \"\"\"\n profiles = load_profiles()\n try:\n return profiles[name]\n except KeyError:\n raise ValueError(f\"Profile {name!r} not found.\")\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile
","text":"Update the persisted data for the profile currently in-use.
If the profile does not exist in the profiles file, it will be created.
Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile
.
Returns:
Type DescriptionProfile
The new profile.
Source code inprefect/settings.py
def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n \"\"\"\n Update the persisted data for the profile currently in-use.\n\n If the profile does not exist in the profiles file, it will be created.\n\n Given settings will be merged with the existing settings as described in\n `ProfilesCollection.update_profile`.\n\n Returns:\n The new profile.\n \"\"\"\n import prefect.context\n\n current_profile = prefect.context.get_settings_context().profile\n\n if not current_profile:\n raise MissingProfileError(\"No profile is currently in use.\")\n\n profiles = load_profiles()\n\n # Ensure the current profile's settings are present\n profiles.update_profile(current_profile.name, current_profile.settings)\n # Then merge the new settings in\n new_profile = profiles.update_profile(current_profile.name, settings)\n\n # Validate before saving\n new_profile.validate_settings()\n\n save_profiles(profiles)\n\n return profiles[current_profile.name]\n
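A minimal sketch (the URL is hypothetical); both string names and Setting objects are accepted as keys:
from prefect.settings import PREFECT_API_URL, update_current_profile\n\n# Persists the value into the active profile's entry in the profiles file\nupdate_current_profile({PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\n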
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/software/","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/software/#prefect.software","title":"prefect.software
","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states
","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry
","text":"Convenience function for creating AwaitingRetry
states.
Returns:
Name Type DescriptionState
State
a AwaitingRetry state
Source code inprefect/states.py
def AwaitingRetry(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n Returns:\n State: an AwaitingRetry state\n \"\"\"\n return Scheduled(\n cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled
","text":"Convenience function for creating Cancelled
states.
Returns:
Name Type DescriptionState
State
a Cancelled state
Source code inprefect/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelled` states.\n\n Returns:\n State: a Cancelled state\n \"\"\"\n return cls(type=StateType.CANCELLED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling
","text":"Convenience function for creating Cancelling
states.
Returns:
Name Type DescriptionState
State
a Cancelling state
Source code inprefect/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelling` states.\n\n Returns:\n State: a Cancelling state\n \"\"\"\n return cls(type=StateType.CANCELLING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed
","text":"Convenience function for creating Completed
states.
Returns:
Name Type DescriptionState
State
a Completed state
Source code inprefect/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Completed` states.\n\n Returns:\n State: a Completed state\n \"\"\"\n return cls(type=StateType.COMPLETED, **kwargs)\n
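All of these convenience constructors follow the same pattern; a minimal sketch:
from prefect.states import Completed\n\nstate = Completed(message=\"All done\")\nstate.type  # StateType.COMPLETED\n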
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed
","text":"Convenience function for creating Crashed
states.
Returns:
Name Type DescriptionState
State
a Crashed state
Source code inprefect/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Crashed` states.\n\n Returns:\n State: a Crashed state\n \"\"\"\n return cls(type=StateType.CRASHED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed
","text":"Convenience function for creating Failed
states.
Returns:
Name Type DescriptionState
State
a Failed state
Source code inprefect/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Failed` states.\n\n Returns:\n State: a Failed state\n \"\"\"\n return cls(type=StateType.FAILED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late
","text":"Convenience function for creating Late
states.
Returns:
Name Type DescriptionState
State
a Late state
Source code inprefect/states.py
def Late(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Late` states.\n\n Returns:\n State: a Late state\n \"\"\"\n return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused
","text":"Convenience function for creating Paused
states.
Returns:
Name Type DescriptionState
State
a Paused state
Source code inprefect/states.py
def Paused(\n cls: Type[State] = State,\n timeout_seconds: Optional[int] = None,\n pause_expiration_time: Optional[datetime.datetime] = None,\n reschedule: bool = False,\n pause_key: Optional[str] = None,\n **kwargs,\n) -> State:\n \"\"\"Convenience function for creating `Paused` states.\n\n Returns:\n State: a Paused state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n if state_details.pause_timeout:\n raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n if pause_expiration_time is not None and timeout_seconds is not None:\n raise ValueError(\n \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n )\n\n if pause_expiration_time is None and timeout_seconds is None:\n pass\n else:\n state_details.pause_timeout = pause_expiration_time or (\n pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n )\n\n state_details.pause_reschedule = reschedule\n state_details.pause_key = pause_key\n\n return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
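For example, pausing with a timeout (a sketch; per the source above, supplying both timeout_seconds and pause_expiration_time raises ValueError):
from prefect.states import Paused\n\n# Pause for up to five minutes; pause_timeout is computed from the current time\nPaused(timeout_seconds=300)\n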
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending
","text":"Convenience function for creating Pending
states.
Returns:
Name Type DescriptionState
State
a Pending state
Source code in prefect/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Pending` states.\n\n Returns:\n State: a Pending state\n \"\"\"\n return cls(type=StateType.PENDING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying
","text":"Convenience function for creating Retrying
states.
Returns:
Name Type DescriptionState
State
a Retrying state
Source code in prefect/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Retrying` states.\n\n Returns:\n State: a Retrying state\n \"\"\"\n return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running
","text":"Convenience function for creating Running
states.
Returns:
Name Type DescriptionState
State
a Running state
Source code in prefect/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Running` states.\n\n Returns:\n State: a Running state\n \"\"\"\n return cls(type=StateType.RUNNING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled
","text":"Convenience function for creating Scheduled
states.
Returns:
Name Type DescriptionState
State
a Scheduled state
Source code in prefect/states.py
def Scheduled(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Scheduled` states.\n\n Returns:\n State: a Scheduled state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n elif state_details.scheduled_time:\n raise ValueError(\"An extra scheduled_time was provided in state_details\")\n state_details.scheduled_time = scheduled_time\n\n return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
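A small illustrative sketch of the relationship between Scheduled and Late (the five-minute offset is arbitrary):

import pendulum

from prefect.states import Late, Scheduled

# Schedule for a future time; omitting scheduled_time defaults to now
s = Scheduled(scheduled_time=pendulum.now("UTC").add(minutes=5))

# Late is simply a Scheduled state created with the name "Late"
late = Late(scheduled_time=pendulum.now("UTC"))
assert late.name == "Late" and s.is_scheduled()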
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Suspended","title":"Suspended
","text":"Convenience function for creating Suspended
states.
Returns:
Name Type DescriptionState
a Suspended state
Source code in prefect/states.py
def Suspended(\n cls: Type[State] = State,\n timeout_seconds: Optional[int] = None,\n pause_expiration_time: Optional[datetime.datetime] = None,\n pause_key: Optional[str] = None,\n **kwargs,\n):\n \"\"\"Convenience function for creating `Suspended` states.\n\n Returns:\n State: a Suspended state\n \"\"\"\n return Paused(\n cls=cls,\n name=\"Suspended\",\n reschedule=True,\n timeout_seconds=timeout_seconds,\n pause_expiration_time=pause_expiration_time,\n pause_key=pause_key,\n **kwargs,\n )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state
async
","text":"Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.
Source code in prefect/states.py
async def exception_to_crashed_state(\n exc: BaseException,\n result_factory: Optional[ResultFactory] = None,\n) -> State:\n \"\"\"\n Takes an exception that occurs _outside_ of user code and converts it to a\n 'Crash' exception with a 'Crashed' state.\n \"\"\"\n state_message = None\n\n if isinstance(exc, anyio.get_cancelled_exc_class()):\n state_message = \"Execution was cancelled by the runtime environment.\"\n\n elif isinstance(exc, KeyboardInterrupt):\n state_message = \"Execution was aborted by an interrupt signal.\"\n\n elif isinstance(exc, TerminationSignal):\n state_message = \"Execution was aborted by a termination signal.\"\n\n elif isinstance(exc, SystemExit):\n state_message = \"Execution was aborted by Python system exit call.\"\n\n elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n try:\n request: httpx.Request = exc.request\n except RuntimeError:\n # The request property is not set\n state_message = (\n \"Request failed while attempting to contact the server:\"\n f\" {format_exception(exc)}\"\n )\n else:\n # TODO: We can check if this is actually our API url\n state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n else:\n state_message = (\n \"Execution was interrupted by an unexpected exception:\"\n f\" {format_exception(exc)}\"\n )\n\n if result_factory:\n data = await result_factory.create_result(exc)\n else:\n # Attach the exception for local usage, will not be available when retrieved\n # from the API\n data = exc\n\n return Crashed(message=state_message, data=data)\n
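For example, a minimal sketch of converting an interrupt into a Crashed state; the demo wrapper is illustrative only, since the function is async and must be awaited:

import asyncio

from prefect.states import exception_to_crashed_state

async def demo():
    try:
        raise KeyboardInterrupt()
    except BaseException as exc:
        state = await exception_to_crashed_state(exc)
    assert state.is_crashed()
    print(state.message)  # "Execution was aborted by an interrupt signal."

asyncio.run(demo())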
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state
async
","text":"Convenience function for creating Failed
states from exceptions
Source code in prefect/states.py
async def exception_to_failed_state(\n exc: Optional[BaseException] = None,\n result_factory: Optional[ResultFactory] = None,\n **kwargs,\n) -> State:\n \"\"\"\n Convenience function for creating `Failed` states from exceptions\n \"\"\"\n if not exc:\n _, exc, _ = sys.exc_info()\n if exc is None:\n raise ValueError(\n \"Exception was not passed and no active exception could be found.\"\n )\n else:\n pass\n\n if result_factory:\n data = await result_factory.create_result(exc)\n else:\n # Attach the exception for local usage, will not be available when retrieved\n # from the API\n data = exc\n\n existing_message = kwargs.pop(\"message\", \"\")\n if existing_message and not existing_message.endswith(\" \"):\n existing_message += \" \"\n\n # TODO: Consider if we want to include traceback information, it is intentionally\n # excluded from messages for now\n message = existing_message + format_exception(exc)\n\n return Failed(data=data, message=message, **kwargs)\n
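A minimal sketch: when called inside an except block with no explicit exception, the active exception is picked up from sys.exc_info() (the demo wrapper is illustrative only):

import asyncio

from prefect.states import exception_to_failed_state

async def demo():
    try:
        1 / 0
    except ZeroDivisionError:
        # No exception passed: the active one is discovered automatically
        state = await exception_to_failed_state(message="While computing ratio:")
    assert state.is_failed()
    print(state.message)  # the message prefix followed by the formatted exception

asyncio.run(demo())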
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception
async
","text":"If not given a FAILED or CRASHED state, this raise a value error.
If the state result is a state, its exception will be returned.
If the state result is an iterable of states, the exception of the first failure will be returned.
If the state result is a string, a wrapper exception will be returned with the string as the message.
If the state result is null, a wrapper exception will be returned with the state message attached.
If the state result is not of a known type, a TypeError
will be raised.
When a wrapper exception is returned, the type will be:
- FailedRun if the state type is FAILED.
- CrashedRun if the state type is CRASHED.
- CancelledRun if the state type is CANCELLED.
Source code in prefect/states.py
@sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n \"\"\"\n If not given a FAILED or CRASHED state, this raise a value error.\n\n If the state result is a state, its exception will be returned.\n\n If the state result is an iterable of states, the exception of the first failure\n will be returned.\n\n If the state result is a string, a wrapper exception will be returned with the\n string as the message.\n\n If the state result is null, a wrapper exception will be returned with the state\n message attached.\n\n If the state result is not of a known type, a `TypeError` will be returned.\n\n When a wrapper exception is returned, the type will be:\n - `FailedRun` if the state type is FAILED.\n - `CrashedRun` if the state type is CRASHED.\n - `CancelledRun` if the state type is CANCELLED.\n \"\"\"\n\n if state.is_failed():\n wrapper = FailedRun\n default_message = \"Run failed.\"\n elif state.is_crashed():\n wrapper = CrashedRun\n default_message = \"Run crashed.\"\n elif state.is_cancelled():\n wrapper = CancelledRun\n default_message = \"Run cancelled.\"\n else:\n raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n if isinstance(state.data, BaseResult):\n result = await state.data.get()\n elif state.data is None:\n result = None\n else:\n result = state.data\n\n if result is None:\n return wrapper(state.message or default_message)\n\n if isinstance(result, Exception):\n return result\n\n elif isinstance(result, BaseException):\n return result\n\n elif isinstance(result, str):\n return wrapper(result)\n\n elif is_state(result):\n # Return the exception from the inner state\n return await get_state_exception(result)\n\n elif is_state_iterable(result):\n # Return the first failure\n for state in result:\n if state.is_failed() or state.is_crashed() or state.is_cancelled():\n return await get_state_exception(state)\n\n raise ValueError(\n \"Failed state result was an iterable of states but none were failed.\"\n )\n\n else:\n raise TypeError(\n f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n f\"{type(result).__name__} cannot be resolved into an exception\"\n )\n
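For instance, a failed state with a plain string message resolves to a FailedRun wrapper; since the function is decorated with @sync_compatible, this sketch calls it synchronously:

from prefect.states import Failed, get_state_exception

state = Failed(message="Something went wrong")

exc = get_state_exception(state)
print(type(exc).__name__, exc)  # FailedRun Something went wrong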
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result
","text":"Get the result from a state.
See State.result()
Source code in prefect/states.py
def get_state_result(\n state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n \"\"\"\n Get the result from a state.\n\n See `State.result()`\n \"\"\"\n\n if fetch is None and (\n PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n ):\n # Fetch defaults to `True` for sync users or async users who have opted in\n fetch = True\n\n if not fetch:\n if fetch is None and in_async_main_thread():\n warnings.warn(\n (\n \"State.result() was called from an async context but not awaited. \"\n \"This method will be updated to return a coroutine by default in \"\n \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n \"this warning.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n # Backwards compatibility\n if isinstance(state.data, DataDocument):\n return result_from_state_with_data_document(\n state, raise_on_failure=raise_on_failure\n )\n else:\n return state.data\n else:\n return _get_state_result(state, raise_on_failure=raise_on_failure)\n
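A short sketch of fetching a flow run's result from its final state (the add flow is a hypothetical example):

from prefect import flow
from prefect.states import get_state_result

@flow
def add(x: int, y: int) -> int:
    return x + y

state = add(1, 2, return_state=True)
assert get_state_result(state) == 3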
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state
","text":"Check if the given object is a state instance
Source code in prefect/states.py
def is_state(obj: Any) -> TypeGuard[State]:\n \"\"\"\n Check if the given object is a state instance\n \"\"\"\n # We may want to narrow this to client-side state types but for now this provides\n # backwards compatibility\n try:\n from prefect.server.schemas.states import State as State_\n\n classes_ = (State, State_)\n except ImportError:\n classes_ = State\n\n # return isinstance(obj, (State, State_))\n return isinstance(obj, classes_)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable
","text":"Check if a the given object is an iterable of states types
Supported iterables are: - set - list - tuple
Other iterables will return False
even if they contain states.
Source code in prefect/states.py
def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n \"\"\"\n Check if a the given object is an iterable of states types\n\n Supported iterables are:\n - set\n - list\n - tuple\n\n Other iterables will return `False` even if they contain states.\n \"\"\"\n # We do not check for arbitrary iterables because this is not intended to be used\n # for things like dictionaries, dataframes, or pydantic models\n if (\n not isinstance(obj, BaseAnnotation)\n and isinstance(obj, (list, set, tuple))\n and obj\n ):\n return all([is_state(o) for o in obj])\n else:\n return False\n
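A few illustrative checks for is_state and is_state_iterable:

from prefect.states import Completed, is_state, is_state_iterable

assert is_state(Completed())
assert is_state_iterable([Completed(), Completed()])

# Empty iterables and unsupported containers are rejected
assert not is_state_iterable([])
assert not is_state_iterable({"a": Completed()})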
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception
async
","text":"Given a FAILED or CRASHED state, raise the contained exception.
Source code in prefect/states.py
@sync_compatible\nasync def raise_state_exception(state: State) -> None:\n \"\"\"\n Given a FAILED or CRASHED state, raise the contained exception.\n \"\"\"\n if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n return None\n\n raise await get_state_exception(state)\n
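For example (sync-compatible, so it may be called directly from synchronous code; the message is arbitrary):

from prefect.exceptions import FailedRun
from prefect.states import Failed, raise_state_exception

try:
    raise_state_exception(Failed(message="boom"))
except FailedRun as exc:
    print(exc)  # boom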
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state
async
","text":"Given a return value from a user's function, create a State
the run should be placed in.
- If data is returned, we create a 'COMPLETED' state with the data
- If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids)
- If an upstream state or iterable of upstream states is returned, we apply the aggregate rule

The aggregate rule says that given multiple states we will determine the final state such that:
- If any states are not COMPLETED the final state is FAILED
- If all of the states are COMPLETED the final state is COMPLETED
- The states will be placed in the final state data attribute

Callers should resolve all futures into states before passing return values to this function.
Source code in prefect/states.py
async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n \"\"\"\n Given a return value from a user's function, create a `State` the run should\n be placed in.\n\n - If data is returned, we create a 'COMPLETED' state with the data\n - If a single, manually created state is returned, we use that state as given\n (manual creation is determined by the lack of ids)\n - If an upstream state or iterable of upstream states is returned, we apply the\n aggregate rule\n\n The aggregate rule says that given multiple states we will determine the final state\n such that:\n\n - If any states are not COMPLETED the final state is FAILED\n - If all of the states are COMPLETED the final state is COMPLETED\n - The states will be placed in the final state `data` attribute\n\n Callers should resolve all futures into states before passing return values to this\n function.\n \"\"\"\n\n if (\n is_state(retval)\n # Check for manual creation\n and not retval.state_details.flow_run_id\n and not retval.state_details.task_run_id\n ):\n state = retval\n\n # Do not modify states with data documents attached; backwards compatibility\n if isinstance(state.data, DataDocument):\n return state\n\n # Unless the user has already constructed a result explicitly, use the factory\n # to update the data to the correct type\n if not isinstance(state.data, BaseResult):\n state.data = await result_factory.create_result(state.data)\n\n return state\n\n # Determine a new state from the aggregate of contained states\n if is_state(retval) or is_state_iterable(retval):\n states = StateGroup(ensure_iterable(retval))\n\n # Determine the new state type\n if states.all_completed():\n new_state_type = StateType.COMPLETED\n elif states.any_cancelled():\n new_state_type = StateType.CANCELLED\n elif states.any_paused():\n new_state_type = StateType.PAUSED\n else:\n new_state_type = StateType.FAILED\n\n # Generate a nice message for the aggregate\n if states.all_completed():\n message = \"All states completed.\"\n elif states.any_cancelled():\n message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n elif states.any_paused():\n message = f\"{states.paused_count}/{states.total_count} states paused.\"\n elif states.any_failed():\n message = f\"{states.fail_count}/{states.total_count} states failed.\"\n elif not states.all_final():\n message = (\n f\"{states.not_final_count}/{states.total_count} states are not final.\"\n )\n else:\n message = \"Given states: \" + states.counts_message()\n\n # TODO: We may actually want to set the data to a `StateGroup` object and just\n # allow it to be unpacked into a tuple and such so users can interact with\n # it\n return State(\n type=new_state_type,\n message=message,\n data=await result_factory.create_result(retval),\n )\n\n # Generators aren't portable, implicitly convert them to a list.\n if isinstance(retval, GeneratorType):\n data = list(retval)\n else:\n data = retval\n\n # Otherwise, they just gave data and this is a completed retval\n return Completed(data=await result_factory.create_result(data))\n
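The aggregate rule is easiest to see at the flow level; in this sketch the flow returns a mix of states, so the final state is FAILED:

from prefect import flow
from prefect.states import Completed, Failed

@flow
def my_flow():
    return [Completed(message="ok"), Failed(message="boom")]

state = my_flow(return_state=True)
assert state.is_failed()  # any non-COMPLETED member fails the aggregate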
","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners
","text":"Interface and implementations of various task runners.
Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.
Example>>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n... print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n... print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n... for name in names:\n... say_hello(name)\n... say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
Switching to a DaskTaskRunner
:
>>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n
For usage details, see the Task Runners documentation.
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner
","text":"Source code in prefect/task_runners.py
class BaseTaskRunner(metaclass=abc.ABCMeta):\n def __init__(self) -> None:\n self.logger = get_logger(f\"task_runner.{self.name}\")\n self._started: bool = False\n\n @property\n @abc.abstractmethod\n def concurrency_type(self) -> TaskConcurrencyType:\n pass # noqa\n\n @property\n def name(self):\n return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n def duplicate(self):\n \"\"\"\n Return a new task runner instance with the same options.\n \"\"\"\n # The base class returns `NotImplemented` to indicate that this is not yet\n # implemented by a given task runner.\n return NotImplemented\n\n def __eq__(self, other: object) -> bool:\n \"\"\"\n Returns true if the task runners use the same options.\n \"\"\"\n if type(other) == type(self) and (\n # Compare public attributes for naive equality check\n # Subclasses should implement this method with a check init option equality\n {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n ):\n return True\n else:\n return NotImplemented\n\n @abc.abstractmethod\n async def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n ) -> None:\n \"\"\"\n Submit a call for execution and return a `PrefectFuture` that can be used to\n get the call result.\n\n Args:\n task_run: The task run being submitted.\n task_key: A unique key for this orchestration run of the task. Can be used\n for caching.\n call: The function to be executed\n run_kwargs: A dict of keyword arguments to pass to `call`\n\n Returns:\n A future representing the result of `call` execution\n \"\"\"\n raise NotImplementedError()\n\n @abc.abstractmethod\n async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n \"\"\"\n Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n If it is not finished after the timeout expires, `None` should be returned.\n\n Implementers should be careful to ensure that this function never returns or\n raises an exception.\n \"\"\"\n raise NotImplementedError()\n\n @asynccontextmanager\n async def start(\n self: T,\n ) -> AsyncIterator[T]:\n \"\"\"\n Start the task runner, preparing any resources necessary for task submission.\n\n Children should implement `_start` to prepare and clean up resources.\n\n Yields:\n The prepared task runner\n \"\"\"\n if self._started:\n raise RuntimeError(\"The task runner is already started!\")\n\n async with AsyncExitStack() as exit_stack:\n self.logger.debug(\"Starting task runner...\")\n try:\n await self._start(exit_stack)\n self._started = True\n yield self\n finally:\n self.logger.debug(\"Shutting down task runner...\")\n self._started = False\n\n async def _start(self, exit_stack: AsyncExitStack) -> None:\n \"\"\"\n Create any resources required for this task runner to submit work.\n\n Cleanup of resources should be submitted to the `exit_stack`.\n \"\"\"\n pass # noqa\n\n def __str__(self) -> str:\n return type(self).__name__\n
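As a hedged sketch of the interface, here is a hypothetical InlineTaskRunner that executes each call as soon as it is submitted (essentially what SequentialTaskRunner does below; the timeout is ignored for simplicity since results are always ready):

from typing import Awaitable, Callable, Dict, Optional
from uuid import UUID

from prefect.states import State, exception_to_crashed_state
from prefect.task_runners import BaseTaskRunner, TaskConcurrencyType

class InlineTaskRunner(BaseTaskRunner):
    def __init__(self) -> None:
        super().__init__()
        self._results: Dict[UUID, State] = {}

    @property
    def concurrency_type(self) -> TaskConcurrencyType:
        return TaskConcurrencyType.SEQUENTIAL

    async def submit(
        self, key: UUID, call: Callable[..., Awaitable[State]]
    ) -> None:
        # Run immediately; capture crashes so the flow run itself survives
        try:
            self._results[key] = await call()
        except BaseException as exc:
            self._results[key] = await exception_to_crashed_state(exc)

    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:
        # Results were stored synchronously on submit, so no waiting is needed
        return self._results.get(key)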
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate
","text":"Return a new task runner instance with the same options.
Source code in prefect/task_runners.py
def duplicate(self):\n \"\"\"\n Return a new task runner instance with the same options.\n \"\"\"\n # The base class returns `NotImplemented` to indicate that this is not yet\n # implemented by a given task runner.\n return NotImplemented\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start
async
","text":"Start the task runner, preparing any resources necessary for task submission.
Children should implement _start
to prepare and clean up resources.
Yields:
Type DescriptionAsyncIterator[T]
The prepared task runner
Source code in prefect/task_runners.py
@asynccontextmanager\nasync def start(\n self: T,\n) -> AsyncIterator[T]:\n \"\"\"\n Start the task runner, preparing any resources necessary for task submission.\n\n Children should implement `_start` to prepare and clean up resources.\n\n Yields:\n The prepared task runner\n \"\"\"\n if self._started:\n raise RuntimeError(\"The task runner is already started!\")\n\n async with AsyncExitStack() as exit_stack:\n self.logger.debug(\"Starting task runner...\")\n try:\n await self._start(exit_stack)\n self._started = True\n yield self\n finally:\n self.logger.debug(\"Shutting down task runner...\")\n self._started = False\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit
abstractmethod
async
","text":"Submit a call for execution and return a PrefectFuture
that can be used to get the call result.
Parameters:
Name Type Description Defaulttask_run
The task run being submitted.
requiredtask_key
A unique key for this orchestration run of the task. Can be used for caching.
requiredcall
Callable[..., Awaitable[State[R]]]
The function to be executed
requiredrun_kwargs
A dict of keyword arguments to pass to call
Returns:
Type DescriptionNone
A future representing the result of call
execution
Source code in prefect/task_runners.py
@abc.abstractmethod\nasync def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n) -> None:\n \"\"\"\n Submit a call for execution and return a `PrefectFuture` that can be used to\n get the call result.\n\n Args:\n task_run: The task run being submitted.\n task_key: A unique key for this orchestration run of the task. Can be used\n for caching.\n call: The function to be executed\n run_kwargs: A dict of keyword arguments to pass to `call`\n\n Returns:\n A future representing the result of `call` execution\n \"\"\"\n raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait
abstractmethod
async
","text":"Given a PrefectFuture
, wait for its return state up to timeout
seconds. If it is not finished after the timeout expires, None
should be returned.
Implementers should be careful to ensure that this function never returns or raises an exception.
Source code inprefect/task_runners.py
@abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n \"\"\"\n Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n If it is not finished after the timeout expires, `None` should be returned.\n\n Implementers should be careful to ensure that this function never returns or\n raises an exception.\n \"\"\"\n raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner
","text":" Bases: BaseTaskRunner
A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio
.
Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>> ...\n
Source code in prefect/task_runners.py
class ConcurrentTaskRunner(BaseTaskRunner):\n \"\"\"\n A concurrent task runner that allows tasks to switch when blocking on IO.\n Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n Example:\n ```\n Using a thread for concurrency:\n >>> from prefect import flow\n >>> from prefect.task_runners import ConcurrentTaskRunner\n >>> @flow(task_runner=ConcurrentTaskRunner)\n >>> def my_flow():\n >>> ...\n ```\n \"\"\"\n\n def __init__(self):\n # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n # Runtime attributes\n self._task_group: anyio.abc.TaskGroup = None\n self._result_events: Dict[UUID, Event] = {}\n self._results: Dict[UUID, Any] = {}\n self._keys: Set[UUID] = set()\n\n super().__init__()\n\n @property\n def concurrency_type(self) -> TaskConcurrencyType:\n return TaskConcurrencyType.CONCURRENT\n\n def duplicate(self):\n return type(self)()\n\n async def submit(\n self,\n key: UUID,\n call: Callable[[], Awaitable[State[R]]],\n ) -> None:\n if not self._started:\n raise RuntimeError(\n \"The task runner must be started before submitting work.\"\n )\n\n if not self._task_group:\n raise RuntimeError(\n \"The concurrent task runner cannot be used to submit work after \"\n \"serialization.\"\n )\n\n # Create an event to set on completion\n self._result_events[key] = Event()\n\n # Rely on the event loop for concurrency\n self._task_group.start_soon(self._run_and_store_result, key, call)\n\n async def wait(\n self,\n key: UUID,\n timeout: float = None,\n ) -> Optional[State]:\n if not self._task_group:\n raise RuntimeError(\n \"The concurrent task runner cannot be used to wait for work after \"\n \"serialization.\"\n )\n\n return await self._get_run_result(key, timeout)\n\n async def _run_and_store_result(\n self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n ):\n \"\"\"\n Simple utility to store the orchestration result in memory on completion\n\n Since this run is occurring on the main thread, we capture exceptions to prevent\n task crashes from crashing the flow run.\n \"\"\"\n try:\n result = await call()\n except BaseException as exc:\n result = await exception_to_crashed_state(exc)\n\n self._results[key] = result\n self._result_events[key].set()\n\n async def _get_run_result(\n self, key: UUID, timeout: float = None\n ) -> Optional[State]:\n \"\"\"\n Block until the run result has been populated.\n \"\"\"\n result = None # retval on timeout\n\n # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n # stdlib behavior where the wrapped future is cancelled if the parent future is\n # cancelled (as it would be during a timeout here)\n with anyio.move_on_after(timeout):\n await self._result_events[key].wait()\n result = self._results[key]\n\n return result # timeout reached\n\n async def _start(self, exit_stack: AsyncExitStack):\n \"\"\"\n Start the process pool\n \"\"\"\n self._task_group = await exit_stack.enter_async_context(\n anyio.create_task_group()\n )\n\n def __getstate__(self):\n \"\"\"\n Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n \"\"\"\n data = self.__dict__.copy()\n data.update({k: None for k in {\"_task_group\"}})\n return data\n\n def __setstate__(self, data: dict):\n \"\"\"\n When deserialized, we will no longer have a reference to the task group.\n \"\"\"\n self.__dict__.update(data)\n self._task_group = None\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner
","text":" Bases: BaseTaskRunner
A simple task runner that executes calls as they are submitted.
If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group
or asyncio.gather
.
Source code in prefect/task_runners.py
class SequentialTaskRunner(BaseTaskRunner):\n \"\"\"\n A simple task runner that executes calls as they are submitted.\n\n If writing synchronous tasks, this runner will always execute tasks sequentially.\n If writing async tasks, this runner will execute tasks sequentially unless grouped\n using `anyio.create_task_group` or `asyncio.gather`.\n \"\"\"\n\n def __init__(self) -> None:\n super().__init__()\n self._results: Dict[str, State] = {}\n\n @property\n def concurrency_type(self) -> TaskConcurrencyType:\n return TaskConcurrencyType.SEQUENTIAL\n\n def duplicate(self):\n return type(self)()\n\n async def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n ) -> None:\n # Run the function immediately and store the result in memory\n try:\n result = await call()\n except BaseException as exc:\n result = await exception_to_crashed_state(exc)\n\n self._results[key] = result\n\n async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n return self._results[key]\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks
","text":"Module containing the base workflow task class and decorator - for most use cases, using the @task
decorator is preferred.
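A brief, hedged sketch of the decorator in use, combining a few of the options documented below (task_input_hash is Prefect's built-in cache key function; the task body and flow are placeholders):

from datetime import timedelta

from prefect import flow, task
from prefect.tasks import task_input_hash

@task(
    name="fetch-data",
    retries=3,
    retry_delay_seconds=[1, 10, 100],
    cache_key_fn=task_input_hash,
    cache_expiration=timedelta(hours=1),
    log_prints=True,
)
def fetch_data(url: str) -> str:
    print(f"fetching {url}")
    return url

@flow
def my_flow():
    return fetch_data("https://example.com")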
Task
","text":" Bases: Generic[P, R]
A Prefect task definition.
Note
We recommend using the @task
decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.
To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
Parameters:
Name Type Description Defaultfn
Callable[P, R]
The function defining the task.
requiredname
str
An optional name for the task; if not provided, the name will be inferred from the given function.
None
description
str
An optional string description for the task.
None
tags
Iterable[str]
An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags
context at task runtime.
None
version
str
An optional string specifying the version of this task definition
None
cache_key_fn
Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]
An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
None
cache_expiration
timedelta
An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
None
task_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
None
retries
Optional[int]
An optional number of times to retry on task run failure.
None
retry_delay_seconds
Optional[Union[float, int, List[float], Callable[[int], List[float]]]]
Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries
is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
None
retry_jitter_factor
Optional[float]
An optional factor that defines the factor to which a retry can be jittered in order to avoid a \"thundering herd\".
None
persist_result
Optional[bool]
An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None
, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
None
result_storage
Optional[ResultStorage]
An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
None
result_storage_key
Optional[str]
An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
None
result_serializer
Optional[ResultSerializer]
An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
None
timeout_seconds
Union[int, float]
An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
None
log_prints
Optional[bool]
If set, print
statements in the task will be redirected to the Prefect logger for the task run. Defaults to None
, which indicates that the value from the flow should be used.
False
refresh_cache
Optional[bool]
If set, cached results for the cache key are not used. Defaults to None
, which indicates that a cached result from a previous execution with matching cache key is used.
None
on_failure
Optional[List[Callable[[Task, TaskRun, State], None]]]
An optional list of callables to run when the task enters a failed state.
None
on_completion
Optional[List[Callable[[Task, TaskRun, State], None]]]
An optional list of callables to run when the task enters a completed state.
None
retry_condition_fn
Optional[Callable[[Task, TaskRun, State], bool]]
An optional callable run when a task run returns a Failed state. Should return True
if the task should continue to its retry policy (e.g. retries=3
), and False
if the task should end as failed. Defaults to None
, indicating the task should always continue to its retry policy.
None
viz_return_value
Optional[Any]
An optional value to return when the task dependency tree is visualized.
None
Source code in prefect/tasks.py
@PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n \"\"\"\n A Prefect task definition.\n\n !!! note\n We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n creates a new task run.\n\n To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n \"Returns\" respectively.\n\n Args:\n fn: The function defining the task.\n name: An optional name for the task; if not provided, the name will be inferred\n from the given function.\n description: An optional string description for the task.\n tags: An optional set of tags to be associated with runs of this task. These\n tags are combined with any tags defined by a `prefect.tags` context at\n task runtime.\n version: An optional string specifying the version of this task definition\n cache_key_fn: An optional callable that, given the task run context and call\n parameters, generates a string key; if the key matches a previous completed\n state, that state result will be restored instead of running the task again.\n cache_expiration: An optional amount of time indicating how long cached states\n for this task should be restorable; if not provided, cached states will\n never expire.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: An optional number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying the\n task after failure. This is only applicable if `retries` is nonzero. This\n setting can either be a number of seconds, a list of retry delays, or a\n callable that, given the total number of retries, generates a list of retry\n delays. If a number of seconds, that delay will be applied to all retries.\n If a list, each retry will wait for the corresponding delay before retrying.\n When passing a callable or a list, the number of configured retry delays\n cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a retry\n can be jittered in order to avoid a \"thundering herd\".\n persist_result: An optional toggle indicating whether the result of this task\n should be persisted to result storage. Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this task.\n Defaults to the value set in the flow the task is called in.\n result_storage_key: An optional key to store the result in storage at when persisted.\n Defaults to a unique identifier.\n result_serializer: An optional serializer to use to serialize the result of this\n task for persistence. Defaults to the value set in the flow the task is\n called in.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the task. If the task exceeds this runtime, it will be marked as failed.\n log_prints: If set, `print` statements in the task will be redirected to the\n Prefect logger for the task run. 
Defaults to `None`, which indicates\n that the value from the flow should be used.\n refresh_cache: If set, cached results for the cache key are not used.\n Defaults to `None`, which indicates that a cached result from a previous\n execution with matching cache key is used.\n on_failure: An optional list of callables to run when the task enters a failed state.\n on_completion: An optional list of callables to run when the task enters a completed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n should end as failed. Defaults to `None`, indicating the task should always continue\n to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n \"\"\"\n\n # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n # exactly in the @task decorator\n def __init__(\n self,\n fn: Callable[P, R],\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n version: str = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n cache_expiration: datetime.timedelta = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[\n Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ]\n ] = None,\n retry_jitter_factor: Optional[float] = None,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_serializer: Optional[ResultSerializer] = None,\n result_storage_key: Optional[str] = None,\n cache_result_in_memory: bool = True,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = False,\n refresh_cache: Optional[bool] = None,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n ):\n # Validate if hook passed is list and contains callables\n hook_categories = [on_completion, on_failure]\n hook_names = [\"on_completion\", \"on_failure\"]\n for hooks, hook_name in zip(hook_categories, hook_names):\n if hooks is not None:\n if not hooks:\n raise ValueError(f\"Empty list passed for '{hook_name}'\")\n try:\n hooks = list(hooks)\n except TypeError:\n raise TypeError(\n f\"Expected iterable for '{hook_name}'; got\"\n f\" {type(hooks).__name__} instead. Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n for hook in hooks:\n if not callable(hook):\n raise TypeError(\n f\"Expected callables in '{hook_name}'; got\"\n f\" {type(hook).__name__} instead. 
Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n if not callable(fn):\n raise TypeError(\"'fn' must be callable\")\n\n self.description = description or inspect.getdoc(fn)\n update_wrapper(self, fn)\n self.fn = fn\n self.isasync = inspect.iscoroutinefunction(self.fn)\n\n if not name:\n if not hasattr(self.fn, \"__name__\"):\n self.name = type(self.fn).__name__\n else:\n self.name = self.fn.__name__\n else:\n self.name = name\n\n if task_run_name is not None:\n if not isinstance(task_run_name, str) and not callable(task_run_name):\n raise TypeError(\n \"Expected string or callable for 'task_run_name'; got\"\n f\" {type(task_run_name).__name__} instead.\"\n )\n self.task_run_name = task_run_name\n\n self.version = version\n self.log_prints = log_prints\n\n raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n self.tags = set(tags if tags else [])\n\n if not hasattr(self.fn, \"__qualname__\"):\n self.task_key = to_qualified_name(type(self.fn))\n else:\n if self.fn.__module__ == \"__main__\":\n task_definition_path = inspect.getsourcefile(self.fn)\n self.task_key = hash_objects(\n self.name, os.path.abspath(task_definition_path)\n )\n else:\n self.task_key = to_qualified_name(self.fn)\n\n self.cache_key_fn = cache_key_fn\n self.cache_expiration = cache_expiration\n self.refresh_cache = refresh_cache\n\n # TaskRunPolicy settings\n # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n # validate that the user passes positive numbers here\n\n self.retries = (\n retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n )\n if retry_delay_seconds is None:\n retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n if callable(retry_delay_seconds):\n self.retry_delay_seconds = retry_delay_seconds(retries)\n else:\n self.retry_delay_seconds = retry_delay_seconds\n\n if isinstance(self.retry_delay_seconds, list) and (\n len(self.retry_delay_seconds) > 50\n ):\n raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n if retry_jitter_factor is not None and retry_jitter_factor < 0:\n raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n self.retry_jitter_factor = retry_jitter_factor\n self.persist_result = persist_result\n self.result_storage = result_storage\n self.result_serializer = result_serializer\n self.result_storage_key = result_storage_key\n self.cache_result_in_memory = cache_result_in_memory\n self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n # Warn if this task's `name` conflicts with another task while having a\n # different function. This is to detect the case where two or more tasks\n # share a name or are lambdas, which should result in a warning, and to\n # differentiate it from the case where the task was 'copied' via\n # `with_options`, which should not result in a warning.\n registry = PrefectObjectRegistry.get()\n\n if registry and any(\n other\n for other in registry.get_instances(Task)\n if other.name == self.name and id(other.fn) != id(self.fn)\n ):\n try:\n file = inspect.getsourcefile(self.fn)\n line_number = inspect.getsourcelines(self.fn)[1]\n except TypeError:\n file = \"unknown\"\n line_number = \"unknown\"\n\n warnings.warn(\n f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n \"conflicts with another task. 
Consider specifying a unique `name` \"\n \"parameter in the task definition:\\n\\n \"\n \"`@task(name='my_unique_name', ...)`\"\n )\n self.on_completion = on_completion\n self.on_failure = on_failure\n\n # retry_condition_fn must be a callable or None. If it is neither, raise a TypeError\n if retry_condition_fn is not None and not (callable(retry_condition_fn)):\n raise TypeError(\n \"Expected `retry_condition_fn` to be callable, got\"\n f\" {type(retry_condition_fn).__name__} instead.\"\n )\n\n self.retry_condition_fn = retry_condition_fn\n self.viz_return_value = viz_return_value\n\n def with_options(\n self,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n cache_expiration: datetime.timedelta = None,\n retries: Optional[int] = NotSet,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = NotSet,\n retry_jitter_factor: Optional[float] = NotSet,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n result_storage_key: Optional[str] = NotSet,\n cache_result_in_memory: Optional[bool] = None,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = NotSet,\n refresh_cache: Optional[bool] = NotSet,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n ):\n \"\"\"\n Create a new task from the current object, updating provided options.\n\n Args:\n name: A new name for the task.\n description: A new description for the task.\n tags: A new set of tags for the task. If given, existing tags are ignored,\n not merged.\n cache_key_fn: A new cache key function for the task.\n cache_expiration: A new cache expiration time for the task.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: A new number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying\n the task after failure. This is only applicable if `retries` is nonzero.\n This setting can either be a number of seconds, a list of retry delays,\n or a callable that, given the total number of retries, generates a list\n of retry delays. If a number of seconds, that delay will be applied to\n all retries. If a list, each retry will wait for the corresponding delay\n before retrying. 
When passing a callable or a list, the number of\n configured retry delays cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a\n retry can be jittered in order to avoid a \"thundering herd\".\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n result_storage_key: A new key for the persisted result to be stored at.\n timeout_seconds: A new maximum time for the task to complete in seconds.\n log_prints: A new option for enabling or disabling redirection of `print` statements.\n refresh_cache: A new option for enabling or disabling cache refresh.\n on_completion: A new list of callables to run when the task enters a completed state.\n on_failure: A new list of callables to run when the task enters a failed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state.\n Should return `True` if the task should continue to its retry policy, and `False`\n if the task should end as failed. Defaults to `None`, indicating the task should\n always continue to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A new `Task` instance.\n\n Examples:\n\n Create a new task from an existing task and update the name\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> new_task = my_task.with_options(name=\"My new task\")\n\n Create a new task from an existing task and update the retry settings\n\n >>> from random import randint\n >>>\n >>> @task(retries=1, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n >>>\n >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n Use a task with updated options within a flow\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> @flow\n >>> my_flow():\n >>> new_task = my_task.with_options(name=\"My new task\")\n >>> new_task()\n \"\"\"\n return Task(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n tags=tags or copy(self.tags),\n cache_key_fn=cache_key_fn or self.cache_key_fn,\n cache_expiration=cache_expiration or self.cache_expiration,\n task_run_name=task_run_name,\n retries=retries if retries is not NotSet else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not NotSet\n else self.retry_delay_seconds\n ),\n retry_jitter_factor=(\n retry_jitter_factor\n if retry_jitter_factor is not NotSet\n else self.retry_jitter_factor\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_storage_key=(\n result_storage_key\n if result_storage_key is not NotSet\n else self.result_storage_key\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n refresh_cache=(\n refresh_cache if refresh_cache is not 
NotSet else self.refresh_cache\n ),\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n viz_return_value=viz_return_value or self.viz_return_value,\n )\n\n @overload\n def __call__(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> None:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def __call__(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> T:\n ...\n\n @overload\n def __call__(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def __call__(\n self,\n *args: P.args,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: P.kwargs,\n ):\n \"\"\"\n Run the task and return the result. If `return_state` is True returns\n the result is wrapped in a Prefect State which provides error handling.\n \"\"\"\n from prefect.engine import enter_task_run_engine\n from prefect.task_engine import submit_autonomous_task_run_to_engine\n from prefect.task_runners import SequentialTaskRunner\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return_type = \"state\" if return_state else \"result\"\n\n task_run_tracker = get_task_viz_tracker()\n if task_run_tracker:\n return track_viz_task(\n self.isasync, self.name, parameters, self.viz_return_value\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n from prefect import get_client\n\n return submit_autonomous_task_run_to_engine(\n task=self,\n task_run=None,\n task_runner=SequentialTaskRunner(),\n parameters=parameters,\n return_type=return_type,\n client=get_client(),\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n task_runner=SequentialTaskRunner(),\n return_type=return_type,\n mapped=False,\n )\n\n @overload\n def _run(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[None, Sync]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def _run(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[State[T]]:\n ...\n\n @overload\n def _run(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def _run(\n self,\n *args: P.args,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: P.kwargs,\n ) -> Union[State, Awaitable[State]]:\n \"\"\"\n Run the task and return the final state.\n \"\"\"\n from prefect.engine import enter_task_run_engine\n from prefect.task_runners import SequentialTaskRunner\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=\"state\",\n task_runner=SequentialTaskRunner(),\n mapped=False,\n )\n\n @overload\n def submit(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[None, Sync]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def submit(\n 
self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[PrefectFuture[T, Async]]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[T, Sync]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> TaskRun:\n ...\n\n @overload\n def submit(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[TaskRun]:\n ...\n\n def submit(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n ) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n \"\"\"\n Submit a run of the task to the engine.\n\n If writing an async task, this call must be awaited.\n\n If called from within a flow function,\n\n Will create a new task run in the backing API and submit the task to the flow's\n task runner. This call only blocks execution while the task is being submitted,\n once it is submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n and they are fully resolved on submission.\n\n Args:\n *args: Arguments to run the task with\n return_state: Return the result of the flow run wrapped in a\n Prefect State.\n wait_for: Upstream task futures to wait for before starting the task\n **kwargs: Keyword arguments to run the task with\n\n Returns:\n If `return_state` is False a future allowing asynchronous access to\n the state of the task\n If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n the state of the task\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task():\n >>> return \"hello\"\n\n Run a task in a flow\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit()\n\n Wait for a task to finish\n\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit().wait()\n\n Use the result from a task in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> print(my_task.submit().result())\n >>>\n >>> my_flow()\n hello\n\n Run an async task in an async flow\n\n >>> @task\n >>> async def my_async_task():\n >>> pass\n >>>\n >>> @flow\n >>> async def my_flow():\n >>> await my_async_task.submit()\n\n Run a sync task in an async flow\n\n >>> @flow\n >>> async def my_flow():\n >>> my_task.submit()\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1():\n >>> pass\n >>>\n >>> @task\n >>> def task_2():\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.submit(wait_for=[x])\n\n \"\"\"\n\n from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.submit()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n create_autonomous_task_run_call = create_call(\n 
create_autonomous_task_run, task=self, parameters=parameters\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n else:\n return from_sync.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None, # Use the flow's task runner\n mapped=False,\n )\n\n @overload\n def map(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> List[PrefectFuture[None, Sync]]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def map(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n ...\n\n @overload\n def map(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> List[PrefectFuture[T, Sync]]:\n ...\n\n @overload\n def map(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> List[State[T]]:\n ...\n\n def map(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"\n Submit a mapped run of the task to a worker.\n\n Must be called within a flow function. If writing an async task, this\n call must be awaited.\n\n Must be called with at least one iterable and all iterables must be\n the same length. Any arguments that are not iterable will be treated as\n a static value and each task run will receive the same value.\n\n Will create as many task runs as the length of the iterable(s) in the\n backing API and submit the task runs to the flow's task runner. This\n call blocks if given a future as input while the future is resolved. It\n also blocks while the tasks are being submitted, once they are\n submitted, the flow function will continue executing. 
However, note\n that the `SequentialTaskRunner` does not implement parallel execution\n for sync tasks and they are fully resolved on submission.\n\n Args:\n *args: Iterable and static arguments to run the tasks with\n return_state: Return a list of Prefect States that wrap the results\n of each task run.\n wait_for: Upstream task futures to wait for before starting the\n task\n **kwargs: Keyword iterable arguments to run the task with\n\n Returns:\n A list of futures allowing asynchronous access to the state of the\n tasks\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task(x):\n >>> return x + 1\n\n Create mapped tasks\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.map([1, 2, 3])\n\n Wait for all mapped tasks to finish\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> future.wait()\n >>> # Now all of the mapped tasks have finished\n >>> my_task(10)\n\n Use the result from mapped tasks in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> print(future.result())\n >>> my_flow()\n 2\n 3\n 4\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1(x):\n >>> pass\n >>>\n >>> @task\n >>> def task_2(y):\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.map([1, 2, 3], wait_for=[x])\n\n Use a non-iterable input as a constant across mapped tasks\n >>> @task\n >>> def display(prefix, item):\n >>> print(prefix, item)\n >>>\n >>> @flow\n >>> def my_flow():\n >>> display.map(\"Check it out: \", [1, 2, 3])\n >>>\n >>> my_flow()\n Check it out: 1\n Check it out: 2\n Check it out: 3\n\n Use `unmapped` to treat an iterable argument as a constant\n >>> from prefect import unmapped\n >>>\n >>> @task\n >>> def add_n_to_items(items, n):\n >>> return [item + n for item in items]\n >>>\n >>> @flow\n >>> def my_flow():\n >>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n >>>\n >>> my_flow()\n [[11, 21], [12, 22], [13, 23]]\n \"\"\"\n\n from prefect.engine import begin_task_map, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict; do not apply defaults\n # since they should not be mapped over\n parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.map()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n map_call = create_call(\n begin_task_map,\n task=self,\n parameters=parameters,\n flow_run_context=None,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n autonomous=True,\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(map_call)\n else:\n return from_sync.wait_for_call_in_loop_thread(map_call)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n mapped=True,\n )\n\n def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n \"\"\"Serve the task using the provided task runner. 
This method is used to\n establish a websocket connection with the Prefect server and listen for\n submitted task runs to execute.\n\n Args:\n task_runner: The task runner to use for serving the task. If not provided,\n the default ConcurrentTaskRunner will be used.\n\n Examples:\n Serve a task using the default task runner\n >>> @task\n >>> def my_task():\n >>> return 1\n\n >>> my_task.serve()\n \"\"\"\n\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n raise ValueError(\n \"Task's `serve` method is an experimental feature and must be enabled with \"\n \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n )\n\n from prefect.task_server import serve\n\n serve(self, task_runner=task_runner)\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map
","text":"Submit a mapped run of the task to a worker.
Must be called within a flow function. If writing an async task, this call must be awaited.
Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value.
Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. If given a future as input, this call blocks while that future is resolved. It also blocks while the tasks are being submitted; once they are submitted, the flow function will continue executing. However, note that the `SequentialTaskRunner` does not implement parallel execution for sync tasks, and they are fully resolved on submission.
Parameters:

*args (Any): Iterable and static arguments to run the tasks with. Default: ()
return_state (bool): Return a list of Prefect States that wrap the results of each task run. Default: False
wait_for (Optional[Iterable[PrefectFuture]]): Upstream task futures to wait for before starting the task. Default: None
**kwargs (Any): Keyword iterable arguments to run the task with. Default: {}

Returns:

Any: A list of futures allowing asynchronous access to the state of the tasks
Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>> return x + 1\n\nCreate mapped tasks\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>> my_task.map([1, 2, 3])\n\nWait for all mapped tasks to finish\n\n>>> @flow\n>>> def my_flow():\n>>> futures = my_task.map([1, 2, 3])\n>>> for future in futures:\n>>> future.wait()\n>>> # Now all of the mapped tasks have finished\n>>> my_task(10)\n\nUse the result from mapped tasks in a flow\n\n>>> @flow\n>>> def my_flow():\n>>> futures = my_task.map([1, 2, 3])\n>>> for future in futures:\n>>> print(future.result())\n>>> my_flow()\n2\n3\n4\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1(x):\n>>> pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>> pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>> x = task_1.submit()\n>>>\n>>> # task 2 will wait for task_1 to complete\n>>> y = task_2.map([1, 2, 3], wait_for=[x])\n\nUse a non-iterable input as a constant across mapped tasks\n>>> @task\n>>> def display(prefix, item):\n>>> print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>> display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n\nUse `unmapped` to treat an iterable argument as a constant\n>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>> return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
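A further sketch (not part of the library's docstring), reusing `my_task` from above: passing `return_state=True` makes `map` return the tasks' final states instead of futures, per the overloads above.

>>> @flow
>>> def my_flow():
>>>     states = my_task.map([1, 2, 3], return_state=True)
>>>     # each entry is a Prefect State; .result() unwraps the task's return value
>>>     return [state.result() for state in states]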
Source code in prefect/tasks.py
def map(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n) -> Any:\n \"\"\"\n Submit a mapped run of the task to a worker.\n\n Must be called within a flow function. If writing an async task, this\n call must be awaited.\n\n Must be called with at least one iterable and all iterables must be\n the same length. Any arguments that are not iterable will be treated as\n a static value and each task run will receive the same value.\n\n Will create as many task runs as the length of the iterable(s) in the\n backing API and submit the task runs to the flow's task runner. This\n call blocks if given a future as input while the future is resolved. It\n also blocks while the tasks are being submitted, once they are\n submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution\n for sync tasks and they are fully resolved on submission.\n\n Args:\n *args: Iterable and static arguments to run the tasks with\n return_state: Return a list of Prefect States that wrap the results\n of each task run.\n wait_for: Upstream task futures to wait for before starting the\n task\n **kwargs: Keyword iterable arguments to run the task with\n\n Returns:\n A list of futures allowing asynchronous access to the state of the\n tasks\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task(x):\n >>> return x + 1\n\n Create mapped tasks\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.map([1, 2, 3])\n\n Wait for all mapped tasks to finish\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> future.wait()\n >>> # Now all of the mapped tasks have finished\n >>> my_task(10)\n\n Use the result from mapped tasks in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> print(future.result())\n >>> my_flow()\n 2\n 3\n 4\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1(x):\n >>> pass\n >>>\n >>> @task\n >>> def task_2(y):\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.map([1, 2, 3], wait_for=[x])\n\n Use a non-iterable input as a constant across mapped tasks\n >>> @task\n >>> def display(prefix, item):\n >>> print(prefix, item)\n >>>\n >>> @flow\n >>> def my_flow():\n >>> display.map(\"Check it out: \", [1, 2, 3])\n >>>\n >>> my_flow()\n Check it out: 1\n Check it out: 2\n Check it out: 3\n\n Use `unmapped` to treat an iterable argument as a constant\n >>> from prefect import unmapped\n >>>\n >>> @task\n >>> def add_n_to_items(items, n):\n >>> return [item + n for item in items]\n >>>\n >>> @flow\n >>> def my_flow():\n >>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n >>>\n >>> my_flow()\n [[11, 21], [12, 22], [13, 23]]\n \"\"\"\n\n from prefect.engine import begin_task_map, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict; do not apply defaults\n # since they should not be mapped over\n parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.map()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n 
PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n map_call = create_call(\n begin_task_map,\n task=self,\n parameters=parameters,\n flow_run_context=None,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n autonomous=True,\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(map_call)\n else:\n return from_sync.wait_for_call_in_loop_thread(map_call)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n mapped=True,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.serve","title":"serve
","text":"Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute.
Parameters:

task_runner (Optional[BaseTaskRunner]): The task runner to use for serving the task. If not provided, the default ConcurrentTaskRunner will be used. Default: None
Examples:
Serve a task using the default task runner
>>> @task\n>>> def my_task():\n>>> return 1\n
>>> my_task.serve()\n
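A further sketch (not from the docstring): pass an explicit task runner, assuming `ConcurrentTaskRunner` from `prefect.task_runners`. Per the source below, task scheduling is experimental and must first be enabled with `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`.

>>> from prefect.task_runners import ConcurrentTaskRunner
>>>
>>> # blocks the process and listens for task runs submitted to the server
>>> my_task.serve(task_runner=ConcurrentTaskRunner())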
Source code in prefect/tasks.py
def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n \"\"\"Serve the task using the provided task runner. This method is used to\n establish a websocket connection with the Prefect server and listen for\n submitted task runs to execute.\n\n Args:\n task_runner: The task runner to use for serving the task. If not provided,\n the default ConcurrentTaskRunner will be used.\n\n Examples:\n Serve a task using the default task runner\n >>> @task\n >>> def my_task():\n >>> return 1\n\n >>> my_task.serve()\n \"\"\"\n\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n raise ValueError(\n \"Task's `serve` method is an experimental feature and must be enabled with \"\n \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n )\n\n from prefect.task_server import serve\n\n serve(self, task_runner=task_runner)\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit
","text":"Submit a run of the task to the engine.
If writing an async task, this call must be awaited.
If called from within a flow function, this will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. However, note that the `SequentialTaskRunner` does not implement parallel execution for sync tasks, and they are fully resolved on submission.
Parameters:

*args (Any): Arguments to run the task with. Default: ()
return_state (bool): Return the result of the task run wrapped in a Prefect State. Default: False
wait_for (Optional[Iterable[PrefectFuture]]): Upstream task futures to wait for before starting the task. Default: None
**kwargs (Any): Keyword arguments to run the task with. Default: {}

Returns:

Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]: If `return_state` is False, a future allowing asynchronous access to the state of the task; if `return_state` is True, a future wrapped in a Prefect State allowing asynchronous access to the state of the task.
Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>> return \"hello\"\n\nRun a task in a flow\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>> my_task.submit()\n\nWait for a task to finish\n\n>>> @flow\n>>> def my_flow():\n>>> my_task.submit().wait()\n\nUse the result from a task in a flow\n\n>>> @flow\n>>> def my_flow():\n>>> print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n\nRun an async task in an async flow\n\n>>> @task\n>>> async def my_async_task():\n>>> pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>> await my_async_task.submit()\n\nRun a sync task in an async flow\n\n>>> @flow\n>>> async def my_flow():\n>>> my_task.submit()\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1():\n>>> pass\n>>>\n>>> @task\n>>> def task_2():\n>>> pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>> x = task_1.submit()\n>>>\n>>> # task 2 will wait for task_1 to complete\n>>> y = task_2.submit(wait_for=[x])\n
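A further sketch (not part of the docstring), reusing `my_task` from above: with `return_state=True`, `submit` returns the task's final state rather than a future.

>>> @flow
>>> def my_flow():
>>>     state = my_task.submit(return_state=True)
>>>     # unwrap the task's return value from the State
>>>     return state.result()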
Source code in prefect/tasks.py
def submit(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n \"\"\"\n Submit a run of the task to the engine.\n\n If writing an async task, this call must be awaited.\n\n If called from within a flow function,\n\n Will create a new task run in the backing API and submit the task to the flow's\n task runner. This call only blocks execution while the task is being submitted,\n once it is submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n and they are fully resolved on submission.\n\n Args:\n *args: Arguments to run the task with\n return_state: Return the result of the flow run wrapped in a\n Prefect State.\n wait_for: Upstream task futures to wait for before starting the task\n **kwargs: Keyword arguments to run the task with\n\n Returns:\n If `return_state` is False a future allowing asynchronous access to\n the state of the task\n If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n the state of the task\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task():\n >>> return \"hello\"\n\n Run a task in a flow\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit()\n\n Wait for a task to finish\n\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit().wait()\n\n Use the result from a task in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> print(my_task.submit().result())\n >>>\n >>> my_flow()\n hello\n\n Run an async task in an async flow\n\n >>> @task\n >>> async def my_async_task():\n >>> pass\n >>>\n >>> @flow\n >>> async def my_flow():\n >>> await my_async_task.submit()\n\n Run a sync task in an async flow\n\n >>> @flow\n >>> async def my_flow():\n >>> my_task.submit()\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1():\n >>> pass\n >>>\n >>> @task\n >>> def task_2():\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.submit(wait_for=[x])\n\n \"\"\"\n\n from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.submit()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n create_autonomous_task_run_call = create_call(\n create_autonomous_task_run, task=self, parameters=parameters\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n else:\n return from_sync.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None, # Use the flow's task runner\n mapped=False,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options
","text":"Create a new task from the current object, updating provided options.
Parameters:

name (str): A new name for the task. Default: None
description (str): A new description for the task. Default: None
tags (Iterable[str]): A new set of tags for the task. If given, existing tags are ignored, not merged. Default: None
cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]): A new cache key function for the task. Default: None
cache_expiration (timedelta): A new cache expiration time for the task. Default: None
task_run_name (Optional[Union[Callable[[], str], str]]): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. Default: None
retries (Optional[int]): A new number of times to retry on task run failure. Default: NotSet
retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]]): Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. Default: NotSet
retry_jitter_factor (Optional[float]): An optional factor defining how much a retry can be jittered in order to avoid a "thundering herd". Default: NotSet
persist_result (Optional[bool]): A new option for enabling or disabling result persistence. Default: NotSet
result_storage (Optional[ResultStorage]): A new storage type to use for results. Default: NotSet
result_serializer (Optional[ResultSerializer]): A new serializer to use for results. Default: NotSet
result_storage_key (Optional[str]): A new key for the persisted result to be stored at. Default: NotSet
timeout_seconds (Union[int, float]): A new maximum time for the task to complete in seconds. Default: None
log_prints (Optional[bool]): A new option for enabling or disabling redirection of `print` statements. Default: NotSet
refresh_cache (Optional[bool]): A new option for enabling or disabling cache refresh. Default: NotSet
on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]]): A new list of callables to run when the task enters a completed state. Default: None
on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]]): A new list of callables to run when the task enters a failed state. Default: None
retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]]): An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy, and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. Default: None
viz_return_value (Optional[Any]): An optional value to return when the task dependency tree is visualized. Default: None

Returns:

A new `Task` instance.
Create a new task from an existing task and update the name\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>> return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n\nCreate a new task from an existing task and update the retry settings\n\n>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>> x = randint(0, 5)\n>>> if x >= 3: # Make a task that fails sometimes\n>>> raise ValueError(\"Retry me please!\")\n>>> return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\nUse a task with updated options within a flow\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>> return 1\n>>>\n>>> @flow\n>>> def my_flow():\n>>> new_task = my_task.with_options(name=\"My new task\")\n>>> new_task()\n
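An additional hedged sketch (not from the docstring): a `retry_condition_fn` matching the `Callable[[Task, TaskRun, State], bool]` signature above. The predicate name is hypothetical; it inspects the failed state's exception by calling `state.result()`, which re-raises it.

>>> def retry_unless_value_error(task, task_run, state):
>>>     # Return False (end as failed) on ValueError, True (keep retrying) otherwise
>>>     try:
>>>         state.result()
>>>     except ValueError:
>>>         return False
>>>     except Exception:
>>>         return True
>>>     return False
>>>
>>> resilient_task = my_task.with_options(retries=2, retry_condition_fn=retry_unless_value_error)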
Source code in prefect/tasks.py
def with_options(\n self,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n cache_expiration: datetime.timedelta = None,\n retries: Optional[int] = NotSet,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = NotSet,\n retry_jitter_factor: Optional[float] = NotSet,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n result_storage_key: Optional[str] = NotSet,\n cache_result_in_memory: Optional[bool] = None,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = NotSet,\n refresh_cache: Optional[bool] = NotSet,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n):\n \"\"\"\n Create a new task from the current object, updating provided options.\n\n Args:\n name: A new name for the task.\n description: A new description for the task.\n tags: A new set of tags for the task. If given, existing tags are ignored,\n not merged.\n cache_key_fn: A new cache key function for the task.\n cache_expiration: A new cache expiration time for the task.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: A new number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying\n the task after failure. This is only applicable if `retries` is nonzero.\n This setting can either be a number of seconds, a list of retry delays,\n or a callable that, given the total number of retries, generates a list\n of retry delays. If a number of seconds, that delay will be applied to\n all retries. If a list, each retry will wait for the corresponding delay\n before retrying. When passing a callable or a list, the number of\n configured retry delays cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a\n retry can be jittered in order to avoid a \"thundering herd\".\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n result_storage_key: A new key for the persisted result to be stored at.\n timeout_seconds: A new maximum time for the task to complete in seconds.\n log_prints: A new option for enabling or disabling redirection of `print` statements.\n refresh_cache: A new option for enabling or disabling cache refresh.\n on_completion: A new list of callables to run when the task enters a completed state.\n on_failure: A new list of callables to run when the task enters a failed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state.\n Should return `True` if the task should continue to its retry policy, and `False`\n if the task should end as failed. 
Defaults to `None`, indicating the task should\n always continue to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A new `Task` instance.\n\n Examples:\n\n Create a new task from an existing task and update the name\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> new_task = my_task.with_options(name=\"My new task\")\n\n Create a new task from an existing task and update the retry settings\n\n >>> from random import randint\n >>>\n >>> @task(retries=1, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n >>>\n >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n Use a task with updated options within a flow\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> @flow\n >>> my_flow():\n >>> new_task = my_task.with_options(name=\"My new task\")\n >>> new_task()\n \"\"\"\n return Task(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n tags=tags or copy(self.tags),\n cache_key_fn=cache_key_fn or self.cache_key_fn,\n cache_expiration=cache_expiration or self.cache_expiration,\n task_run_name=task_run_name,\n retries=retries if retries is not NotSet else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not NotSet\n else self.retry_delay_seconds\n ),\n retry_jitter_factor=(\n retry_jitter_factor\n if retry_jitter_factor is not NotSet\n else self.retry_jitter_factor\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_storage_key=(\n result_storage_key\n if result_storage_key is not NotSet\n else self.result_storage_key\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n refresh_cache=(\n refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n ),\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n viz_return_value=viz_return_value or self.viz_return_value,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff
","text":"A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.
Parameters:

backoff_factor (float): The base delay for the first retry; subsequent retries will increase the delay time by powers of 2. Required.

Returns:

Callable[[int], List[float]]:
a callable that can be passed to the task constructor
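Example (a sketch, not from the docstring): the returned callable maps a retry count to a list of doubling delays, and is typically passed to the `retry_delay_seconds` parameter of a task.

>>> from prefect import task
>>> from prefect.tasks import exponential_backoff
>>>
>>> exponential_backoff(backoff_factor=10)(3)  # -> [10, 20, 40]
>>>
>>> @task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))
>>> def flaky_task():
>>>     ...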
Source code inprefect/tasks.py
def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n \"\"\"\n A task retry backoff utility that configures exponential backoff for task retries.\n The exponential backoff design matches the urllib3 implementation.\n\n Arguments:\n backoff_factor: the base delay for the first retry, subsequent retries will\n increase the delay time by powers of 2.\n\n Returns:\n a callable that can be passed to the task constructor\n \"\"\"\n\n def retry_backoff_callable(retries: int) -> List[float]:\n # no more than 50 retry delays can be configured on a task\n retries = min(retries, 50)\n\n return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n return retry_backoff_callable\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task
","text":"Decorator to designate a function as a task in a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Parameters:

name (str): An optional name for the task; if not provided, the name will be inferred from the given function. Default: None
description (str): An optional string description for the task. Default: None
tags (Iterable[str]): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. Default: None
version (str): An optional string specifying the version of this task definition. Default: None
cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. Default: None
cache_expiration (timedelta): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. Default: None
task_run_name (Optional[Union[Callable[[], str], str]]): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. Default: None
retries (int): An optional number of times to retry on task run failure. Default: None
retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]]): Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. Default: None
retry_jitter_factor (Optional[float]): An optional factor defining how much a retry can be jittered in order to avoid a "thundering herd". Default: None
persist_result (Optional[bool]): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that Prefect should choose whether the result should be persisted depending on the features being used. Default: None
result_storage (Optional[ResultStorage]): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. Default: None
result_storage_key (Optional[str]): An optional key to store the result in storage at when persisted. Defaults to a unique identifier. Default: None
result_serializer (Optional[ResultSerializer]): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. Default: None
timeout_seconds (Union[int, float]): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. Default: None
log_prints (Optional[bool]): If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. Default: None
refresh_cache (Optional[bool]): If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. Default: None
on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]]): An optional list of callables to run when the task enters a failed state. Default: None
on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]]): An optional list of callables to run when the task enters a completed state. Default: None
retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]]): An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. Default: None
viz_return_value (Any): An optional value to return when the task dependency tree is visualized. Default: None

Returns:

A callable `Task` object which, when called, will submit the task for execution.
Examples:
Define a simple task
>>> @task\n>>> def add(x, y):\n>>> return x + y\n
Define an async task
>>> @task\n>>> async def add(x, y):\n>>> return x + y\n
Define a task with tags and a description
>>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n>>> def my_task():\n>>> pass\n
Define a task with a custom name
>>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>> pass\n
Define a task that retries 3 times with a 5 second delay between attempts
>>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>> x = randint(0, 5)\n>>> if x >= 3: # Make a task that fails sometimes\n>>> raise ValueError(\"Retry me please!\")\n>>> return x\n
Define a task that is cached for a day based on its inputs
>>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>> return \"hello\"\n
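Define a task whose retry delays back off exponentially with jitter (a hedged sketch combining parameters documented above with the `exponential_backoff` utility from this page)

>>> from prefect.tasks import exponential_backoff
>>>
>>> @task(
>>>     retries=3,
>>>     retry_delay_seconds=exponential_backoff(backoff_factor=10),
>>>     retry_jitter_factor=0.5,
>>> )
>>> def flaky_task():
>>>     ...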
Source code in prefect/tasks.py
def task(\n __fn=None,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n version: str = None,\n cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n cache_expiration: datetime.timedelta = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: int = None,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = None,\n retry_jitter_factor: Optional[float] = None,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_storage_key: Optional[str] = None,\n result_serializer: Optional[ResultSerializer] = None,\n cache_result_in_memory: bool = True,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = None,\n refresh_cache: Optional[bool] = None,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Any = None,\n):\n \"\"\"\n Decorator to designate a function as a task in a Prefect workflow.\n\n This decorator may be used for asynchronous or synchronous functions.\n\n Args:\n name: An optional name for the task; if not provided, the name will be inferred\n from the given function.\n description: An optional string description for the task.\n tags: An optional set of tags to be associated with runs of this task. These\n tags are combined with any tags defined by a `prefect.tags` context at\n task runtime.\n version: An optional string specifying the version of this task definition\n cache_key_fn: An optional callable that, given the task run context and call\n parameters, generates a string key; if the key matches a previous completed\n state, that state result will be restored instead of running the task again.\n cache_expiration: An optional amount of time indicating how long cached states\n for this task should be restorable; if not provided, cached states will\n never expire.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: An optional number of times to retry on task run failure\n retry_delay_seconds: Optionally configures how long to wait before retrying the\n task after failure. This is only applicable if `retries` is nonzero. This\n setting can either be a number of seconds, a list of retry delays, or a\n callable that, given the total number of retries, generates a list of retry\n delays. If a number of seconds, that delay will be applied to all retries.\n If a list, each retry will wait for the corresponding delay before retrying.\n When passing a callable or a list, the number of configured retry delays\n cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a retry\n can be jittered in order to avoid a \"thundering herd\".\n persist_result: An optional toggle indicating whether the result of this task\n should be persisted to result storage. 
Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this task.\n Defaults to the value set in the flow the task is called in.\n result_storage_key: An optional key to store the result in storage at when persisted.\n Defaults to a unique identifier.\n result_serializer: An optional serializer to use to serialize the result of this\n task for persistence. Defaults to the value set in the flow the task is\n called in.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the task. If the task exceeds this runtime, it will be marked as failed.\n log_prints: If set, `print` statements in the task will be redirected to the\n Prefect logger for the task run. Defaults to `None`, which indicates\n that the value from the flow should be used.\n refresh_cache: If set, cached results for the cache key are not used.\n Defaults to `None`, which indicates that a cached result from a previous\n execution with matching cache key is used.\n on_failure: An optional list of callables to run when the task enters a failed state.\n on_completion: An optional list of callables to run when the task enters a completed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n should end as failed. Defaults to `None`, indicating the task should always continue\n to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A callable `Task` object which, when called, will submit the task for execution.\n\n Examples:\n Define a simple task\n\n >>> @task\n >>> def add(x, y):\n >>> return x + y\n\n Define an async task\n\n >>> @task\n >>> async def add(x, y):\n >>> return x + y\n\n Define a task with tags and a description\n\n >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n >>> def my_task():\n >>> pass\n\n Define a task with a custom name\n\n >>> @task(name=\"The Ultimate Task\")\n >>> def my_task():\n >>> pass\n\n Define a task that retries 3 times with a 5 second delay between attempts\n\n >>> from random import randint\n >>>\n >>> @task(retries=3, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n\n Define a task that is cached for a day based on its inputs\n\n >>> from prefect.tasks import task_input_hash\n >>> from datetime import timedelta\n >>>\n >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n >>> def my_task():\n >>> return \"hello\"\n \"\"\"\n\n if __fn:\n return cast(\n Task[P, R],\n Task(\n fn=__fn,\n name=name,\n description=description,\n tags=tags,\n version=version,\n cache_key_fn=cache_key_fn,\n cache_expiration=cache_expiration,\n task_run_name=task_run_name,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n retry_jitter_factor=retry_jitter_factor,\n persist_result=persist_result,\n result_storage=result_storage,\n result_storage_key=result_storage_key,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n timeout_seconds=timeout_seconds,\n log_prints=log_prints,\n refresh_cache=refresh_cache,\n on_completion=on_completion,\n on_failure=on_failure,\n 
retry_condition_fn=retry_condition_fn,\n viz_return_value=viz_return_value,\n ),\n )\n else:\n return cast(\n Callable[[Callable[P, R]], Task[P, R]],\n partial(\n task,\n name=name,\n description=description,\n tags=tags,\n version=version,\n cache_key_fn=cache_key_fn,\n cache_expiration=cache_expiration,\n task_run_name=task_run_name,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n retry_jitter_factor=retry_jitter_factor,\n persist_result=persist_result,\n result_storage=result_storage,\n result_storage_key=result_storage_key,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n timeout_seconds=timeout_seconds,\n log_prints=log_prints,\n refresh_cache=refresh_cache,\n on_completion=on_completion,\n on_failure=on_failure,\n retry_condition_fn=retry_condition_fn,\n viz_return_value=viz_return_value,\n ),\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash
","text":"A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.
Parameters:

context (TaskRunContext): The active `TaskRunContext`. Required.
arguments (Dict[str, Any]): A dictionary of arguments to be passed to the underlying task. Required.

Returns:

Optional[str]:
a string hash if hashing succeeded, else None
prefect/tasks.py
def task_input_hash(\n context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n \"\"\"\n A task cache key implementation which hashes all inputs to the task using a JSON or\n cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n serializer is used as a fallback. If cloudpickle fails, this will return a null key\n indicating that a cache key could not be generated for the given inputs.\n\n Arguments:\n context: the active `TaskRunContext`\n arguments: a dictionary of arguments to be passed to the underlying task\n\n Returns:\n a string hash if hashing succeeded, else `None`\n \"\"\"\n return hash_objects(\n # We use the task key to get the qualified name for the task and include the\n # task functions `co_code` bytes to avoid caching when the underlying function\n # changes\n context.task.task_key,\n context.task.fn.__code__.co_code.hex(),\n arguments,\n )\n
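Typical usage (a sketch mirroring the cached-task example in the `task` decorator docs above): pass this function as `cache_key_fn` so that runs with identical inputs reuse the cached state until it expires.

>>> from datetime import timedelta
>>> from prefect import task
>>> from prefect.tasks import task_input_hash
>>>
>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
>>> def add(x, y):
>>>     return x + y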
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/testing/","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/testing/#prefect.testing","title":"prefect.testing
","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/variables/","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables","title":"prefect.variables
","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.get","title":"get
async
","text":"Get a variable by name. If doesn't exist return the default.
from prefect import variables\n\n @flow\n def my_flow():\n var = variables.get(\"my_var\")\n
or
from prefect import variables\n\n @flow\n async def my_flow():\n var = await variables.get(\"my_var\")\n
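A small additional sketch (not from the docstring): `get` accepts a `default` that is returned when the variable has not been set.

from prefect import flow, variables

@flow
def my_flow():
    # Falls back to the provided default when "my_var" does not exist
    var = variables.get("my_var", default="fallback-value")
    return var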
Source code in prefect/variables.py
@sync_compatible\nasync def get(name: str, default: str = None) -> Optional[str]:\n \"\"\"\n Get a variable by name. If doesn't exist return the default.\n ```\n from prefect import variables\n\n @flow\n def my_flow():\n var = variables.get(\"my_var\")\n ```\n or\n ```\n from prefect import variables\n\n @flow\n async def my_flow():\n var = await variables.get(\"my_var\")\n ```\n \"\"\"\n variable = await _get_variable_by_name(name)\n return variable.value if variable else default\n
","tags":["Python API","variables"]},{"location":"api-ref/prefect/blocks/core/","title":"core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core
","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block
","text":" Bases: BaseModel
, ABC
A base class for implementing a block that wraps an external service.
This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. `_block_document_name`, `_block_document_id`, `_block_schema_id`, and `_block_type_id` are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.
Instead of the `__init__` method, a block implementation allows the definition of a `block_initialization`
method that is called after initialization.
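For illustration (a hedged sketch, not from the source below), a minimal custom block; the class name and fields are hypothetical, and `block_initialization` is used to normalize a field after the model is initialized:

from prefect.blocks.core import Block


class APIConfig(Block):
    """A hypothetical block holding connection settings for an external API."""

    base_url: str
    timeout_seconds: int = 30

    def block_initialization(self) -> None:
        # Runs after __init__; a convenient place to derive or normalize fields
        self.base_url = self.base_url.rstrip("/")

Such a block can be instantiated directly (e.g. `APIConfig(base_url="https://example.com/")`) and used interactively without being registered with the Prefect API.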
prefect/blocks/core.py
@register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n \"\"\"\n A base class for implementing a block that wraps an external service.\n\n This class can be defined with an arbitrary set of fields and methods, and\n couples business logic with data contained in an block document.\n `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n `_block_type_id` are reserved by Prefect as Block metadata fields, but\n otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n without populating these metadata fields, but can only be used interactively,\n not with the Prefect API.\n\n Instead of the __init__ method, a block implementation allows the\n definition of a `block_initialization` method that is called after\n initialization.\n \"\"\"\n\n class Config:\n extra = \"allow\"\n\n json_encoders = {SecretDict: lambda v: v.dict()}\n\n @staticmethod\n def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.block_initialization()\n\n def __str__(self) -> str:\n return self.__repr__()\n\n def __repr_args__(self):\n repr_args = super().__repr_args__()\n data_keys = self.schema()[\"properties\"].keys()\n return [\n (key, value) for key, value in repr_args if key is None or key in data_keys\n ]\n\n def block_initialization(self) -> None:\n pass\n\n # -- private class variables\n # set by the class itself\n\n # Attribute to customize the name of the block type created\n # when the block is registered with the API. 
If not set, block\n # type name will default to the class name.\n _block_type_name: Optional[str] = None\n _block_type_slug: Optional[str] = None\n\n # Attributes used to set properties on a block type when registered\n # with the API.\n _logo_url: Optional[HttpUrl] = None\n _documentation_url: Optional[HttpUrl] = None\n _description: Optional[str] = None\n _code_example: Optional[str] = None\n\n # -- private instance variables\n # these are set when blocks are loaded from the API\n _block_type_id: Optional[UUID] = None\n _block_schema_id: Optional[UUID] = None\n _block_schema_capabilities: Optional[List[str]] = None\n _block_schema_version: Optional[str] = None\n _block_document_id: Optional[UUID] = None\n _block_document_name: Optional[str] = None\n _is_anonymous: Optional[bool] = None\n\n # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n # decorated directly.\n _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n @classmethod\n def __dispatch_key__(cls):\n if cls.__name__ == \"Block\":\n return None # The base class is abstract\n return block_schema_to_key(cls._to_block_schema())\n\n @classmethod\n def get_block_type_name(cls):\n return cls._block_type_name or cls.__name__\n\n @classmethod\n def get_block_type_slug(cls):\n return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n @classmethod\n def get_block_capabilities(cls) -> FrozenSet[str]:\n \"\"\"\n Returns the block capabilities for this Block. Recursively collects all block\n capabilities of all parent classes into a single frozenset.\n \"\"\"\n return frozenset(\n {\n c\n for base in (cls,) + cls.__mro__\n for c in getattr(base, \"_block_schema_capabilities\", []) or []\n }\n )\n\n @classmethod\n def _get_current_package_version(cls):\n current_module = inspect.getmodule(cls)\n if current_module:\n top_level_module = sys.modules[\n current_module.__name__.split(\".\")[0] or \"__main__\"\n ]\n try:\n version = Version(top_level_module.__version__)\n # Strips off any local version information\n return version.base_version\n except (AttributeError, InvalidVersion):\n # Module does not have a __version__ attribute or is not a parsable format\n pass\n return DEFAULT_BLOCK_SCHEMA_VERSION\n\n @classmethod\n def get_block_schema_version(cls) -> str:\n return cls._block_schema_version or cls._get_current_package_version()\n\n @classmethod\n def _to_block_schema_reference_dict(cls):\n return dict(\n block_type_slug=cls.get_block_type_slug(),\n block_schema_checksum=cls._calculate_schema_checksum(),\n )\n\n @classmethod\n def _calculate_schema_checksum(\n cls, block_schema_fields: Optional[Dict[str, Any]] = None\n ):\n \"\"\"\n Generates a unique hash for the underlying schema of block.\n\n Args:\n block_schema_fields: Dictionary detailing block schema fields to generate a\n checksum for. 
The fields of the current class is used if this parameter\n is not provided.\n\n Returns:\n str: The calculated checksum prefixed with the hashing algorithm used.\n \"\"\"\n block_schema_fields = (\n cls.schema() if block_schema_fields is None else block_schema_fields\n )\n fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n if fields_for_checksum.get(\"definitions\"):\n non_block_definitions = _get_non_block_reference_definitions(\n fields_for_checksum, fields_for_checksum[\"definitions\"]\n )\n if non_block_definitions:\n fields_for_checksum[\"definitions\"] = non_block_definitions\n else:\n # Pop off definitions entirely instead of empty dict for consistency\n # with the OpenAPI specification\n fields_for_checksum.pop(\"definitions\")\n checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n if checksum is None:\n raise ValueError(\"Unable to compute checksum for block schema\")\n else:\n return f\"sha256:{checksum}\"\n\n def _to_block_document(\n self,\n name: Optional[str] = None,\n block_schema_id: Optional[UUID] = None,\n block_type_id: Optional[UUID] = None,\n is_anonymous: Optional[bool] = None,\n ) -> BlockDocument:\n \"\"\"\n Creates the corresponding block document based on the data stored in a block.\n The corresponding block document name, block type ID, and block schema ID must\n either be passed into the method or configured on the block.\n\n Args:\n name: The name of the created block document. Not required if anonymous.\n block_schema_id: UUID of the corresponding block schema.\n block_type_id: UUID of the corresponding block type.\n is_anonymous: if True, an anonymous block is created. Anonymous\n blocks are not displayed in the UI and used primarily for system\n operations and features that need to automatically generate blocks.\n\n Returns:\n BlockDocument: Corresponding block document\n populated with the block's configured data.\n \"\"\"\n if is_anonymous is None:\n is_anonymous = self._is_anonymous or False\n\n # name must be present if not anonymous\n if not is_anonymous and not name and not self._block_document_name:\n raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n if not block_schema_id and not self._block_schema_id:\n raise ValueError(\n \"No block schema ID provided, either as an argument or on the block.\"\n )\n if not block_type_id and not self._block_type_id:\n raise ValueError(\n \"No block type ID provided, either as an argument or on the block.\"\n )\n\n # The keys passed to `include` must NOT be aliases, else some items will be missed\n # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n # `block_document_data`` must return the aliased version for it to show in the UI\n block_document_data = self.dict(by_alias=True, include=data_keys)\n\n # Iterate through and find blocks that already have saved block documents to\n # create references to those saved block documents.\n for key in data_keys:\n field_value = getattr(self, key)\n if (\n isinstance(field_value, Block)\n and field_value._block_document_id is not None\n ):\n block_document_data[key] = {\n \"$ref\": {\"block_document_id\": field_value._block_document_id}\n }\n\n return BlockDocument(\n id=self._block_document_id or uuid4(),\n name=(name or self._block_document_name) if not is_anonymous else None,\n block_schema_id=block_schema_id or self._block_schema_id,\n block_type_id=block_type_id or self._block_type_id,\n data=block_document_data,\n block_schema=self._to_block_schema(\n block_type_id=block_type_id or self._block_type_id,\n ),\n block_type=self._to_block_type(),\n is_anonymous=is_anonymous,\n )\n\n @classmethod\n def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n \"\"\"\n Creates the corresponding block schema of the block.\n The corresponding block_type_id must either be passed into\n the method or configured on the block.\n\n Args:\n block_type_id: UUID of the corresponding block type.\n\n Returns:\n BlockSchema: The corresponding block schema.\n \"\"\"\n fields = cls.schema()\n return BlockSchema(\n id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n checksum=cls._calculate_schema_checksum(),\n fields=fields,\n block_type_id=block_type_id or cls._block_type_id,\n block_type=cls._to_block_type(),\n capabilities=list(cls.get_block_capabilities()),\n version=cls.get_block_schema_version(),\n )\n\n @classmethod\n def _parse_docstring(cls) -> List[DocstringSection]:\n \"\"\"\n Parses the docstring into list of DocstringSection objects.\n Helper method used primarily to suppress irrelevant logs, e.g.\n `<module>:11: No type or annotation for parameter 'write_json'`\n because griffe is unable to parse the types from pydantic.BaseModel.\n \"\"\"\n with disable_logger(\"griffe.docstrings.google\"):\n with disable_logger(\"griffe.agents.nodes\"):\n docstring = Docstring(cls.__doc__)\n parsed = parse(docstring, Parser.google)\n return parsed\n\n @classmethod\n def get_description(cls) -> Optional[str]:\n \"\"\"\n Returns the description for the current block. Attempts to parse\n description from class docstring if an override is not defined.\n \"\"\"\n description = cls._description\n # If no description override has been provided, find the first text section\n # and use that as the description\n if description is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n parsed_description = next(\n (\n section.as_dict().get(\"value\")\n for section in parsed\n if section.kind == DocstringSectionKind.text\n ),\n None,\n )\n if isinstance(parsed_description, str):\n description = parsed_description.strip()\n return description\n\n @classmethod\n def get_code_example(cls) -> Optional[str]:\n \"\"\"\n Returns the code example for the given block. 
Attempts to parse\n code example from the class docstring if an override is not provided.\n \"\"\"\n code_example = (\n dedent(cls._code_example) if cls._code_example is not None else None\n )\n # If no code example override has been provided, attempt to find a examples\n # section or an admonition with the annotation \"example\" and use that as the\n # code example\n if code_example is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n for section in parsed:\n # Section kind will be \"examples\" if Examples section heading is used.\n if section.kind == DocstringSectionKind.examples:\n # Examples sections are made up of smaller sections that need to be\n # joined with newlines. Smaller sections are represented as tuples\n # with shape (DocstringSectionKind, str)\n code_example = \"\\n\".join(\n (part[1] for part in section.as_dict().get(\"value\", []))\n )\n break\n # Section kind will be \"admonition\" if Example section heading is used.\n if section.kind == DocstringSectionKind.admonition:\n value = section.as_dict().get(\"value\", {})\n if value.get(\"annotation\") == \"example\":\n code_example = value.get(\"description\")\n break\n\n if code_example is None:\n # If no code example has been specified or extracted from the class\n # docstring, generate a sensible default\n code_example = cls._generate_code_example()\n\n return code_example\n\n @classmethod\n def _generate_code_example(cls) -> str:\n \"\"\"Generates a default code example for the current class\"\"\"\n qualified_name = to_qualified_name(cls)\n module_str = \".\".join(qualified_name.split(\".\")[:-1])\n class_name = cls.__name__\n block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n return dedent(\n f\"\"\"\\\n ```python\n from {module_str} import {class_name}\n\n {block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n ```\"\"\"\n )\n\n @classmethod\n def _to_block_type(cls) -> BlockType:\n \"\"\"\n Creates the corresponding block type of the block.\n\n Returns:\n BlockType: The corresponding block type.\n \"\"\"\n return BlockType(\n id=cls._block_type_id or uuid4(),\n slug=cls.get_block_type_slug(),\n name=cls.get_block_type_name(),\n logo_url=cls._logo_url,\n documentation_url=cls._documentation_url,\n description=cls.get_description(),\n code_example=cls.get_code_example(),\n )\n\n @classmethod\n def _from_block_document(cls, block_document: BlockDocument):\n \"\"\"\n Instantiates a block from a given block document. 
The corresponding block class\n will be looked up in the block registry based on the corresponding block schema\n of the provided block document.\n\n Args:\n block_document: The block document used to instantiate a block.\n\n Raises:\n ValueError: If the provided block document doesn't have a corresponding block\n schema.\n\n Returns:\n Block: Hydrated block with data from block document.\n \"\"\"\n if block_document.block_schema is None:\n raise ValueError(\n \"Unable to determine block schema for provided block document\"\n )\n\n block_cls = (\n cls\n if cls.__name__ != \"Block\"\n # Look up the block class by dispatch\n else cls.get_block_class_from_schema(block_document.block_schema)\n )\n\n block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n block = block_cls.parse_obj(block_document.data)\n block._block_document_id = block_document.id\n block.__class__._block_schema_id = block_document.block_schema_id\n block.__class__._block_type_id = block_document.block_type_id\n block._block_document_name = block_document.name\n block._is_anonymous = block_document.is_anonymous\n block._define_metadata_on_nested_blocks(\n block_document.block_document_references\n )\n\n # Due to the way blocks are loaded we can't directly instrument the\n # `load` method and have the data be about the block document. Instead\n # this will emit a proxy event for the load method so that block\n # document data can be included instead of the event being about an\n # 'anonymous' block.\n\n emit_instance_method_called_event(block, \"load\", successful=True)\n\n return block\n\n def _event_kind(self) -> str:\n return f\"prefect.block.{self.get_block_type_slug()}\"\n\n def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n if not (self._block_document_id and self._block_document_name):\n return None\n\n return (\n {\n \"prefect.resource.id\": (\n f\"prefect.block-document.{self._block_document_id}\"\n ),\n \"prefect.resource.name\": self._block_document_name,\n },\n [\n {\n \"prefect.resource.id\": (\n f\"prefect.block-type.{self.get_block_type_slug()}\"\n ),\n \"prefect.resource.role\": \"block-type\",\n }\n ],\n )\n\n @classmethod\n def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a schema.\n \"\"\"\n return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n @classmethod\n def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a key.\n \"\"\"\n # Ensure collections are imported and have the opportunity to register types\n # before looking up the block class\n prefect.plugins.load_prefect_collections()\n\n return lookup_type(cls, key)\n\n def _define_metadata_on_nested_blocks(\n self, block_document_references: Dict[str, Dict[str, Any]]\n ):\n \"\"\"\n Recursively populates metadata fields on nested blocks based on the\n provided block document references.\n \"\"\"\n for item in block_document_references.items():\n field_name, block_document_reference = item\n nested_block = getattr(self, field_name)\n if isinstance(nested_block, Block):\n nested_block_document_info = block_document_reference.get(\n \"block_document\", {}\n )\n nested_block._define_metadata_on_nested_blocks(\n nested_block_document_info.get(\"block_document_references\", {})\n )\n nested_block_document_id = nested_block_document_info.get(\"id\")\n nested_block._block_document_id = (\n UUID(nested_block_document_id) if nested_block_document_id 
else None\n )\n nested_block._block_document_name = nested_block_document_info.get(\n \"name\"\n )\n nested_block._is_anonymous = nested_block_document_info.get(\n \"is_anonymous\"\n )\n\n @classmethod\n @inject_client\n async def _get_block_document(\n cls,\n name: str,\n client: \"PrefectClient\" = None,\n ):\n if cls.__name__ == \"Block\":\n block_type_slug, block_document_name = name.split(\"/\", 1)\n else:\n block_type_slug = cls.get_block_type_slug()\n block_document_name = name\n\n try:\n block_document = await client.read_block_document_by_name(\n name=block_document_name, block_type_slug=block_type_slug\n )\n except prefect.exceptions.ObjectNotFound as e:\n raise ValueError(\n f\"Unable to find block document named {block_document_name} for block\"\n f\" type {block_type_slug}\"\n ) from e\n\n return block_document, block_document_name\n\n @classmethod\n @sync_compatible\n @inject_client\n async def load(\n cls,\n name: str,\n validate: bool = True,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Retrieves data from the block document with the given name for the block type\n that corresponds with the current class and returns an instantiated version of\n the current class with the data stored in the block document.\n\n If a block document for a given block type is saved with a different schema\n than the current class calling `load`, a warning will be raised.\n\n If the current class schema is a subset of the block document schema, the block\n can be loaded as normal using the default `validate = True`.\n\n If the current class schema is a superset of the block document schema, `load`\n must be called with `validate` set to False to prevent a validation error. In\n this case, the block attributes will default to `None` and must be set manually\n and saved to a new block document before the block can be used as expected.\n\n Args:\n name: The name or slug of the block document. A block document slug is a\n string with the format <block_type_slug>/<block_document_name>\n validate: If False, the block document will be loaded without Pydantic\n validating the block schema. This is useful if the block schema has\n changed client-side since the block document referred to by `name` was saved.\n client: The client to use to load the block document. 
If not provided, the\n default client will be injected.\n\n Raises:\n ValueError: If the requested block document is not found.\n\n Returns:\n An instance of the current class hydrated with the data stored in the\n block document with the specified name.\n\n Examples:\n Load from a Block subclass with a block document name:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Custom.load(\"my-custom-message\")\n ```\n\n Load from Block with a block document slug:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Block.load(\"custom/my-custom-message\")\n ```\n\n Migrate a block document to a new schema:\n ```python\n # original class\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n # Updated class with new required field\n class Custom(Block):\n message: str\n number_of_ducks: int\n\n loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n # Prints UserWarning about schema mismatch\n\n loaded_block.number_of_ducks = 42\n\n loaded_block.save(\"my-custom-message\", overwrite=True)\n ```\n \"\"\"\n block_document, block_document_name = await cls._get_block_document(name)\n\n try:\n return cls._from_block_document(block_document)\n except ValidationError as e:\n if not validate:\n missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n missing_block_data = {field: None for field in missing_fields}\n warnings.warn(\n f\"Could not fully load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} - this is likely because one or more\"\n \" required fields were added to the schema for\"\n f\" {cls.__name__!r} that did not exist on the class when this block\"\n \" was last saved. Please specify values for new field(s):\"\n f\" {listrepr(missing_fields)}, then run\"\n f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n \" and load this block again before attempting to use it.\"\n )\n return cls.construct(**block_document.data, **missing_block_data)\n raise RuntimeError(\n f\"Unable to load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n \" validation, try loading again with `validate=False`.\"\n ) from e\n\n @staticmethod\n def is_block_class(block) -> bool:\n return _is_subclass(block, Block)\n\n @classmethod\n @sync_compatible\n @inject_client\n async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n \"\"\"\n Makes block available for configuration with current Prefect API.\n Recursively registers all nested blocks. Registration is idempotent.\n\n Args:\n client: Optional client to use for registering type and schema with the\n Prefect API. 
A new client will be created and used if one is not\n provided.\n \"\"\"\n if cls.__name__ == \"Block\":\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on the Block class directly.\"\n )\n if ABC in getattr(cls, \"__bases__\", []):\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on a Block interface class directly.\"\n )\n\n for field in cls.__fields__.values():\n if Block.is_block_class(field.type_):\n await field.type_.register_type_and_schema(client=client)\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n await type_.register_type_and_schema(client=client)\n\n try:\n block_type = await client.read_block_type_by_slug(\n slug=cls.get_block_type_slug()\n )\n cls._block_type_id = block_type.id\n local_block_type = cls._to_block_type()\n if _should_update_block_type(\n local_block_type=local_block_type, server_block_type=block_type\n ):\n await client.update_block_type(\n block_type_id=block_type.id, block_type=local_block_type\n )\n except prefect.exceptions.ObjectNotFound:\n block_type = await client.create_block_type(block_type=cls._to_block_type())\n cls._block_type_id = block_type.id\n\n try:\n block_schema = await client.read_block_schema_by_checksum(\n checksum=cls._calculate_schema_checksum(),\n version=cls.get_block_schema_version(),\n )\n except prefect.exceptions.ObjectNotFound:\n block_schema = await client.create_block_schema(\n block_schema=cls._to_block_schema(block_type_id=block_type.id)\n )\n\n cls._block_schema_id = block_schema.id\n\n @inject_client\n async def _save(\n self,\n name: Optional[str] = None,\n is_anonymous: bool = False,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Saves the values of a block as a block document with an option to save as an\n anonymous block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n is_anonymous: Boolean value specifying whether the block document is anonymous. Anonymous\n blocks are intended for system use and are not shown in the UI. 
Anonymous blocks do not\n require a user-supplied name.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n Raises:\n ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n `is_anonymous` is `True`.\n \"\"\"\n if name is None and not is_anonymous:\n if self._block_document_name is None:\n raise ValueError(\n \"You're attempting to save a block document without a name.\"\n \" Please either call `save` with a `name` or pass\"\n \" `is_anonymous=True` to save an anonymous block.\"\n )\n else:\n name = self._block_document_name\n\n self._is_anonymous = is_anonymous\n\n # Ensure block type and schema are registered before saving block document.\n await self.register_type_and_schema(client=client)\n\n try:\n block_document = await client.create_block_document(\n block_document=self._to_block_document(name=name)\n )\n except prefect.exceptions.ObjectAlreadyExists as err:\n if overwrite:\n block_document_id = self._block_document_id\n if block_document_id is None:\n existing_block_document = await client.read_block_document_by_name(\n name=name, block_type_slug=self.get_block_type_slug()\n )\n block_document_id = existing_block_document.id\n await client.update_block_document(\n block_document_id=block_document_id,\n block_document=self._to_block_document(name=name),\n )\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n else:\n raise ValueError(\n \"You are attempting to save values with a name that is already in\"\n \" use for this block type. If you would like to overwrite the\"\n \" values that are saved, then save with `overwrite=True`.\"\n ) from err\n\n # Update metadata on block instance for later use.\n self._block_document_name = block_document.name\n self._block_document_id = block_document.id\n return self._block_document_id\n\n @sync_compatible\n @instrument_instance_method_call()\n async def save(\n self,\n name: Optional[str] = None,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Saves the values of a block as a block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n \"\"\"\n document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n return document_id\n\n @classmethod\n @sync_compatible\n @inject_client\n async def delete(\n cls,\n name: str,\n client: \"PrefectClient\" = None,\n ):\n block_document, block_document_name = await cls._get_block_document(name)\n\n await client.delete_block_document(block_document.id)\n\n def _iter(self, *, include=None, exclude=None, **kwargs):\n # Injects the `block_type_slug` into serialized payloads for dispatch\n for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n yield key_value\n\n # Respect inclusion and exclusion still\n if include and \"block_type_slug\" not in include:\n return\n if exclude and \"block_type_slug\" in exclude:\n return\n\n yield \"block_type_slug\", self.get_block_type_slug()\n\n def __new__(cls: Type[Self], **kwargs) -> Self:\n \"\"\"\n Create an instance of the Block subclass type if a `block_type_slug` is\n present in the data payload.\n \"\"\"\n block_type_slug = kwargs.pop(\"block_type_slug\", None)\n if block_type_slug:\n subcls = lookup_type(cls, 
dispatch_key=block_type_slug)\n m = super().__new__(subcls)\n # NOTE: This is a workaround for an obscure issue where copied models were\n # missing attributes. This pattern is from Pydantic's\n # `BaseModel._copy_and_set_values`.\n # The issue this fixes could not be reproduced in unit tests that\n # directly targeted dispatch handling and was only observed when\n # copying then saving infrastructure blocks on deployment models.\n object.__setattr__(m, \"__dict__\", kwargs)\n object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n return m\n else:\n m = super().__new__(cls)\n object.__setattr__(m, \"__dict__\", kwargs)\n object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n return m\n\n def get_block_placeholder(self) -> str:\n \"\"\"\n Returns the block placeholder for the current block which can be used for\n templating.\n\n Returns:\n str: The block placeholder for the current block in the format\n `prefect.blocks.{block_type_name}.{block_document_name}`\n\n Raises:\n BlockNotSavedError: Raised if the block has not been saved.\n\n If a block has not been saved, the return value will be `None`.\n \"\"\"\n block_document_name = self._block_document_name\n if not block_document_name:\n raise BlockNotSavedError(\n \"Could not generate block placeholder for unsaved block.\"\n )\n\n return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
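Putting these hooks together, a minimal sketch of a custom block (all names here are illustrative, not part of the API) that overrides the block type name, then saves and reloads a block document:
from prefect.blocks.core import Block\n\nclass CubeSettings(Block):\n    \"\"\"Stores settings for a hypothetical cube service.\"\"\"\n\n    _block_type_name = \"Cube Settings\"  # otherwise defaults to the class name\n\n    edge_length_inches: float\n\n# registers the type and schema, then creates a block document\nCubeSettings(edge_length_inches=12.0).save(\"sample-cube\")\n\nloaded = CubeSettings.load(\"sample-cube\")\n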
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config
","text":"Source code in prefect/blocks/core.py
class Config:\n extra = \"allow\"\n\n json_encoders = {SecretDict: lambda v: v.dict()}\n\n @staticmethod\n def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra
staticmethod
","text":"Customizes Pydantic's schema generation feature to add blocks related information.
Source code in prefect/blocks/core.py
@staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n
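A sketch of the effect (class and field names are illustrative): a block with a SecretStr field should surface both the dispatch slug and the secret field list in its generated schema:
from pydantic import SecretStr\nfrom prefect.blocks.core import Block\n\nclass ServiceCreds(Block):\n    token: SecretStr\n\nschema = ServiceCreds.schema()\nprint(schema[\"block_type_slug\"])  # expected: \"servicecreds\"\nprint(schema[\"secret_fields\"])    # expected: [\"token\"]\n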
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities
classmethod
","text":"Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n \"\"\"\n Returns the block capabilities for this Block. Recursively collects all block\n capabilities of all parent classes into a single frozenset.\n \"\"\"\n return frozenset(\n {\n c\n for base in (cls,) + cls.__mro__\n for c in getattr(base, \"_block_schema_capabilities\", []) or []\n }\n )\n
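A sketch of the MRO traversal (the capability strings are illustrative): capabilities declared on a parent block class are merged with those of the subclass:
from prefect.blocks.core import Block\n\nclass ReadableBlock(Block):\n    _block_schema_capabilities = [\"read-path\"]\n\nclass ReadWriteBlock(ReadableBlock):\n    _block_schema_capabilities = [\"write-path\"]\n\n# collects capabilities from the whole class hierarchy\nprint(ReadWriteBlock.get_block_capabilities())\n# expected: frozenset({\"read-path\", \"write-path\"})\n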
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key
classmethod
","text":"Retrieve the block class implementation given a key.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a key.\n \"\"\"\n # Ensure collections are imported and have the opportunity to register types\n # before looking up the block class\n prefect.plugins.load_prefect_collections()\n\n return lookup_type(cls, key)\n
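Because the dispatch key is the block type slug (see block_schema_to_key below), a lookup can be sketched as follows, assuming the built-in Secret block type is registered under the slug secret:
from prefect.blocks.core import Block\n\n# resolves the registered class for the \"secret\" block type slug\nsecret_cls = Block.get_block_class_from_key(\"secret\")\nprint(secret_cls.__name__)  # expected: \"Secret\"\n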
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema
classmethod
","text":"Retrieve the block class implementation given a schema.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a schema.\n \"\"\"\n return cls.get_block_class_from_key(block_schema_to_key(schema))\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_placeholder","title":"get_block_placeholder
","text":"Returns the block placeholder for the current block which can be used for templating.
Returns:
str: The block placeholder for the current block in the format prefect.blocks.{block_type_slug}.{block_document_name}
Raises:
BlockNotSavedError: Raised if the block has not been saved; a placeholder cannot be generated for an unsaved block.
prefect/blocks/core.py
def get_block_placeholder(self) -> str:\n \"\"\"\n Returns the block placeholder for the current block which can be used for\n templating.\n\n Returns:\n str: The block placeholder for the current block in the format\n `prefect.blocks.{block_type_name}.{block_document_name}`\n\n Raises:\n BlockNotSavedError: Raised if the block has not been saved.\n\n If a block has not been saved, the return value will be `None`.\n \"\"\"\n block_document_name = self._block_document_name\n if not block_document_name:\n raise BlockNotSavedError(\n \"Could not generate block placeholder for unsaved block.\"\n )\n\n return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
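A usage sketch (the block class and document name are illustrative); the placeholder combines the block type slug with the saved document name:
from prefect.blocks.core import Block\n\nclass Greeter(Block):\n    message: str\n\nGreeter(message=\"hi\").save(\"my-greeting\")\n\nblock = Greeter.load(\"my-greeting\")\nprint(block.get_block_placeholder())\n# expected: prefect.blocks.greeter.my-greeting\n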
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example
classmethod
","text":"Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.
Source code in prefect/blocks/core.py
@classmethod\ndef get_code_example(cls) -> Optional[str]:\n \"\"\"\n Returns the code example for the given block. Attempts to parse\n code example from the class docstring if an override is not provided.\n \"\"\"\n code_example = (\n dedent(cls._code_example) if cls._code_example is not None else None\n )\n # If no code example override has been provided, attempt to find a examples\n # section or an admonition with the annotation \"example\" and use that as the\n # code example\n if code_example is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n for section in parsed:\n # Section kind will be \"examples\" if Examples section heading is used.\n if section.kind == DocstringSectionKind.examples:\n # Examples sections are made up of smaller sections that need to be\n # joined with newlines. Smaller sections are represented as tuples\n # with shape (DocstringSectionKind, str)\n code_example = \"\\n\".join(\n (part[1] for part in section.as_dict().get(\"value\", []))\n )\n break\n # Section kind will be \"admonition\" if Example section heading is used.\n if section.kind == DocstringSectionKind.admonition:\n value = section.as_dict().get(\"value\", {})\n if value.get(\"annotation\") == \"example\":\n code_example = value.get(\"description\")\n break\n\n if code_example is None:\n # If no code example has been specified or extracted from the class\n # docstring, generate a sensible default\n code_example = cls._generate_code_example()\n\n return code_example\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description
classmethod
","text":"Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.
Source code in prefect/blocks/core.py
@classmethod\ndef get_description(cls) -> Optional[str]:\n \"\"\"\n Returns the description for the current block. Attempts to parse\n description from class docstring if an override is not defined.\n \"\"\"\n description = cls._description\n # If no description override has been provided, find the first text section\n # and use that as the description\n if description is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n parsed_description = next(\n (\n section.as_dict().get(\"value\")\n for section in parsed\n if section.kind == DocstringSectionKind.text\n ),\n None,\n )\n if isinstance(parsed_description, str):\n description = parsed_description.strip()\n return description\n
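A short sketch (class name illustrative): with no _description override, the first text section of the docstring becomes the description:
from prefect.blocks.core import Block\n\nclass Greeter(Block):\n    \"\"\"Stores a greeting message.\"\"\"\n\n    message: str\n\nprint(Greeter.get_description())  # expected: \"Stores a greeting message.\"\n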
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load
async
classmethod
","text":"Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.
If a block document for a given block type is saved with a different schema than the current class calling load, a warning will be raised.
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True.
If the current class schema is a superset of the block document schema, load must be called with validate set to False to prevent a validation error. In this case, the block attributes will default to None and must be set manually and saved to a new block document before the block can be used as expected.
Parameters:
name (str): The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name>. Required.
validate (bool): If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name was saved. Defaults to True.
client (PrefectClient): The client to use to load the block document. If not provided, the default client will be injected. Defaults to None.
Raises:
ValueError: If the requested block document is not found.
Returns:
An instance of the current class hydrated with the data stored in the block document with the specified name.
Examples:
Load from a Block subclass with a block document name:
class Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n
Load from Block with a block document slug:
class Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n
Migrate a block document to a new schema:
# original class\nclass Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n message: str\n number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n
Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def load(\n cls,\n name: str,\n validate: bool = True,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Retrieves data from the block document with the given name for the block type\n that corresponds with the current class and returns an instantiated version of\n the current class with the data stored in the block document.\n\n If a block document for a given block type is saved with a different schema\n than the current class calling `load`, a warning will be raised.\n\n If the current class schema is a subset of the block document schema, the block\n can be loaded as normal using the default `validate = True`.\n\n If the current class schema is a superset of the block document schema, `load`\n must be called with `validate` set to False to prevent a validation error. In\n this case, the block attributes will default to `None` and must be set manually\n and saved to a new block document before the block can be used as expected.\n\n Args:\n name: The name or slug of the block document. A block document slug is a\n string with the format <block_type_slug>/<block_document_name>\n validate: If False, the block document will be loaded without Pydantic\n validating the block schema. This is useful if the block schema has\n changed client-side since the block document referred to by `name` was saved.\n client: The client to use to load the block document. If not provided, the\n default client will be injected.\n\n Raises:\n ValueError: If the requested block document is not found.\n\n Returns:\n An instance of the current class hydrated with the data stored in the\n block document with the specified name.\n\n Examples:\n Load from a Block subclass with a block document name:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Custom.load(\"my-custom-message\")\n ```\n\n Load from Block with a block document slug:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Block.load(\"custom/my-custom-message\")\n ```\n\n Migrate a block document to a new schema:\n ```python\n # original class\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n # Updated class with new required field\n class Custom(Block):\n message: str\n number_of_ducks: int\n\n loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n # Prints UserWarning about schema mismatch\n\n loaded_block.number_of_ducks = 42\n\n loaded_block.save(\"my-custom-message\", overwrite=True)\n ```\n \"\"\"\n block_document, block_document_name = await cls._get_block_document(name)\n\n try:\n return cls._from_block_document(block_document)\n except ValidationError as e:\n if not validate:\n missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n missing_block_data = {field: None for field in missing_fields}\n warnings.warn(\n f\"Could not fully load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} - this is likely because one or more\"\n \" required fields were added to the schema for\"\n f\" {cls.__name__!r} that did not exist on the class when this block\"\n \" was last saved. 
Please specify values for new field(s):\"\n f\" {listrepr(missing_fields)}, then run\"\n f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n \" and load this block again before attempting to use it.\"\n )\n return cls.construct(**block_document.data, **missing_block_data)\n raise RuntimeError(\n f\"Unable to load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n \" validation, try loading again with `validate=False`.\"\n ) from e\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema
async
classmethod
","text":"Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
Parameters:
client (PrefectClient): Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. Defaults to None.
Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n \"\"\"\n Makes block available for configuration with current Prefect API.\n Recursively registers all nested blocks. Registration is idempotent.\n\n Args:\n client: Optional client to use for registering type and schema with the\n Prefect API. A new client will be created and used if one is not\n provided.\n \"\"\"\n if cls.__name__ == \"Block\":\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on the Block class directly.\"\n )\n if ABC in getattr(cls, \"__bases__\", []):\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on a Block interface class directly.\"\n )\n\n for field in cls.__fields__.values():\n if Block.is_block_class(field.type_):\n await field.type_.register_type_and_schema(client=client)\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n await type_.register_type_and_schema(client=client)\n\n try:\n block_type = await client.read_block_type_by_slug(\n slug=cls.get_block_type_slug()\n )\n cls._block_type_id = block_type.id\n local_block_type = cls._to_block_type()\n if _should_update_block_type(\n local_block_type=local_block_type, server_block_type=block_type\n ):\n await client.update_block_type(\n block_type_id=block_type.id, block_type=local_block_type\n )\n except prefect.exceptions.ObjectNotFound:\n block_type = await client.create_block_type(block_type=cls._to_block_type())\n cls._block_type_id = block_type.id\n\n try:\n block_schema = await client.read_block_schema_by_checksum(\n checksum=cls._calculate_schema_checksum(),\n version=cls.get_block_schema_version(),\n )\n except prefect.exceptions.ObjectNotFound:\n block_schema = await client.create_block_schema(\n block_schema=cls._to_block_schema(block_type_id=block_type.id)\n )\n\n cls._block_schema_id = block_schema.id\n
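Because the method is sync-compatible, registration can be sketched as a plain call (class name illustrative); repeating the call is safe since registration is idempotent:
from prefect.blocks.core import Block\n\nclass Greeter(Block):\n    message: str\n\n# creates or updates the block type and schema on the configured API\nGreeter.register_type_and_schema()\n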
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save
async
","text":"Saves the values of a block as a block document.
Parameters:
name (Optional[str]): User-specified name to give the saved block document, which can later be used to load it. Defaults to None.
overwrite (bool): Whether values should be overwritten if a block document with the specified name already exists. Defaults to False.
Source code in prefect/blocks/core.py
@sync_compatible\n@instrument_instance_method_call()\nasync def save(\n self,\n name: Optional[str] = None,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Saves the values of a block as a block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n \"\"\"\n document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n return document_id\n
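A save sketch (names illustrative): saving new values under an existing block document name requires overwrite=True:
from prefect.blocks.core import Block\n\nclass Greeter(Block):\n    message: str\n\nGreeter(message=\"hi\").save(\"my-greeting\")\n\n# a second save to the same name must opt in to overwriting\nGreeter(message=\"hello again\").save(\"my-greeting\", overwrite=True)\n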
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.BlockNotSavedError","title":"BlockNotSavedError
","text":" Bases: RuntimeError
Raised when a given block is not saved and an operation that requires the block to be saved is attempted.
Source code in prefect/blocks/core.py
class BlockNotSavedError(RuntimeError):\n \"\"\"\n Raised when a given block is not saved and an operation that requires\n the block to be saved is attempted.\n \"\"\"\n\n pass\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration
","text":" Bases: Exception
Raised on attempted registration of the base Block class or a Block interface class.
Source code in prefect/blocks/core.py
class InvalidBlockRegistration(Exception):\n \"\"\"\n Raised on attempted registration of the base Block\n class or a Block interface class\n \"\"\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key
","text":"Defines the unique key used to lookup the Block class for a given schema.
Source code in prefect/blocks/core.py
def block_schema_to_key(schema: BlockSchema) -> str:\n \"\"\"\n Defines the unique key used to lookup the Block class for a given schema.\n \"\"\"\n return f\"{schema.block_type.slug}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/fields/","title":"fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/fields/#prefect.blocks.fields","title":"prefect.blocks.fields
","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes
","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig
","text":" Bases: Block
Stores configuration for interaction with Kubernetes clusters.
See from_file for creation.
Attributes:
config (Dict): The entire loaded YAML contents of a kubectl config file.
context_name (str): The name of the kubectl context to use.
Example: Load a saved Kubernetes cluster config:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n \"\"\"\n Stores configuration for interaction with Kubernetes clusters.\n\n See `from_file` for creation.\n\n Attributes:\n config: The entire loaded YAML contents of a kubectl config file\n context_name: The name of the kubectl context to use\n\n Example:\n Load a saved Kubernetes cluster config:\n ```python\n from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Kubernetes Cluster Config\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n config: Dict = Field(\n default=..., description=\"The entire contents of a kubectl config file.\"\n )\n context_name: str = Field(\n default=..., description=\"The name of the kubectl context to use.\"\n )\n\n @validator(\"config\", pre=True)\n def parse_yaml_config(cls, value):\n if isinstance(value, str):\n return yaml.safe_load(value)\n return value\n\n @classmethod\n def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n\n def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n\n def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client
","text":"Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
Source code in prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
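A usage sketch, assuming a saved config named BLOCK_NAME and the kubernetes package installed; clients created after the call use this config's context:
from kubernetes import client\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\ncluster_config_block.configure_client()\n\n# subsequent Kubernetes API clients pick up the loaded context\nv1 = client.CoreV1Api()\n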
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file
classmethod
","text":"Create a cluster config from the a Kubernetes config file.
By default, the current context in the default Kubernetes config file will be used.
An alternative file or context may be specified.
The entire config file will be loaded and stored.
Source code in prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n
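A creation sketch (the context name is illustrative), pairing from_file with save so the captured config can be reused later:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# reads ~/.kube/config and captures the named context\ncluster_config_block = KubernetesClusterConfig.from_file(context_name=\"my-context\")\ncluster_config_block.save(\"my-cluster\")\n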
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client
","text":"Returns a Kubernetes API client for this cluster config.
Source code in prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications
","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock
","text":" Bases: NotificationBlock
, ABC
An abstract class for sending notifications using Apprise.
Source code in prefect/blocks/notifications.py
class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n \"\"\"\n An abstract class for sending notifications using Apprise.\n \"\"\"\n\n notify_type: Literal[\n \"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"\n ] = Field(\n default=PREFECT_NOTIFY_TYPE_DEFAULT,\n description=(\n \"The type of notification being performed; the prefect_default \"\n \"is a plain notification that does not attach an image.\"\n ),\n )\n\n def __init__(self, *args, **kwargs):\n import apprise\n\n if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n super().__init__(*args, **kwargs)\n\n def _start_apprise_client(self, url: SecretStr):\n from apprise import Apprise, AppriseAsset\n\n # A custom `AppriseAsset` that ensures Prefect Notifications\n # appear correctly across multiple messaging platforms\n prefect_app_data = AppriseAsset(\n app_id=\"Prefect Notifications\",\n app_desc=\"Prefect Notifications\",\n app_url=\"https://prefect.io\",\n )\n\n self._apprise_client = Apprise(asset=prefect_app_data)\n self._apprise_client.add(url.get_secret_value())\n\n def block_initialization(self) -> None:\n self._start_apprise_client(self.url)\n\n @sync_compatible\n @instrument_instance_method_call()\n async def notify(self, body: str, subject: Optional[str] = None):\n await self._apprise_client.async_notify(\n body=body, title=subject, notify_type=self.notify_type\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock
","text":" Bases: AbstractAppriseNotificationBlock
, ABC
A base class for sending notifications using Apprise, through webhook URLs.
Source code in prefect/blocks/notifications.py
class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n \"\"\"\n A base class for sending notifications using Apprise, through webhook URLs.\n \"\"\"\n\n _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"Incoming webhook URL used to send notifications.\",\n example=\"https://hooks.example.com/XXX\",\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock
","text":" Bases: NotificationBlock
Enables sending notifications via any custom webhook.
Any nested string parameter containing {{key}} will be substituted with the corresponding value from context/secrets.
Context values include: subject, body, and name.
Examples:
Load a saved custom webhook and send a message:
from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class CustomWebhookNotificationBlock(NotificationBlock):\n \"\"\"\n Enables sending notifications via any custom webhook.\n\n All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n Context values include: `subject`, `body` and `name`.\n\n Examples:\n Load a saved custom webhook and send a message:\n ```python\n from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n custom_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Custom Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n url: str = Field(\n title=\"Webhook URL\",\n description=\"The webhook URL.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n\n method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n )\n\n params: Optional[Dict[str, str]] = Field(\n default=None, title=\"Query Params\", description=\"Custom query params.\"\n )\n json_data: Optional[dict] = Field(\n default=None,\n title=\"JSON Data\",\n description=\"Send json data as payload.\",\n example=(\n '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n ' \"{{tokenFromSecrets}}\"}'\n ),\n )\n form_data: Optional[Dict[str, str]] = Field(\n default=None,\n title=\"Form Data\",\n description=(\n \"Send form data as payload. Should not be used together with _JSON Data_.\"\n ),\n example=(\n '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n ' \"{{tokenFromSecrets}}\"}'\n ),\n )\n\n headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n timeout: float = Field(\n default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n )\n\n secrets: SecretDict = Field(\n default_factory=lambda: SecretDict(dict()),\n title=\"Custom Secret Values\",\n description=\"A dictionary of secret values to be substituted in other configs.\",\n example='{\"tokenFromSecrets\":\"SomeSecretToken\"}',\n )\n\n def _build_request_args(self, body: str, subject: Optional[str]):\n \"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n # prepare values\n values = self.secrets.get_secret_value()\n # use 'null' when subject is None\n values.update(\n {\n \"subject\": \"null\" if subject is None else subject,\n \"body\": body,\n \"name\": self.name,\n }\n )\n # do substution\n return apply_values(\n {\n \"method\": self.method,\n \"url\": self.url,\n \"params\": self.params,\n \"data\": self.form_data,\n \"json\": self.json_data,\n \"headers\": self.headers,\n \"cookies\": self.cookies,\n \"timeout\": self.timeout,\n },\n values,\n )\n\n def block_initialization(self) -> None:\n # check form_data and json_data\n if self.form_data is not None and self.json_data is not None:\n raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n self.secrets.get_secret_value().keys()\n )\n # test template to raise a error early\n for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n template = getattr(self, name)\n if template is None:\n continue\n # check for placeholders not in predefined keys and secrets\n placeholders = find_placeholders(template)\n for placeholder in placeholders:\n if placeholder.name not in allowed_keys:\n raise KeyError(f\"{name}/{placeholder}\")\n\n @sync_compatible\n @instrument_instance_method_call()\n async def notify(self, body: str, subject: Optional[str] = None):\n import httpx\n\n request_args = self._build_request_args(body, subject)\n cookies = request_args.pop(\"cookies\", None)\n # make request with httpx\n client = httpx.AsyncClient(\n headers={\"user-agent\": \"Prefect Notifications\"}, cookies=cookies\n )\n async with client:\n resp = await client.request(**request_args)\n resp.raise_for_status()\n
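A construction sketch (the URL, names, and token values are placeholders) showing how {{key}} placeholders in json_data are wired to the context values and the secrets field:
from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock(\n    name=\"chat-alerts\",\n    url=\"https://hooks.example.com/XXX\",\n    json_data={\"text\": \"{{subject}}: {{body}}\", \"token\": \"{{tokenFromSecrets}}\"},\n    secrets={\"tokenFromSecrets\": \"SomeSecretToken\"},\n)\ncustom_webhook_block.save(\"chat-alerts\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\", subject=\"Test\")\n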
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook","title":"DiscordWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Discord webhook. See the Apprise notify_Discord docs.
Examples:
Load a saved Discord webhook and send a message:
from prefect.blocks.notifications import DiscordWebhook\n\ndiscord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\ndiscord_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class DiscordWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Discord webhook.\n See [Apprise notify_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord) # noqa\n\n Examples:\n Load a saved Discord webhook and send a message:\n ```python\n from prefect.blocks.notifications import DiscordWebhook\n\n discord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\n discord_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Discord webhook.\"\n _block_type_name = \"Discord Webhook\"\n _block_type_slug = \"discord-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/9e94976c80ef925b66d24e5d14f0d47baa6b8f88-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook\"\n\n webhook_id: SecretStr = Field(\n default=...,\n description=(\n \"The first part of 2 tokens provided to you after creating an\"\n \" incoming-webhook.\"\n ),\n )\n\n webhook_token: SecretStr = Field(\n default=...,\n description=(\n \"The second part of 2 tokens provided to you after creating an\"\n \" incoming-webhook.\"\n ),\n )\n\n botname: Optional[str] = Field(\n title=\"Bot name\",\n default=None,\n description=(\n \"Identify the name of the bot that should issue the message. If one isn't\"\n \" specified then the default is to just use your account (associated with\"\n \" the incoming-webhook).\"\n ),\n )\n\n tts: bool = Field(\n default=False,\n description=\"Whether to enable Text-To-Speech.\",\n )\n\n include_image: bool = Field(\n default=False,\n description=(\n \"Whether to include an image in-line with the message describing the\"\n \" notification type.\"\n ),\n )\n\n avatar: bool = Field(\n default=False,\n description=\"Whether to override the default Discord avatar icon.\",\n )\n\n avatar_url: Optional[str] = Field(\n title=\"Avatar URL\",\n default=None,\n description=(\n \"Override the default Discord avatar icon URL. By default this is not set\"\n \" and Apprise chooses the URL dynamically based on the type of message\"\n \" (info, success, warning, or error).\"\n ),\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyDiscord import NotifyDiscord\n\n url = SecretStr(\n NotifyDiscord(\n webhook_id=self.webhook_id.get_secret_value(),\n webhook_token=self.webhook_token.get_secret_value(),\n botname=self.botname,\n tts=self.tts,\n include_image=self.include_image,\n avatar=self.avatar,\n avatar_url=self.avatar_url,\n ).url()\n )\n self._start_apprise_client(url)\n
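A sketch of registering this block once so it can later be loaded by name (both token values and the block name are placeholders):
from prefect.blocks.notifications import DiscordWebhook\n\n# placeholder tokens taken from your Discord incoming-webhook\ndiscord_webhook_block = DiscordWebhook(\n    webhook_id=\"WEBHOOK_ID\",\n    webhook_token=\"WEBHOOK_TOKEN\",\n    botname=\"prefect-bot\",\n)\ndiscord_webhook_block.save(\"BLOCK_NAME\")\n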
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Mattermost webhook. See the Apprise notify_Mattermost docs.
Examples:
Load a saved Mattermost webhook and send a message:
from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class MattermostWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Mattermost webhook.\n See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n Examples:\n Load a saved Mattermost webhook and send a message:\n ```python\n from prefect.blocks.notifications import MattermostWebhook\n\n mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n mattermost_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n _block_type_name = \"Mattermost Webhook\"\n _block_type_slug = \"mattermost-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/1350a147130bf82cbc799a5f868d2c0116207736-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n hostname: str = Field(\n default=...,\n description=\"The hostname of your Mattermost server.\",\n example=\"Mattermost.example.com\",\n )\n\n token: SecretStr = Field(\n default=...,\n description=\"The token associated with your Mattermost webhook.\",\n )\n\n botname: Optional[str] = Field(\n title=\"Bot name\",\n default=None,\n description=\"The name of the bot that will send the message.\",\n )\n\n channels: Optional[List[str]] = Field(\n default=None,\n description=\"The channel(s) you wish to notify.\",\n )\n\n include_image: bool = Field(\n default=False,\n description=\"Whether to include the Apprise status image in the message.\",\n )\n\n path: Optional[str] = Field(\n default=None,\n description=\"An optional sub-path specification to append to the hostname.\",\n )\n\n port: int = Field(\n default=8065,\n description=\"The port of your Mattermost server.\",\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n url = SecretStr(\n NotifyMattermost(\n token=self.token.get_secret_value(),\n fullpath=self.path,\n host=self.hostname,\n botname=self.botname,\n channels=self.channels,\n include_image=self.include_image,\n port=self.port,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook
","text":" Bases: AppriseNotificationBlock
Enables sending notifications via a provided Microsoft Teams webhook.
Examples:
Load a saved Teams webhook and send a message:
from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Microsoft Teams webhook.\n\n Examples:\n Load a saved Teams webhook and send a message:\n ```python\n from prefect.blocks.notifications import MicrosoftTeamsWebhook\n teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n teams_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Microsoft Teams Webhook\"\n _block_type_slug = \"ms-teams-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/817efe008a57f0a24f3587414714b563e5e23658-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n url: SecretStr = Field(\n ...,\n title=\"Webhook URL\",\n description=\"The Teams incoming webhook URL used to send notifications.\",\n example=(\n \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n ),\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.
Examples:
Load a saved Opsgenie webhook and send a message:
from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Opsgenie webhook.\n See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n for more info on formatting the URL.\n\n Examples:\n Load a saved Opsgenie webhook and send a message:\n ```python\n from prefect.blocks.notifications import OpsgenieWebhook\n opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n _block_type_name = \"Opsgenie Webhook\"\n _block_type_slug = \"opsgenie-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d8b5bc6244ae6cd83b62ec42f10d96e14d6e9113-280x280.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n apikey: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=\"The API Key associated with your Opsgenie account.\",\n )\n\n target_user: Optional[List] = Field(\n default=None, description=\"The user(s) you wish to notify.\"\n )\n\n target_team: Optional[List] = Field(\n default=None, description=\"The team(s) you wish to notify.\"\n )\n\n target_schedule: Optional[List] = Field(\n default=None, description=\"The schedule(s) you wish to notify.\"\n )\n\n target_escalation: Optional[List] = Field(\n default=None, description=\"The escalation(s) you wish to notify.\"\n )\n\n region_name: Literal[\"us\", \"eu\"] = Field(\n default=\"us\", description=\"The 2-character region code.\"\n )\n\n batch: bool = Field(\n default=False,\n description=\"Notify all targets in batches (instead of individually).\",\n )\n\n tags: Optional[List] = Field(\n default=None,\n description=(\n \"A comma-separated list of tags you can associate with your Opsgenie\"\n \" message.\"\n ),\n example='[\"tag1\", \"tag2\"]',\n )\n\n priority: Optional[int] = Field(\n default=3,\n description=(\n \"The priority to associate with the message. It is on a scale between 1\"\n \" (LOW) and 5 (EMERGENCY).\"\n ),\n )\n\n alias: Optional[str] = Field(\n default=None, description=\"The alias to associate with the message.\"\n )\n\n entity: Optional[str] = Field(\n default=None, description=\"The entity to associate with the message.\"\n )\n\n details: Optional[Dict[str, str]] = Field(\n default=None,\n description=\"Additional details composed of key/value pairs.\",\n example='{\"key1\": \"value1\", \"key2\": \"value2\"}',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n # build Apprise-style targets with the appropriate prefix per target type\n targets = []\n if self.target_user:\n targets.extend(f\"@{x}\" for x in self.target_user)\n if self.target_team:\n targets.extend(f\"#{x}\" for x in self.target_team)\n if self.target_schedule:\n targets.extend(f\"*{x}\" for x in self.target_schedule)\n if self.target_escalation:\n targets.extend(f\"^{x}\" for x in self.target_escalation)\n url = SecretStr(\n NotifyOpsgenie(\n apikey=self.apikey.get_secret_value(),\n targets=targets,\n region_name=self.region_name,\n details=self.details,\n priority=self.priority,\n alias=self.alias,\n entity=self.entity,\n batch=self.batch,\n tags=self.tags,\n ).url()\n )\n self._start_apprise_client(url)\n
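Note how each target type is prefixed before being handed to Apprise (@ for users, # for teams, * for schedules, ^ for escalations). A sketch of configuring targets (all values are placeholders):
from prefect.blocks.notifications import OpsgenieWebhook\n\nopsgenie_webhook_block = OpsgenieWebhook(\n    apikey=\"PLACEHOLDER_API_KEY\",\n    target_user=[\"alice\"],  # passed to Apprise as \"@alice\"\n    target_team=[\"platform\"],  # passed to Apprise as \"#platform\"\n    region_name=\"eu\",\n    priority=5,  # scale of 1 (LOW) to 5 (EMERGENCY)\n)\nopsgenie_webhook_block.save(\"BLOCK_NAME\")\n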
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.
Examples:
Load a saved PagerDuty webhook and send a message:
from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided PagerDuty webhook.\n See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n for more info on formatting the URL.\n\n Examples:\n Load a saved PagerDuty webhook and send a message:\n ```python\n from prefect.blocks.notifications import PagerDutyWebHook\n pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n _block_type_name = \"Pager Duty Webhook\"\n _block_type_slug = \"pager-duty-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8dbf37d17089c1ce531708eac2e510801f7b3aee-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n # The default cannot be prefect_default because NotifyPagerDuty's\n # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n default=\"info\", description=\"The severity of the notification.\"\n )\n\n integration_key: SecretStr = Field(\n default=...,\n description=(\n \"This can be found on the Events API V2 \"\n \"integration's detail page, and is also referred to as a Routing Key. \"\n \"This must be provided alongside `api_key`, but will error if provided \"\n \"alongside `url`.\"\n ),\n )\n\n api_key: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=(\n \"This can be found under Integrations. \"\n \"This must be provided alongside `integration_key`, but will error if \"\n \"provided alongside `url`.\"\n ),\n )\n\n source: Optional[str] = Field(\n default=\"Prefect\", description=\"The source string as part of the payload.\"\n )\n\n component: str = Field(\n default=\"Notification\",\n description=\"The component string as part of the payload.\",\n )\n\n group: Optional[str] = Field(\n default=None, description=\"The group string as part of the payload.\"\n )\n\n class_id: Optional[str] = Field(\n default=None,\n title=\"Class ID\",\n description=\"The class string as part of the payload.\",\n )\n\n region_name: Literal[\"us\", \"eu\"] = Field(\n default=\"us\", description=\"The region name.\"\n )\n\n clickable_url: Optional[AnyHttpUrl] = Field(\n default=None,\n title=\"Clickable URL\",\n description=\"A clickable URL to associate with the notice.\",\n )\n\n include_image: bool = Field(\n default=True,\n description=\"Associate the notification status via a represented icon.\",\n )\n\n custom_details: Optional[Dict[str, str]] = Field(\n default=None,\n description=\"Additional details to include as part of the payload.\",\n example='{\"disk_space_left\": \"145GB\"}',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n url = SecretStr(\n NotifyPagerDuty(\n apikey=self.api_key.get_secret_value(),\n integrationkey=self.integration_key.get_secret_value(),\n source=self.source,\n component=self.component,\n group=self.group,\n class_id=self.class_id,\n region_name=self.region_name,\n click=self.clickable_url,\n include_image=self.include_image,\n details=self.custom_details,\n ).url()\n )\n self._start_apprise_client(url)\n
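Since `integration_key` and `api_key` are both required fields, creating the block might look like this sketch (both keys are placeholders):
from prefect.blocks.notifications import PagerDutyWebHook\n\npagerduty_webhook_block = PagerDutyWebHook(\n    integration_key=\"PLACEHOLDER_ROUTING_KEY\",  # Events API V2 routing key\n    api_key=\"PLACEHOLDER_API_KEY\",\n    source=\"Prefect\",\n    region_name=\"us\",\n)\npagerduty_webhook_block.save(\"BLOCK_NAME\")\n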
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via any Sendgrid account. See the Apprise Notify_sendgrid docs.
Examples:
Load a saved Sendgrid email block and send an email message:
from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\nsendgrid_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class SendgridEmail(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via any Sendgrid account.\n See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n Examples:\n Load a saved Sendgrid email block and send an email message:\n ```python\n from prefect.blocks.notifications import SendgridEmail\n\n sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n sendgrid_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via Sendgrid email service.\"\n _block_type_name = \"Sendgrid Email\"\n _block_type_slug = \"sendgrid-email\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n api_key: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=\"The API Key associated with your Sendgrid account.\",\n )\n\n sender_email: str = Field(\n title=\"Sender email id\",\n description=\"The sender email id.\",\n example=\"test-support@gmail.com\",\n )\n\n to_emails: List[str] = Field(\n default=...,\n title=\"Recipient emails\",\n description=\"Email ids of all recipients.\",\n example='\"recipient1@gmail.com\"',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n url = SecretStr(\n NotifySendGrid(\n apikey=self.api_key.get_secret_value(),\n from_email=self.sender_email,\n targets=self.to_emails,\n ).url()\n )\n\n self._start_apprise_client(url)\n
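Although the UI example above shows a single address, `to_emails` is a list of strings in code. A sketch with placeholder credentials and addresses:
from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail(\n    api_key=\"PLACEHOLDER_API_KEY\",\n    sender_email=\"test-support@gmail.com\",\n    to_emails=[\"recipient1@gmail.com\", \"recipient2@gmail.com\"],  # a list, even for one recipient\n)\nsendgrid_block.save(\"BLOCK_NAME\")\n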
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook
","text":" Bases: AppriseNotificationBlock
Enables sending notifications via a provided Slack webhook.
Examples:
Load a saved Slack webhook and send a message:
from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class SlackWebhook(AppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Slack webhook.\n\n Examples:\n Load a saved Slack webhook and send a message:\n ```python\n from prefect.blocks.notifications import SlackWebhook\n\n slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n slack_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Slack Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"Slack incoming webhook URL used to send notifications.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.
Examples:
Load a saved TwilioSMS block and send a message:
from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class TwilioSMS(AbstractAppriseNotificationBlock):\n \"\"\"Enables sending notifications via Twilio SMS.\n Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n Examples:\n Load a saved `TwilioSMS` block and send a message:\n ```python\n from prefect.blocks.notifications import TwilioSMS\n twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n twilio_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via Twilio SMS.\"\n _block_type_name = \"Twilio SMS\"\n _block_type_slug = \"twilio-sms\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8bd8777999f82112c09b9c8d57083ac75a4a0d65-250x250.png\" # noqa\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n account_sid: str = Field(\n default=...,\n description=(\n \"The Twilio Account SID - it can be found on the homepage \"\n \"of the Twilio console.\"\n ),\n )\n\n auth_token: SecretStr = Field(\n default=...,\n description=(\n \"The Twilio Authentication Token - \"\n \"it can be found on the homepage of the Twilio console.\"\n ),\n )\n\n from_phone_number: str = Field(\n default=...,\n description=\"The valid Twilio phone number to send the message from.\",\n example=\"18001234567\",\n )\n\n to_phone_numbers: List[str] = Field(\n default=...,\n description=\"A list of valid Twilio phone number(s) to send the message to.\",\n # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n example=\"18004242424\",\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n url = SecretStr(\n NotifyTwilio(\n account_sid=self.account_sid,\n auth_token=self.auth_token.get_secret_value(),\n source=self.from_phone_number,\n targets=self.to_phone_numbers,\n ).url()\n )\n self._start_apprise_client(url)\n
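As the field comment above notes, `to_phone_numbers` is a list in code even though the UI example shows a single value. A sketch with placeholder credentials and numbers:
from prefect.blocks.notifications import TwilioSMS\n\ntwilio_sms_block = TwilioSMS(\n    account_sid=\"PLACEHOLDER_ACCOUNT_SID\",\n    auth_token=\"PLACEHOLDER_AUTH_TOKEN\",\n    from_phone_number=\"18001234567\",\n    to_phone_numbers=[\"18004242424\"],  # a list in code\n)\ntwilio_sms_block.save(\"BLOCK_NAME\")\n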
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system
","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime
","text":" Bases: Block
A block that represents a datetime
Attributes:
Name Type Description
value
DateTime
An ISO 8601-compatible datetime value.
Example
Load a stored datetime value:
from prefect.blocks.system import DateTime\n\ndate_time_block = DateTime.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class DateTime(Block):\n \"\"\"\n A block that represents a datetime\n\n Attributes:\n value: An ISO 8601-compatible datetime value.\n\n Example:\n Load a stored datetime value:\n ```python\n from prefect.blocks.system import DateTime\n\n date_time_block = DateTime.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Date Time\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8b3da9a6621e92108b8e6a75b82e15374e170ff7-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n value: pendulum.DateTime = Field(\n default=...,\n description=\"An ISO 8601-compatible datetime value.\",\n )\n
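Because `value` is a `pendulum.DateTime`, storing a value might look like this sketch (the block name is a placeholder):
import pendulum\n\nfrom prefect.blocks.system import DateTime\n\ndate_time_block = DateTime(value=pendulum.datetime(2023, 1, 1, tz=\"UTC\"))\ndate_time_block.save(\"BLOCK_NAME\")\n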
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON
","text":" Bases: Block
A block that represents JSON
Attributes:
Name Type Description
value
Any
A JSON-compatible value.
Example
Load a stored JSON value:
from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class JSON(Block):\n \"\"\"\n A block that represents JSON\n\n Attributes:\n value: A JSON-compatible value.\n\n Example:\n Load a stored JSON value:\n ```python\n from prefect.blocks.system import JSON\n\n json_block = JSON.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/4fcef2294b6eeb423b1332d1ece5156bf296ff96-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret
","text":" Bases: Block
A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.
Attributes:
Name Type Descriptionvalue
SecretStr
A string value that should be kept secret.
Examplefrom prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
Source code in prefect/blocks/system.py
class Secret(Block):\n \"\"\"\n A block that represents a secret value. The value stored in this block will be obfuscated when\n this block is logged or shown in the UI.\n\n Attributes:\n value: A string value that should be kept secret.\n\n Example:\n ```python\n from prefect.blocks.system import Secret\n\n secret_block = Secret.load(\"BLOCK_NAME\")\n\n # Access the stored secret\n secret_block.get()\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c6f20e556dd16effda9df16551feecfb5822092b-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n value: SecretStr = Field(\n default=..., description=\"A string value that should be kept secret.\"\n )\n\n def get(self):\n return self.value.get_secret_value()\n
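A sketch of storing a secret once and reading it back (the name and value are placeholders); note that get() returns the plain string, so avoid logging it:
from prefect.blocks.system import Secret\n\nSecret(value=\"my-secret-value\").save(\"BLOCK_NAME\")\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\nassert secret_block.get() == \"my-secret-value\"\n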
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String
","text":" Bases: Block
A block that represents a string
Attributes:
Name Type Description
value
str
A string value.
Example
Load a stored string value:
from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class String(Block):\n \"\"\"\n A block that represents a string\n\n Attributes:\n value: A string value.\n\n Example:\n Load a stored string value:\n ```python\n from prefect.blocks.system import String\n\n string_block = String.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c262ea2c80a2c043564e8763f3370c3db5a6b3e6-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n value: str = Field(default=..., description=\"A string value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook
","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook
","text":" Bases: Block
Block that enables calling webhooks.
Source code in prefect/blocks/webhook.py
class Webhook(Block):\n \"\"\"\n Block that enables calling webhooks.\n \"\"\"\n\n _block_type_name = \"Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\" # type: ignore\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n )\n\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"The webhook URL.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n\n headers: SecretDict = Field(\n default_factory=lambda: SecretDict(dict()),\n title=\"Webhook Headers\",\n description=\"A dictionary of headers to send with the webhook request.\",\n )\n\n def block_initialization(self):\n self._client = AsyncClient(transport=_http_transport)\n\n async def call(self, payload: Optional[dict] = None) -> Response:\n \"\"\"\n Call the webhook.\n\n Args:\n payload: an optional payload to send when calling the webhook.\n \"\"\"\n async with self._client:\n return await self._client.request(\n method=self.method,\n url=self.url.get_secret_value(),\n headers=self.headers.get_secret_value(),\n json=payload,\n )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call
async
","text":"Call the webhook.
Parameters:
Name Type Description Default
payload
Optional[dict]
an optional payload to send when calling the webhook.
None
Source code in prefect/blocks/webhook.py
async def call(self, payload: Optional[dict] = None) -> Response:\n \"\"\"\n Call the webhook.\n\n Args:\n payload: an optional payload to send when calling the webhook.\n \"\"\"\n async with self._client:\n return await self._client.request(\n method=self.method,\n url=self.url.get_secret_value(),\n headers=self.headers.get_secret_value(),\n json=payload,\n )\n
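call is a coroutine returning an httpx Response, so it must be awaited. A sketch of calling a saved webhook block from async code (the block name and payload are placeholders):
import asyncio\n\nfrom prefect.blocks.webhook import Webhook\n\n\nasync def main() -> None:\n    webhook_block = await Webhook.load(\"BLOCK_NAME\")\n    response = await webhook_block.call(payload={\"message\": \"Hello from Prefect!\"})\n    print(response.status_code)\n\n\nasyncio.run(main())\n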
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent
","text":"Command line interface for working with agent services
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start
async
","text":"Start an agent process to poll one or more work queues for flow runs.
Source code in prefect/cli/agent.py
@agent_app.command()\nasync def start(\n # deprecated main argument\n work_queue: str = typer.Argument(\n None,\n show_default=False,\n help=\"DEPRECATED: A work queue name or ID\",\n ),\n work_queues: List[str] = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the agent to pull from.\",\n ),\n work_queue_prefix: List[str] = typer.Option(\n None,\n \"-m\",\n \"--match\",\n help=(\n \"Dynamically matches work queue names with the specified prefix for the\"\n \" agent to pull from,for example `dev-` will match all work queues with a\"\n \" name that starts with `dev-`\"\n ),\n ),\n work_pool_name: str = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"A work pool name for the agent to pull from.\",\n ),\n hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n api: str = SettingsOption(PREFECT_API_URL),\n run_once: bool = typer.Option(\n False, help=\"Run the agent loop once, instead of forever.\"\n ),\n prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n # deprecated tags\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"DEPRECATED: One or more optional tags that will be used to create a work\"\n \" queue. This option will be removed on 2023-02-23.\"\n ),\n ),\n limit: int = typer.Option(\n None,\n \"-l\",\n \"--limit\",\n help=\"Maximum number of flow runs to start simultaneously.\",\n ),\n):\n \"\"\"\n Start an agent process to poll one or more work queues for flow runs.\n \"\"\"\n work_queues = work_queues or []\n\n if work_queue is not None:\n # try to treat the work_queue as a UUID\n try:\n async with get_client() as client:\n q = await client.read_work_queue(UUID(work_queue))\n work_queue = q.name\n # otherwise treat it as a string name\n except (TypeError, ValueError):\n pass\n work_queues.append(work_queue)\n app.console.print(\n (\n \"Agents now support multiple work queues. Instead of passing a single\"\n \" argument, provide work queue names with the `-q` or `--work-queue`\"\n f\" flag: `prefect agent start -q {work_queue}`\\n\"\n ),\n style=\"blue\",\n )\n\n if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n exit_with_error(\"No work queues provided!\", style=\"red\")\n elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n exit_with_error(\n \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n style=\"red\",\n )\n if work_pool_name and tags:\n exit_with_error(\n \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n )\n\n if tags:\n work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n app.console.print(\n (\n \"`tags` are deprecated. For backwards-compatibility with old versions\"\n \" of Prefect, this agent will create a work queue named\"\n f\" `{work_queue_name}` that uses legacy tag-based matching. 
This option\"\n \" will be removed on 2023-02-23.\"\n ),\n style=\"red\",\n )\n\n async with get_client() as client:\n try:\n work_queue = await client.read_work_queue_by_name(work_queue_name)\n if work_queue.filter is None:\n # ensure the work queue has legacy (deprecated) tag-based behavior\n await client.update_work_queue(filter=dict(tags=tags))\n except ObjectNotFound:\n # if the work queue doesn't already exist, we create it with tags\n # to enable legacy (deprecated) tag-matching behavior\n await client.create_work_queue(name=work_queue_name, tags=tags)\n\n work_queues = [work_queue_name]\n\n if not hide_welcome:\n if api:\n app.console.print(\n f\"Starting v{prefect.__version__} agent connected to {api}...\"\n )\n else:\n app.console.print(\n f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n )\n\n agent_process_id = os.getpid()\n setup_signal_handlers_agent(\n agent_process_id, \"the Prefect agent\", app.console.print\n )\n\n async with PrefectAgent(\n work_queues=work_queues,\n work_queue_prefix=work_queue_prefix,\n work_pool_name=work_pool_name,\n prefetch_seconds=prefetch_seconds,\n limit=limit,\n ) as agent:\n if not hide_welcome:\n app.console.print(ascii_name)\n if work_pool_name:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"work pool '{work_pool_name}'...\"\n )\n elif work_queue_prefix:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n )\n else:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"queue(s): {', '.join(work_queues)}...\"\n )\n\n async with anyio.create_task_group() as tg:\n tg.start_soon(\n partial(\n critical_service_loop,\n agent.get_and_submit_flow_runs,\n PREFECT_AGENT_QUERY_INTERVAL.value(),\n printer=app.console.print,\n run_once=run_once,\n jitter_range=0.3,\n backoff=4, # Up to ~1 minute interval during backoff\n )\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n agent.check_for_cancelled_flow_runs,\n PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n printer=app.console.print,\n run_once=run_once,\n jitter_range=0.3,\n backoff=4,\n )\n )\n\n app.console.print(\"Agent stopped!\")\n
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/artifact/","title":"artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact","title":"prefect.cli.artifact
","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.delete","title":"delete
async
","text":"Delete an artifact.
Parameters:
Name Type Description Default
key
Optional[str]
the key of the artifact to delete
Argument(None, help='The key of the artifact to delete.')
Examples:
$ prefect artifact delete \"my-artifact\"
Source code in prefect/cli/artifact.py
@artifact_app.command(\"delete\")\nasync def delete(\n key: Optional[str] = typer.Argument(\n None, help=\"The key of the artifact to delete.\"\n ),\n artifact_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"The ID of the artifact to delete.\"\n ),\n):\n \"\"\"\n Delete an artifact.\n\n Arguments:\n key: the key of the artifact to delete\n\n Examples:\n $ prefect artifact delete \"my-artifact\"\n \"\"\"\n if key and artifact_id:\n exit_with_error(\"Please provide either a key or an artifact_id but not both.\")\n\n async with get_client() as client:\n if artifact_id is not None:\n try:\n confirm_delete = typer.confirm(\n (\n \"Are you sure you want to delete artifact with id\"\n f\" {artifact_id!r}?\"\n ),\n default=False,\n )\n if not confirm_delete:\n exit_with_error(\"Deletion aborted.\")\n\n await client.delete_artifact(artifact_id)\n exit_with_success(f\"Deleted artifact with id {artifact_id!r}.\")\n except ObjectNotFound:\n exit_with_error(f\"Artifact with id {artifact_id!r} not found!\")\n\n elif key is not None:\n artifacts = await client.read_artifacts(\n artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n )\n if not artifacts:\n exit_with_error(\n f\"Artifact with key {key!r} not found. You can also specify an\"\n \" artifact id with the --id flag.\"\n )\n\n confirm_delete = typer.confirm(\n (\n f\"Are you sure you want to delete {len(artifacts)} artifact(s) with\"\n f\" key {key!r}?\"\n ),\n default=False,\n )\n if not confirm_delete:\n exit_with_error(\"Deletion aborted.\")\n\n for a in artifacts:\n await client.delete_artifact(a.id)\n\n exit_with_success(f\"Deleted {len(artifacts)} artifact(s) with key {key!r}.\")\n\n else:\n exit_with_error(\"Please provide a key or an artifact_id.\")\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.inspect","title":"inspect
async
","text":"View details about an artifact.\n\nArguments:\n key: the key of the artifact to inspect\n\nExamples:\n $ prefect artifact inspect \"my-artifact\"\n [\n {\n 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n 'created': '2023-03-21T21:40:09.895910+00:00',\n 'updated': '2023-03-21T21:40:09.895910+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': None,\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n 'task_run_id': None\n },\n {\n 'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n 'created': '2023-03-27T23:16:15.536434+00:00',\n 'updated': '2023-03-27T23:16:15.536434+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': 'my-artifact-description',\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n 'task_run_id': None\n }\n
]
Source code in prefect/cli/artifact.py
@artifact_app.command(\"inspect\")\nasync def inspect(\n key: str,\n limit: int = typer.Option(\n 10,\n \"--limit\",\n help=\"The maximum number of artifacts to return.\",\n ),\n):\n \"\"\"\n View details about an artifact.\n\n Arguments:\n key: the key of the artifact to inspect\n\n Examples:\n $ prefect artifact inspect \"my-artifact\"\n [\n {\n 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n 'created': '2023-03-21T21:40:09.895910+00:00',\n 'updated': '2023-03-21T21:40:09.895910+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': None,\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n 'task_run_id': None\n },\n {\n 'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n 'created': '2023-03-27T23:16:15.536434+00:00',\n 'updated': '2023-03-27T23:16:15.536434+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': 'my-artifact-description',\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n 'task_run_id': None\n }\n ]\n \"\"\"\n\n async with get_client() as client:\n artifacts = await client.read_artifacts(\n limit=limit,\n sort=ArtifactSort.UPDATED_DESC,\n artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n )\n if not artifacts:\n exit_with_error(f\"Artifact {key!r} not found.\")\n\n artifacts = [a.dict(json_compatible=True) for a in artifacts]\n\n app.console.print(Pretty(artifacts))\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.list_artifacts","title":"list_artifacts
async
","text":"List artifacts.
Source code in prefect/cli/artifact.py
@artifact_app.command(\"ls\")\nasync def list_artifacts(\n limit: int = typer.Option(\n 100,\n \"--limit\",\n help=\"The maximum number of artifacts to return.\",\n ),\n all: bool = typer.Option(\n False,\n \"--all\",\n \"-a\",\n help=\"Whether or not to only return the latest version of each artifact.\",\n ),\n):\n \"\"\"\n List artifacts.\n \"\"\"\n table = Table(\n title=\"Artifacts\",\n caption=\"List Artifacts using `prefect artifact ls`\",\n show_header=True,\n )\n\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Key\", style=\"blue\", no_wrap=True)\n table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n async with get_client() as client:\n if all:\n artifacts = await client.read_artifacts(\n sort=ArtifactSort.KEY_ASC,\n limit=limit,\n )\n\n for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n table.add_row(\n str(artifact.id),\n artifact.key,\n artifact.type,\n pendulum.instance(artifact.updated).diff_for_humans(),\n )\n\n else:\n artifacts = await client.read_latest_artifacts(\n sort=ArtifactCollectionSort.KEY_ASC,\n limit=limit,\n )\n\n for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n table.add_row(\n str(artifact.latest_id),\n artifact.key,\n artifact.type,\n pendulum.instance(artifact.updated).diff_for_humans(),\n )\n\n app.console.print(table)\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/block/","title":"block","text":"","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block","title":"prefect.cli.block
","text":"Command line interface for working with blocks.
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_create","title":"block_create
async
","text":"Generate a link to the Prefect UI to create a block.
Source code in prefect/cli/block.py
@blocks_app.command(\"create\")\nasync def block_create(\n block_type_slug: str = typer.Argument(\n ...,\n help=\"A block type slug. View available types with: prefect block type ls\",\n show_default=False,\n ),\n):\n \"\"\"\n Generate a link to the Prefect UI to create a block.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(block_type_slug)\n except ObjectNotFound:\n app.console.print(f\"[red]Block type {block_type_slug!r} not found![/red]\")\n block_types = await client.read_block_types()\n slugs = {block_type.slug for block_type in block_types}\n app.console.print(f\"Available block types: {', '.join(slugs)}\")\n raise typer.Exit(1)\n\n if not PREFECT_UI_URL:\n exit_with_error(\n \"Prefect must be configured to use a hosted Prefect server or \"\n \"Prefect Cloud to display the Prefect UI\"\n )\n\n block_link = f\"{PREFECT_UI_URL.value()}/blocks/catalog/{block_type.slug}/create\"\n app.console.print(\n f\"Create a {block_type_slug} block: {block_link}\",\n )\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_delete","title":"block_delete
async
","text":"Delete a configured block.
Source code in prefect/cli/block.py
@blocks_app.command(\"delete\")\nasync def block_delete(\n slug: Optional[str] = typer.Argument(\n None, help=\"A block slug. Formatted as '<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'\"\n ),\n block_id: Optional[str] = typer.Option(None, \"--id\", help=\"A block id.\"),\n):\n \"\"\"\n Delete a configured block.\n \"\"\"\n async with get_client() as client:\n if slug is None and block_id is not None:\n try:\n await client.delete_block_document(block_id)\n exit_with_success(f\"Deleted Block '{block_id}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {block_id!r} not found!\")\n elif slug is not None:\n block_type_slug, block_document_name = slug.split(\"/\")\n try:\n block_document = await client.read_block_document_by_name(\n block_document_name, block_type_slug, include_secrets=False\n )\n await client.delete_block_document(block_document.id)\n exit_with_success(f\"Deleted Block '{slug}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Block {slug!r} not found!\")\n else:\n exit_with_error(\"Must provide a block slug or id\")\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_inspect","title":"block_inspect
async
","text":"Displays details about a configured block.
Source code in prefect/cli/block.py
@blocks_app.command(\"inspect\")\nasync def block_inspect(\n slug: Optional[str] = typer.Argument(\n None, help=\"A Block slug: <BLOCK_TYPE_SLUG>/<BLOCK_NAME>\"\n ),\n block_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"A Block id to search for if no slug is given\"\n ),\n):\n \"\"\"\n Displays details about a configured block.\n \"\"\"\n async with get_client() as client:\n if slug is None and block_id is not None:\n try:\n block_document = await client.read_block_document(\n block_id, include_secrets=False\n )\n except ObjectNotFound:\n exit_with_error(f\"Deployment {block_id!r} not found!\")\n elif slug is not None:\n block_type_slug, block_document_name = slug.split(\"/\")\n try:\n block_document = await client.read_block_document_by_name(\n block_document_name, block_type_slug, include_secrets=False\n )\n except ObjectNotFound:\n exit_with_error(f\"Block {slug!r} not found!\")\n else:\n exit_with_error(\"Must provide a block slug or id\")\n app.console.print(display_block(block_document))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_ls","title":"block_ls
async
","text":"View all configured blocks.
Source code in prefect/cli/block.py
@blocks_app.command(\"ls\")\nasync def block_ls():\n \"\"\"\n View all configured blocks.\n \"\"\"\n async with get_client() as client:\n blocks = await client.read_block_documents()\n\n table = Table(\n title=\"Blocks\", caption=\"List Block Types using `prefect block type ls`\"\n )\n table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(\"Slug\", style=\"blue\", no_wrap=True)\n\n for block in sorted(blocks, key=lambda x: f\"{x.block_type.slug}/{x.name}\"):\n table.add_row(\n str(block.id),\n block.block_type.name,\n str(block.name),\n f\"{block.block_type.slug}/{block.name}\",\n )\n\n app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_delete","title":"blocktype_delete
async
","text":"Delete an unprotected Block Type.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"delete\")\nasync def blocktype_delete(\n slug: str = typer.Argument(..., help=\"A Block type slug\"),\n):\n \"\"\"\n Delete an unprotected Block Type.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(slug)\n await client.delete_block_type(block_type.id)\n exit_with_success(f\"Deleted Block Type '{slug}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Block Type {slug!r} not found!\")\n except ProtectedBlockError:\n exit_with_error(f\"Block Type {slug!r} is a protected block!\")\n except PrefectHTTPStatusError:\n exit_with_error(f\"Cannot delete Block Type {slug!r}!\")\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_inspect","title":"blocktype_inspect
async
","text":"Display details about a block type.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"inspect\")\nasync def blocktype_inspect(\n slug: str = typer.Argument(..., help=\"A block type slug\"),\n):\n \"\"\"\n Display details about a block type.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(slug)\n except ObjectNotFound:\n exit_with_error(f\"Block type {slug!r} not found!\")\n\n app.console.print(display_block_type(block_type))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.list_types","title":"list_types
async
","text":"List all block types.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"ls\")\nasync def list_types():\n \"\"\"\n List all block types.\n \"\"\"\n async with get_client() as client:\n block_types = await client.read_block_types()\n\n table = Table(\n title=\"Block Types\",\n show_lines=True,\n )\n\n table.add_column(\"Block Type Slug\", style=\"italic cyan\", no_wrap=True)\n table.add_column(\"Description\", style=\"blue\", no_wrap=False, justify=\"left\")\n table.add_column(\n \"Generate creation link\", style=\"italic cyan\", no_wrap=False, justify=\"left\"\n )\n\n for blocktype in sorted(block_types, key=lambda x: x.name):\n table.add_row(\n str(blocktype.slug),\n (\n str(blocktype.description.splitlines()[0].partition(\".\")[0])\n if blocktype.description is not None\n else \"\"\n ),\n f\"prefect block create {blocktype.slug}\",\n )\n\n app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.register","title":"register
async
","text":"Register blocks types within a module or file.
This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition.
Examples:
Register block types in a Python module:
$ prefect block register -m prefect_aws.credentials
Register block types in a .py file:
$ prefect block register -f my_blocks.py
Source code in prefect/cli/block.py
@blocks_app.command()\nasync def register(\n module_name: Optional[str] = typer.Option(\n None,\n \"--module\",\n \"-m\",\n help=\"Python module containing block types to be registered\",\n ),\n file_path: Optional[Path] = typer.Option(\n None,\n \"--file\",\n \"-f\",\n help=\"Path to .py file containing block types to be registered\",\n ),\n):\n \"\"\"\n Register blocks types within a module or file.\n\n This makes the blocks available for configuration via the UI.\n If a block type has already been registered, its registration will be updated to\n match the block's current definition.\n\n \\b\n Examples:\n \\b\n Register block types in a Python module:\n $ prefect block register -m prefect_aws.credentials\n \\b\n Register block types in a .py file:\n $ prefect block register -f my_blocks.py\n \"\"\"\n # Handles if both options are specified or if neither are specified\n if not (bool(file_path) ^ bool(module_name)):\n exit_with_error(\n \"Please specify either a module or a file containing blocks to be\"\n \" registered, but not both.\"\n )\n\n if module_name:\n try:\n imported_module = import_module(name=module_name)\n except ModuleNotFoundError:\n exit_with_error(\n f\"Unable to load {module_name}. Please make sure the module is \"\n \"installed in your current environment.\"\n )\n\n if file_path:\n if file_path.suffix != \".py\":\n exit_with_error(\n f\"{file_path} is not a .py file. Please specify a \"\n \".py that contains blocks to be registered.\"\n )\n try:\n imported_module = await run_sync_in_worker_thread(\n load_script_as_module, str(file_path)\n )\n except ScriptError as exc:\n app.console.print(exc)\n app.console.print(exception_traceback(exc.user_exc))\n exit_with_error(\n f\"Unable to load file at {file_path}. Please make sure the file path \"\n \"is correct and the file contains valid Python.\"\n )\n\n registered_blocks = await _register_blocks_in_module(imported_module)\n number_of_registered_blocks = len(registered_blocks)\n block_text = \"block\" if 0 < number_of_registered_blocks < 2 else \"blocks\"\n app.console.print(\n f\"[green]Successfully registered {number_of_registered_blocks} {block_text}\\n\"\n )\n app.console.print(_build_registered_blocks_table(registered_blocks))\n msg = (\n \"\\n To configure the newly registered blocks, \"\n \"go to the Blocks page in the Prefect UI.\\n\"\n )\n\n ui_url = PREFECT_UI_URL.value()\n if ui_url is not None:\n block_catalog_url = f\"{ui_url}/blocks/catalog\"\n msg = f\"{msg.rstrip().rstrip('.')}: {block_catalog_url}\\n\"\n\n app.console.print(msg)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"Cloud webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook
","text":"Command line interface for working with webhooks
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create
async
","text":"Create a new Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def create(\n webhook_name: str,\n description: str = typer.Option(\n \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n ),\n template: str = typer.Option(\n None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n ),\n):\n \"\"\"\n Create a new Cloud webhook\n \"\"\"\n if not template:\n exit_with_error(\n \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n ' should define (at minimum) the following attributes: \\n{ \"event\":'\n ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n ' \"your.resource.id\" } }'\n \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n )\n\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\n \"POST\",\n \"/webhooks/\",\n json={\n \"name\": webhook_name,\n \"description\": description,\n \"template\": template,\n },\n )\n app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
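For example, a sketch of creating a webhook with the minimal template attributes noted in the error message above (the name, description, and IDs are placeholders):
$ prefect cloud webhook create my-webhook -d \"A practice webhook\" -t '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'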
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete
async
","text":"Delete an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def delete(webhook_id: UUID):\n \"\"\"\n Delete an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n confirm_delete = typer.confirm(\n \"Are you sure you want to delete it? This cannot be undone.\"\n )\n\n if not confirm_delete:\n return\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get
async
","text":"Retrieve a webhook by ID.
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def get(webhook_id: UUID):\n \"\"\"\n Retrieve a webhook by ID.\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n display_table = _render_webhooks_into_table([webhook])\n app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls
async
","text":"Fetch and list all webhooks in your workspace
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def ls():\n \"\"\"\n Fetch and list all webhooks in your workspace\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n display_table = _render_webhooks_into_table(retrieved_webhooks)\n app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate
async
","text":"Rotate url for an existing Cloud webhook, in case it has been compromised
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def rotate(webhook_id: UUID):\n \"\"\"\n Rotate url for an existing Cloud webhook, in case it has been compromised\n \"\"\"\n confirm_logged_in()\n\n confirm_rotate = typer.confirm(\n \"Are you sure you want to rotate? This will invalidate the old URL.\"\n )\n\n if not confirm_rotate:\n return\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle
async
","text":"Toggle the enabled status of an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def toggle(\n webhook_id: UUID,\n):\n \"\"\"\n Toggle the enabled status of an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n current_status = response[\"enabled\"]\n new_status = not current_status\n\n await client.request(\n \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n )\n app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update
async
","text":"Partially update an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def update(\n webhook_id: UUID,\n webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n description: str = typer.Option(\n None, \"--description\", \"-d\", help=\"Description of the webhook\"\n ),\n template: str = typer.Option(\n None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n ),\n):\n \"\"\"\n Partially update an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n update_payload = {\n \"name\": webhook_name or response[\"name\"],\n \"description\": description or response[\"description\"],\n \"template\": template or response[\"template\"],\n }\n\n await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
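The command above follows a read-merge-write pattern: it fetches the current webhook, overlays only the fields you passed, and PUTs the merged payload back. A minimal sketch of the same pattern outside the CLI (imports as in the source; `webhook_id` and the new name are placeholders):

```python
from uuid import UUID

from prefect.client.cloud import get_cloud_client
from prefect.settings import PREFECT_API_URL

async def rename_webhook(webhook_id: UUID, new_name: str):
    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:
        # Fetch the current definition so unspecified fields are preserved.
        current = await client.request("GET", f"/webhooks/{webhook_id}")
        await client.request(
            "PUT",
            f"/webhooks/{webhook_id}",
            json={
                "name": new_name,
                "description": current["description"],
                "template": current["template"],
            },
        )
```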
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud
","text":"Command line interface for interacting with Prefect Cloud
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan)
module-attribute
","text":"This small API server is used for data transmission for browser-based log in.
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login
async
","text":"Attempt to use a key to see if it is valid
Source code in prefect/cli/cloud/__init__.py
async def check_key_is_valid_for_login(key: str):\n \"\"\"\n Attempt to use a key to see if it is valid\n \"\"\"\n async with get_cloud_client(api_key=key) as client:\n try:\n await client.read_workspaces()\n return True\n except CloudUnauthorizedError:\n return False\n
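A quick usage sketch, assuming the function is imported from prefect.cli.cloud and using an obviously fake key; an invalid or expired key simply yields False:

```python
import asyncio

from prefect.cli.cloud import check_key_is_valid_for_login

# "pnu_XXXXXXXXXXXX" is a placeholder, not a real API key.
valid = asyncio.run(check_key_is_valid_for_login("pnu_XXXXXXXXXXXX"))
print("key works" if valid else "key was rejected")
```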
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login
async
","text":"Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def login(\n key: Optional[str] = typer.Option(\n None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n ),\n workspace_handle: Optional[str] = typer.Option(\n None,\n \"--workspace\",\n \"-w\",\n help=(\n \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n ),\n ),\n):\n \"\"\"\n Log in to Prefect Cloud.\n Creates a new profile configured to use the specified PREFECT_API_KEY.\n Uses a previously configured profile if it exists.\n \"\"\"\n if not is_interactive() and (not key or not workspace_handle):\n exit_with_error(\n \"When not using an interactive terminal, you must supply a `--key` and\"\n \" `--workspace`.\"\n )\n\n profiles = load_profiles()\n current_profile = get_settings_context().profile\n env_var_api_key = PREFECT_API_KEY.value()\n\n if env_var_api_key and key and env_var_api_key != key:\n exit_with_error(\n \"Cannot log in with a key when a different PREFECT_API_KEY is present as an\"\n \" environment variable that will override it.\"\n )\n\n if env_var_api_key and env_var_api_key == key:\n is_valid_key = await check_key_is_valid_for_login(key)\n is_correct_key_format = key.startswith(\"pnu_\") or key.startswith(\"pnb_\")\n if not is_valid_key:\n help_message = \"Please ensure your credentials are correct and unexpired.\"\n if not is_correct_key_format:\n help_message = \"Your key is not in our expected format.\"\n exit_with_error(\n f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n )\n\n already_logged_in_profiles = []\n for name, profile in profiles.items():\n profile_key = profile.settings.get(PREFECT_API_KEY)\n if (\n # If a key is provided, only show profiles with the same key\n (key and profile_key == key)\n # Otherwise, show all profiles with a key set\n or (not key and profile_key is not None)\n # Check that the key is usable to avoid suggesting unauthenticated profiles\n and await check_key_is_valid_for_login(profile_key)\n ):\n already_logged_in_profiles.append(name)\n\n current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n if current_profile_is_logged_in:\n app.console.print(\"It looks like you're already authenticated on this profile.\")\n should_reauth = typer.confirm(\n \"? Would you like to reauthenticate?\", default=False\n )\n if not should_reauth:\n app.console.print(\"Using the existing authentication on this profile.\")\n key = PREFECT_API_KEY.value()\n\n elif already_logged_in_profiles:\n app.console.print(\n \"It looks like you're already authenticated with another profile.\"\n )\n if typer.confirm(\n \"? 
Would you like to switch profiles?\",\n default=True,\n ):\n profile_name = prompt_select_from_list(\n app.console,\n \"Which authenticated profile would you like to switch to?\",\n already_logged_in_profiles,\n )\n\n profiles.set_active(profile_name)\n save_profiles(profiles)\n exit_with_success(f\"Switched to authenticated profile {profile_name!r}.\")\n\n if not key:\n choice = prompt_select_from_list(\n app.console,\n \"How would you like to authenticate?\",\n [\n (\"browser\", \"Log in with a web browser\"),\n (\"key\", \"Paste an API key\"),\n ],\n )\n\n if choice == \"key\":\n key = typer.prompt(\"Paste your API key\", hide_input=True)\n elif choice == \"browser\":\n key = await login_with_browser()\n\n async with get_cloud_client(api_key=key) as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n if key.startswith(\"pcu\"):\n help_message = (\n \"It looks like you're using API key from Cloud 1\"\n \" (https://cloud.prefect.io). Make sure that you generate API key\"\n \" using Cloud 2 (https://app.prefect.cloud)\"\n )\n elif not key.startswith(\"pnu_\") and not key.startswith(\"pnb_\"):\n help_message = (\n \"Your key is not in our expected format: 'pnu_' or 'pnb_'.\"\n )\n else:\n help_message = (\n \"Please ensure your credentials are correct and unexpired.\"\n )\n exit_with_error(\n f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n )\n except httpx.HTTPStatusError as exc:\n exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n if workspace_handle:\n # Search for the given workspace\n for workspace in workspaces:\n if workspace.handle == workspace_handle:\n break\n else:\n if workspaces:\n hint = (\n \" Available workspaces:\"\n f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n )\n else:\n hint = \"\"\n\n exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n else:\n # Prompt a switch if the number of workspaces is greater than one\n prompt_switch_workspace = len(workspaces) > 1\n\n current_workspace = get_current_workspace(workspaces)\n\n # Confirm that we want to switch if the current profile is already logged in\n if (\n current_profile_is_logged_in and current_workspace is not None\n ) and prompt_switch_workspace:\n app.console.print(\n f\"You are currently using workspace {current_workspace.handle!r}.\"\n )\n prompt_switch_workspace = typer.confirm(\n \"? Would you like to switch workspaces?\", default=False\n )\n\n if prompt_switch_workspace:\n workspace = prompt_select_from_list(\n app.console,\n \"Which workspace would you like to use?\",\n [(workspace, workspace.handle) for workspace in workspaces],\n )\n else:\n if current_workspace:\n workspace = current_workspace\n elif len(workspaces) > 0:\n workspace = workspaces[0]\n else:\n exit_with_error(\n \"No workspaces found! Create a workspace at\"\n f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n )\n\n update_current_profile(\n {\n PREFECT_API_KEY: key,\n PREFECT_API_URL: workspace.api_url(),\n }\n )\n\n exit_with_success(\n f\"Authenticated with Prefect Cloud! Using workspace {workspace.handle!r}.\"\n )\n
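The login flow above leans on key-prefix conventions: "pcu" marks a Cloud 1 (cloud.prefect.io) key, while "pnu_" and "pnb_" are the expected Cloud 2 formats. A tiny helper capturing just that check (illustrative only, not part of the module):

```python
def classify_key(key: str) -> str:
    # Prefix conventions referenced by the login error messages above.
    if key.startswith("pcu"):
        return "Cloud 1 key - generate a new one at https://app.prefect.cloud"
    if key.startswith("pnu_") or key.startswith("pnb_"):
        return "Cloud 2 key"
    return "unrecognized key format"
```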
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser
async
","text":"Perform login using the browser.
On failure, this function will exit the process. On success, it will return an API key.
Source code in prefect/cli/cloud/__init__.py
async def login_with_browser() -> str:\n \"\"\"\n Perform login using the browser.\n\n On failure, this function will exit the process.\n On success, it will return an API key.\n \"\"\"\n\n # Set up an event that the login API will toggle on startup\n ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n # Set up an event that the login API will set when a response comes from the UI\n result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n timeout_scope = None\n async with anyio.create_task_group() as tg:\n # Run a server in the background to get payload from the browser\n server = await tg.start(serve_login_api, tg.cancel_scope)\n\n # Wait for the login server to be ready\n with anyio.fail_after(10):\n await ready_event.wait()\n\n # The server may not actually be serving as the lifespan is started first\n while not server.started:\n await anyio.sleep(0)\n\n # Get the port the server is using\n server_port = server.servers[0].sockets[0].getsockname()[1]\n callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n ui_login_url = (\n PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n )\n\n # Then open the authorization page in a new browser tab\n app.console.print(\"Opening browser...\")\n await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n # Wait for the response from the browser,\n with anyio.move_on_after(120) as timeout_scope:\n app.console.print(\"Waiting for response...\")\n await result_event.wait()\n\n # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n # login API\n raise_signal(signal.SIGINT)\n\n result = login_api.extra.get(\"result\")\n if not result:\n if timeout_scope and timeout_scope.cancel_called:\n exit_with_error(\"Timed out while waiting for authorization.\")\n else:\n exit_with_error(\"Aborted.\")\n\n if result.type == \"success\":\n return result.content.api_key\n elif result.type == \"failure\":\n exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
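The coordination trick here is two anyio events: one flipped when the throwaway login server starts, one when the browser posts back. A stripped-down, runnable sketch of that handshake (the fake server and timings are stand-ins):

```python
import anyio

async def handshake_demo():
    ready = anyio.Event()   # set when the server has started
    result = anyio.Event()  # set when a payload arrives

    async def fake_login_api():
        ready.set()
        await anyio.sleep(0.1)  # pretend the browser takes a moment
        result.set()

    async with anyio.create_task_group() as tg:
        tg.start_soon(fake_login_api)
        with anyio.fail_after(10):      # startup must be quick
            await ready.wait()
        with anyio.move_on_after(120) as scope:  # browser gets two minutes
            await result.wait()
        print("timed out" if scope.cancel_called else "got payload")

anyio.run(handshake_demo)
```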
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout
async
","text":"Logout the current workspace. Reset PREFECT_API_KEY and PREFECT_API_URL to default.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def logout():\n \"\"\"\n Logout the current workspace.\n Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n \"\"\"\n current_profile = prefect.context.get_settings_context().profile\n if current_profile is None:\n exit_with_error(\"There is no current profile set.\")\n\n if current_profile.settings.get(PREFECT_API_KEY) is None:\n exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n update_current_profile(\n {\n PREFECT_API_URL: None,\n PREFECT_API_KEY: None,\n },\n )\n\n exit_with_success(\"Logged out from Prefect Cloud.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls
async
","text":"List available workspaces.
Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def ls():\n \"\"\"List available workspaces.\"\"\"\n\n confirm_logged_in()\n\n async with get_cloud_client() as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n exit_with_error(\n \"Unable to authenticate. Please ensure your credentials are correct.\"\n )\n\n current_workspace = get_current_workspace(workspaces)\n\n table = Table(caption=\"* active workspace\")\n table.add_column(\n \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n )\n\n for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n if workspace_handle == current_workspace.handle:\n table.add_row(f\"[green]* {workspace_handle}[/green]\")\n else:\n table.add_row(f\" {workspace_handle}\")\n\n app.console.print(table)\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.open","title":"open
async
","text":"Open the Prefect Cloud UI in the browser.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def open():\n \"\"\"\n Open the Prefect Cloud UI in the browser.\n \"\"\"\n confirm_logged_in()\n\n current_profile = prefect.context.get_settings_context().profile\n if current_profile is None:\n exit_with_error(\n \"There is no current profile set - set one with `prefect profile create\"\n \" <name>` and `prefect profile use <name>`.\"\n )\n\n current_workspace = get_current_workspace(\n await prefect.get_cloud_client().read_workspaces()\n )\n if current_workspace is None:\n exit_with_error(\n \"There is no current workspace set - set one with `prefect cloud workspace\"\n \" set --workspace <workspace>`.\"\n )\n\n ui_url = current_workspace.ui_url()\n\n await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_url)\n\n exit_with_success(f\"Opened {current_workspace.handle!r} in browser.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list
","text":"Given a list of options, display the values to user in a table and prompt them to select one.
Parameters:

Name: options
Type: Union[List[str], List[Tuple[Hashable, str]]]
Description: A list of options to present to the user. A list of tuples can be passed as key-value pairs; if a value is chosen, the key will be returned.
Default: required

Returns:

Type: str
Description: the selected option
Source code in prefect/cli/cloud/__init__.py
def prompt_select_from_list(\n console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n \"\"\"\n Given a list of options, display the values to user in a table and prompt them\n to select one.\n\n Args:\n options: A list of options to present to the user.\n A list of tuples can be passed as key value pairs. If a value is chosen, the\n key will be returned.\n\n Returns:\n str: the selected option\n \"\"\"\n\n current_idx = 0\n selected_option = None\n\n def build_table() -> Table:\n \"\"\"\n Generate a table of options. The `current_idx` will be highlighted.\n \"\"\"\n\n table = Table(box=False, header_style=None, padding=(0, 0))\n table.add_column(\n f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n justify=\"left\",\n no_wrap=True,\n )\n\n for i, option in enumerate(options):\n if isinstance(option, tuple):\n option = option[1]\n\n if i == current_idx:\n # Use blue for selected options\n table.add_row(\"[bold][blue]> \" + option)\n else:\n table.add_row(\" \" + option)\n return table\n\n with Live(build_table(), auto_refresh=False, console=console) as live:\n while selected_option is None:\n key = readchar.readkey()\n\n if key == readchar.key.UP:\n current_idx = current_idx - 1\n # wrap to bottom if at the top\n if current_idx < 0:\n current_idx = len(options) - 1\n elif key == readchar.key.DOWN:\n current_idx = current_idx + 1\n # wrap to top if at the bottom\n if current_idx >= len(options):\n current_idx = 0\n elif key == readchar.key.CTRL_C:\n # gracefully exit with no message\n exit_with_error(\"\")\n elif key == readchar.key.ENTER or key == readchar.key.CR:\n selected_option = options[current_idx]\n if isinstance(selected_option, tuple):\n selected_option = selected_option[0]\n\n live.update(build_table(), refresh=True)\n\n return selected_option\n
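A usage sketch; this blocks for arrow-key input, so it only makes sense on a real terminal. The option tuples are the same ones the login command passes:

```python
from rich.console import Console

from prefect.cli.cloud import prompt_select_from_list

choice = prompt_select_from_list(
    Console(),
    "How would you like to authenticate?",
    [("browser", "Log in with a web browser"), ("key", "Paste an API key")],
)
# With tuple options, the returned value is the key, e.g. "browser".
```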
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set
async
","text":"Set current workspace. Shows a workspace picker if no workspace is specified.
Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def set(\n workspace_handle: str = typer.Option(\n None,\n \"--workspace\",\n \"-w\",\n help=(\n \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n ),\n ),\n):\n \"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n confirm_logged_in()\n\n async with get_cloud_client() as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n exit_with_error(\n \"Unable to authenticate. Please ensure your credentials are correct.\"\n )\n\n if workspace_handle:\n # Search for the given workspace\n for workspace in workspaces:\n if workspace.handle == workspace_handle:\n break\n else:\n exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n else:\n workspace = prompt_select_from_list(\n app.console,\n \"Which workspace would you like to use?\",\n [(workspace, workspace.handle) for workspace in workspaces],\n )\n\n profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n\n exit_with_success(\n f\"Successfully set workspace to {workspace.handle!r} in profile\"\n f\" {profile.name!r}.\"\n )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit
","text":"Command line interface for working with concurrency limits.
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create
async
","text":"Create a concurrency limit against a tag.
This limit controls how many task runs with that tag may simultaneously be in a Running state.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n \"\"\"\n Create a concurrency limit against a tag.\n\n This limit controls how many task runs with that tag may simultaneously be in a\n Running state.\n \"\"\"\n\n async with get_client() as client:\n await client.create_concurrency_limit(\n tag=tag, concurrency_limit=concurrency_limit\n )\n await client.read_concurrency_limit_by_tag(tag)\n\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n Created concurrency limit with properties:\n tag - {tag!r}\n concurrency_limit - {concurrency_limit}\n\n Delete the concurrency limit:\n prefect concurrency-limit delete {tag!r}\n\n Inspect the concurrency limit:\n prefect concurrency-limit inspect {tag!r}\n \"\"\"\n )\n )\n
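The same limit can be created programmatically with the client call the command wraps. A minimal sketch, assuming the top-level get_client re-export; the tag and limit are examples. Task runs are matched to the limit through their tags (e.g. a task defined with tags=["database"]):

```python
from prefect import get_client

async def limit_database_tasks():
    async with get_client() as client:
        # At most 10 task runs tagged "database" may be Running at once.
        await client.create_concurrency_limit(tag="database", concurrency_limit=10)
```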
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete
async
","text":"Delete the concurrency limit set on the specified tag.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def delete(tag: str):\n \"\"\"\n Delete the concurrency limit set on the specified tag.\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.delete_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect
async
","text":"View details about a concurrency limit. active_slots
shows a list of TaskRun IDs which are currently using a concurrency slot.
prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def inspect(tag: str):\n \"\"\"\n View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n which are currently using a concurrency slot.\n \"\"\"\n\n async with get_client() as client:\n try:\n result = await client.read_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n trid_table = Table()\n trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n for trid in sorted(result.active_slots):\n trid_table.add_row(str(trid))\n\n cl_table.add_row(\n str(result.tag),\n str(result.concurrency_limit),\n Pretty(pendulum.instance(result.created).diff_for_humans()),\n Pretty(pendulum.instance(result.updated).diff_for_humans()),\n )\n\n group = Group(\n cl_table,\n trid_table,\n )\n app.console.print(Panel(group, expand=False))\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls
async
","text":"View all concurrency limits.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n \"\"\"\n View all concurrency limits.\n \"\"\"\n table = Table(\n title=\"Concurrency Limits\",\n caption=\"inspect a concurrency limit to show active task run IDs\",\n )\n table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n concurrency_limits = await client.read_concurrency_limits(\n limit=limit, offset=offset\n )\n\n for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n table.add_row(\n str(cl.tag),\n str(cl.id),\n str(cl.concurrency_limit),\n str(len(cl.active_slots)),\n )\n\n app.console.print(table)\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset
async
","text":"Resets the concurrency limit slots set on the specified tag.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def reset(tag: str):\n \"\"\"\n Resets the concurrency limit slots set on the specified tag.\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.reset_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config
","text":"Command line interface for working with profiles
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_
","text":"Change the value for a setting by setting the value in the current profile.
Source code in prefect/cli/config.py
@config_app.command(\"set\")\ndef set_(settings: List[str]):\n \"\"\"\n Change the value for a setting by setting the value in the current profile.\n \"\"\"\n parsed_settings = {}\n for item in settings:\n try:\n setting, value = item.split(\"=\", maxsplit=1)\n except ValueError:\n exit_with_error(\n f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n )\n\n if setting not in prefect.settings.SETTING_VARIABLES:\n exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n # Guard against changing settings that tweak config locations\n if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n exit_with_error(\n f\"Setting {setting!r} cannot be changed with this command. \"\n \"Use an environment variable instead.\"\n )\n\n parsed_settings[setting] = value\n\n try:\n new_profile = prefect.settings.update_current_profile(parsed_settings)\n except pydantic.ValidationError as exc:\n for error in exc.errors():\n setting = error[\"loc\"][0]\n message = error[\"msg\"]\n app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n exit_with_error(\"Invalid setting value.\")\n\n for setting, value in parsed_settings.items():\n app.console.print(f\"Set {setting!r} to {value!r}.\")\n if setting in os.environ:\n app.console.print(\n f\"[yellow]{setting} is also set by an environment variable which will \"\n f\"override your config value. Run `unset {setting}` to clear it.\"\n )\n\n if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n app.console.print(\n f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n )\n\n exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
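Note the maxsplit=1 in the parser above: it is what lets setting values themselves contain = characters. A tiny illustration:

```python
# Only the first '=' separates the setting name from its value.
item = "PREFECT_API_URL=http://127.0.0.1:4200/api?foo=bar"
setting, value = item.split("=", maxsplit=1)
assert setting == "PREFECT_API_URL"
assert value == "http://127.0.0.1:4200/api?foo=bar"
```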
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset
","text":"Restore the default value for a setting.
Removes the setting from the current profile.
Source code in prefect/cli/config.py
@config_app.command()\ndef unset(settings: List[str]):\n \"\"\"\n Restore the default value for a setting.\n\n Removes the setting from the current profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n profile = profiles[prefect.context.get_settings_context().profile.name]\n parsed = set()\n\n for setting in settings:\n if setting not in prefect.settings.SETTING_VARIABLES:\n exit_with_error(f\"Unknown setting name {setting!r}.\")\n # Cast to settings objects\n parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n for setting in parsed:\n if setting not in profile.settings:\n exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n profiles.update_profile(\n name=profile.name, settings={setting: None for setting in parsed}\n )\n\n for setting in settings:\n app.console.print(f\"Unset {setting!r}.\")\n\n if setting in os.environ:\n app.console.print(\n f\"[yellow]{setting!r} is also set by an environment variable. \"\n f\"Use `unset {setting}` to clear it.\"\n )\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"Updated profile {profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate
","text":"Read and validate the current profile.
Deprecated settings will be automatically converted to new names unless both are set.
Source code in prefect/cli/config.py
@config_app.command()\ndef validate():\n \"\"\"\n Read and validate the current profile.\n\n Deprecated settings will be automatically converted to new names unless both are\n set.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n profile = profiles[prefect.context.get_settings_context().profile.name]\n changed = profile.convert_deprecated_renamed_settings()\n for old, new in changed:\n app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n for setting in profile.settings.keys():\n if setting.deprecated:\n app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n profile.validate_settings()\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(\"Configuration valid!\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view
","text":"Display the current settings.
Source code in prefect/cli/config.py
@config_app.command()\ndef view(\n show_defaults: Optional[bool] = typer.Option(\n False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n ),\n show_sources: Optional[bool] = typer.Option(\n True,\n \"--show-sources/--hide-sources\",\n help=(show_sources_help),\n ),\n show_secrets: Optional[bool] = typer.Option(\n False,\n \"--show-secrets/--hide-secrets\",\n help=\"Toggle display of secrets setting values.\",\n ),\n):\n \"\"\"\n Display the current settings.\n \"\"\"\n context = prefect.context.get_settings_context()\n\n # Get settings at each level, converted to a flat dictionary for easy comparison\n default_settings = prefect.settings.get_default_settings()\n env_settings = prefect.settings.get_settings_from_env()\n current_profile_settings = context.settings\n\n # Obfuscate secrets\n if not show_secrets:\n default_settings = default_settings.with_obfuscated_secrets()\n env_settings = env_settings.with_obfuscated_secrets()\n current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n # Display the profile first\n app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n settings_output = []\n\n # The combination of environment variables and profile settings that are in use\n profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n # Used to see which settings in current_profile_settings came from env vars\n env_overrides = env_settings.dict(exclude_unset=True)\n\n for key, value in profile_overrides.items():\n source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n source_blurb = f\" (from {source})\" if show_sources else \"\"\n settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n if show_defaults:\n for key, value in default_settings.dict().items():\n if key not in profile_overrides:\n source_blurb = \" (from defaults)\" if show_sources else \"\"\n settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n app.console.print(\"\\n\".join(sorted(settings_output)))\n
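The command compares three layers of settings. A sketch of pulling those same layers yourself, using the helpers called in the source above:

```python
import prefect.settings

# Lowest to highest precedence: defaults, then the profile, then environment variables.
defaults = prefect.settings.get_default_settings()
from_env = prefect.settings.get_settings_from_env()
# `view` labels each displayed value "(from env)", "(from profile)", or
# "(from defaults)" based on which layer supplied it.
```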
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deploy/","title":"deploy","text":"","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy","title":"prefect.cli.deploy
","text":"Module containing implementation for deploying projects.
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.deploy","title":"deploy
async
","text":"Deploy a flow from this project by creating a deployment.
Should be run from a project root directory.
Source code in prefect/cli/deploy.py
@app.command()\nasync def deploy(\n entrypoint: str = typer.Argument(\n None,\n help=(\n \"The path to a flow entrypoint within a project, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n flow_name: str = typer.Option(\n None,\n \"--flow\",\n \"-f\",\n help=\"DEPRECATED: The name of a registered flow to create a deployment for.\",\n ),\n names: List[str] = typer.Option(\n None,\n \"--name\",\n \"-n\",\n help=(\n \"The name to give the deployment. Can be a pattern. Examples:\"\n \" 'my-deployment', 'my-flow/my-deployment', 'my-deployment-*',\"\n \" '*-flow-name/deployment*'\"\n ),\n ),\n description: str = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the deployment. If not provided, the description\"\n \" will be populated from the flow's description.\"\n ),\n ),\n version: str = typer.Option(\n None, \"--version\", help=\"A version to give the deployment.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"One or more optional tags to apply to the deployment. Note: tags are used\"\n \" only for organizational purposes. For delegating work to agents, use the\"\n \" --work-queue flag.\"\n ),\n ),\n work_pool_name: str = SettingsOption(\n PREFECT_DEFAULT_WORK_POOL_NAME,\n \"-p\",\n \"--pool\",\n help=\"The work pool that will handle this deployment's runs.\",\n ),\n work_queue_name: str = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"The work queue that will handle this deployment's runs. \"\n \"It will be created if it doesn't already exist. Defaults to `None`.\"\n ),\n ),\n variables: List[str] = typer.Option(\n None,\n \"-v\",\n \"--variable\",\n help=(\n \"One or more job variable overrides for the work pool provided in the\"\n \" format of key=value string or a JSON object\"\n ),\n ),\n cron: List[str] = typer.Option(\n None,\n \"--cron\",\n help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n ),\n interval: List[int] = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) that will be used to set an\"\n \" IntervalSchedule on the deployment.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The anchor date for all interval schedules\"\n ),\n rrule: List[str] = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n ),\n timezone: str = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n trigger: List[str] = typer.Option(\n None,\n \"--trigger\",\n help=(\n \"Specifies a trigger for the deployment. The value can be a\"\n \" json string or path to `.yaml`/`.json` file. This flag can be used\"\n \" multiple times.\"\n ),\n ),\n param: List[str] = typer.Option(\n None,\n \"--param\",\n help=(\n \"An optional parameter override, values are parsed as JSON strings e.g.\"\n \" --param question=ultimate --param answer=42\"\n ),\n ),\n params: str = typer.Option(\n None,\n \"--params\",\n help=(\n \"An optional parameter override in a JSON string format e.g.\"\n ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n ),\n ),\n enforce_parameter_schema: bool = typer.Option(\n False,\n \"--enforce-parameter-schema\",\n help=(\n \"Whether to enforce the parameter schema on this deployment. 
If set to\"\n \" True, any parameters passed to this deployment must match the signature\"\n \" of the flow.\"\n ),\n ),\n deploy_all: bool = typer.Option(\n False,\n \"--all\",\n help=(\n \"Deploy all flows in the project. If a flow name or entrypoint is also\"\n \" provided, this flag will be ignored.\"\n ),\n ),\n prefect_file: Path = typer.Option(\n Path(\"prefect.yaml\"),\n \"--prefect-file\",\n help=\"Specify a custom path to a prefect.yaml file\",\n ),\n ci: bool = typer.Option(\n False,\n \"--ci\",\n help=(\n \"DEPRECATED: Please use the global '--no-prompt' flag instead: 'prefect\"\n \" --no-prompt deploy'.\\n\\nRun this command in CI mode. This will disable\"\n \" interactive prompts and will error if any required arguments are not\"\n \" provided.\"\n ),\n ),\n):\n \"\"\"\n Deploy a flow from this project by creating a deployment.\n\n Should be run from a project root directory.\n \"\"\"\n if ci:\n app.console.print(\n generate_deprecation_message(\n name=\"The `--ci` flag\",\n start_date=\"Jun 2023\",\n help=(\n \"Please use the global `--no-prompt` flag instead: `prefect\"\n \" --no-prompt deploy`.\"\n ),\n ),\n style=\"yellow\",\n )\n\n options = {\n \"entrypoint\": entrypoint,\n \"flow_name\": flow_name,\n \"description\": description,\n \"version\": version,\n \"tags\": tags,\n \"work_pool_name\": work_pool_name,\n \"work_queue_name\": work_queue_name,\n \"variables\": variables,\n \"cron\": cron,\n \"interval\": interval,\n \"anchor_date\": interval_anchor,\n \"rrule\": rrule,\n \"timezone\": timezone,\n \"triggers\": trigger,\n \"param\": param,\n \"params\": params,\n \"enforce_parameter_schema\": enforce_parameter_schema,\n }\n try:\n deploy_configs, actions = _load_deploy_configs_and_actions(\n prefect_file=prefect_file, ci=ci\n )\n parsed_names = []\n for name in names:\n if \"*\" in name:\n parsed_names.extend(_parse_name_from_pattern(deploy_configs, name))\n else:\n parsed_names.append(name)\n deploy_configs = _pick_deploy_configs(\n deploy_configs, parsed_names, deploy_all, ci\n )\n\n if len(deploy_configs) > 1:\n if any(options.values()):\n app.console.print(\n (\n \"You have passed options to the deploy command, but you are\"\n \" creating or updating multiple deployments. These options\"\n \" will be ignored.\"\n ),\n style=\"yellow\",\n )\n await _run_multi_deploy(\n deploy_configs=deploy_configs,\n actions=actions,\n deploy_all=deploy_all,\n ci=ci,\n prefect_file=prefect_file,\n )\n else:\n # Accommodate passing in -n flow-name/deployment-name as well as -n deployment-name\n options[\"names\"] = [\n name.split(\"/\", 1)[-1] if \"/\" in name else name for name in parsed_names\n ]\n\n await _run_single_deploy(\n deploy_config=deploy_configs[0] if deploy_configs else {},\n actions=actions,\n options=options,\n ci=ci,\n prefect_file=prefect_file,\n )\n except ValueError as exc:\n exit_with_error(str(exc))\n
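One detail worth calling out: -n accepts both bare deployment names and flow-name/deployment-name pairs, which are normalized before matching. A small demonstration of that normalization step:

```python
# Mirrors the split used above to accept "-n flow/deployment" and "-n deployment".
names = ["my-flow/my-deployment", "my-deployment"]
parsed = [name.split("/", 1)[-1] if "/" in name else name for name in names]
assert parsed == ["my-deployment", "my-deployment"]
```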
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deployment/","title":"deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment
","text":"Command line interface for working with deployments.
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply
async
","text":"Create or update a deployment from a YAML file.
Source code in prefect/cli/deployment.py
@deployment_app.command(\n deprecated=True,\n deprecated_start_date=\"Mar 2024\",\n deprecated_name=\"deployment apply\",\n deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def apply(\n paths: List[str] = typer.Argument(\n ...,\n help=\"One or more paths to deployment YAML files.\",\n ),\n upload: bool = typer.Option(\n False,\n \"--upload\",\n help=(\n \"A flag that, when provided, uploads this deployment's files to remote\"\n \" storage.\"\n ),\n ),\n work_queue_concurrency: int = typer.Option(\n None,\n \"--limit\",\n \"-l\",\n help=(\n \"Sets the concurrency limit on the work queue that handles this\"\n \" deployment's runs\"\n ),\n ),\n):\n \"\"\"\n Create or update a deployment from a YAML file.\n \"\"\"\n deployment = None\n async with get_client() as client:\n for path in paths:\n try:\n deployment = await Deployment.load_from_yaml(path)\n app.console.print(\n f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n )\n except Exception as exc:\n exit_with_error(\n f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n )\n\n assert deployment\n\n await create_work_queue_and_set_concurrency_limit(\n deployment.work_queue_name,\n deployment.work_pool_name,\n work_queue_concurrency,\n )\n\n if upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n file_count = await deployment.upload_to_storage()\n if file_count:\n app.console.print(\n (\n f\"Successfully uploaded {file_count} files to\"\n f\" {deployment.location}\"\n ),\n style=\"green\",\n )\n else:\n app.console.print(\n (\n f\"Deployment storage {deployment.storage} does not have\"\n \" upload capabilities; no files uploaded.\"\n ),\n style=\"red\",\n )\n await check_work_pool_exists(\n work_pool_name=deployment.work_pool_name, client=client\n )\n\n if client.server_type != ServerType.CLOUD and deployment.triggers:\n app.console.print(\n (\n \"Deployment triggers are only supported on \"\n f\"Prefect Cloud. Triggers defined in {path!r} will be \"\n \"ignored.\"\n ),\n style=\"red\",\n )\n\n deployment_id = await deployment.apply()\n app.console.print(\n (\n f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n f\" successfully created with id '{deployment_id}'.\"\n ),\n style=\"green\",\n )\n\n if PREFECT_UI_URL:\n app.console.print(\n \"View Deployment in UI:\"\n f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n )\n\n if deployment.work_pool_name is not None:\n await _print_deployment_work_pool_instructions(\n work_pool_name=deployment.work_pool_name, client=client\n )\n elif deployment.work_queue_name is not None:\n app.console.print(\n \"\\nTo execute flow runs from this deployment, start an agent that\"\n f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n )\n app.console.print(\n f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n style=\"blue\",\n )\n else:\n app.console.print(\n (\n \"\\nThis deployment does not specify a work queue name, which\"\n \" means agents will not be able to pick up its runs. To add a\"\n \" work queue, edit the deployment spec and re-run this command,\"\n \" or visit the deployment in the UI.\"\n ),\n style=\"red\",\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build
async
","text":"Generate a deployment YAML from /path/to/file.py:flow_function
Source code in prefect/cli/deployment.py
@deployment_app.command(\n deprecated=True,\n deprecated_start_date=\"Mar 2024\",\n deprecated_name=\"deployment build\",\n deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def build(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a flow entrypoint, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n name: str = typer.Option(\n None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n ),\n description: str = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the deployment. If not provided, the description\"\n \" will be populated from the flow's description.\"\n ),\n ),\n version: str = typer.Option(\n None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"One or more optional tags to apply to the deployment. Note: tags are used\"\n \" only for organizational purposes. For delegating work to agents, use the\"\n \" --work-queue flag.\"\n ),\n ),\n work_queue_name: str = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"The work queue that will handle this deployment's runs. \"\n \"It will be created if it doesn't already exist. Defaults to `None`. \"\n \"Note that if a work queue is not set, work will not be scheduled.\"\n ),\n ),\n work_pool_name: str = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The work pool that will handle this deployment's runs.\",\n ),\n work_queue_concurrency: int = typer.Option(\n None,\n \"--limit\",\n \"-l\",\n help=(\n \"Sets the concurrency limit on the work queue that handles this\"\n \" deployment's runs\"\n ),\n ),\n infra_type: str = typer.Option(\n None,\n \"--infra\",\n \"-i\",\n help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n + listrepr(builtin_infrastructure_types, sep=\", \"),\n ),\n infra_block: str = typer.Option(\n None,\n \"--infra-block\",\n \"-ib\",\n help=\"The slug of the infrastructure block to use as a template.\",\n ),\n overrides: List[str] = typer.Option(\n None,\n \"--override\",\n help=(\n \"One or more optional infrastructure overrides provided as a dot delimited\"\n \" path, e.g., `env.env_key=env_value`\"\n ),\n ),\n storage_block: str = typer.Option(\n None,\n \"--storage-block\",\n \"-sb\",\n help=(\n \"The slug of a remote storage block. 
Use the syntax:\"\n \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n \" implements the WritableDeploymentStorage interface such as\"\n \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n \" 'gcs-bucket'\"\n ),\n ),\n skip_upload: bool = typer.Option(\n False,\n \"--skip-upload\",\n help=(\n \"A flag that, when provided, skips uploading this deployment's files to\"\n \" remote storage.\"\n ),\n ),\n cron: str = typer.Option(\n None,\n \"--cron\",\n help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n ),\n interval: int = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) that will be used to set an\"\n \" IntervalSchedule on the deployment.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n ),\n rrule: str = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n ),\n timezone: str = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n path: str = typer.Option(\n None,\n \"--path\",\n help=(\n \"An optional path to specify a subdirectory of remote storage to upload to,\"\n \" or to point to a subdirectory of a locally stored flow.\"\n ),\n ),\n output: str = typer.Option(\n None,\n \"--output\",\n \"-o\",\n help=\"An optional filename to write the deployment file to.\",\n ),\n _apply: bool = typer.Option(\n False,\n \"--apply\",\n \"-a\",\n help=(\n \"An optional flag to automatically register the resulting deployment with\"\n \" the API.\"\n ),\n ),\n param: List[str] = typer.Option(\n None,\n \"--param\",\n help=(\n \"An optional parameter override, values are parsed as JSON strings e.g.\"\n \" --param question=ultimate --param answer=42\"\n ),\n ),\n params: str = typer.Option(\n None,\n \"--params\",\n help=(\n \"An optional parameter override in a JSON string format e.g.\"\n ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n ),\n ),\n no_schedule: bool = typer.Option(\n False,\n \"--no-schedule\",\n help=\"An optional flag to disable scheduling for this deployment.\",\n ),\n):\n \"\"\"\n Generate a deployment YAML from /path/to/file.py:flow_function\n \"\"\"\n # validate inputs\n if not name:\n exit_with_error(\n \"A name for this deployment must be provided with the '--name' flag.\"\n )\n\n if (\n len([value for value in (cron, rrule, interval) if value is not None])\n + (1 if no_schedule else 0)\n > 1\n ):\n exit_with_error(\"Only one schedule type can be provided.\")\n\n if infra_block and infra_type:\n exit_with_error(\n \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n )\n\n output_file = None\n if output:\n output_file = Path(output)\n if output_file.suffix and output_file.suffix != \".yaml\":\n exit_with_error(\"Output file must be a '.yaml' file.\")\n else:\n output_file = output_file.with_suffix(\".yaml\")\n\n # validate flow\n try:\n fpath, obj_name = entrypoint.rsplit(\":\", 1)\n except ValueError as exc:\n if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n missing_flow_name_msg = (\n \"Your flow entrypoint must include the name of the function that is\"\n f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n )\n exit_with_error(missing_flow_name_msg)\n else:\n raise exc\n try:\n flow = await 
run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n except Exception as exc:\n exit_with_error(exc)\n app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n infra_overrides = {}\n for override in overrides or []:\n key, value = override.split(\"=\", 1)\n infra_overrides[key] = value\n\n if infra_block:\n infrastructure = await Block.load(infra_block)\n elif infra_type:\n # Create an instance of the given type\n infrastructure = Block.get_block_class_from_key(infra_type)()\n else:\n # will reset to a default of Process is no infra is present on the\n # server-side definition of this deployment\n infrastructure = None\n\n if interval_anchor and not interval:\n exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n schedule = None\n if cron:\n cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n schedule = CronSchedule(\n **{k: v for k, v in cron_kwargs.items() if v is not None}\n )\n elif interval:\n interval_kwargs = {\n \"interval\": timedelta(seconds=interval),\n \"anchor_date\": interval_anchor,\n \"timezone\": timezone,\n }\n schedule = IntervalSchedule(\n **{k: v for k, v in interval_kwargs.items() if v is not None}\n )\n elif rrule:\n try:\n schedule = RRuleSchedule(**json.loads(rrule))\n if timezone:\n # override timezone if specified via CLI argument\n schedule.timezone = timezone\n except json.JSONDecodeError:\n schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n # parse storage_block\n if storage_block:\n block_type, block_name, *block_path = storage_block.split(\"/\")\n if block_path and path:\n exit_with_error(\n \"Must provide a `path` explicitly or provide one on the storage block\"\n \" specification, but not both.\"\n )\n elif not path:\n path = \"/\".join(block_path)\n storage_block = f\"{block_type}/{block_name}\"\n storage = await Block.load(storage_block)\n else:\n storage = None\n\n if create_default_ignore_file(path=\".\"):\n app.console.print(\n (\n \"Default '.prefectignore' file written to\"\n f\" {(Path('.') / '.prefectignore').absolute()}\"\n ),\n style=\"green\",\n )\n\n if param and (params is not None):\n exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n parameters = dict()\n\n if param:\n for p in param or []:\n k, unparsed_value = p.split(\"=\", 1)\n try:\n v = json.loads(unparsed_value)\n app.console.print(\n f\"The parameter value {unparsed_value} is parsed as a JSON string\"\n )\n except json.JSONDecodeError:\n v = unparsed_value\n parameters[k] = v\n\n if params is not None:\n parameters = json.loads(params)\n\n # set up deployment object\n entrypoint = (\n f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n )\n\n init_kwargs = dict(\n path=path,\n entrypoint=entrypoint,\n version=version,\n storage=storage,\n infra_overrides=infra_overrides or {},\n )\n\n if parameters:\n init_kwargs[\"parameters\"] = parameters\n\n if description:\n init_kwargs[\"description\"] = description\n\n # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n # we let `build_from_flow` load them from the server\n if schedule or no_schedule:\n init_kwargs.update(schedule=schedule)\n if tags:\n init_kwargs.update(tags=tags)\n if infrastructure:\n init_kwargs.update(infrastructure=infrastructure)\n if work_queue_name:\n init_kwargs.update(work_queue_name=work_queue_name)\n if work_pool_name:\n init_kwargs.update(work_pool_name=work_pool_name)\n\n deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n deployment = await 
Deployment.build_from_flow(\n flow=flow,\n name=name,\n output=deployment_loc,\n skip_upload=skip_upload,\n apply=False,\n **init_kwargs,\n )\n app.console.print(\n f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n style=\"green\",\n )\n\n await create_work_queue_and_set_concurrency_limit(\n deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n )\n\n # we process these separately for informative output\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n file_count = await deployment.upload_to_storage()\n if file_count:\n app.console.print(\n (\n f\"Successfully uploaded {file_count} files to\"\n f\" {deployment.location}\"\n ),\n style=\"green\",\n )\n else:\n app.console.print(\n (\n f\"Deployment storage {deployment.storage} does not have upload\"\n \" capabilities; no files uploaded. Pass --skip-upload to suppress\"\n \" this warning.\"\n ),\n style=\"green\",\n )\n\n if _apply:\n async with get_client() as client:\n await check_work_pool_exists(\n work_pool_name=deployment.work_pool_name, client=client\n )\n deployment_id = await deployment.apply()\n app.console.print(\n (\n f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n f\" successfully created with id '{deployment_id}'.\"\n ),\n style=\"green\",\n )\n if deployment.work_pool_name is not None:\n await _print_deployment_work_pool_instructions(\n work_pool_name=deployment.work_pool_name, client=client\n )\n\n elif deployment.work_queue_name is not None:\n app.console.print(\n \"\\nTo execute flow runs from this deployment, start an agent that\"\n f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n )\n app.console.print(\n f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n style=\"blue\",\n )\n else:\n app.console.print(\n (\n \"\\nThis deployment does not specify a work queue name, which\"\n \" means agents will not be able to pick up its runs. To add a\"\n \" work queue, edit the deployment spec and re-run this command,\"\n \" or visit the deployment in the UI.\"\n ),\n style=\"red\",\n )\n
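The entrypoint argument is split on the last colon, so the file path portion may itself contain colons. A short demonstration using the placeholder form from the help text:

```python
# The rsplit(":", 1) above separates the file path from the flow function name.
entrypoint = "./path/to/file.py:flow_func_name"
fpath, obj_name = entrypoint.rsplit(":", 1)
assert fpath == "./path/to/file.py"
assert obj_name == "flow_func_name"
```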
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.clear_schedules","title":"clear_schedules
async
","text":"Clear all schedules for a deployment.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"clear\")\nasync def clear_schedules(\n deployment_name: str,\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Clear all schedules for a deployment.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n await client.read_flow(deployment.flow_id)\n\n # Get input from user: confirm removal of all schedules\n if not assume_yes and not typer.confirm(\n \"Are you sure you want to clear all schedules for this deployment?\",\n ):\n exit_with_error(\"Clearing schedules cancelled.\")\n\n for schedule in deployment.schedules:\n try:\n await client.delete_deployment_schedule(deployment.id, schedule.id)\n except ObjectNotFound:\n pass\n\n exit_with_success(f\"Cleared all schedules for deployment {deployment_name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.create_schedule","title":"create_schedule
async
","text":"Create a schedule for a given deployment.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"create\")\nasync def create_schedule(\n name: str,\n interval: Optional[float] = typer.Option(\n None,\n \"--interval\",\n help=\"An interval to schedule on, specified in seconds\",\n min=0.0001,\n ),\n interval_anchor: Optional[str] = typer.Option(\n None,\n \"--anchor-date\",\n help=\"The anchor date for an interval schedule\",\n ),\n rrule_string: Optional[str] = typer.Option(\n None, \"--rrule\", help=\"Deployment schedule rrule string\"\n ),\n cron_string: Optional[str] = typer.Option(\n None, \"--cron\", help=\"Deployment schedule cron string\"\n ),\n cron_day_or: Optional[str] = typer.Option(\n None,\n \"--day_or\",\n help=\"Control how croniter handles `day` and `day_of_week` entries\",\n ),\n timezone: Optional[str] = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n active: Optional[bool] = typer.Option(\n True,\n \"--active\",\n help=\"Whether the schedule is active. Defaults to True.\",\n ),\n replace: Optional[bool] = typer.Option(\n False,\n \"--replace\",\n help=\"Replace the deployment's current schedule(s) with this new schedule.\",\n ),\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Create a schedule for a given deployment.\n \"\"\"\n assert_deployment_name_format(name)\n\n if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n exit_with_error(\n \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n )\n\n schedule = None\n\n if interval_anchor and not interval:\n exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n if interval is not None:\n if interval_anchor:\n try:\n pendulum.parse(interval_anchor)\n except ValueError:\n return exit_with_error(\"The anchor date must be a valid date string.\")\n interval_schedule = {\n \"interval\": interval,\n \"anchor_date\": interval_anchor,\n \"timezone\": timezone,\n }\n schedule = IntervalSchedule(\n **{k: v for k, v in interval_schedule.items() if v is not None}\n )\n\n if cron_string is not None:\n cron_schedule = {\n \"cron\": cron_string,\n \"day_or\": cron_day_or,\n \"timezone\": timezone,\n }\n schedule = CronSchedule(\n **{k: v for k, v in cron_schedule.items() if v is not None}\n )\n\n if rrule_string is not None:\n # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n if \"TZID\" in rrule_string and not timezone:\n exit_with_error(\n \"You can provide a timezone by providing a dict with a `timezone` key\"\n \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n                \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n                \" timezone by passing in a --timezone argument.\"\n            )\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule_string))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n    if schedule is None:\n        return exit_with_error(\n            \"Could not create a valid schedule from the provided options.\"\n        )\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {name!r} not found!\")\n\n        num_schedules = len(deployment.schedules)\n        noun = \"schedule\" if num_schedules == 1 else \"schedules\"\n\n        if replace and num_schedules > 0:\n            if not assume_yes and not typer.confirm(\n                f\"Are you sure you want to replace {num_schedules} {noun} for {name}?\"\n            ):\n                return exit_with_error(\"Schedule replacement cancelled.\")\n\n            for existing_schedule in deployment.schedules:\n                try:\n                    await client.delete_deployment_schedule(\n                        deployment.id, existing_schedule.id\n                    )\n                except ObjectNotFound:\n                    pass\n\n        await client.create_deployment_schedules(deployment.id, [(schedule, active)])\n\n        if replace and num_schedules > 0:\n            exit_with_success(f\"Replaced existing deployment {noun} with new schedule!\")\n        else:\n            exit_with_success(\"Created deployment schedule!\")\n
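Two illustrative invocations (the deployment name is hypothetical): an hourly interval schedule, $ prefect deployment schedule create my-flow/my-deployment --interval 3600 --timezone 'America/New_York'; or a weekday cron schedule that replaces any existing schedules without prompting, $ prefect deployment schedule create my-flow/my-deployment --cron '0 9 * * 1-5' --replace -y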
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete
async
","text":"Delete a deployment.
Examples: $ prefect deployment delete test_flow/test_deployment $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6
Source code inprefect/cli/deployment.py
@deployment_app.command()\nasync def delete(\n name: Optional[str] = typer.Argument(\n None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n ),\n deployment_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"A deployment id to search for if no name is given\"\n ),\n):\n \"\"\"\n Delete a deployment.\n\n \\b\n Examples:\n \\b\n $ prefect deployment delete test_flow/test_deployment\n $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n \"\"\"\n async with get_client() as client:\n if name is None and deployment_id is not None:\n try:\n await client.delete_deployment(deployment_id)\n exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n elif name is not None:\n try:\n deployment = await client.read_deployment_by_name(name)\n await client.delete_deployment(deployment.id)\n exit_with_success(f\"Deleted deployment '{name}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {name!r} not found!\")\n else:\n exit_with_error(\"Must provide a deployment name or id\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete_schedule","title":"delete_schedule
async
","text":"Delete a deployment schedule.
Source code inprefect/cli/deployment.py
@schedule_app.command(\"delete\")\nasync def delete_schedule(\n deployment_name: str,\n schedule_id: UUID,\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Delete a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if not assume_yes and not typer.confirm(\n f\"Are you sure you want to delete this schedule: {schedule.schedule}\",\n ):\n return exit_with_error(\"Deletion cancelled.\")\n\n try:\n await client.delete_deployment_schedule(deployment.id, schedule_id)\n except ObjectNotFound:\n exit_with_error(\"Deployment schedule not found!\")\n\n exit_with_success(f\"Deleted deployment schedule {schedule_id}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect
async
","text":"View details about a deployment.
Example: $ prefect deployment inspect \"hello-world/my-deployment\" { 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } }
Source code inprefect/cli/deployment.py
@deployment_app.command()\nasync def inspect(name: str):\n \"\"\"\n View details about a deployment.\n\n \\b\n Example:\n \\b\n $ prefect deployment inspect \"hello-world/my-deployment\"\n {\n 'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n 'created': '2022-08-01T18:36:25.192102+00:00',\n 'updated': '2022-08-01T18:36:25.188166+00:00',\n 'name': 'my-deployment',\n 'description': None,\n 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n 'schedules': None,\n 'parameters': {'name': 'Marvin'},\n 'tags': ['test'],\n 'parameter_openapi_schema': {\n 'title': 'Parameters',\n 'type': 'object',\n 'properties': {\n 'name': {\n 'title': 'name',\n 'type': 'string'\n }\n },\n 'required': ['name']\n },\n 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n 'infrastructure': {\n 'type': 'process',\n 'env': {},\n 'labels': {},\n 'name': None,\n 'command': ['python', '-m', 'prefect.engine'],\n 'stream_output': True\n }\n }\n\n \"\"\"\n assert_deployment_name_format(name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(name)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {name!r} not found!\")\n\n deployment_json = deployment.dict(json_compatible=True)\n\n if deployment.infrastructure_document_id:\n deployment_json[\"infrastructure\"] = Block._from_block_document(\n await client.read_block_document(deployment.infrastructure_document_id)\n ).dict(\n exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n )\n\n if client.server_type == ServerType.CLOUD:\n deployment_json[\"automations\"] = [\n a.dict()\n for a in await client.read_resource_related_automations(\n f\"prefect.deployment.{deployment.id}\"\n )\n ]\n\n app.console.print(Pretty(deployment_json))\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.list_schedules","title":"list_schedules
async
","text":"View all schedules for a deployment.
Source code inprefect/cli/deployment.py
@schedule_app.command(\"ls\")\nasync def list_schedules(deployment_name: str):\n \"\"\"\n View all schedules for a deployment.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n def sort_by_created_key(schedule: DeploymentSchedule): # noqa\n return pendulum.now(\"utc\") - schedule.created\n\n def schedule_details(schedule: DeploymentSchedule):\n if isinstance(schedule.schedule, IntervalSchedule):\n return f\"interval: {schedule.schedule.interval}s\"\n elif isinstance(schedule.schedule, CronSchedule):\n return f\"cron: {schedule.schedule.cron}\"\n elif isinstance(schedule.schedule, RRuleSchedule):\n return f\"rrule: {schedule.schedule.rrule}\"\n else:\n return \"unknown\"\n\n table = Table(\n title=\"Deployment Schedules\",\n )\n table.add_column(\"ID\", style=\"blue\", no_wrap=True)\n table.add_column(\"Schedule\", style=\"cyan\", no_wrap=False)\n table.add_column(\"Active\", style=\"purple\", no_wrap=True)\n\n for schedule in sorted(deployment.schedules, key=sort_by_created_key):\n table.add_row(\n str(schedule.id),\n schedule_details(schedule),\n str(schedule.active),\n )\n\n app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls
async
","text":"View all deployments or deployments for specific flows.
Source code inprefect/cli/deployment.py
@deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n \"\"\"\n View all deployments or deployments for specific flows.\n \"\"\"\n async with get_client() as client:\n deployments = await client.read_deployments(\n flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n )\n flows = {\n flow.id: flow\n for flow in await client.read_flows(\n flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n )\n }\n\n def sort_by_name_keys(d):\n return flows[d.flow_id].name, d.name\n\n def sort_by_created_key(d):\n return pendulum.now(\"utc\") - d.created\n\n table = Table(\n title=\"Deployments\",\n )\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n for deployment in sorted(\n deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n ):\n table.add_row(\n f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n str(deployment.id),\n )\n\n app.console.print(table)\n
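Assuming Typer's default flag naming for the parameters above (flow_name becomes --flow-name, by_created becomes --by-created), usage might look like: $ prefect deployment ls --flow-name my-flow --by-created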
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule
async
","text":"Pause a deployment schedule.
Source code inprefect/cli/deployment.py
@schedule_app.command(\"pause\")\nasync def pause_schedule(deployment_name: str, schedule_id: UUID):\n \"\"\"\n Pause a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if not schedule.active:\n return exit_with_error(\n f\"Deployment schedule {schedule_id} is already inactive\"\n )\n\n await client.update_deployment_schedule(\n deployment.id, schedule_id, active=False\n )\n exit_with_success(\n f\"Paused schedule {schedule.schedule} for deployment {deployment_name}\"\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule
async
","text":"Resume a deployment schedule.
Source code inprefect/cli/deployment.py
@schedule_app.command(\"resume\")\nasync def resume_schedule(deployment_name: str, schedule_id: UUID):\n \"\"\"\n Resume a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if schedule.active:\n return exit_with_error(\n f\"Deployment schedule {schedule_id} is already active\"\n )\n\n await client.update_deployment_schedule(deployment.id, schedule_id, active=True)\n exit_with_success(\n f\"Resumed schedule {schedule.schedule} for deployment {deployment_name}\"\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run
async
","text":"Create a flow run for the given flow and deployment.
The flow run will be scheduled to run immediately unless --start-in
or --start-at
is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the --watch
flag.
Source code inprefect/cli/deployment.py
@deployment_app.command()\nasync def run(\n name: Optional[str] = typer.Argument(\n None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n ),\n deployment_id: Optional[str] = typer.Option(\n None,\n \"--id\",\n help=(\"A deployment id to search for if no name is given\"),\n ),\n job_variables: List[str] = typer.Option(\n None,\n \"-jv\",\n \"--job-variable\",\n help=(\n \"A key, value pair (key=value) specifying a flow run job variable. The value will\"\n \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n \" job variable values.\"\n ),\n ),\n params: List[str] = typer.Option(\n None,\n \"-p\",\n \"--param\",\n help=(\n \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n \" parameter values.\"\n ),\n ),\n multiparams: Optional[str] = typer.Option(\n None,\n \"--params\",\n help=(\n \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n \"parameters passed with `--param` will take precedence over these values.\"\n ),\n ),\n start_in: Optional[str] = typer.Option(\n None,\n \"--start-in\",\n help=(\n \"A human-readable string specifying a time interval to wait before starting\"\n \" the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'.\"\n ),\n ),\n start_at: Optional[str] = typer.Option(\n None,\n \"--start-at\",\n help=(\n \"A human-readable string specifying a time to start the flow run. E.g.\"\n \" 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'.\"\n ),\n ),\n tags: List[str] = typer.Option(\n [],\n \"--tag\",\n help=(\"Tag(s) to be applied to flow run.\"),\n ),\n watch: bool = typer.Option(\n False,\n \"--watch\",\n help=(\"Whether to poll the flow run until a terminal state is reached.\"),\n ),\n watch_interval: Optional[int] = typer.Option(\n None,\n \"--watch-interval\",\n help=(\"How often to poll the flow run for state changes (in seconds).\"),\n ),\n watch_timeout: Optional[int] = typer.Option(\n None,\n \"--watch-timeout\",\n help=(\"Timeout for --watch.\"),\n ),\n):\n \"\"\"\n Create a flow run for the given flow and deployment.\n\n The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n The flow run will not execute until a worker starts.\n To watch the flow run until it reaches a terminal state, use the `--watch` flag.\n \"\"\"\n import dateparser\n\n now = pendulum.now(\"UTC\")\n\n multi_params = {}\n if multiparams:\n if multiparams == \"-\":\n multiparams = sys.stdin.read()\n if not multiparams:\n exit_with_error(\"No data passed to stdin\")\n\n try:\n multi_params = json.loads(multiparams)\n except ValueError as exc:\n exit_with_error(f\"Failed to parse JSON: {exc}\")\n if watch_interval and not watch:\n exit_with_error(\n \"`--watch-interval` can only be used with `--watch`.\",\n )\n cli_params = _load_json_key_values(params, \"parameter\")\n conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n if conflicting_keys:\n app.console.print(\n \"The following parameters were specified by `--param` and `--params`, the \"\n f\"`--param` value will be used: {conflicting_keys}\"\n )\n parameters = {**multi_params, **cli_params}\n\n job_vars = _load_json_key_values(job_variables, \"job variable\")\n if start_in and start_at:\n exit_with_error(\n \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n )\n\n elif start_in is None and start_at is None:\n scheduled_start_time = now\n human_dt_diff = \" (now)\"\n 
else:\n if start_in:\n start_time_raw = \"in \" + start_in\n else:\n start_time_raw = \"at \" + start_at\n with warnings.catch_warnings():\n # PyTZ throws a warning based on dateparser usage of the library\n # See https://github.com/scrapinghub/dateparser/issues/1089\n warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n try:\n start_time_parsed = dateparser.parse(\n start_time_raw,\n settings={\n \"TO_TIMEZONE\": \"UTC\",\n \"RETURN_AS_TIMEZONE_AWARE\": False,\n \"PREFER_DATES_FROM\": \"future\",\n \"RELATIVE_BASE\": datetime.fromtimestamp(\n now.timestamp(), tz=timezone.utc\n ),\n },\n )\n\n except Exception as exc:\n exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n if start_time_parsed is None:\n exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n scheduled_start_time = pendulum.instance(start_time_parsed)\n human_dt_diff = (\n \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n )\n\n async with get_client() as client:\n deployment = await get_deployment(client, name, deployment_id)\n flow = await client.read_flow(deployment.flow_id)\n\n deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n if unknown_keys:\n available_parameters = (\n (\n \"The following parameters are available on the deployment: \"\n + listrepr(deployment_parameters, sep=\", \")\n )\n if deployment_parameters\n else \"This deployment does not accept parameters.\"\n )\n\n exit_with_error(\n \"The following parameters were specified but not found on the \"\n f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n f\"\\n{available_parameters}\"\n )\n\n app.console.print(\n f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n )\n\n try:\n flow_run = await client.create_flow_run_from_deployment(\n deployment.id,\n parameters=parameters,\n state=Scheduled(scheduled_time=scheduled_start_time),\n tags=tags,\n job_variables=job_vars,\n )\n except PrefectHTTPStatusError as exc:\n detail = exc.response.json().get(\"detail\")\n if detail:\n exit_with_error(\n exc.response.json()[\"detail\"],\n )\n else:\n raise\n\n if PREFECT_UI_URL:\n run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n else:\n run_url = \"<no dashboard available>\"\n\n datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n scheduled_display = (\n datetime_local_tz.to_datetime_string()\n + \" \"\n + datetime_local_tz.tzname()\n + human_dt_diff\n )\n\n app.console.print(f\"Created flow run {flow_run.name!r}.\")\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n \u2514\u2500\u2500 UUID: {flow_run.id}\n \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n \u2514\u2500\u2500 Job Variables: {flow_run.job_variables}\n \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n \u2514\u2500\u2500 URL: {run_url}\n \"\"\"\n ).strip(),\n soft_wrap=True,\n )\n if watch:\n watch_interval = 5 if watch_interval is None else watch_interval\n app.console.print(f\"Watching flow run '{flow_run.name!r}'...\")\n finished_flow_run = await wait_for_flow_run(\n flow_run.id,\n timeout=watch_timeout,\n poll_interval=watch_interval,\n log_states=True,\n )\n finished_flow_run_state = finished_flow_run.state\n if finished_flow_run_state.is_completed():\n exit_with_success(\n f\"Flow run finished successfully in {finished_flow_run_state.name!r}.\"\n )\n exit_with_error(\n f\"Flow run finished in state 
{finished_flow_run_state.name!r}.\",\n code=1,\n )\n
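An illustrative invocation (deployment and parameter names are hypothetical) that schedules a run ten minutes out and watches it to a terminal state: $ prefect deployment run my-flow/my-deployment --param name=Marvin --start-in '10 minutes' --watch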
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter
","text":"configures yaml for dumping multiline strings Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data
Source code inprefect/cli/deployment.py
def str_presenter(dumper, data):\n \"\"\"\n configures yaml for dumping multiline strings\n Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n \"\"\"\n if len(data.splitlines()) > 1: # check for multiline string\n return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
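A minimal registration sketch (an assumption about usage, not shown above; add_representer is the standard PyYAML hook):
import yaml\n\n# register the presenter for both the default and the safe dumper so\n# multiline strings are emitted in block (|) style\nyaml.add_representer(str, str_presenter)\nyaml.representer.SafeRepresenter.add_representer(str, str_presenter)\n\nprint(yaml.dump({\"script\": \"echo one\\necho two\"}))\n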
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev
","text":"Command line interface for working with Prefect Server
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent
async
","text":"Starts a hot-reloading development agent process.
Source code inprefect/cli/dev.py
@dev_app.command()\nasync def agent(\n api_url: str = SettingsOption(PREFECT_API_URL),\n work_queues: List[str] = typer.Option(\n [\"default\"],\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the agent to pull from.\",\n ),\n):\n \"\"\"\n Starts a hot-reloading development agent process.\n \"\"\"\n # Delayed import since this is only a 'dev' dependency\n import watchfiles\n\n app.console.print(\"Creating hot-reloading agent process...\")\n\n try:\n await watchfiles.arun_process(\n prefect.__module_path__,\n target=agent_process_entrypoint,\n kwargs=dict(api=api_url, work_queues=work_queues),\n )\n except RuntimeError as err:\n # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n if str(err).strip() != \"Already borrowed\":\n raise\n
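For example, pulling from two work queues (names are illustrative): $ prefect dev agent -q default -q high-priority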
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint
","text":"An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start
.
Source code inprefect/cli/dev.py
def agent_process_entrypoint(**kwargs):\n \"\"\"\n An entrypoint for starting an agent in a subprocess. Adds a Rich console\n to the Typer app, processes Typer default parameters, then starts an agent.\n All kwargs are forwarded to `prefect.cli.agent.start`.\n \"\"\"\n import inspect\n\n import rich\n\n # import locally so only the `dev` command breaks if Typer internals change\n from typer.models import ParameterInfo\n\n # Typer does not process default parameters when calling a function\n # directly, so we must set `start_agent`'s default parameters manually.\n # get the signature of the `start_agent` function\n start_agent_signature = inspect.signature(start_agent)\n\n # for any arguments not present in kwargs, use the default value.\n for name, param in start_agent_signature.parameters.items():\n if name not in kwargs:\n # All `param.default` values for start_agent are Typer params that store the\n # actual default value in their `default` attribute and we must call\n # `param.default.default` to get the actual default value. We should also\n # ensure we extract the right default if non-Typer defaults are added\n # to `start_agent` in the future.\n if isinstance(param.default, ParameterInfo):\n default = param.default.default\n else:\n default = param.default\n\n # Some defaults are Prefect `SettingsOption.value` methods\n # that must be called to get the actual value.\n kwargs[name] = default() if callable(default) else default\n\n # add a console, because calling the agent start function directly\n # instead of via CLI call means `app` has no `console` attached.\n app.console = (\n rich.console.Console(\n highlight=False,\n color_system=\"auto\" if PREFECT_CLI_COLORS else None,\n soft_wrap=not PREFECT_CLI_WRAP_LINES.value(),\n )\n if not getattr(app, \"console\", None)\n else app.console\n )\n\n try:\n start_agent(**kwargs) # type: ignore\n except KeyboardInterrupt:\n # expected when watchfiles kills the process\n pass\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api
async
","text":"Starts a hot-reloading development API.
Source code inprefect/cli/dev.py
@dev_app.command()\nasync def api(\n host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n log_level: str = \"DEBUG\",\n services: bool = True,\n):\n \"\"\"\n Starts a hot-reloading development API.\n \"\"\"\n import watchfiles\n\n server_env = os.environ.copy()\n server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n command = [\n sys.executable,\n \"-m\",\n \"uvicorn\",\n \"--factory\",\n \"prefect.server.api.server:create_app\",\n \"--host\",\n str(host),\n \"--port\",\n str(port),\n \"--log-level\",\n log_level.lower(),\n ]\n\n app.console.print(f\"Running: {' '.join(command)}\")\n import signal\n\n stop_event = anyio.Event()\n start_command = partial(\n run_process, command=command, env=server_env, stream_output=True\n )\n\n async with anyio.create_task_group() as tg:\n try:\n server_pid = await tg.start(start_command)\n async for _ in watchfiles.awatch(\n prefect.__module_path__,\n stop_event=stop_event, # type: ignore\n ):\n # when any watched files change, restart the server\n app.console.print(\"Restarting Prefect Server...\")\n os.kill(server_pid, signal.SIGTERM) # type: ignore\n # start a new server\n server_pid = await tg.start(start_command)\n except RuntimeError as err:\n # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n if str(err).strip() != \"Already borrowed\":\n raise\n except KeyboardInterrupt:\n # exit cleanly on ctrl-c by killing the server process if it's\n # still running\n try:\n os.kill(server_pid, signal.SIGTERM) # type: ignore\n except ProcessLookupError:\n # process already exited\n pass\n\n stop_event.set()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs
","text":"Builds REST API reference documentation for static display.
Source code inprefect/cli/dev.py
@dev_app.command()\ndef build_docs(\n schema_path: str = None,\n):\n \"\"\"\n Builds REST API reference documentation for static display.\n \"\"\"\n exit_with_error_if_not_editable_install()\n\n from prefect.server.api.server import create_app\n\n schema = create_app(ephemeral=True).openapi()\n\n if not schema_path:\n schema_path = (\n prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n ).absolute()\n # overwrite info for display purposes\n schema[\"info\"] = {}\n with open(schema_path, \"w\") as f:\n json.dump(schema, f)\n app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image
","text":"Build a docker image for development.
Source code inprefect/cli/dev.py
@dev_app.command()\ndef build_image(\n arch: str = typer.Option(\n None,\n help=(\n \"The architecture to build the container for. \"\n \"Defaults to the architecture of the host Python. \"\n f\"[default: {platform.machine()}]\"\n ),\n ),\n python_version: str = typer.Option(\n None,\n help=(\n \"The Python version to build the container for. \"\n \"Defaults to the version of the host Python. \"\n f\"[default: {python_version_minor()}]\"\n ),\n ),\n flavor: str = typer.Option(\n None,\n help=(\n \"An alternative flavor to build, for example 'conda'. \"\n \"Defaults to the standard Python base image\"\n ),\n ),\n dry_run: bool = False,\n):\n \"\"\"\n Build a docker image for development.\n \"\"\"\n exit_with_error_if_not_editable_install()\n # TODO: Once https://github.com/tiangolo/typer/issues/354 is addressed, the\n # default can be set in the function signature\n arch = arch or platform.machine()\n python_version = python_version or python_version_minor()\n\n tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n # Here we use a subprocess instead of the docker-py client to easily stream output\n # as it comes\n command = [\n \"docker\",\n \"build\",\n str(prefect.__development_base_path__),\n \"--tag\",\n tag,\n \"--platform\",\n f\"linux/{arch}\",\n \"--build-arg\",\n \"PREFECT_EXTRAS=[dev]\",\n \"--build-arg\",\n f\"PYTHON_VERSION={python_version}\",\n ]\n\n if flavor:\n command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n if dry_run:\n print(\" \".join(command))\n return\n\n try:\n subprocess.check_call(command, shell=sys.platform == \"win32\")\n except subprocess.CalledProcessError:\n exit_with_error(\"Failed to build image!\")\n else:\n exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
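An illustrative dry run (flag names assume Typer's default conversion of the parameters above): $ prefect dev build-image --python-version 3.10 --flavor conda --dry-run, which prints the docker build command instead of executing it.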
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container
","text":"Run a docker container with local code mounted and installed.
Source code inprefect/cli/dev.py
@dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n \"\"\"\n Run a docker container with local code mounted and installed.\n \"\"\"\n exit_with_error_if_not_editable_install()\n import docker\n from docker.models.containers import Container\n\n client = docker.from_env()\n\n containers = client.containers.list()\n container_names = {container.name for container in containers}\n if name in container_names:\n exit_with_error(\n f\"Container {name!r} already exists. Specify a different name or stop \"\n \"the existing container.\"\n )\n\n blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n tag = tag or get_prefect_image_name()\n\n container: Container = client.containers.create(\n image=tag,\n command=[\n \"/bin/bash\",\n \"-c\",\n ( # noqa\n \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n f\" {blocking_cmd}\"\n ),\n ],\n name=name,\n auto_remove=True,\n working_dir=\"/opt/prefect/repo\",\n volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n shm_size=\"4G\",\n )\n\n print(f\"Starting container for image {tag!r}...\")\n container.start()\n\n print(\"Waiting for installation to complete\", end=\"\", flush=True)\n try:\n ready = False\n while not ready:\n print(\".\", end=\"\", flush=True)\n result = container.exec_run(\"test -f /READY\")\n ready = result.exit_code == 0\n if not ready:\n time.sleep(3)\n except BaseException:\n print(\"\\nInterrupted. Stopping container...\")\n container.stop()\n raise\n\n print(\n textwrap.dedent(\n f\"\"\"\n Container {container.name!r} is ready! To connect to the container, run:\n\n docker exec -it {container.name} /bin/bash\n \"\"\"\n )\n )\n\n if bg:\n print(\n textwrap.dedent(\n f\"\"\"\n The container will run forever. Stop the container with:\n\n docker stop {container.name}\n \"\"\"\n )\n )\n # Exit without stopping\n return\n\n try:\n print(\"Send a keyboard interrupt to exit...\")\n container.wait()\n except KeyboardInterrupt:\n pass # Avoid showing \"Abort\"\n finally:\n print(\"\\nStopping container...\")\n container.stop()\n
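For example, keeping the container running in the background under an illustrative name (flag names assume Typer's default conversion): $ prefect dev container --bg --name my-prefect-dev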
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest
","text":"Generates a Kubernetes manifest for development.
Example: $ prefect dev kubernetes-manifest | kubectl apply -f -
Source code inprefect/cli/dev.py
@dev_app.command()\ndef kubernetes_manifest():\n \"\"\"\n Generates a Kubernetes manifest for development.\n\n Example:\n $ prefect dev kubernetes-manifest | kubectl apply -f -\n \"\"\"\n exit_with_error_if_not_editable_install()\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"prefect_root_directory\": prefect.__development_base_path__,\n \"image_name\": get_prefect_image_name(),\n }\n )\n print(manifest)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start
async
","text":"Starts a hot-reloading development server with API, UI, and agent processes.
Each service has an individual command if you wish to start them separately. Each service can be excluded here as well.
Source code inprefect/cli/dev.py
@dev_app.command()\nasync def start(\n exclude_api: bool = typer.Option(False, \"--no-api\"),\n exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n work_queues: List[str] = typer.Option(\n [\"default\"],\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the dev agent to pull from.\",\n ),\n):\n \"\"\"\n Starts a hot-reloading development server with API, UI, and agent processes.\n\n Each service has an individual command if you wish to start them separately.\n Each service can be excluded here as well.\n \"\"\"\n async with anyio.create_task_group() as tg:\n if not exclude_api:\n tg.start_soon(\n partial(\n api,\n host=PREFECT_SERVER_API_HOST.value(),\n port=PREFECT_SERVER_API_PORT.value(),\n )\n )\n if not exclude_ui:\n tg.start_soon(ui)\n if not exclude_agent:\n # Hook the agent to the hosted API if running\n if not exclude_api:\n host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\" # noqa\n else:\n host = PREFECT_API_URL.value()\n tg.start_soon(agent, host, work_queues)\n
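For example, to start only the API and an agent on a custom queue (queue name is illustrative): $ prefect dev start --no-ui -q my-queue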
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui
async
","text":"Starts a hot-reloading development UI.
Source code inprefect/cli/dev.py
@dev_app.command()\nasync def ui():\n \"\"\"\n Starts a hot-reloading development UI.\n \"\"\"\n exit_with_error_if_not_editable_install()\n with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n app.console.print(\"Installing npm packages...\")\n await run_process([\"npm\", \"install\"], stream_output=True)\n\n app.console.print(\"Starting UI development server...\")\n await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/flow/","title":"flow","text":"","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow","title":"prefect.cli.flow
","text":"Command line interface for working with flows.
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.ls","title":"ls
async
","text":"View flows.
Source code inprefect/cli/flow.py
@flow_app.command()\nasync def ls(\n limit: int = 15,\n):\n \"\"\"\n View flows.\n \"\"\"\n async with get_client() as client:\n flows = await client.read_flows(\n limit=limit,\n sort=FlowSort.CREATED_DESC,\n )\n\n table = Table(title=\"Flows\")\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Created\", no_wrap=True)\n\n for flow in flows:\n table.add_row(\n str(flow.id),\n str(flow.name),\n str(flow.created),\n )\n\n app.console.print(table)\n
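For example, assuming Typer's default flag naming: $ prefect flow ls --limit 5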
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.serve","title":"serve
async
","text":"Serve a flow via an entrypoint.
Source code inprefect/cli/flow.py
@flow_app.command()\nasync def serve(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a file containing a flow and the name of the flow function in\"\n \" the format `./path/to/file.py:flow_func_name`.\"\n ),\n ),\n name: str = typer.Option(\n ...,\n \"--name\",\n \"-n\",\n help=\"The name to give the deployment created for the flow.\",\n ),\n description: Optional[str] = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the created deployment. If not provided, the\"\n \" description will be populated from the flow's description.\"\n ),\n ),\n version: Optional[str] = typer.Option(\n None, \"-v\", \"--version\", help=\"A version to give the created deployment.\"\n ),\n tags: Optional[List[str]] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=\"One or more optional tags to apply to the created deployment.\",\n ),\n cron: Optional[str] = typer.Option(\n None,\n \"--cron\",\n help=(\n \"A cron string that will be used to set a schedule for the created\"\n \" deployment.\"\n ),\n ),\n interval: Optional[int] = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) between scheduled runs of\"\n \" the flow.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The start date for an interval schedule.\"\n ),\n rrule: Optional[str] = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set a schedule for the created deployment.\",\n ),\n timezone: Optional[str] = typer.Option(\n None,\n \"--timezone\",\n help=\"Timezone to used scheduling flow runs e.g. 'America/New_York'\",\n ),\n pause_on_shutdown: bool = typer.Option(\n True,\n help=(\n \"If set, provided schedule will be paused when the serve command is\"\n \" stopped. If not set, the schedules will continue running.\"\n ),\n ),\n):\n \"\"\"\n Serve a flow via an entrypoint.\n \"\"\"\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown)\n try:\n schedules = []\n if interval or cron or rrule:\n schedule = construct_schedule(\n interval=interval,\n cron=cron,\n rrule=rrule,\n timezone=timezone,\n anchor_date=interval_anchor,\n )\n schedules = [MinimalDeploymentSchedule(schedule=schedule, active=True)]\n\n runner_deployment = RunnerDeployment.from_entrypoint(\n entrypoint=entrypoint,\n name=name,\n schedules=schedules,\n description=description,\n tags=tags or [],\n version=version,\n )\n except (MissingFlowError, ValueError) as exc:\n exit_with_error(str(exc))\n deployment_id = await runner.add_deployment(runner_deployment)\n\n help_message = (\n f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{runner_deployment.flow_name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n app.console.print(help_message, soft_wrap=True)\n await runner.start()\n
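An illustrative invocation (the entrypoint path and names are hypothetical): $ prefect flow serve ./flows/hello.py:hello --name my-served-deployment --cron '0 9 * * *' --timezone 'America/New_York'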
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow_run/","title":"flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run
","text":"Command line interface for working with flow runs
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel
async
","text":"Cancel a flow run by ID.
Source code inprefect/cli/flow_run.py
@flow_run_app.command()\nasync def cancel(id: UUID):\n \"\"\"Cancel a flow run by ID.\"\"\"\n async with get_client() as client:\n cancelling_state = State(type=StateType.CANCELLING)\n try:\n result = await client.set_flow_run_state(\n flow_run_id=id, state=cancelling_state\n )\n except ObjectNotFound:\n exit_with_error(f\"Flow run '{id}' not found!\")\n\n if result.status == SetStateStatus.ABORT:\n exit_with_error(\n f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n f\" '{result.details.reason}'\"\n )\n\n exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete
async
","text":"Delete a flow run by ID.
Source code inprefect/cli/flow_run.py
@flow_run_app.command()\nasync def delete(id: UUID):\n \"\"\"\n Delete a flow run by ID.\n \"\"\"\n async with get_client() as client:\n try:\n await client.delete_flow_run(id)\n except ObjectNotFound:\n exit_with_error(f\"Flow run '{id}' not found!\")\n\n exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect
async
","text":"View details about a flow run.
Source code inprefect/cli/flow_run.py
@flow_run_app.command()\nasync def inspect(id: UUID):\n \"\"\"\n View details about a flow run.\n \"\"\"\n async with get_client() as client:\n try:\n flow_run = await client.read_flow_run(id)\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n exit_with_error(f\"Flow run {id!r} not found!\")\n else:\n raise\n\n app.console.print(Pretty(flow_run))\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs
async
","text":"View logs for a flow run.
Source code inprefect/cli/flow_run.py
@flow_run_app.command()\nasync def logs(\n id: UUID,\n head: bool = typer.Option(\n False,\n \"--head\",\n \"-h\",\n help=(\n f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n \" all logs.\"\n ),\n ),\n num_logs: int = typer.Option(\n None,\n \"--num-logs\",\n \"-n\",\n help=(\n \"Number of logs to show when using the --head or --tail flag. If None,\"\n f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n ),\n min=1,\n ),\n reverse: bool = typer.Option(\n False,\n \"--reverse\",\n \"-r\",\n help=\"Reverse the logs order to print the most recent logs first\",\n ),\n tail: bool = typer.Option(\n False,\n \"--tail\",\n \"-t\",\n help=(\n f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n \" all logs.\"\n ),\n ),\n):\n \"\"\"\n View logs for a flow run.\n \"\"\"\n # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n offset = 0\n more_logs = True\n num_logs_returned = 0\n\n # if head and tail flags are being used together\n if head and tail:\n exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n user_specified_num_logs = (\n num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n if head or tail or num_logs\n else None\n )\n\n # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n if tail:\n offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n async with get_client() as client:\n # Get the flow run\n try:\n flow_run = await client.read_flow_run(id)\n except ObjectNotFound:\n exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n while more_logs:\n num_logs_to_return_from_page = (\n LOGS_DEFAULT_PAGE_SIZE\n if user_specified_num_logs is None\n else min(\n LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n )\n )\n\n # Get the next page of logs\n page_logs = await client.read_logs(\n log_filter=log_filter,\n limit=num_logs_to_return_from_page,\n offset=offset,\n sort=(\n LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n ),\n )\n\n for log in reversed(page_logs) if tail and not reverse else page_logs:\n app.console.print(\n # Print following the flow run format (declared in logging.yml)\n (\n f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n f\" {logging.getLevelName(log.level):7s} | Flow run\"\n f\" {flow_run.name!r} - {log.message}\"\n ),\n soft_wrap=True,\n )\n\n # Update the number of logs retrieved\n num_logs_returned += num_logs_to_return_from_page\n\n if tail:\n # If the current offset is not 0, update the offset for the next page\n if offset != 0:\n offset = (\n 0\n # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n if offset < LOGS_DEFAULT_PAGE_SIZE\n else offset - LOGS_DEFAULT_PAGE_SIZE\n )\n else:\n more_logs = False\n else:\n if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n offset += LOGS_DEFAULT_PAGE_SIZE\n else:\n # No more logs to show, exit\n more_logs = False\n
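For example, showing the 50 most recent log lines for a run (the ID is a placeholder): $ prefect flow-run logs <flow-run-id> --tail --num-logs 50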
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls
async
","text":"View recent flow runs or flow runs for specific flows
Source code inprefect/cli/flow_run.py
@flow_run_app.command()\nasync def ls(\n flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n state_type: List[StateType] = typer.Option(\n None, help=\"Type of the flow run's state\"\n ),\n):\n \"\"\"\n View recent flow runs or flow runs for specific flows\n \"\"\"\n\n state_filter = {}\n if state:\n state_filter[\"name\"] = {\"any_\": state}\n if state_type:\n state_filter[\"type\"] = {\"any_\": state_type}\n\n async with get_client() as client:\n flow_runs = await client.read_flow_runs(\n flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n limit=limit,\n sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n )\n flows_by_id = {\n flow.id: flow\n for flow in await client.read_flows(\n flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n )\n }\n\n table = Table(title=\"Flow Runs\")\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"State\", no_wrap=True)\n table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n flow = flows_by_id[flow_run.flow_id]\n timestamp = (\n flow_run.state.state_details.scheduled_time\n if flow_run.state.is_scheduled()\n else flow_run.state.timestamp\n )\n table.add_row(\n str(flow_run.id),\n str(flow.name),\n str(flow_run.name),\n str(flow_run.state.type.value),\n pendulum.instance(timestamp).diff_for_humans(),\n )\n\n app.console.print(table)\n
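For example, assuming Typer's default flag naming for the options above: $ prefect flow-run ls --limit 10 --state-type FAILED --state-type CRASHED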
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/kubernetes/","title":"kubernetes","text":"","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes","title":"prefect.cli.kubernetes
","text":"Command line interface for working with Prefect on Kubernetes
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_agent","title":"manifest_agent
","text":"Generates a manifest for deploying Agent on Kubernetes.
Example: $ prefect kubernetes manifest agent | kubectl apply -f -
Source code inprefect/cli/kubernetes.py
@manifest_app.command(\"agent\")\ndef manifest_agent(\n api_url: str = SettingsOption(PREFECT_API_URL),\n api_key: str = SettingsOption(PREFECT_API_KEY),\n image_tag: str = typer.Option(\n get_prefect_image_name(),\n \"-i\",\n \"--image-tag\",\n help=\"The tag of a Docker image to use for the Agent.\",\n ),\n namespace: str = typer.Option(\n \"default\",\n \"-n\",\n \"--namespace\",\n help=\"A Kubernetes namespace to create agent in.\",\n ),\n work_queue: str = typer.Option(\n \"kubernetes\",\n \"-q\",\n \"--work-queue\",\n help=\"A work queue name for the agent to pull from.\",\n ),\n):\n \"\"\"\n Generates a manifest for deploying Agent on Kubernetes.\n\n Example:\n $ prefect kubernetes manifest agent | kubectl apply -f -\n \"\"\"\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-agent.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"api_url\": api_url,\n \"api_key\": api_key,\n \"image_name\": image_tag,\n \"namespace\": namespace,\n \"work_queue\": work_queue,\n }\n )\n print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_flow_run_job","title":"manifest_flow_run_job
async
","text":"Prints the default KubernetesJob Job manifest.
Use this file to fully customize your KubernetesJob
deployments.
Example: $ prefect kubernetes manifest flow-run-job
Output, a YAML file: apiVersion: batch/v1 kind: Job ...
Source code inprefect/cli/kubernetes.py
@manifest_app.command(\"flow-run-job\")\nasync def manifest_flow_run_job():\n \"\"\"\n Prints the default KubernetesJob Job manifest.\n\n Use this file to fully customize your `KubernetesJob` deployments.\n\n \\b\n Example:\n \\b\n $ prefect kubernetes manifest flow-run-job\n\n \\b\n Output, a YAML file:\n \\b\n apiVersion: batch/v1\n kind: Job\n ...\n \"\"\"\n\n KubernetesJob.base_job_manifest()\n\n output = yaml.dump(KubernetesJob.base_job_manifest())\n\n # add some commentary where appropriate\n output = output.replace(\n \"metadata:\\n labels:\",\n \"metadata:\\n # labels are required, even if empty\\n labels:\",\n )\n output = output.replace(\n \"containers:\\n\",\n \"containers: # the first container is required\\n\",\n )\n output = output.replace(\n \"env: []\\n\",\n \"env: [] # env is required, even if empty\\n\",\n )\n\n print(output)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_server","title":"manifest_server
","text":"Generates a manifest for deploying Prefect on Kubernetes.
Example: $ prefect kubernetes manifest server | kubectl apply -f -
Source code inprefect/cli/kubernetes.py
@manifest_app.command(\"server\")\ndef manifest_server(\n image_tag: str = typer.Option(\n get_prefect_image_name(),\n \"-i\",\n \"--image-tag\",\n help=\"The tag of a Docker image to use for the server.\",\n ),\n namespace: str = typer.Option(\n \"default\",\n \"-n\",\n \"--namespace\",\n help=\"A Kubernetes namespace to create the server in.\",\n ),\n log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n):\n \"\"\"\n Generates a manifest for deploying Prefect on Kubernetes.\n\n Example:\n $ prefect kubernetes manifest server | kubectl apply -f -\n \"\"\"\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-server.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"image_name\": image_tag,\n \"namespace\": namespace,\n \"log_level\": log_level,\n }\n )\n print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/profile/","title":"profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile
","text":"Command line interface for working with profiles.
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create
","text":"Create a new profile.
Source code inprefect/cli/profile.py
@profile_app.command()\ndef create(\n name: str,\n from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n \"\"\"\n Create a new profile.\n \"\"\"\n\n profiles = prefect.settings.load_profiles()\n if name in profiles:\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n [red]Profile {name!r} already exists.[/red]\n To create a new profile, remove the existing profile first:\n\n prefect profile delete {name!r}\n \"\"\"\n ).strip()\n )\n raise typer.Exit(1)\n\n if from_name:\n if from_name not in profiles:\n exit_with_error(f\"Profile {from_name!r} not found.\")\n\n # Create a copy of the profile with a new name and add to the collection\n profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n else:\n profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n prefect.settings.save_profiles(profiles)\n\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n Created profile with properties:\n name - {name!r}\n from name - {from_name or None}\n\n Use created profile for future, subsequent commands:\n prefect profile use {name!r}\n\n Use created profile temporarily for a single command:\n prefect -p {name!r} config view\n \"\"\"\n )\n )\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete
","text":"Delete the given profile.
Source code inprefect/cli/profile.py
@profile_app.command()\ndef delete(name: str):\n \"\"\"\n Delete the given profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n current_profile = prefect.context.get_settings_context().profile\n if current_profile.name == name:\n exit_with_error(\n f\"Profile {name!r} is the active profile. You must switch profiles before\"\n \" it can be deleted.\"\n )\n\n profiles.remove_profile(name)\n\n verb = \"Removed\"\n if name == \"default\":\n verb = \"Reset\"\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"{verb} profile {name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect
","text":"Display settings from a given profile; defaults to active.
Source code inprefect/cli/profile.py
@profile_app.command()\ndef inspect(\n name: Optional[str] = typer.Argument(\n None, help=\"Name of profile to inspect; defaults to active profile.\"\n ),\n):\n \"\"\"\n Display settings from a given profile; defaults to active.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name is None:\n current_profile = prefect.context.get_settings_context().profile\n if not current_profile:\n exit_with_error(\"No active profile set - please provide a name to inspect.\")\n name = current_profile.name\n print(f\"No name provided, defaulting to {name!r}\")\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n if not profiles[name].settings:\n # TODO: Consider instructing on how to add settings.\n print(f\"Profile {name!r} is empty.\")\n\n for setting, value in profiles[name].settings.items():\n app.console.print(f\"{setting.name}='{value}'\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls
","text":"List profile names.
Source code inprefect/cli/profile.py
@profile_app.command()\ndef ls():\n \"\"\"\n List profile names.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n current_profile = prefect.context.get_settings_context().profile\n current_name = current_profile.name if current_profile is not None else None\n\n table = Table(caption=\"* active profile\")\n table.add_column(\n \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n )\n\n for name in profiles:\n if name == current_name:\n table.add_row(f\"[green] * {name}[/green]\")\n else:\n table.add_row(f\" {name}\")\n app.console.print(table)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename
","text":"Change the name of a profile.
Source code inprefect/cli/profile.py
@profile_app.command()\ndef rename(name: str, new_name: str):\n \"\"\"\n Change the name of a profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n if new_name in profiles:\n exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n profiles.remove_profile(name)\n\n # If the active profile was renamed switch the active profile to the new name.\n prefect.context.get_settings_context().profile\n if profiles.active_name == name:\n profiles.set_active(new_name)\n if os.environ.get(\"PREFECT_PROFILE\") == name:\n app.console.print(\n f\"You have set your current profile to {name!r} with the \"\n \"PREFECT_PROFILE environment variable. You must update this variable to \"\n f\"{new_name!r} to continue using the profile.\"\n )\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use
async
","text":"Set the given profile to active.
Source code in prefect/cli/profile.py
@profile_app.command()\nasync def use(name: str):\n \"\"\"\n Set the given profile to active.\n \"\"\"\n status_messages = {\n ConnectionStatus.CLOUD_CONNECTED: (\n exit_with_success,\n f\"Connected to Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.CLOUD_ERROR: (\n exit_with_error,\n f\"Error connecting to Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.CLOUD_UNAUTHORIZED: (\n exit_with_error,\n f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.ORION_CONNECTED: (\n exit_with_success,\n f\"Connected to Prefect server using profile {name!r}\",\n ),\n ConnectionStatus.ORION_ERROR: (\n exit_with_error,\n f\"Error connecting to Prefect server using profile {name!r}\",\n ),\n ConnectionStatus.EPHEMERAL: (\n exit_with_success,\n (\n f\"No Prefect server specified using profile {name!r} - the API will run\"\n \" in ephemeral mode.\"\n ),\n ),\n ConnectionStatus.INVALID_API: (\n exit_with_error,\n \"Error connecting to Prefect API URL\",\n ),\n }\n\n profiles = prefect.settings.load_profiles()\n if name not in profiles.names:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n profiles.set_active(name)\n prefect.settings.save_profiles(profiles)\n\n with Progress(\n SpinnerColumn(),\n TextColumn(\"[progress.description]{task.description}\"),\n transient=False,\n ) as progress:\n progress.add_task(\n description=\"Checking API connectivity...\",\n total=None,\n )\n\n with use_profile(name, include_current_context=False):\n connection_status = await check_orion_connection()\n\n exit_method, msg = status_messages[connection_status]\n\n exit_method(msg)\n
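For example, switching to a profile and checking API connectivity (profile name illustrative):
$ prefect profile use dev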
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/project/","title":"project","text":"","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project","title":"prefect.cli.project
","text":"Deprecated - Command line interface for working with projects.
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.clone","title":"clone
async
","text":"Clone an existing project for a given deployment.
Source code in prefect/cli/project.py
@project_app.command()\nasync def clone(\n deployment_name: str = typer.Option(\n None,\n \"--deployment\",\n \"-d\",\n help=\"The name of the deployment to clone a project for.\",\n ),\n deployment_id: str = typer.Option(\n None,\n \"--id\",\n \"-i\",\n help=\"The id of the deployment to clone a project for.\",\n ),\n):\n \"\"\"\n Clone an existing project for a given deployment.\n \"\"\"\n app.console.print(\n generate_deprecation_message(\n \"The `prefect project clone` command\",\n start_date=\"Jun 2023\",\n )\n )\n if deployment_name and deployment_id:\n exit_with_error(\n \"Can only pass one of deployment name or deployment ID options.\"\n )\n\n if not deployment_name and not deployment_id:\n exit_with_error(\"Must pass either a deployment name or deployment ID.\")\n\n if deployment_name:\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n else:\n async with get_client() as client:\n try:\n deployment = await client.read_deployment(deployment_id)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n\n if deployment.pull_steps:\n output = await run_steps(deployment.pull_steps)\n app.console.out(output[\"directory\"])\n else:\n exit_with_error(\"No pull steps found, exiting early.\")\n
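For example, cloning by deployment name with this deprecated command (the deployment name is illustrative):
$ prefect project clone --deployment 'my-flow/my-deployment'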
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.init","title":"init
async
","text":"Initialize a new project.
Source code in prefect/cli/project.py
@project_app.command()\n@app.command()\nasync def init(\n name: str = None,\n recipe: str = None,\n fields: List[str] = typer.Option(\n None,\n \"-f\",\n \"--field\",\n help=(\n \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n \" of key=value.\"\n ),\n ),\n):\n \"\"\"\n Initialize a new project.\n \"\"\"\n inputs = {}\n fields = fields or []\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n for field in fields:\n key, value = field.split(\"=\")\n inputs[key] = value\n\n if not recipe and is_interactive():\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n recipes = []\n\n for r in recipe_paths.iterdir():\n if r.is_dir() and (r / \"prefect.yaml\").exists():\n with open(r / \"prefect.yaml\") as f:\n recipe_data = yaml.safe_load(f)\n recipe_name = r.name\n recipe_description = recipe_data.get(\n \"description\", \"(no description available)\"\n )\n recipe_dict = {\n \"name\": recipe_name,\n \"description\": recipe_description,\n }\n recipes.append(recipe_dict)\n\n selected_recipe = prompt_select_from_table(\n app.console,\n \"Would you like to initialize your deployment configuration with a recipe?\",\n columns=[\n {\"header\": \"Name\", \"key\": \"name\"},\n {\"header\": \"Description\", \"key\": \"description\"},\n ],\n data=recipes,\n opt_out_message=\"No, I'll use the default deployment configuration.\",\n opt_out_response={},\n )\n if selected_recipe != {}:\n recipe = selected_recipe[\"name\"]\n\n if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n if recipe_inputs:\n if set(recipe_inputs.keys()) < set(inputs.keys()):\n # message to user about extra fields\n app.console.print(\n (\n f\"Warning: extra fields provided for {recipe!r} recipe:\"\n f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n ),\n style=\"red\",\n )\n elif set(recipe_inputs.keys()) > set(inputs.keys()):\n table = Table(\n title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n )\n table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n table.add_column(\n \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n )\n for field, description in recipe_inputs.items():\n if field not in inputs:\n table.add_row(field, description)\n\n app.console.print(table)\n\n for key, description in recipe_inputs.items():\n if key not in inputs:\n inputs[key] = typer.prompt(key)\n\n app.console.print(\"-\" * 15)\n\n try:\n files = [\n f\"[green]{fname}[/green]\"\n for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n ]\n except ValueError as exc:\n if \"Unknown recipe\" in str(exc):\n exit_with_error(\n f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n \"`[/yellow] to see all available recipes.\"\n )\n else:\n raise\n\n files = \"\\n\".join(files)\n empty_msg = (\n f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n \" created.\"\n )\n file_msg = (\n f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n f\" new files:\\n{files}\"\n )\n app.console.print(file_msg if files else empty_msg)\n
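For example, initializing with a recipe and supplying one of its required fields (recipe and field values illustrative):
$ prefect init --recipe docker --field image_name=my-image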
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.ls","title":"ls
async
","text":"List available recipes.
Source code in prefect/cli/project.py
@recipe_app.command()\nasync def ls():\n \"\"\"\n List available recipes.\n \"\"\"\n\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n recipes = {}\n\n for recipe in recipe_paths.iterdir():\n if recipe.is_dir() and (recipe / \"prefect.yaml\").exists():\n with open(recipe / \"prefect.yaml\") as f:\n recipes[recipe.name] = yaml.safe_load(f).get(\n \"description\", \"(no description available)\"\n )\n\n table = Table(\n title=\"Available project recipes\",\n caption=(\n \"Run `prefect project init --recipe <recipe>` to initialize a project with\"\n \" a recipe.\"\n ),\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Description\", justify=\"left\", style=\"white\", no_wrap=False)\n for name, description in sorted(recipes.items(), key=lambda x: x[0]):\n table.add_row(name, description)\n\n app.console.print(table)\n
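For example (assuming the recipe commands are mounted under prefect project recipe):
$ prefect project recipe ls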
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.register_flow","title":"register_flow
async
","text":"Register a flow with this project.
Source code in prefect/cli/project.py
@project_app.command()\nasync def register_flow(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a flow entrypoint, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n force: bool = typer.Option(\n False,\n \"--force\",\n \"-f\",\n help=(\n \"An optional flag to force register this flow and overwrite any existing\"\n \" entry\"\n ),\n ),\n):\n \"\"\"\n Register a flow with this project.\n \"\"\"\n try:\n flow = await register(entrypoint, force=force)\n except Exception as exc:\n exit_with_error(exc)\n\n app.console.print(\n (\n f\"Registered flow {flow.name!r} in\"\n f\" {(find_prefect_directory()/'flows.json').resolve()!s}\"\n ),\n style=\"green\",\n )\n
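For example, using an entrypoint in the ./path/to/file.py:flow_func_name form (the path and flow name are illustrative):
$ prefect project register-flow ./flows/my_flow.py:my_flow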
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/root/","title":"root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root
","text":"Base prefect
command-line application
version
async
","text":"Get the current Prefect version.
Source code in prefect/cli/root.py
@app.command()\nasync def version():\n \"\"\"Get the current Prefect version.\"\"\"\n import sqlite3\n\n from prefect.server.utilities.database import get_dialect\n from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n version_info = {\n \"Version\": prefect.__version__,\n \"API version\": SERVER_API_VERSION,\n \"Python version\": platform.python_version(),\n \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n \"Built\": pendulum.parse(\n prefect.__version_info__[\"date\"]\n ).to_day_datetime_string(),\n \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n \"Profile\": prefect.context.get_settings_context().profile.name,\n }\n\n server_type: str\n\n try:\n # We do not context manage the client because when using an ephemeral app we do not\n # want to create the database or run migrations\n client = prefect.get_client()\n server_type = client.server_type.value\n except Exception:\n server_type = \"<client error>\"\n\n version_info[\"Server type\"] = server_type.lower()\n\n # TODO: Consider adding an API route to retrieve this information?\n if server_type == ServerType.EPHEMERAL.value:\n database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n version_info[\"Server\"] = {\"Database\": database}\n if database == \"sqlite\":\n version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n def display(object: dict, nesting: int = 0):\n # Recursive display of a dictionary with nesting\n for key, value in object.items():\n key += \":\"\n if isinstance(value, dict):\n app.console.print(key)\n return display(value, nesting + 2)\n prefix = \" \" * nesting\n app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n display(version_info)\n
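For example:
$ prefect version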
","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server
","text":"Command line interface for working with Prefect
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade
async
","text":"Downgrade the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def downgrade(\n yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n revision: str = typer.Option(\n \"-1\",\n \"-r\",\n help=(\n \"The revision to pass to `alembic downgrade`. If not provided, \"\n \"downgrades to the most recent revision. Use 'base' to run all \"\n \"migrations.\"\n ),\n ),\n dry_run: bool = typer.Option(\n False,\n help=(\n \"Flag to show what migrations would be made without applying them. Will\"\n \" emit sql statements to stdout.\"\n ),\n ),\n):\n \"\"\"Downgrade the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_downgrade\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n\n engine = await db.engine()\n\n if not yes:\n confirm = typer.confirm(\n \"Are you sure you want to downgrade the Prefect \"\n f\"database at {engine.url!r}?\"\n )\n if not confirm:\n exit_with_error(\"Database downgrade aborted!\")\n\n app.console.print(\"Running downgrade migrations ...\")\n await run_sync_in_worker_thread(\n alembic_downgrade, revision=revision, dry_run=dry_run\n )\n app.console.print(\"Migrations succeeded!\")\n exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
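For example, rolling back all migrations without a confirmation prompt (assuming the database commands are mounted under prefect server database):
$ prefect server database downgrade -y -r base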
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset
async
","text":"Drop and recreate all Prefect database tables
Source code in prefect/cli/server.py
@database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n \"\"\"Drop and recreate all Prefect database tables\"\"\"\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n engine = await db.engine()\n if not yes:\n confirm = typer.confirm(\n \"Are you sure you want to reset the Prefect database located \"\n f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n )\n if not confirm:\n exit_with_error(\"Database reset aborted\")\n app.console.print(\"Downgrading database...\")\n await db.drop_db()\n app.console.print(\"Upgrading database...\")\n await db.create_db()\n exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
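For example, skipping the confirmation prompt:
$ prefect server database reset -y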
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision
async
","text":"Create a new migration for the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def revision(\n message: str = typer.Option(\n None,\n \"--message\",\n \"-m\",\n help=\"A message to describe the migration.\",\n ),\n autogenerate: bool = False,\n):\n \"\"\"Create a new migration for the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_revision\n\n app.console.print(\"Running migration file creation ...\")\n await run_sync_in_worker_thread(\n alembic_revision,\n message=message,\n autogenerate=autogenerate,\n )\n exit_with_success(\"Creating new migration file succeeded!\")\n
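For example (the migration message is illustrative):
$ prefect server database revision --autogenerate -m 'my message'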
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp
async
","text":"Stamp the revision table with the given revision; don't run any migrations
Source code in prefect/cli/server.py
@database_app.command()\nasync def stamp(revision: str):\n \"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n from prefect.server.database.alembic_commands import alembic_stamp\n\n app.console.print(\"Stamping database with revision ...\")\n await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n exit_with_success(\"Stamping database with revision succeeded!\")\n
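For example, marking the database as up to date without running migrations (the revision identifier is illustrative; any Alembic revision target works):
$ prefect server database stamp head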
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start
async
","text":"Start a Prefect server instance
Source code in prefect/cli/server.py
@server_app.command()\nasync def start(\n host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n analytics: bool = SettingsOption(\n PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n ),\n late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n \"\"\"Start a Prefect server instance\"\"\"\n\n server_env = os.environ.copy()\n server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n base_url = f\"http://{host}:{port}\"\n\n async with anyio.create_task_group() as tg:\n app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n app.console.print(\"\\n\")\n\n server_process_id = await tg.start(\n partial(\n run_process,\n command=[\n get_sys_executable(),\n \"-m\",\n \"uvicorn\",\n \"--app-dir\",\n # quote wrapping needed for windows paths with spaces\n f'\"{prefect.__module_path__.parent}\"',\n \"--factory\",\n \"prefect.server.api.server:create_app\",\n \"--host\",\n str(host),\n \"--port\",\n str(port),\n \"--timeout-keep-alive\",\n str(keep_alive_timeout),\n ],\n env=server_env,\n stream_output=True,\n )\n )\n\n # Explicitly handle the interrupt signal here, as it will allow us to\n # cleanly stop the uvicorn server. Failing to do that may cause a\n # large amount of anyio error traces on the terminal, because the\n # SIGINT is handled by Typer/Click in this process (the parent process)\n # and will start shutting down subprocesses:\n # https://github.com/PrefectHQ/server/issues/2475\n\n setup_signal_handlers_server(\n server_process_id, \"the Prefect server\", app.console.print\n )\n\n app.console.print(\"Server stopped!\")\n
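For example, binding to an explicit host and port (values illustrative):
$ prefect server start --host 127.0.0.1 --port 4200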
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade
async
","text":"Upgrade the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def upgrade(\n yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n revision: str = typer.Option(\n \"head\",\n \"-r\",\n help=(\n \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n \" migrations.\"\n ),\n ),\n dry_run: bool = typer.Option(\n False,\n help=(\n \"Flag to show what migrations would be made without applying them. Will\"\n \" emit sql statements to stdout.\"\n ),\n ),\n):\n \"\"\"Upgrade the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_upgrade\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n engine = await db.engine()\n\n if not yes:\n confirm = typer.confirm(\n f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n )\n if not confirm:\n exit_with_error(\"Database upgrade aborted!\")\n\n app.console.print(\"Running upgrade migrations ...\")\n await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n app.console.print(\"Migrations succeeded!\")\n exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
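For example, running all migrations without a confirmation prompt:
$ prefect server database upgrade -y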
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/variable/","title":"variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable","title":"prefect.cli.variable
","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.delete","title":"delete
async
","text":"Delete a variable.
Parameters:
Name Type Description Default
name
str
the name of the variable to delete
required
Source code in prefect/cli/variable.py
@variable_app.command(\"delete\")\nasync def delete(\n name: str,\n):\n \"\"\"\n Delete a variable.\n\n Arguments:\n name: the name of the variable to delete\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.delete_variable_by_name(\n name=name,\n )\n except ObjectNotFound:\n exit_with_error(f\"Variable {name!r} not found.\")\n\n exit_with_success(f\"Deleted variable {name!r}.\")\n
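For example (variable name illustrative):
$ prefect variable delete my_var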
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.inspect","title":"inspect
async
","text":"View details about a variable.
Parameters:
Name Type Description Default
name
str
the name of the variable to inspect
required
Source code in prefect/cli/variable.py
@variable_app.command(\"inspect\")\nasync def inspect(\n name: str,\n):\n \"\"\"\n View details about a variable.\n\n Arguments:\n name: the name of the variable to inspect\n \"\"\"\n\n async with get_client() as client:\n variable = await client.read_variable_by_name(\n name=name,\n )\n if not variable:\n exit_with_error(f\"Variable {name!r} not found.\")\n\n app.console.print(Pretty(variable))\n
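For example (variable name illustrative):
$ prefect variable inspect my_var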
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.list_variables","title":"list_variables
async
","text":"List variables.
Source code in prefect/cli/variable.py
@variable_app.command(\"ls\")\nasync def list_variables(\n limit: int = typer.Option(\n 100,\n \"--limit\",\n help=\"The maximum number of variables to return.\",\n ),\n):\n \"\"\"\n List variables.\n \"\"\"\n async with get_client() as client:\n variables = await client.read_variables(\n limit=limit,\n )\n\n table = Table(\n title=\"Variables\",\n caption=\"List Variables using `prefect variable ls`\",\n show_header=True,\n )\n\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n # values can be up 5000 characters so truncate early\n table.add_column(\"Value\", style=\"blue\", no_wrap=True, max_width=50)\n table.add_column(\"Created\", style=\"blue\", no_wrap=True)\n table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n for variable in sorted(variables, key=lambda x: f\"{x.name}\"):\n table.add_row(\n variable.name,\n variable.value,\n pendulum.instance(variable.created).diff_for_humans(),\n pendulum.instance(variable.updated).diff_for_humans(),\n )\n\n app.console.print(table)\n
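For example, listing at most 50 variables:
$ prefect variable ls --limit 50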
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/work_pool/","title":"work_pool","text":"","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool","title":"prefect.cli.work_pool
","text":"Command line interface for working with work queues.
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.clear_concurrency_limit","title":"clear_concurrency_limit
async
","text":"Clear the concurrency limit for a work pool.
\b Examples: $ prefect work-pool clear-concurrency-limit \"my-pool\"
Source code inprefect/cli/work_pool.py
@work_pool_app.command()\nasync def clear_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n):\n \"\"\"\n Clear the concurrency limit for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool clear-concurrency-limit \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n concurrency_limit=None,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Cleared concurrency limit for work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.create","title":"create
async
","text":"Create a new work pool.
Examples:
Create a Kubernetes work pool in a paused state:
$ prefect work-pool create \"my-pool\" --type kubernetes --paused
Create a Docker work pool with a custom base job template:
$ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def create(\n name: str = typer.Argument(..., help=\"The name of the work pool.\"),\n base_job_template: typer.FileText = typer.Option(\n None,\n \"--base-job-template\",\n help=(\n \"The path to a JSON file containing the base job template to use. If\"\n \" unspecified, Prefect will use the default base job template for the given\"\n \" worker type.\"\n ),\n ),\n paused: bool = typer.Option(\n False,\n \"--paused\",\n help=\"Whether or not to create the work pool in a paused state.\",\n ),\n type: str = typer.Option(\n None, \"-t\", \"--type\", help=\"The type of work pool to create.\"\n ),\n set_as_default: bool = typer.Option(\n False,\n \"--set-as-default\",\n help=(\n \"Whether or not to use the created work pool as the local default for\"\n \" deployment.\"\n ),\n ),\n provision_infrastructure: bool = typer.Option(\n False,\n \"--provision-infrastructure\",\n \"--provision-infra\",\n help=(\n \"Whether or not to provision infrastructure for the work pool if supported\"\n \" for the given work pool type.\"\n ),\n ),\n):\n \"\"\"\n Create a new work pool.\n\n \\b\n Examples:\n \\b\n Create a Kubernetes work pool in a paused state:\n \\b\n $ prefect work-pool create \"my-pool\" --type kubernetes --paused\n \\b\n Create a Docker work pool with a custom base job template:\n \\b\n $ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json\n\n \"\"\"\n if not name.lower().strip(\"'\\\" \"):\n exit_with_error(\"Work pool name cannot be empty.\")\n async with get_client() as client:\n try:\n await client.read_work_pool(work_pool_name=name)\n except ObjectNotFound:\n pass\n else:\n exit_with_error(\n f\"Work pool named {name!r} already exists. Please try creating your\"\n \" work pool again with a different name.\"\n )\n\n if type is None:\n async with get_collections_metadata_client() as collections_client:\n if not is_interactive():\n exit_with_error(\n \"When not using an interactive terminal, you must supply a\"\n \" `--type` value.\"\n )\n worker_metadata = await collections_client.read_worker_metadata()\n\n # Retrieve only push pools if provisioning infrastructure\n data = [\n worker\n for collection in worker_metadata.values()\n for worker in collection.values()\n if provision_infrastructure\n and has_provisioner_for_type(worker[\"type\"])\n or not provision_infrastructure\n ]\n worker = prompt_select_from_table(\n app.console,\n \"What type of work pool infrastructure would you like to use?\",\n columns=[\n {\"header\": \"Infrastructure Type\", \"key\": \"display_name\"},\n {\"header\": \"Description\", \"key\": \"description\"},\n ],\n data=data,\n table_kwargs={\"show_lines\": True},\n )\n type = worker[\"type\"]\n\n available_work_pool_types = await get_available_work_pool_types()\n if type not in available_work_pool_types:\n exit_with_error(\n f\"Unknown work pool type {type!r}. 
\"\n \"Please choose from\"\n f\" {', '.join(available_work_pool_types)}.\"\n )\n\n if base_job_template is None:\n template_contents = (\n await get_default_base_job_template_for_infrastructure_type(type)\n )\n else:\n template_contents = json.load(base_job_template)\n\n if provision_infrastructure:\n try:\n provisioner = get_infrastructure_provisioner_for_work_pool_type(type)\n provisioner.console = app.console\n template_contents = await provisioner.provision(\n work_pool_name=name, base_job_template=template_contents\n )\n except ValueError as exc:\n print(exc)\n app.console.print(\n (\n \"Automatic infrastructure provisioning is not supported for\"\n f\" {type!r} work pools.\"\n ),\n style=\"yellow\",\n )\n except RuntimeError as exc:\n exit_with_error(f\"Failed to provision infrastructure: {exc}\")\n\n try:\n wp = WorkPoolCreate(\n name=name,\n type=type,\n base_job_template=template_contents,\n is_paused=paused,\n )\n work_pool = await client.create_work_pool(work_pool=wp)\n app.console.print(f\"Created work pool {work_pool.name!r}!\\n\", style=\"green\")\n if (\n not work_pool.is_paused\n and not work_pool.is_managed_pool\n and not work_pool.is_push_pool\n ):\n app.console.print(\"To start a worker for this work pool, run:\\n\")\n app.console.print(\n f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\"\n )\n if set_as_default:\n set_work_pool_as_default(work_pool.name)\n exit_with_success(\"\")\n except ObjectAlreadyExists:\n exit_with_error(\n f\"Work pool named {name!r} already exists. Please try creating your\"\n \" work pool again with a different name.\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.delete","title":"delete
async
","text":"Delete a work pool.
Examples: $ prefect work-pool delete \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def delete(\n name: str = typer.Argument(..., help=\"The name of the work pool to delete.\"),\n):\n \"\"\"\n Delete a work pool.\n\n \\b\n Examples:\n $ prefect work-pool delete \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.delete_work_pool(work_pool_name=name)\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Deleted work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.get_default_base_job_template","title":"get_default_base_job_template
async
","text":"Get the default base job template for a given work pool type.
Examples: $ prefect work-pool get-default-base-job-template --type kubernetes
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def get_default_base_job_template(\n type: str = typer.Option(\n None,\n \"-t\",\n \"--type\",\n help=\"The type of work pool for which to get the default base job template.\",\n ),\n file: str = typer.Option(\n None, \"-f\", \"--file\", help=\"If set, write the output to a file.\"\n ),\n):\n \"\"\"\n Get the default base job template for a given work pool type.\n\n \\b\n Examples:\n $ prefect work-pool get-default-base-job-template --type kubernetes\n \"\"\"\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n type\n )\n if base_job_template is None:\n exit_with_error(\n f\"Unknown work pool type {type!r}. \"\n \"Please choose from\"\n f\" {', '.join(await get_available_work_pool_types())}.\"\n )\n\n if file is None:\n print(json.dumps(base_job_template, indent=2))\n else:\n with open(file, mode=\"w\") as f:\n json.dump(base_job_template, fp=f, indent=2)\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.has_provisioner_for_type","title":"has_provisioner_for_type
","text":"Check if there is a provisioner for the given work pool type.
Parameters:
Name Type Description Default
work_pool_type
str
The type of the work pool.
required
Returns:
Name Type Description
bool
bool
True if a provisioner exists for the given type, False otherwise.
Source code in prefect/cli/work_pool.py
def has_provisioner_for_type(work_pool_type: str) -> bool:\n \"\"\"\n Check if there is a provisioner for the given work pool type.\n\n Args:\n work_pool_type (str): The type of the work pool.\n\n Returns:\n bool: True if a provisioner exists for the given type, False otherwise.\n \"\"\"\n return work_pool_type in _provisioners\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.inspect","title":"inspect
async
","text":"Inspect a work pool.
Examples: $ prefect work-pool inspect \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def inspect(\n name: str = typer.Argument(..., help=\"The name of the work pool to inspect.\"),\n):\n \"\"\"\n Inspect a work pool.\n\n \\b\n Examples:\n $ prefect work-pool inspect \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n pool = await client.read_work_pool(work_pool_name=name)\n app.console.print(Pretty(pool))\n except ObjectNotFound:\n exit_with_error(f\"Work pool {name!r} not found!\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.ls","title":"ls
async
","text":"List work pools.
Examples: $ prefect work-pool ls
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def ls(\n verbose: bool = typer.Option(\n False,\n \"--verbose\",\n \"-v\",\n help=\"Show additional information about work pools.\",\n ),\n):\n \"\"\"\n List work pools.\n\n \\b\n Examples:\n $ prefect work-pool ls\n \"\"\"\n table = Table(\n title=\"Work Pools\", caption=\"(**) denotes a paused pool\", caption_style=\"red\"\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Type\", style=\"magenta\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Base Job Template\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n pools = await client.read_work_pools()\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for pool in sorted(pools, key=sort_by_created_key):\n row = [\n f\"{pool.name} [red](**)\" if pool.is_paused else pool.name,\n str(pool.type),\n str(pool.id),\n (\n f\"[red]{pool.concurrency_limit}\"\n if pool.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose:\n row.append(str(pool.base_job_template))\n table.add_row(*row)\n\n app.console.print(table)\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.pause","title":"pause
async
","text":"Pause a work pool.
Examples: $ prefect work-pool pause \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def pause(\n name: str = typer.Argument(..., help=\"The name of the work pool to pause.\"),\n):\n \"\"\"\n Pause a work pool.\n\n \\b\n Examples:\n $ prefect work-pool pause \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n is_paused=True,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Paused work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.preview","title":"preview
async
","text":"Preview the work pool's scheduled work for all queues.
Examples: $ prefect work-pool preview \"my-pool\" --hours 24
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def preview(\n name: str = typer.Argument(None, help=\"The name or ID of the work pool to preview\"),\n hours: int = typer.Option(\n None,\n \"-h\",\n \"--hours\",\n help=\"The number of hours to look ahead; defaults to 1 hour\",\n ),\n):\n \"\"\"\n Preview the work pool's scheduled work for all queues.\n\n \\b\n Examples:\n $ prefect work-pool preview \"my-pool\" --hours 24\n\n \"\"\"\n if hours is None:\n hours = 1\n\n async with get_client() as client:\n try:\n responses = await client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=name,\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n runs = [response.flow_run for response in responses]\n table = Table(caption=\"(**) denotes a late run\", caption_style=\"red\")\n\n table.add_column(\n \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n )\n table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n pendulum.now(\"utc\").add(hours=hours or 1)\n\n now = pendulum.now(\"utc\")\n\n def sort_by_created_key(r):\n return now - r.created\n\n for run in sorted(runs, key=sort_by_created_key):\n table.add_row(\n (\n f\"{run.expected_start_time} [red](**)\"\n if run.expected_start_time < now\n else f\"{run.expected_start_time}\"\n ),\n str(run.id),\n run.name,\n str(run.deployment_id),\n )\n\n if runs:\n app.console.print(table)\n else:\n app.console.print(\n (\n \"No runs found - try increasing how far into the future you preview\"\n \" with the --hours flag\"\n ),\n style=\"yellow\",\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.provision_infrastructure","title":"provision_infrastructure
async
","text":"Provision infrastructure for a work pool.
Examples: $ prefect work-pool provision-infrastructure \"my-pool\"
$ prefect work-pool provision-infra \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command(aliases=[\"provision-infra\"])\nasync def provision_infrastructure(\n name: str = typer.Argument(\n ..., help=\"The name of the work pool to provision infrastructure for.\"\n ),\n):\n \"\"\"\n Provision infrastructure for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool provision-infrastructure \"my-pool\"\n\n $ prefect work-pool provision-infra \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n work_pool = await client.read_work_pool(work_pool_name=name)\n if not work_pool.is_push_pool:\n exit_with_error(\n f\"Work pool {name!r} is not a push pool type. \"\n \"Please try provisioning infrastructure for a push pool.\"\n )\n except ObjectNotFound:\n exit_with_error(f\"Work pool {name!r} does not exist.\")\n except Exception as exc:\n exit_with_error(f\"Failed to read work pool {name!r}: {exc}\")\n\n try:\n provisioner = get_infrastructure_provisioner_for_work_pool_type(\n work_pool.type\n )\n provisioner.console = app.console\n new_base_job_template = await provisioner.provision(\n work_pool_name=name, base_job_template=work_pool.base_job_template\n )\n\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n base_job_template=new_base_job_template,\n ),\n )\n\n except ValueError as exc:\n app.console.print(f\"Error: {exc}\")\n app.console.print(\n (\n \"Automatic infrastructure provisioning is not supported for\"\n f\" {work_pool.type!r} work pools.\"\n ),\n style=\"yellow\",\n )\n except RuntimeError as exc:\n exit_with_error(\n f\"Failed to provision infrastructure for '{name}' work pool: {exc}\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.resume","title":"resume
async
","text":"Resume a work pool.
Examples: $ prefect work-pool resume \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def resume(\n name: str = typer.Argument(..., help=\"The name of the work pool to resume.\"),\n):\n \"\"\"\n Resume a work pool.\n\n \\b\n Examples:\n $ prefect work-pool resume \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n is_paused=False,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Resumed work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.set_concurrency_limit","title":"set_concurrency_limit
async
","text":"Set the concurrency limit for a work pool.
Examples: $ prefect work-pool set-concurrency-limit \"my-pool\" 10
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def set_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n concurrency_limit: int = typer.Argument(\n ..., help=\"The new concurrency limit for the work pool.\"\n ),\n):\n \"\"\"\n Set the concurrency limit for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool set-concurrency-limit \"my-pool\" 10\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n concurrency_limit=concurrency_limit,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(\n f\"Set concurrency limit for work pool {name!r} to {concurrency_limit}\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.update","title":"update
async
","text":"Update a work pool.
Examples: $ prefect work-pool update \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def update(\n name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n base_job_template: typer.FileText = typer.Option(\n None,\n \"--base-job-template\",\n help=(\n \"The path to a JSON file containing the base job template to use. If\"\n \" unspecified, Prefect will use the default base job template for the given\"\n \" worker type. If None, the base job template will not be modified.\"\n ),\n ),\n concurrency_limit: int = typer.Option(\n None,\n \"--concurrency-limit\",\n help=(\n \"The concurrency limit for the work pool. If None, the concurrency limit\"\n \" will not be modified.\"\n ),\n ),\n description: str = typer.Option(\n None,\n \"--description\",\n help=(\n \"The description for the work pool. If None, the description will not be\"\n \" modified.\"\n ),\n ),\n):\n \"\"\"\n Update a work pool.\n\n \\b\n Examples:\n $ prefect work-pool update \"my-pool\"\n\n \"\"\"\n wp = WorkPoolUpdate()\n if base_job_template:\n wp.base_job_template = json.load(base_job_template)\n if concurrency_limit:\n wp.concurrency_limit = concurrency_limit\n if description:\n wp.description = description\n\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=wp,\n )\n except ObjectNotFound:\n exit_with_error(f\"Work pool named {name!r} does not exist.\")\n\n exit_with_success(f\"Updated work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_queue/","title":"work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue
","text":"Command line interface for working with work queues.
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit
async
","text":"Clear any concurrency limits from a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Clear any concurrency limits from a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n concurrency_limit=None,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = (\n f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n )\n else:\n success_message = f\"Concurrency limits removed on work queue {name!r}\"\n exit_with_success(success_message)\n
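For example (queue and pool names illustrative):
$ prefect work-queue clear-concurrency-limit \"my-queue\" -p \"my-pool\"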
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create
async
","text":"Create a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n limit: int = typer.Option(\n None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"DEPRECATED: One or more optional tags. This option will be removed on\"\n \" 2023-02-23.\"\n ),\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool to create the work queue in.\",\n ),\n priority: Optional[int] = typer.Option(\n None,\n \"-q\",\n \"--priority\",\n help=\"The associated priority for the created work queue\",\n ),\n):\n \"\"\"\n Create a work queue.\n \"\"\"\n if tags:\n app.console.print(\n (\n \"Supplying `tags` for work queues is deprecated. This work \"\n \"queue will use legacy tag-matching behavior. \"\n \"This option will be removed on 2023-02-23.\"\n ),\n style=\"red\",\n )\n\n if pool and tags:\n exit_with_error(\n \"Work queues created with tags cannot specify work pools or set priorities.\"\n )\n\n async with get_client() as client:\n try:\n result = await client.create_work_queue(\n name=name, tags=tags or None, work_pool_name=pool, priority=priority\n )\n if limit is not None:\n await client.update_work_queue(\n id=result.id,\n concurrency_limit=limit,\n )\n except ObjectAlreadyExists:\n exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n except ObjectNotFound:\n exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n if tags:\n tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n output_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n id - {result.id}\n concurrency limit - {limit}\n {tags_message}\n Start an agent to pick up flow runs from the work queue:\n prefect agent start -q '{result.name}'\n\n Inspect the work queue:\n prefect work-queue inspect '{result.name}'\n \"\"\"\n )\n else:\n if not pool:\n # specify the default work pool name after work queue creation to allow the server\n # to handle a bunch of logic associated with agents without work pools\n pool = DEFAULT_AGENT_WORK_POOL_NAME\n output_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n work pool - {pool!r}\n id - {result.id}\n concurrency limit - {limit}\n Start an agent to pick up flow runs from the work queue:\n prefect agent start -q '{result.name}' -p {pool}\n\n Inspect the work queue:\n prefect work-queue inspect '{result.name}'\n \"\"\"\n )\n exit_with_success(output_msg)\n
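For example, creating a queue with a concurrency limit in a specific work pool (names illustrative):
$ prefect work-queue create \"my-queue\" -p \"my-pool\" -l 5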
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete
async
","text":"Delete a work queue by ID.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queue to delete.\",\n ),\n):\n \"\"\"\n Delete a work queue by ID.\n \"\"\"\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n await client.delete_work_queue_by_id(id=queue_id)\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n if pool:\n success_message = (\n f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n )\n else:\n success_message = f\"Successfully deleted work queue {name!r}\"\n exit_with_success(success_message)\n
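For example (queue and pool names illustrative):
$ prefect work-queue delete \"my-queue\" -p \"my-pool\"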
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect
async
","text":"Inspect a work queue by ID.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n name: str = typer.Argument(\n None, help=\"The name or ID of the work queue to inspect\"\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Inspect a work queue by ID.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n result = await client.read_work_queue(id=queue_id)\n app.console.print(Pretty(result))\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n try:\n status = await client.read_work_queue_status(id=queue_id)\n app.console.print(Pretty(status))\n except ObjectNotFound:\n pass\n
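For example (queue and pool names illustrative):
$ prefect work-queue inspect \"my-queue\" -p \"my-pool\"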
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls
async
","text":"View all work queues.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n verbose: bool = typer.Option(\n False, \"--verbose\", \"-v\", help=\"Display more information.\"\n ),\n work_queue_prefix: str = typer.Option(\n None,\n \"--match\",\n \"-m\",\n help=(\n \"Will match work queues with names that start with the specified prefix\"\n \" string\"\n ),\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queues to list.\",\n ),\n):\n \"\"\"\n View all work queues.\n \"\"\"\n if not pool and not experiment_enabled(\"work_pools\"):\n table = Table(\n title=\"Work Queues\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n if work_queue_prefix is not None:\n queues = await client.match_work_queues([work_queue_prefix])\n else:\n queues = await client.read_work_queues()\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n str(queue.id),\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose and queue.filter is not None:\n row.append(queue.filter.json())\n table.add_row(*row)\n elif not pool:\n table = Table(\n title=\"Work Queues\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n if work_queue_prefix is not None:\n queues = await client.match_work_queues([work_queue_prefix])\n else:\n queues = await client.read_work_queues()\n\n pool_ids = [q.work_pool_id for q in queues]\n wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n pools = await client.read_work_pools(work_pool_filter=wp_filter)\n pool_id_name_map = {p.id: p.name for p in pools}\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n pool_id_name_map[queue.work_pool_id],\n str(queue.id),\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose and queue.filter is not None:\n row.append(queue.filter.json())\n table.add_row(*row)\n\n else:\n table = Table(\n title=f\"Work Queues in Work Pool {pool!r}\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n async with get_client() as client:\n try:\n queues = 
await client.read_work_queues(work_pool_name=pool)\n except ObjectNotFound:\n exit_with_error(f\"No work pool found: {pool!r}\")\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n f\"{queue.priority}\",\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose:\n row.append(queue.description)\n table.add_row(*row)\n\n app.console.print(table)\n
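For example, listing only the queues in one work pool (pool name illustrative):
$ prefect work-queue ls -p \"my-pool\"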
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause
async
","text":"Pause a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Pause a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n is_paused=True,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n else:\n success_message = f\"Work queue {name!r} paused\"\n exit_with_success(success_message)\n
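For example (queue and pool names illustrative):
$ prefect work-queue pause \"my-queue\" -p \"my-pool\"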
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview
async
","text":"Preview a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n name: str = typer.Argument(\n None, help=\"The name or ID of the work queue to preview\"\n ),\n hours: int = typer.Option(\n None,\n \"-h\",\n \"--hours\",\n help=\"The number of hours to look ahead; defaults to 1 hour\",\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Preview a work queue.\n \"\"\"\n if pool:\n title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n else:\n title = f\"Preview of Work Queue {name!r}\"\n\n table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n table.add_column(\n \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n )\n table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name, work_pool_name=pool\n )\n async with get_client() as client:\n if pool:\n try:\n responses = await client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=pool,\n work_queue_names=[name],\n )\n runs = [response.flow_run for response in responses]\n except ObjectNotFound:\n exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n else:\n try:\n runs = await client.get_runs_in_work_queue(\n queue_id,\n limit=10,\n scheduled_before=window,\n )\n except ObjectNotFound:\n exit_with_error(f\"No work queue found: {name!r}\")\n now = pendulum.now(\"utc\")\n\n def sort_by_created_key(r):\n return now - r.created\n\n for run in sorted(runs, key=sort_by_created_key):\n table.add_row(\n (\n f\"{run.expected_start_time} [red](**)\"\n if run.expected_start_time < now\n else f\"{run.expected_start_time}\"\n ),\n str(run.id),\n run.name,\n str(run.deployment_id),\n )\n\n if runs:\n app.console.print(table)\n else:\n app.console.print(\n (\n \"No runs found - try increasing how far into the future you preview\"\n \" with the --hours flag\"\n ),\n style=\"yellow\",\n )\n
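For example, previewing the next 12 hours of scheduled runs (names illustrative):
$ prefect work-queue preview \"my-queue\" --hours 12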
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.read_wq_runs","title":"read_wq_runs
async
","text":"Get runs in a work queue. Note that this will trigger an artificial poll of the work queue.
Source code in prefect/cli/work_queue.py
@work_app.command(\"read-runs\")\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def read_wq_runs(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to poll\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queue to poll.\",\n ),\n):\n \"\"\"\n Get runs in a work queue. Note that this will trigger an artificial poll of\n the work queue.\n \"\"\"\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n runs = await client.get_runs_in_work_queue(id=queue_id)\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n success_message = (\n f\"Read {len(runs)} runs for work queue {name!r} in work pool {pool}: {runs}\"\n )\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume
async
","text":"Resume a paused work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Resume a paused work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n is_paused=False,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n else:\n success_message = f\"Work queue {name!r} resumed\"\n exit_with_success(success_message)\n
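For example (the queue name is illustrative):
$ prefect work-queue resume my-queue\nWork queue 'my-queue' resumed\n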
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit
async
","text":"Set a concurrency limit on a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Set a concurrency limit on a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n concurrency_limit=limit,\n )\n except ObjectNotFound:\n if pool:\n error_message = (\n f\"No work queue named {name!r} found in work pool {pool!r}.\"\n )\n else:\n error_message = f\"No work queue named {name!r} found.\"\n exit_with_error(error_message)\n\n if pool:\n success_message = (\n f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n f\" {pool!r}\"\n )\n else:\n success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n exit_with_success(success_message)\n
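For example, capping a hypothetical queue at five concurrent runs:
$ prefect work-queue set-concurrency-limit my-queue 5 --pool my-pool\nConcurrency limit of 5 set on work queue 'my-queue' in work pool 'my-pool'\n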
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/worker/","title":"worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker","title":"prefect.cli.worker
","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker.start","title":"start
async
","text":"Start a worker process to poll a work pool for flow runs.
Source code in prefect/cli/worker.py
@worker_app.command()\nasync def start(\n worker_name: str = typer.Option(\n None,\n \"-n\",\n \"--name\",\n help=(\n \"The name to give to the started worker. If not provided, a unique name\"\n \" will be generated.\"\n ),\n ),\n work_pool_name: str = typer.Option(\n ...,\n \"-p\",\n \"--pool\",\n help=\"The work pool the started worker should poll.\",\n prompt=True,\n ),\n work_queues: List[str] = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"One or more work queue names for the worker to pull from. If not provided,\"\n \" the worker will pull from all work queues in the work pool.\"\n ),\n ),\n worker_type: Optional[str] = typer.Option(\n None,\n \"-t\",\n \"--type\",\n help=(\n \"The type of worker to start. If not provided, the worker type will be\"\n \" inferred from the work pool.\"\n ),\n ),\n prefetch_seconds: int = SettingsOption(\n PREFECT_WORKER_PREFETCH_SECONDS,\n help=\"Number of seconds to look into the future for scheduled flow runs.\",\n ),\n run_once: bool = typer.Option(\n False, help=\"Only run worker polling once. By default, the worker runs forever.\"\n ),\n limit: int = typer.Option(\n None,\n \"-l\",\n \"--limit\",\n help=\"Maximum number of flow runs to start simultaneously.\",\n ),\n with_healthcheck: bool = typer.Option(\n False, help=\"Start a healthcheck server for the worker.\"\n ),\n install_policy: InstallPolicy = typer.Option(\n InstallPolicy.PROMPT.value,\n \"--install-policy\",\n help=\"Install policy to use workers from Prefect integration packages.\",\n case_sensitive=False,\n ),\n base_job_template: typer.FileText = typer.Option(\n None,\n \"--base-job-template\",\n help=(\n \"The path to a JSON file containing the base job template to use. If\"\n \" unspecified, Prefect will use the default base job template for the given\"\n \" worker type. If the work pool already exists, this will be ignored.\"\n ),\n ),\n):\n \"\"\"\n Start a worker process to poll a work pool for flow runs.\n \"\"\"\n\n is_paused = await _check_work_pool_paused(work_pool_name)\n if is_paused:\n app.console.print(\n (\n f\"The work pool {work_pool_name!r} is currently paused. This worker\"\n \" will not execute any flow runs until the work pool is unpaused.\"\n ),\n style=\"yellow\",\n )\n\n worker_cls = await _get_worker_class(worker_type, work_pool_name, install_policy)\n\n if worker_cls is None:\n exit_with_error(\n \"Unable to start worker. 
Please ensure you have the necessary dependencies\"\n \" installed to run your desired worker type.\"\n )\n\n worker_process_id = os.getpid()\n setup_signal_handlers_worker(\n worker_process_id, f\"the {worker_type} worker\", app.console.print\n )\n\n template_contents = None\n if base_job_template is not None:\n template_contents = json.load(fp=base_job_template)\n\n async with worker_cls(\n name=worker_name,\n work_pool_name=work_pool_name,\n work_queues=work_queues,\n limit=limit,\n prefetch_seconds=prefetch_seconds,\n heartbeat_interval_seconds=PREFECT_WORKER_HEARTBEAT_SECONDS.value(),\n base_job_template=template_contents,\n ) as worker:\n app.console.print(f\"Worker {worker.name!r} started!\", style=\"green\")\n async with anyio.create_task_group() as tg:\n # wait for an initial heartbeat to configure the worker\n await worker.sync_with_backend()\n # schedule the scheduled flow run polling loop\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.get_and_submit_flow_runs,\n interval=PREFECT_WORKER_QUERY_SECONDS.value(),\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4, # Up to ~1 minute interval during backoff\n )\n )\n # schedule the sync loop\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.sync_with_backend,\n interval=worker.heartbeat_interval_seconds,\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.check_for_cancelled_flow_runs,\n interval=PREFECT_WORKER_QUERY_SECONDS.value() * 2,\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4,\n )\n )\n\n started_event = await worker._emit_worker_started_event()\n\n # if --with-healthcheck was passed, start the healthcheck server\n if with_healthcheck:\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"healthcheck-server-thread\",\n target=partial(\n start_healthcheck_server,\n worker=worker,\n query_interval_seconds=PREFECT_WORKER_QUERY_SECONDS.value(),\n ),\n daemon=True,\n )\n server_thread.start()\n\n await worker._emit_worker_stopped_event(started_event)\n app.console.print(f\"Worker {worker.name!r} stopped!\")\n
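For example, starting a process-type worker against a hypothetical pool, limited to five simultaneous flow runs (passing --name makes the output deterministic; otherwise a name is generated):
$ prefect worker start --pool my-pool --type process --limit 5 --name my-worker\nWorker 'my-worker' started!\n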
","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/client/base/","title":"base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base
","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse
","text":" Bases: Response
A Prefect wrapper for the httpx.Response class.
Provides more informative error messages.
Source code in prefect/client/base.py
class PrefectResponse(httpx.Response):\n \"\"\"\n A Prefect wrapper for the `httpx.Response` class.\n\n Provides more informative error messages.\n \"\"\"\n\n def raise_for_status(self) -> None:\n \"\"\"\n Raise an exception if the response contains an HTTPStatusError.\n\n The `PrefectHTTPStatusError` contains useful additional information that\n is not contained in the `HTTPStatusError`.\n \"\"\"\n try:\n return super().raise_for_status()\n except HTTPStatusError as exc:\n raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n @classmethod\n def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n \"\"\"\n Create a `PrefectReponse` from an `httpx.Response`.\n\n By changing the `__class__` attribute of the Response, we change the method\n resolution order to look for methods defined in PrefectResponse, while leaving\n everything else about the original Response instance intact.\n \"\"\"\n new_response = copy.copy(response)\n new_response.__class__ = cls\n return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status
","text":"Raise an exception if the response contains an HTTPStatusError.
The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.
Source code in prefect/client/base.py
def raise_for_status(self) -> None:\n \"\"\"\n Raise an exception if the response contains an HTTPStatusError.\n\n The `PrefectHTTPStatusError` contains useful additional information that\n is not contained in the `HTTPStatusError`.\n \"\"\"\n try:\n return super().raise_for_status()\n except HTTPStatusError as exc:\n raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
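A minimal sketch of catching the richer error, assuming response is already a PrefectResponse:
from prefect.exceptions import PrefectHTTPStatusError\n\ntry:\n    response.raise_for_status()\nexcept PrefectHTTPStatusError as exc:\n    # unlike the bare HTTPStatusError, this message includes details from the response\n    print(exc)\n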
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response
classmethod
","text":"Create a PrefectReponse
from an httpx.Response
.
By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.
Source code in prefect/client/base.py
@classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n \"\"\"\n Create a `PrefectReponse` from an `httpx.Response`.\n\n By changing the `__class__` attribute of the Response, we change the method\n resolution order to look for methods defined in PrefectResponse, while leaving\n everything else about the original Response instance intact.\n \"\"\"\n new_response = copy.copy(response)\n new_response.__class__ = cls\n return new_response\n
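The __class__ swap can be shown in isolation; this standalone sketch (with invented class names) is not Prefect code:
import copy\n\nclass Plain:\n    def describe(self) -> str:\n        return \"plain\"\n\nclass Enriched(Plain):\n    def describe(self) -> str:\n        return \"plain, with extras\"\n\noriginal = Plain()\nwrapped = copy.copy(original)\nwrapped.__class__ = Enriched  # method lookup now starts at Enriched\nassert wrapped.describe() == \"plain, with extras\"\n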
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient
","text":" Bases: AsyncClient
A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503).
Additionally, this client will always call raise_for_status on responses.
For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting
Source code in prefect/client/base.py
class PrefectHttpxClient(httpx.AsyncClient):\n \"\"\"\n A Prefect wrapper for the async httpx client with support for retry-after headers\n for the provided status codes (typically 429, 502 and 503).\n\n Additionally, this client will always call `raise_for_status` on responses.\n\n For more details on rate limit headers, see:\n [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n \"\"\"\n\n async def _send_with_retry(\n self,\n request: Callable,\n retry_codes: Set[int] = set(),\n retry_exceptions: Tuple[Exception, ...] = tuple(),\n ):\n \"\"\"\n Send a request and retry it if it fails.\n\n Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n if the request either raises an exception listed in `retry_exceptions` or\n receives a response with a status code listed in `retry_codes`.\n\n Retries will be delayed based on either the retry header (preferred) or\n exponential backoff if a retry header is not provided.\n \"\"\"\n try_count = 0\n response = None\n\n while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n try_count += 1\n retry_seconds = None\n exc_info = None\n\n try:\n response = await request()\n except retry_exceptions: # type: ignore\n if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n raise\n # Otherwise, we will ignore this error but capture the info for logging\n exc_info = sys.exc_info()\n else:\n # We got a response; return immediately if it is not retryable\n if response.status_code not in retry_codes:\n return response\n\n if \"Retry-After\" in response.headers:\n retry_seconds = float(response.headers[\"Retry-After\"])\n\n # Use an exponential back-off if not set in a header\n if retry_seconds is None:\n retry_seconds = 2**try_count\n\n # Add jitter\n jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n if retry_seconds > 0 and jitter_factor > 0:\n if response is not None and \"Retry-After\" in response.headers:\n # Always wait for _at least_ retry seconds if requested by the API\n retry_seconds = bounded_poisson_interval(\n retry_seconds, retry_seconds * (1 + jitter_factor)\n )\n else:\n # Otherwise, use a symmetrical jitter\n retry_seconds = clamped_poisson_interval(\n retry_seconds, jitter_factor\n )\n\n logger.debug(\n (\n \"Encountered retryable exception during request. \"\n if exc_info\n else (\n \"Received response with retryable status code\"\n f\" {response.status_code}. \"\n )\n )\n + f\"Another attempt will be made in {retry_seconds}s. 
\"\n \"This is attempt\"\n f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n exc_info=exc_info,\n )\n await anyio.sleep(retry_seconds)\n\n assert (\n response is not None\n ), \"Retry handling ended without response or exception\"\n\n # We ran out of retries, return the failed response\n return response\n\n async def send(self, *args, **kwargs) -> Response:\n \"\"\"\n Send a request with automatic retry behavior for the following status codes:\n\n - 429 CloudFlare-style rate limiting\n - 502 Bad Gateway\n - 503 Service unavailable\n \"\"\"\n\n api_request = partial(super().send, *args, **kwargs)\n\n response = await self._send_with_retry(\n request=api_request,\n retry_codes={\n status.HTTP_429_TOO_MANY_REQUESTS,\n status.HTTP_503_SERVICE_UNAVAILABLE,\n status.HTTP_502_BAD_GATEWAY,\n status.HTTP_408_REQUEST_TIMEOUT,\n *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n },\n retry_exceptions=(\n httpx.ReadTimeout,\n httpx.PoolTimeout,\n httpx.ConnectTimeout,\n # `ConnectionResetError` when reading socket raises as a `ReadError`\n httpx.ReadError,\n # Sockets can be closed during writes resulting in a `WriteError`\n httpx.WriteError,\n # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n httpx.RemoteProtocolError,\n # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n httpx.LocalProtocolError,\n ),\n )\n\n # Convert to a Prefect response to add nicer errors messages\n response = PrefectResponse.from_httpx_response(response)\n\n # Always raise bad responses\n # NOTE: We may want to remove this and handle responses per route in the\n # `PrefectClient`\n response.raise_for_status()\n\n return response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient.send","title":"send
async
","text":"Send a request with automatic retry behavior for the following status codes:
Source code in prefect/client/base.py
async def send(self, *args, **kwargs) -> Response:\n \"\"\"\n Send a request with automatic retry behavior for the following status codes:\n\n - 429 CloudFlare-style rate limiting\n - 502 Bad Gateway\n - 503 Service unavailable\n \"\"\"\n\n api_request = partial(super().send, *args, **kwargs)\n\n response = await self._send_with_retry(\n request=api_request,\n retry_codes={\n status.HTTP_429_TOO_MANY_REQUESTS,\n status.HTTP_503_SERVICE_UNAVAILABLE,\n status.HTTP_502_BAD_GATEWAY,\n status.HTTP_408_REQUEST_TIMEOUT,\n *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n },\n retry_exceptions=(\n httpx.ReadTimeout,\n httpx.PoolTimeout,\n httpx.ConnectTimeout,\n # `ConnectionResetError` when reading socket raises as a `ReadError`\n httpx.ReadError,\n # Sockets can be closed during writes resulting in a `WriteError`\n httpx.WriteError,\n # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n httpx.RemoteProtocolError,\n # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n httpx.LocalProtocolError,\n ),\n )\n\n # Convert to a Prefect response to add nicer errors messages\n response = PrefectResponse.from_httpx_response(response)\n\n # Always raise bad responses\n # NOTE: We may want to remove this and handle responses per route in the\n # `PrefectClient`\n response.raise_for_status()\n\n return response\n
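A minimal sketch of direct use (the base URL is illustrative); because bad responses always raise, callers should be ready to catch PrefectHTTPStatusError:
import asyncio\nfrom prefect.client.base import PrefectHttpxClient\n\nasync def main():\n    async with PrefectHttpxClient(base_url=\"https://api.example.com\") as client:\n        # 429/502/503 (plus any configured extra codes) are retried automatically\n        response = await client.send(client.build_request(\"GET\", \"/health\"))\n        print(response.status_code)\n\nasyncio.run(main())\n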
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context
async
","text":"A context manager that calls startup/shutdown hooks for the given application.
Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.
This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.
A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.
In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context was responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done.
Source code in prefect/client/base.py
@asynccontextmanager\nasync def app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None]:\n \"\"\"\n A context manager that calls startup/shutdown hooks for the given application.\n\n Lifespan contexts are cached per application to avoid calling the lifespan hooks\n more than once if the context is entered in nested code. A no-op context will be\n returned if the context for the given application is already being managed.\n\n This manager is robust to concurrent access within the event loop. For example,\n if you have concurrent contexts for the same application, it is guaranteed that\n startup hooks will be called before their context starts and shutdown hooks will\n only be called after their context exits.\n\n A reference count is used to support nested use of clients without running\n lifespan hooks excessively. The first client context entered will create and enter\n a lifespan context. Each subsequent client will increment a reference count but will\n not create a new lifespan context. When each client context exits, the reference\n count is decremented. When the last client context exits, the lifespan will be\n closed.\n\n In simple nested cases, the first client context will be the one to exit the\n lifespan. However, if client contexts are entered concurrently they may not exit\n in a consistent order. If the first client context was responsible for closing\n the lifespan, it would have to wait until all other client contexts to exit to\n avoid firing shutdown hooks while the application is in use. Waiting for the other\n clients to exit can introduce deadlocks, so, instead, the first client will exit\n without closing the lifespan context and reference counts will be used to ensure\n the lifespan is closed once all of the clients are done.\n \"\"\"\n # TODO: A deadlock has been observed during multithreaded use of clients while this\n # lifespan context is being used. This has only been reproduced on Python 3.7\n # and while we hope to discourage using multiple event loops in threads, this\n # bug may emerge again.\n # See https://github.com/PrefectHQ/orion/pull/1696\n thread_id = threading.get_ident()\n\n # The id of the application is used instead of the hash so each application instance\n # is managed independently even if they share the same settings. 
We include the\n # thread id since applications are managed separately per thread.\n key = (thread_id, id(app))\n\n # On exception, this will be populated with exception details\n exc_info = (None, None, None)\n\n # Get a lock unique to this thread since anyio locks are not threadsafe\n lock = APP_LIFESPANS_LOCKS[thread_id]\n\n async with lock:\n if key in APP_LIFESPANS:\n # The lifespan is already being managed, just increment the reference count\n APP_LIFESPANS_REF_COUNTS[key] += 1\n else:\n # Create a new lifespan manager\n APP_LIFESPANS[key] = context = LifespanManager(\n app, startup_timeout=30, shutdown_timeout=30\n )\n APP_LIFESPANS_REF_COUNTS[key] = 1\n\n # Ensure we enter the context before releasing the lock so startup hooks\n # are complete before another client can be used\n await context.__aenter__()\n\n try:\n yield\n except BaseException:\n exc_info = sys.exc_info()\n raise\n finally:\n # If we do not shield against anyio cancellation, the lock will return\n # immediately and the code in its context will not run, leaving the lifespan\n # open\n with anyio.CancelScope(shield=True):\n async with lock:\n # After the consumer exits the context, decrement the reference count\n APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n # If this the last context to exit, close the lifespan\n if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n APP_LIFESPANS_REF_COUNTS.pop(key)\n context = APP_LIFESPANS.pop(key)\n await context.__aexit__(*exc_info)\n
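A sketch of nested use with some ASGI app object; the hooks run once regardless of nesting depth:
from prefect.client.base import app_lifespan_context\n\nasync def use_twice(app):\n    async with app_lifespan_context(app):  # first entry runs the startup hooks\n        async with app_lifespan_context(app):  # nested entry only bumps the reference count\n            ...\n    # shutdown hooks fire once the last context has exited\n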
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud
","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError
","text":" Bases: PrefectException
Raised when the CloudClient receives a 401 or 403 from the Cloud API.
Source code in prefect/client/cloud.py
class CloudUnauthorizedError(PrefectException):\n \"\"\"\n Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n \"\"\"\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient
","text":"Source code in prefect/client/cloud.py
class CloudClient:\n def __init__(\n self,\n host: str,\n api_key: str,\n httpx_settings: dict = None,\n ) -> None:\n httpx_settings = httpx_settings or dict()\n httpx_settings.setdefault(\"headers\", dict())\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n httpx_settings.setdefault(\"base_url\", host)\n if not PREFECT_UNIT_TEST_MODE.value():\n httpx_settings.setdefault(\"follow_redirects\", True)\n self._client = PrefectHttpxClient(**httpx_settings)\n\n async def api_healthcheck(self):\n \"\"\"\n Attempts to connect to the Cloud API and raises the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n with anyio.fail_after(10):\n await self.read_workspaces()\n\n async def read_workspaces(self) -> List[Workspace]:\n return pydantic.parse_obj_as(List[Workspace], await self.get(\"/me/workspaces\"))\n\n async def read_worker_metadata(self) -> Dict[str, Any]:\n configured_url = prefect.settings.PREFECT_API_URL.value()\n account_id, workspace_id = re.findall(PARSE_API_URL_REGEX, configured_url)[0]\n return await self.get(\n f\"accounts/{account_id}/workspaces/{workspace_id}/collections/work_pool_types\"\n )\n\n async def __aenter__(self):\n await self._client.__aenter__()\n return self\n\n async def __aexit__(self, *exc_info):\n return await self._client.__aexit__(*exc_info)\n\n def __enter__(self):\n raise RuntimeError(\n \"The `CloudClient` must be entered with an async context. Use 'async \"\n \"with CloudClient(...)' not 'with CloudClient(...)'\"\n )\n\n def __exit__(self, *_):\n assert False, \"This should never be called but must be defined for __enter__\"\n\n async def get(self, route, **kwargs):\n return await self.request(\"GET\", route, **kwargs)\n\n async def request(self, method, route, **kwargs):\n try:\n res = await self._client.request(method, route, **kwargs)\n res.raise_for_status()\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code in (\n status.HTTP_401_UNAUTHORIZED,\n status.HTTP_403_FORBIDDEN,\n ):\n raise CloudUnauthorizedError\n else:\n raise exc\n\n if res.status_code == status.HTTP_204_NO_CONTENT:\n return\n\n return res.json()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck
async
","text":"Attempts to connect to the Cloud API and raises the encountered exception if not successful.
If successful, returns None.
Source code in prefect/client/cloud.py
async def api_healthcheck(self):\n \"\"\"\n Attempts to connect to the Cloud API and raises the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n with anyio.fail_after(10):\n await self.read_workspaces()\n
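For example, a connectivity check that distinguishes bad credentials from other failures (assumes PREFECT_API_KEY is configured):
from prefect.client.cloud import CloudUnauthorizedError, get_cloud_client\n\nasync def check_cloud():\n    async with get_cloud_client() as client:\n        try:\n            await client.api_healthcheck()  # returns None on success, raises otherwise\n        except CloudUnauthorizedError:\n            print(\"Prefect Cloud rejected the API key (401/403)\")\n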
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client
","text":"Needs a docstring.
Source code in prefect/client/cloud.py
def get_cloud_client(\n host: Optional[str] = None,\n api_key: Optional[str] = None,\n httpx_settings: Optional[dict] = None,\n infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n \"\"\"\n Needs a docstring.\n \"\"\"\n if httpx_settings is not None:\n httpx_settings = httpx_settings.copy()\n\n if infer_cloud_url is False:\n host = host or PREFECT_CLOUD_API_URL.value()\n else:\n configured_url = prefect.settings.PREFECT_API_URL.value()\n host = re.sub(PARSE_API_URL_REGEX, \"\", configured_url)\n\n return CloudClient(\n host=host,\n api_key=api_key or PREFECT_API_KEY.value(),\n httpx_settings=httpx_settings,\n )\n
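For example, relying on PREFECT_CLOUD_API_URL and PREFECT_API_KEY from settings:
from prefect.client.cloud import get_cloud_client\n\nasync def list_workspaces():\n    async with get_cloud_client() as client:\n        for workspace in await client.read_workspaces():\n            print(workspace)\n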
","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"orchestration","text":"Asynchronous client implementation for communicating with the Prefect REST API.
Explore the client by communicating with an in-memory webserver \u2014 no setup required:
$ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n... response = await client.hello()\n... print(response.json())\n\ud83d\udc4b\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient
","text":"An asynchronous client for interacting with the Prefect REST API.
Parameters:

- api (Union[str, ASGIApp]): the REST API URL or FastAPI application to connect to (required)
- api_key (str): An optional API key for authentication. Default: None
- api_version (str): The API version this client is compatible with. Default: None
- httpx_settings (dict): An optional dictionary of settings to pass to the underlying httpx.AsyncClient. Default: None
Say hello to a Prefect REST API\n\n```\n>>> async with get_client() as client:\n>>> response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n```\n
Source code in prefect/client/orchestration.py
class PrefectClient:\n \"\"\"\n An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n Args:\n api: the REST API URL or FastAPI application to connect to\n api_key: An optional API key for authentication.\n api_version: The API version this client is compatible with.\n httpx_settings: An optional dictionary of settings to pass to the underlying\n `httpx.AsyncClient`\n\n Examples:\n\n Say hello to a Prefect REST API\n\n <div class=\"terminal\">\n ```\n >>> async with get_client() as client:\n >>> response = await client.hello()\n >>>\n >>> print(response.json())\n \ud83d\udc4b\n ```\n </div>\n \"\"\"\n\n def __init__(\n self,\n api: Union[str, ASGIApp],\n *,\n api_key: str = None,\n api_version: str = None,\n httpx_settings: dict = None,\n ) -> None:\n httpx_settings = httpx_settings.copy() if httpx_settings else {}\n httpx_settings.setdefault(\"headers\", {})\n\n if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n httpx_settings.setdefault(\"verify\", False)\n\n if api_version is None:\n api_version = SERVER_API_VERSION\n httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n if api_key:\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n # Context management\n self._exit_stack = AsyncExitStack()\n self._ephemeral_app: Optional[ASGIApp] = None\n self.manage_lifespan = True\n self.server_type: ServerType\n\n # Only set if this client started the lifespan of the application\n self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n self._closed = False\n self._started = False\n\n # Connect to an external application\n if isinstance(api, str):\n if httpx_settings.get(\"app\"):\n raise ValueError(\n \"Invalid httpx settings: `app` cannot be set when providing an \"\n \"api url. `app` is only for use with ephemeral instances. Provide \"\n \"it as the `api` parameter instead.\"\n )\n httpx_settings.setdefault(\"base_url\", api)\n\n # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n httpx_settings.setdefault(\n \"limits\",\n httpx.Limits(\n # We see instability when allowing the client to open many connections at once.\n # Limiting concurrency results in more stable performance.\n max_connections=16,\n max_keepalive_connections=8,\n # The Prefect Cloud LB will keep connections alive for 30s.\n # Only allow the client to keep connections alive for 25s.\n keepalive_expiry=25,\n ),\n )\n\n # See https://www.python-httpx.org/http2/\n # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n # and responses will be transported over HTTP/2, since both the client and the server\n # need to support HTTP/2. If you connect to a server that only supports HTTP/1.1 the\n # client will use a standard HTTP/1.1 connection instead.\n httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n self.server_type = (\n ServerType.CLOUD\n if api.startswith(PREFECT_CLOUD_API_URL.value())\n else ServerType.SERVER\n )\n\n # Connect to an in-process application\n elif isinstance(api, ASGIApp):\n self._ephemeral_app = api\n self.server_type = ServerType.EPHEMERAL\n\n # When using an ephemeral server, server-side exceptions can be raised\n # client-side breaking all of our response error code handling. 
To work\n # around this, we create an ASGI transport with application exceptions\n # disabled instead of using the application directly.\n # refs:\n # - https://github.com/PrefectHQ/prefect/pull/9637\n # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n httpx_settings.setdefault(\n \"transport\",\n httpx.ASGITransport(\n app=self._ephemeral_app, raise_app_exceptions=False\n ),\n )\n httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n else:\n raise TypeError(\n f\"Unexpected type {type(api).__name__!r} for argument `api`. Expected\"\n \" 'str' or 'ASGIApp/FastAPI'\"\n )\n\n # See https://www.python-httpx.org/advanced/#timeout-configuration\n httpx_settings.setdefault(\n \"timeout\",\n httpx.Timeout(\n connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n read=PREFECT_API_REQUEST_TIMEOUT.value(),\n write=PREFECT_API_REQUEST_TIMEOUT.value(),\n pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n ),\n )\n\n if not PREFECT_UNIT_TEST_MODE:\n httpx_settings.setdefault(\"follow_redirects\", True)\n self._client = PrefectHttpxClient(**httpx_settings)\n self._loop = None\n\n # See https://www.python-httpx.org/advanced/#custom-transports\n #\n # If we're using an HTTP/S client (not the ephemeral client), adjust the\n # transport to add retries _after_ it is instantiated. If we alter the transport\n # before instantiation, the transport will not be aware of proxies unless we\n # reproduce all of the logic to make it so.\n #\n # Only alter the transport to set our default of 3 retries, don't modify any\n # transport a user may have provided via httpx_settings.\n #\n # Making liberal use of getattr and isinstance checks here to avoid any\n # surprises if the internals of httpx or httpcore change on us\n if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n if callable(transport_for_url):\n server_transport = transport_for_url(httpx.URL(api))\n if isinstance(server_transport, httpx.AsyncHTTPTransport):\n pool = getattr(server_transport, \"_pool\", None)\n if isinstance(pool, httpcore.AsyncConnectionPool):\n pool._retries = 3\n\n self.logger = get_logger(\"client\")\n\n @property\n def api_url(self) -> httpx.URL:\n \"\"\"\n Get the base URL for the API.\n \"\"\"\n return self._client.base_url\n\n # API methods ----------------------------------------------------------------------\n\n async def api_healthcheck(self) -> Optional[Exception]:\n \"\"\"\n Attempts to connect to the API and returns the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n try:\n await self._client.get(\"/health\")\n return None\n except Exception as exc:\n return exc\n\n async def hello(self) -> httpx.Response:\n \"\"\"\n Send a GET request to /hello for testing purposes.\n \"\"\"\n return await self._client.get(\"/hello\")\n\n async def create_flow(self, flow: \"FlowObject\") -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow: a [Flow][prefect.flows.Flow] object\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n return await self.create_flow_from_name(flow.name)\n\n async def create_flow_from_name(self, flow_name: str) -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow_name: the name 
of the new flow\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n flow_data = FlowCreate(name=flow_name)\n response = await self._client.post(\n \"/flows/\", json=flow_data.dict(json_compatible=True)\n )\n\n flow_id = response.json().get(\"id\")\n if not flow_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n # Return the id of the created flow\n return UUID(flow_id)\n\n async def read_flow(self, flow_id: UUID) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by id.\n\n Args:\n flow_id: the flow ID of interest\n\n Returns:\n a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n \"\"\"\n response = await self._client.get(f\"/flows/{flow_id}\")\n return Flow.parse_obj(response.json())\n\n async def read_flows(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[Flow]:\n \"\"\"\n Query the Prefect API for flows. Only flows matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flows\n limit: limit for the flow query\n offset: offset for the flow query\n\n Returns:\n a list of Flow model representations of the flows\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flows/filter\", json=body)\n return pydantic.parse_obj_as(List[Flow], response.json())\n\n async def read_flow_by_name(\n self,\n flow_name: str,\n ) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by name.\n\n Args:\n flow_name: the name of a flow\n\n Returns:\n a fully hydrated Flow model\n \"\"\"\n response = await self._client.get(f\"/flows/name/{flow_name}\")\n return Flow.parse_obj(response.json())\n\n async def create_flow_run_from_deployment(\n self,\n deployment_id: UUID,\n *,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n state: prefect.states.State = None,\n name: str = None,\n tags: Iterable[str] = None,\n idempotency_key: str = None,\n parent_task_run_id: UUID = None,\n work_queue_name: str = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment.\n\n Args:\n deployment_id: The deployment ID to create the flow run from\n parameters: Parameter overrides for this flow run. 
Merged with the\n deployment defaults\n context: Optional run context data\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n name: An optional name for the flow run. If not provided, the server will\n generate a name.\n tags: An optional iterable of tags to apply to the flow run; these tags\n are merged with the deployment's tags.\n idempotency_key: Optional idempotency key for creation of the flow run.\n If the key matches the key of an existing flow run, the existing run will\n be returned instead of creating a new one.\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n work_queue_name: An optional work queue name to add this run to. If not provided,\n will default to the deployment's set work queue. If one is provided that does not\n exist, a new work queue will be created within the deployment's work pool.\n job_variables: Optional variables that will be supplied to the flow run job.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n parameters = parameters or {}\n context = context or {}\n state = state or prefect.states.Scheduled()\n tags = tags or []\n\n flow_run_create = DeploymentFlowRunCreate(\n parameters=parameters,\n context=context,\n state=state.to_state_create(),\n tags=tags,\n name=name,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n job_variables=job_variables,\n )\n\n # done separately to avoid including this field in payloads sent to older API versions\n if work_queue_name:\n flow_run_create.work_queue_name = work_queue_name\n\n response = await self._client.post(\n f\"/deployments/{deployment_id}/create_flow_run\",\n json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n )\n return FlowRun.parse_obj(response.json())\n\n async def create_flow_run(\n self,\n flow: \"FlowObject\",\n name: str = None,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n tags: Iterable[str] = None,\n parent_task_run_id: UUID = None,\n state: \"prefect.states.State\" = None,\n ) -> FlowRun:\n \"\"\"\n Create a flow run for a flow.\n\n Args:\n flow: The flow model to create the flow run for\n name: An optional name for the flow run\n parameters: Parameter overrides for this flow run.\n context: Optional run context data\n tags: a list of tags to apply to this flow run\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. 
Should always be a `Scheduled` type.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n parameters = parameters or {}\n context = context or {}\n\n if state is None:\n state = prefect.states.Pending()\n\n # Retrieve the flow id\n flow_id = await self.create_flow(flow)\n\n flow_run_create = FlowRunCreate(\n flow_id=flow_id,\n flow_version=flow.version,\n name=name,\n parameters=parameters,\n context=context,\n tags=list(tags or []),\n parent_task_run_id=parent_task_run_id,\n state=state.to_state_create(),\n empirical_policy=FlowRunPolicy(\n retries=flow.retries,\n retry_delay=flow.retry_delay_seconds,\n ),\n )\n\n flow_run_create_json = flow_run_create.dict(json_compatible=True)\n response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n flow_run = FlowRun.parse_obj(response.json())\n\n # Restore the parameters to the local objects to retain expectations about\n # Python objects\n flow_run.parameters = parameters\n\n return flow_run\n\n async def update_flow_run(\n self,\n flow_run_id: UUID,\n flow_version: Optional[str] = None,\n parameters: Optional[dict] = None,\n name: Optional[str] = None,\n tags: Optional[Iterable[str]] = None,\n empirical_policy: Optional[FlowRunPolicy] = None,\n infrastructure_pid: Optional[str] = None,\n job_variables: Optional[dict] = None,\n ) -> httpx.Response:\n \"\"\"\n Update a flow run's details.\n\n Args:\n flow_run_id: The identifier for the flow run to update.\n flow_version: A new version string for the flow run.\n parameters: A dictionary of parameter values for the flow run. This will not\n be merged with any existing parameters.\n name: A new name for the flow run.\n empirical_policy: A new flow run orchestration policy. This will not be\n merged with any existing policy.\n tags: An iterable of new tags for the flow run. These will not be merged with\n any existing tags.\n infrastructure_pid: The id of flow run as returned by an\n infrastructure block.\n\n Returns:\n an `httpx.Response` object from the PATCH request\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. 
\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n params = {}\n if flow_version is not None:\n params[\"flow_version\"] = flow_version\n if parameters is not None:\n params[\"parameters\"] = parameters\n if name is not None:\n params[\"name\"] = name\n if tags is not None:\n params[\"tags\"] = tags\n if empirical_policy is not None:\n params[\"empirical_policy\"] = empirical_policy\n if infrastructure_pid:\n params[\"infrastructure_pid\"] = infrastructure_pid\n if job_variables is not None:\n params[\"job_variables\"] = job_variables\n\n flow_run_data = FlowRunUpdate(**params)\n\n return await self._client.patch(\n f\"/flow_runs/{flow_run_id}\",\n json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n\n async def delete_flow_run(\n self,\n flow_run_id: UUID,\n ) -> None:\n \"\"\"\n Delete a flow run by UUID.\n\n Args:\n flow_run_id: The flow run UUID of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_concurrency_limit(\n self,\n tag: str,\n concurrency_limit: int,\n ) -> UUID:\n \"\"\"\n Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n running tasks.\n\n Args:\n tag: a tag the concurrency limit is applied to\n concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n Raises:\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the ID of the concurrency limit in the backend\n \"\"\"\n\n concurrency_limit_create = ConcurrencyLimitCreate(\n tag=tag,\n concurrency_limit=concurrency_limit,\n )\n response = await self._client.post(\n \"/concurrency_limits/\",\n json=concurrency_limit_create.dict(json_compatible=True),\n )\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(concurrency_limit_id)\n\n async def read_concurrency_limit_by_tag(\n self,\n tag: str,\n ):\n \"\"\"\n Read the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the concurrency limit set on a specific tag\n \"\"\"\n try:\n response = await self._client.get(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n return concurrency_limit\n\n async def read_concurrency_limits(\n self,\n limit: int,\n offset: int,\n ):\n \"\"\"\n Lists concurrency limits set on task run tags.\n\n Args:\n limit: the maximum number of concurrency limits returned\n offset: the concurrency limit query offset\n\n Returns:\n a list of concurrency limits\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/concurrency_limits/filter\", 
json=body)\n return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n async def reset_concurrency_limit_by_tag(\n self,\n tag: str,\n slot_override: Optional[List[Union[UUID, str]]] = None,\n ):\n \"\"\"\n Resets the concurrency limit slots set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n slot_override: a list of task run IDs that are currently using a\n concurrency slot, please check that any task run IDs included in\n `slot_override` are currently running, otherwise those concurrency\n slots will never be released.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n if slot_override is not None:\n slot_override = [str(slot) for slot in slot_override]\n\n try:\n await self._client.post(\n f\"/concurrency_limits/tag/{tag}/reset\",\n json=dict(slot_override=slot_override),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_concurrency_limit_by_tag(\n self,\n tag: str,\n ):\n \"\"\"\n Delete the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n try:\n await self._client.delete(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_work_queue(\n self,\n name: str,\n tags: Optional[List[str]] = None,\n description: Optional[str] = None,\n is_paused: Optional[bool] = None,\n concurrency_limit: Optional[int] = None,\n priority: Optional[int] = None,\n work_pool_name: Optional[str] = None,\n ) -> WorkQueue:\n \"\"\"\n Create a work queue.\n\n Args:\n name: a unique name for the work queue\n tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n will be included in the queue. This option will be removed on 2023-02-23.\n description: An optional description for the work queue.\n is_paused: Whether or not the work queue is paused.\n concurrency_limit: An optional concurrency limit for the work queue.\n priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n work_pool_name: The name of the work pool to use for this queue.\n\n Raises:\n prefect.exceptions.ObjectAlreadyExists: If request returns 409\n httpx.RequestError: If request fails\n\n Returns:\n The created work queue\n \"\"\"\n if tags:\n warnings.warn(\n (\n \"The use of tags for creating work queue filters is deprecated.\"\n \" This option will be removed on 2023-02-23.\"\n ),\n DeprecationWarning,\n )\n filter = QueueFilter(tags=tags)\n else:\n filter = None\n create_model = WorkQueueCreate(name=name, filter=filter)\n if description is not None:\n create_model.description = description\n if is_paused is not None:\n create_model.is_paused = is_paused\n if concurrency_limit is not None:\n create_model.concurrency_limit = concurrency_limit\n if priority is not None:\n create_model.priority = priority\n\n data = create_model.dict(json_compatible=True)\n try:\n if work_pool_name is not None:\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/queues\", json=data\n )\n else:\n response = await self._client.post(\"/work_queues/\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n\n async def read_work_queue_by_name(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n ) -> WorkQueue:\n \"\"\"\n Read a work queue by name.\n\n Args:\n name (str): a unique name for the work queue\n work_pool_name (str, optional): the name of the work pool\n the queue belongs to.\n\n Raises:\n prefect.exceptions.ObjectNotFound: if no work queue is found\n httpx.HTTPStatusError: other status errors\n\n Returns:\n WorkQueue: a work queue API object\n \"\"\"\n try:\n if work_pool_name is not None:\n response = await self._client.get(\n f\"/work_pools/{work_pool_name}/queues/{name}\"\n )\n else:\n response = await self._client.get(f\"/work_queues/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return WorkQueue.parse_obj(response.json())\n\n async def update_work_queue(self, id: UUID, **kwargs):\n \"\"\"\n Update properties of a work queue.\n\n Args:\n id: the ID of the work queue to update\n **kwargs: the fields to update\n\n Raises:\n ValueError: if no kwargs are provided\n prefect.exceptions.ObjectNotFound: if request returns 404\n httpx.RequestError: if the request fails\n\n \"\"\"\n if not kwargs:\n raise ValueError(\"No fields provided to update.\")\n\n data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n try:\n await self._client.patch(f\"/work_queues/{id}\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def get_runs_in_work_queue(\n self,\n id: UUID,\n limit: int = 10,\n scheduled_before: datetime.datetime = None,\n ) -> List[FlowRun]:\n \"\"\"\n Read flow runs off a work queue.\n\n Args:\n id: the id of the work queue to read from\n limit: a limit on the number of runs to return\n scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n Defaults to now.\n\n Raises:\n 
prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n List[FlowRun]: a list of FlowRun objects read from the queue\n \"\"\"\n if scheduled_before is None:\n scheduled_before = pendulum.now(\"UTC\")\n\n try:\n response = await self._client.post(\n f\"/work_queues/{id}/get_runs\",\n json={\n \"limit\": limit,\n \"scheduled_before\": scheduled_before.isoformat(),\n },\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n async def read_work_queue(\n self,\n id: UUID,\n ) -> WorkQueue:\n \"\"\"\n Read a work queue.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueue: an instantiated WorkQueue object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n\n async def read_work_queue_status(\n self,\n id: UUID,\n ) -> WorkQueueStatusDetail:\n \"\"\"\n Read a work queue status.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueueStatus: an instantiated WorkQueueStatus object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}/status\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueueStatusDetail.parse_obj(response.json())\n\n async def match_work_queues(\n self,\n prefixes: List[str],\n work_pool_name: Optional[str] = None,\n ) -> List[WorkQueue]:\n \"\"\"\n Query the Prefect API for work queues with names with a specific prefix.\n\n Args:\n prefixes: a list of strings used to match work queue name prefixes\n work_pool_name: an optional work pool name to scope the query to\n\n Returns:\n a list of WorkQueue model representations\n of the work queues\n \"\"\"\n page_length = 100\n current_page = 0\n work_queues = []\n\n while True:\n new_queues = await self.read_work_queues(\n work_pool_name=work_pool_name,\n offset=current_page * page_length,\n limit=page_length,\n work_queue_filter=WorkQueueFilter(\n name=WorkQueueFilterName(startswith_=prefixes)\n ),\n )\n if not new_queues:\n break\n work_queues += new_queues\n current_page += 1\n\n return work_queues\n\n async def delete_work_queue_by_id(\n self,\n id: UUID,\n ):\n \"\"\"\n Delete a work queue by its ID.\n\n Args:\n id: the id of the work queue to delete\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(\n f\"/work_queues/{id}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n \"\"\"\n Create a block type in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_types/\",\n json=block_type.dict(\n 
json_compatible=True, exclude_unset=True, exclude={\"id\"}\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n\n async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n \"\"\"\n Create a block schema in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/\",\n json=block_schema.dict(\n json_compatible=True,\n exclude_unset=True,\n exclude={\"id\", \"block_type\", \"checksum\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n\n async def create_block_document(\n self,\n block_document: Union[BlockDocument, BlockDocumentCreate],\n include_secrets: bool = True,\n ) -> BlockDocument:\n \"\"\"\n Create a block document in the Prefect API. This data is used to configure a\n corresponding Block.\n\n Args:\n include_secrets (bool): whether to include secret values\n on the stored Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. Note Blocks may not work as expected if\n this is set to `False`.\n \"\"\"\n if isinstance(block_document, BlockDocument):\n block_document = BlockDocumentCreate.parse_obj(\n block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n\n try:\n response = await self._client.post(\n \"/block_documents/\",\n json=block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def update_block_document(\n self,\n block_document_id: UUID,\n block_document: BlockDocumentUpdate,\n ):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_documents/{block_document_id}\",\n json=block_document.dict(\n json_compatible=True,\n exclude_unset=True,\n include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_block_document(self, block_document_id: UUID):\n \"\"\"\n Delete a block document.\n \"\"\"\n try:\n await self._client.delete(f\"/block_documents/{block_document_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_block_type_by_slug(self, slug: str) -> BlockType:\n \"\"\"\n Read a block type by its slug.\n \"\"\"\n try:\n response = await self._client.get(f\"/block_types/slug/{slug}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n\n async def read_block_schema_by_checksum(\n self, checksum: str, version: 
Optional[str] = None\n ) -> BlockSchema:\n \"\"\"\n Look up a block schema checksum\n \"\"\"\n try:\n url = f\"/block_schemas/checksum/{checksum}\"\n if version is not None:\n url = f\"{url}?version={version}\"\n response = await self._client.get(url)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n\n async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_types/{block_type_id}\",\n json=block_type.dict(\n json_compatible=True,\n exclude_unset=True,\n include=BlockTypeUpdate.updatable_fields(),\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_block_type(self, block_type_id: UUID):\n \"\"\"\n Delete a block type.\n \"\"\"\n try:\n await self._client.delete(f\"/block_types/{block_type_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n elif (\n e.response.status_code == status.HTTP_403_FORBIDDEN\n and e.response.json()[\"detail\"]\n == \"protected block types cannot be deleted.\"\n ):\n raise prefect.exceptions.ProtectedBlockError(\n \"Protected block types cannot be deleted.\"\n ) from e\n else:\n raise\n\n async def read_block_types(self) -> List[BlockType]:\n \"\"\"\n Read all block types\n Raises:\n httpx.RequestError: if the block types were not found\n\n Returns:\n List of BlockTypes.\n \"\"\"\n response = await self._client.post(\"/block_types/filter\", json={})\n return pydantic.parse_obj_as(List[BlockType], response.json())\n\n async def read_block_schemas(self) -> List[BlockSchema]:\n \"\"\"\n Read all block schemas\n Raises:\n httpx.RequestError: if a valid block schema was not found\n\n Returns:\n A BlockSchema.\n \"\"\"\n response = await self._client.post(\"/block_schemas/filter\", json={})\n return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n async def get_most_recent_block_schema_for_block_type(\n self,\n block_type_id: UUID,\n ) -> Optional[BlockSchema]:\n \"\"\"\n Fetches the most recent block schema for a specified block type ID.\n\n Args:\n block_type_id: The ID of the block type.\n\n Raises:\n httpx.RequestError: If the request fails for any reason.\n\n Returns:\n The most recent block schema or None.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/filter\",\n json={\n \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n \"limit\": 1,\n },\n )\n except httpx.HTTPStatusError:\n raise\n return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n\n async def read_block_document(\n self,\n block_document_id: UUID,\n include_secrets: bool = True,\n ):\n \"\"\"\n Read the block document with the specified ID.\n\n Args:\n block_document_id: the block document id\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. 
Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n assert (\n block_document_id is not None\n ), \"Unexpected ID on block document. Was it persisted?\"\n try:\n response = await self._client.get(\n f\"/block_documents/{block_document_id}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def read_block_document_by_name(\n self,\n name: str,\n block_type_slug: str,\n include_secrets: bool = True,\n ) -> BlockDocument:\n \"\"\"\n Read the block document with the specified name that corresponds to a\n specific block type name.\n\n Args:\n name: The block document name.\n block_type_slug: The block type slug.\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n try:\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def read_block_documents(\n self,\n block_schema_type: Optional[str] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n ):\n \"\"\"\n Read block documents\n\n Args:\n block_schema_type: an optional block schema type\n offset: an offset\n limit: the number of blocks to return\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. 
Note that any business logic on the\n Block may not work if this is `False`.\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.post(\n \"/block_documents/filter\",\n json=dict(\n block_schema_type=block_schema_type,\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n async def read_block_documents_by_type(\n self,\n block_type_slug: str,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n ) -> List[BlockDocument]:\n \"\"\"Retrieve block documents by block type slug.\n\n Args:\n block_type_slug: The block type slug.\n offset: an offset\n limit: the number of blocks to return\n include_secrets: whether to include secret values\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents\",\n params=dict(\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n async def create_deployment(\n self,\n flow_id: UUID,\n name: str,\n version: str = None,\n schedule: SCHEDULE_TYPES = None,\n schedules: List[DeploymentScheduleCreate] = None,\n parameters: Dict[str, Any] = None,\n description: str = None,\n work_queue_name: str = None,\n work_pool_name: str = None,\n tags: List[str] = None,\n storage_document_id: UUID = None,\n manifest_path: str = None,\n path: str = None,\n entrypoint: str = None,\n infrastructure_document_id: UUID = None,\n infra_overrides: Dict[str, Any] = None,\n parameter_openapi_schema: dict = None,\n is_schedule_active: Optional[bool] = None,\n paused: Optional[bool] = None,\n pull_steps: Optional[List[dict]] = None,\n enforce_parameter_schema: Optional[bool] = None,\n ) -> UUID:\n \"\"\"\n Create a deployment.\n\n Args:\n flow_id: the flow ID to create a deployment for\n name: the name of the deployment\n version: an optional version string for the deployment\n schedule: an optional schedule to apply to the deployment\n tags: an optional list of tags to apply to the deployment\n storage_document_id: an reference to the storage block document\n used for the deployed flow\n infrastructure_document_id: an reference to the infrastructure block document\n to use for this deployment\n\n Raises:\n httpx.RequestError: if the deployment was not created for any reason\n\n Returns:\n the ID of the deployment in the backend\n \"\"\"\n\n deployment_create = DeploymentCreate(\n flow_id=flow_id,\n name=name,\n version=version,\n parameters=dict(parameters or {}),\n tags=list(tags or []),\n work_queue_name=work_queue_name,\n description=description,\n storage_document_id=storage_document_id,\n path=path,\n entrypoint=entrypoint,\n manifest_path=manifest_path, # for backwards compat\n infrastructure_document_id=infrastructure_document_id,\n infra_overrides=infra_overrides or {},\n parameter_openapi_schema=parameter_openapi_schema,\n is_schedule_active=is_schedule_active,\n paused=paused,\n schedule=schedule,\n schedules=schedules or [],\n pull_steps=pull_steps,\n enforce_parameter_schema=enforce_parameter_schema,\n )\n\n if work_pool_name is not None:\n deployment_create.work_pool_name = work_pool_name\n\n # Exclude newer fields that are not set to avoid compatibility issues\n exclude = {\n field\n for field in [\"work_pool_name\", \"work_queue_name\"]\n if field not in deployment_create.__fields_set__\n }\n\n if 
deployment_create.is_schedule_active is None:\n exclude.add(\"is_schedule_active\")\n\n if deployment_create.paused is None:\n exclude.add(\"paused\")\n\n if deployment_create.pull_steps is None:\n exclude.add(\"pull_steps\")\n\n if deployment_create.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n json = deployment_create.dict(json_compatible=True, exclude=exclude)\n response = await self._client.post(\n \"/deployments/\",\n json=json,\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n\n async def update_schedule(self, deployment_id: UUID, active: bool = True):\n path = \"set_schedule_active\" if active else \"set_schedule_inactive\"\n await self._client.post(\n f\"/deployments/{deployment_id}/{path}\",\n )\n\n async def set_deployment_paused_state(self, deployment_id: UUID, paused: bool):\n await self._client.patch(\n f\"/deployments/{deployment_id}\", json={\"paused\": paused}\n )\n\n async def update_deployment(\n self,\n deployment: Deployment,\n schedule: SCHEDULE_TYPES = None,\n is_schedule_active: bool = None,\n ):\n deployment_update = DeploymentUpdate(\n version=deployment.version,\n schedule=schedule if schedule is not None else deployment.schedule,\n is_schedule_active=(\n is_schedule_active\n if is_schedule_active is not None\n else deployment.is_schedule_active\n ),\n description=deployment.description,\n work_queue_name=deployment.work_queue_name,\n tags=deployment.tags,\n manifest_path=deployment.manifest_path,\n path=deployment.path,\n entrypoint=deployment.entrypoint,\n parameters=deployment.parameters,\n storage_document_id=deployment.storage_document_id,\n infrastructure_document_id=deployment.infrastructure_document_id,\n infra_overrides=deployment.infra_overrides,\n enforce_parameter_schema=deployment.enforce_parameter_schema,\n )\n\n if getattr(deployment, \"work_pool_name\", None) is not None:\n deployment_update.work_pool_name = deployment.work_pool_name\n\n exclude = set()\n if deployment.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n await self._client.patch(\n f\"/deployments/{deployment.id}\",\n json=deployment_update.dict(json_compatible=True, exclude=exclude),\n )\n\n async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n \"\"\"\n Create a deployment from a prepared `DeploymentCreate` schema.\n \"\"\"\n # TODO: We are likely to remove this method once we have considered the\n # packaging interface for deployments further.\n response = await self._client.post(\n \"/deployments/\", json=schema.dict(json_compatible=True)\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n\n async def read_deployment(\n self,\n deployment_id: UUID,\n ) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by id.\n\n Args:\n deployment_id: the deployment ID of interest\n\n Returns:\n a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return DeploymentResponse.parse_obj(response.json())\n\n async def read_deployment_by_name(\n 
self,\n name: str,\n ) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by name.\n\n Args:\n name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n a Deployment model representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return DeploymentResponse.parse_obj(response.json())\n\n async def read_deployments(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n limit: int = None,\n sort: DeploymentSort = None,\n offset: int = 0,\n ) -> List[DeploymentResponse]:\n \"\"\"\n Query the Prefect API for deployments. Only deployments matching all\n the provided criteria will be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n limit: a limit for the deployment query\n offset: an offset for the deployment query\n\n Returns:\n a list of Deployment model representations\n of the deployments\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/deployments/filter\", json=body)\n return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n async def delete_deployment(\n self,\n deployment_id: UUID,\n ):\n \"\"\"\n Delete deployment by id.\n\n Args:\n deployment_id: The deployment id of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_deployment_schedules(\n self,\n deployment_id: UUID,\n schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n ) -> List[DeploymentSchedule]:\n \"\"\"\n Create deployment schedules.\n\n Args:\n deployment_id: the deployment ID\n schedules: a list of tuples containing the schedule to create\n and whether or not it should be active.\n\n Raises:\n httpx.RequestError: if the schedules were not created for any reason\n\n Returns:\n the list of schedules created in the backend\n \"\"\"\n deployment_schedule_create = 
[\n DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n for schedule in schedules\n ]\n\n json = [\n deployment_schedule_create.dict(json_compatible=True)\n for deployment_schedule_create in deployment_schedule_create\n ]\n response = await self._client.post(\n f\"/deployments/{deployment_id}/schedules\", json=json\n )\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n async def read_deployment_schedules(\n self,\n deployment_id: UUID,\n ) -> List[DeploymentSchedule]:\n \"\"\"\n Query the Prefect API for a deployment's schedules.\n\n Args:\n deployment_id: the deployment ID\n\n Returns:\n a list of DeploymentSchedule model representations of the deployment schedules\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n async def update_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n active: Optional[bool] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n ):\n \"\"\"\n Update a deployment schedule by ID.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the deployment schedule ID of interest\n active: whether or not the schedule should be active\n schedule: the cron, rrule, or interval schedule this deployment schedule should use\n \"\"\"\n kwargs = {}\n if active is not None:\n kwargs[\"active\"] = active\n elif schedule is not None:\n kwargs[\"schedule\"] = schedule\n\n deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n try:\n await self._client.patch(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n ) -> None:\n \"\"\"\n Delete a deployment schedule.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the ID of the deployment schedule to delete.\n\n Raises:\n httpx.RequestError: if the schedules were not deleted for any reason\n \"\"\"\n try:\n await self._client.delete(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n \"\"\"\n Query the Prefect API for a flow run by id.\n\n Args:\n flow_run_id: the flow run ID of interest\n\n Returns:\n a Flow Run model representation of the flow run\n \"\"\"\n try:\n response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return FlowRun.parse_obj(response.json())\n\n async def resume_flow_run(\n self, flow_run_id: UUID, run_input: Optional[Dict] = None\n ) -> OrchestrationResult:\n \"\"\"\n Resumes a paused flow run.\n\n Args:\n flow_run_id: the flow run ID of interest\n run_input: the input to resume the flow run with\n\n Returns:\n an OrchestrationResult model representation of state orchestration 
output\n \"\"\"\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n )\n except httpx.HTTPStatusError:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_flow_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowRunSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[FlowRun]:\n \"\"\"\n Query the Prefect API for flow runs. Only flow runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flow runs\n limit: limit for the flow run query\n offset: offset for the flow run query\n\n Returns:\n a list of Flow Run model representations\n of the flow runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flow_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n async def set_flow_run_state(\n self,\n flow_run_id: UUID,\n state: \"prefect.states.State\",\n force: bool = False,\n ) -> OrchestrationResult:\n \"\"\"\n Set the state of a flow run.\n\n Args:\n flow_run_id: the id of the flow run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.flow_run_id = flow_run_id\n state_create.state_details.transition_id = uuid4()\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_flow_run_states(\n self, flow_run_id: UUID\n ) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a flow run\n\n Args:\n flow_run_id: the id of the flow run\n\n Returns:\n a list of State model representations\n of the flow run states\n \"\"\"\n response = await self._client.get(\n \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], 
response.json())\n\n async def set_task_run_name(self, task_run_id: UUID, name: str):\n task_run_data = TaskRunUpdate(name=name)\n return await self._client.patch(\n f\"/task_runs/{task_run_id}\",\n json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n\n async def create_task_run(\n self,\n task: \"TaskObject\",\n flow_run_id: Optional[UUID],\n dynamic_key: str,\n name: str = None,\n extra_tags: Iterable[str] = None,\n state: prefect.states.State = None,\n task_inputs: Dict[\n str,\n List[\n Union[\n TaskRunResult,\n Parameter,\n Constant,\n ]\n ],\n ] = None,\n ) -> TaskRun:\n \"\"\"\n Create a task run\n\n Args:\n task: The Task to run\n flow_run_id: The flow run id with which to associate the task run\n dynamic_key: A key unique to this particular run of a Task within the flow\n name: An optional name for the task run\n extra_tags: an optional list of extra tags to apply to the task run in\n addition to `task.tags`\n state: The initial state for the run. If not provided, defaults to\n `Pending` for now. Should always be a `Scheduled` type.\n task_inputs: the set of inputs passed to the task\n\n Returns:\n The created task run.\n \"\"\"\n tags = set(task.tags).union(extra_tags or [])\n\n if state is None:\n state = prefect.states.Pending()\n\n task_run_data = TaskRunCreate(\n name=name,\n flow_run_id=flow_run_id,\n task_key=task.task_key,\n dynamic_key=dynamic_key,\n tags=list(tags),\n task_version=task.version,\n empirical_policy=TaskRunPolicy(\n retries=task.retries,\n retry_delay=task.retry_delay_seconds,\n retry_jitter_factor=task.retry_jitter_factor,\n ),\n state=state.to_state_create(),\n task_inputs=task_inputs or {},\n )\n\n response = await self._client.post(\n \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n )\n return TaskRun.parse_obj(response.json())\n\n async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n \"\"\"\n Query the Prefect API for a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n\n Returns:\n a Task Run model representation of the task run\n \"\"\"\n response = await self._client.get(f\"/task_runs/{task_run_id}\")\n return TaskRun.parse_obj(response.json())\n\n async def read_task_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n sort: TaskRunSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[TaskRun]:\n \"\"\"\n Query the Prefect API for task runs. 
Only task runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n sort: sort criteria for the task runs\n limit: a limit for the task run query\n offset: an offset for the task run query\n\n Returns:\n a list of Task Run model representations\n of the task runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/task_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n async def delete_task_run(self, task_run_id: UUID) -> None:\n \"\"\"\n Delete a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/task_runs/{task_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def set_task_run_state(\n self,\n task_run_id: UUID,\n state: prefect.states.State,\n force: bool = False,\n ) -> OrchestrationResult:\n \"\"\"\n Set the state of a task run.\n\n Args:\n task_run_id: the id of the task run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.task_run_id = task_run_id\n response = await self._client.post(\n f\"/task_runs/{task_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_task_run_states(\n self, task_run_id: UUID\n ) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a task run\n\n Args:\n task_run_id: the id of the task run\n\n Returns:\n a list of State model representations of the task run states\n \"\"\"\n response = await self._client.get(\n \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n \"\"\"\n Create logs for a flow or task run\n\n Args:\n logs: An iterable of `LogCreate` objects or already json-compatible dicts\n \"\"\"\n serialized_logs = [\n log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n for log in logs\n ]\n await self._client.post(\"/logs/\", json=serialized_logs)\n\n async def create_flow_run_notification_policy(\n self,\n block_document_id: UUID,\n is_active: bool = True,\n tags: List[str] = None,\n state_names: List[str] = None,\n message_template: Optional[str] = None,\n ) -> UUID:\n \"\"\"\n Create a notification policy for flow runs\n\n Args:\n block_document_id: The block document UUID\n 
is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n \"\"\"\n if tags is None:\n tags = []\n if state_names is None:\n state_names = []\n\n policy = FlowRunNotificationPolicyCreate(\n block_document_id=block_document_id,\n is_active=is_active,\n tags=tags,\n state_names=state_names,\n message_template=message_template,\n )\n response = await self._client.post(\n \"/flow_run_notification_policies/\",\n json=policy.dict(json_compatible=True),\n )\n\n policy_id = response.json().get(\"id\")\n if not policy_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(policy_id)\n\n async def delete_flow_run_notification_policy(\n self,\n id: UUID,\n ) -> None:\n \"\"\"\n Delete a flow run notification policy by id.\n\n Args:\n id: UUID of the flow run notification policy to delete.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def update_flow_run_notification_policy(\n self,\n id: UUID,\n block_document_id: Optional[UUID] = None,\n is_active: Optional[bool] = None,\n tags: Optional[List[str]] = None,\n state_names: Optional[List[str]] = None,\n message_template: Optional[str] = None,\n ) -> None:\n \"\"\"\n Update a notification policy for flow runs\n\n Args:\n id: UUID of the notification policy\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n params = {}\n if block_document_id is not None:\n params[\"block_document_id\"] = block_document_id\n if is_active is not None:\n params[\"is_active\"] = is_active\n if tags is not None:\n params[\"tags\"] = tags\n if state_names is not None:\n params[\"state_names\"] = state_names\n if message_template is not None:\n params[\"message_template\"] = message_template\n\n policy = FlowRunNotificationPolicyUpdate(**params)\n\n try:\n await self._client.patch(\n f\"/flow_run_notification_policies/{id}\",\n json=policy.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_flow_run_notification_policies(\n self,\n flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n limit: Optional[int] = None,\n offset: int = 0,\n ) -> List[FlowRunNotificationPolicy]:\n \"\"\"\n Query the Prefect API for flow run notification policies. 
Only policies matching all criteria will\n be returned.\n\n Args:\n flow_run_notification_policy_filter: filter criteria for notification policies\n limit: a limit for the notification policies query\n offset: an offset for the notification policies query\n\n Returns:\n a list of FlowRunNotificationPolicy model representations\n of the notification policies\n \"\"\"\n body = {\n \"flow_run_notification_policy_filter\": (\n flow_run_notification_policy_filter.dict(json_compatible=True)\n if flow_run_notification_policy_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\n \"/flow_run_notification_policies/filter\", json=body\n )\n return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n async def read_logs(\n self,\n log_filter: LogFilter = None,\n limit: int = None,\n offset: int = None,\n sort: LogSort = LogSort.TIMESTAMP_ASC,\n ) -> List[Log]:\n \"\"\"\n Read flow and task run logs.\n \"\"\"\n body = {\n \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/logs/filter\", json=body)\n return pydantic.parse_obj_as(List[Log], response.json())\n\n async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n \"\"\"\n Recursively decode possibly nested data documents.\n\n \"server\" encoded documents will be retrieved from the server.\n\n Args:\n datadoc: The data document to resolve\n\n Returns:\n a decoded object, the innermost data\n \"\"\"\n if not isinstance(datadoc, DataDocument):\n raise TypeError(\n f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n )\n\n async def resolve_inner(data):\n if isinstance(data, bytes):\n try:\n data = DataDocument.parse_raw(data)\n except pydantic.ValidationError:\n return data\n\n if isinstance(data, DataDocument):\n return await resolve_inner(data.decode())\n\n return data\n\n return await resolve_inner(datadoc)\n\n async def send_worker_heartbeat(\n self,\n work_pool_name: str,\n worker_name: str,\n heartbeat_interval_seconds: Optional[float] = None,\n ):\n \"\"\"\n Sends a worker heartbeat for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool to heartbeat against.\n worker_name: The name of the worker sending the heartbeat.\n \"\"\"\n await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n json={\n \"name\": worker_name,\n \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n },\n )\n\n async def read_workers_for_work_pool(\n self,\n work_pool_name: str,\n worker_filter: Optional[WorkerFilter] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n ) -> List[Worker]:\n \"\"\"\n Reads workers for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool for which to get\n member workers.\n worker_filter: Criteria by which to filter workers.\n limit: Limit for the worker query.\n offset: Limit for the worker query.\n \"\"\"\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/filter\",\n json={\n \"worker_filter\": (\n worker_filter.dict(json_compatible=True, exclude_unset=True)\n if worker_filter\n else None\n ),\n \"offset\": offset,\n \"limit\": limit,\n },\n )\n\n return pydantic.parse_obj_as(List[Worker], response.json())\n\n async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n \"\"\"\n Reads information for a given work pool\n\n Args:\n work_pool_name: The name of the work pool to for which 
to get\n information.\n\n Returns:\n Information about the requested work pool.\n \"\"\"\n try:\n response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n return pydantic.parse_obj_as(WorkPool, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_work_pools(\n self,\n limit: Optional[int] = None,\n offset: int = 0,\n work_pool_filter: Optional[WorkPoolFilter] = None,\n ) -> List[WorkPool]:\n \"\"\"\n Reads work pools.\n\n Args:\n limit: Limit for the work pool query.\n offset: Offset for the work pool query.\n work_pool_filter: Criteria by which to filter work pools.\n\n Returns:\n A list of work pools.\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n }\n response = await self._client.post(\"/work_pools/filter\", json=body)\n return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n async def create_work_pool(\n self,\n work_pool: WorkPoolCreate,\n ) -> WorkPool:\n \"\"\"\n Creates a work pool with the provided configuration.\n\n Args:\n work_pool: Desired configuration for the new work pool.\n\n Returns:\n Information about the newly created work pool.\n \"\"\"\n try:\n response = await self._client.post(\n \"/work_pools/\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n\n return pydantic.parse_obj_as(WorkPool, response.json())\n\n async def update_work_pool(\n self,\n work_pool_name: str,\n work_pool: WorkPoolUpdate,\n ):\n \"\"\"\n Updates a work pool.\n\n Args:\n work_pool_name: Name of the work pool to update.\n work_pool: Fields to update in the work pool.\n \"\"\"\n try:\n await self._client.patch(\n f\"/work_pools/{work_pool_name}\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_work_pool(\n self,\n work_pool_name: str,\n ):\n \"\"\"\n Deletes a work pool.\n\n Args:\n work_pool_name: Name of the work pool to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/work_pools/{work_pool_name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_work_queues(\n self,\n work_pool_name: Optional[str] = None,\n work_queue_filter: Optional[WorkQueueFilter] = None,\n limit: Optional[int] = None,\n offset: Optional[int] = None,\n ) -> List[WorkQueue]:\n \"\"\"\n Retrieves queues for a work pool.\n\n Args:\n work_pool_name: Name of the work pool for which to get queues.\n work_queue_filter: Criteria by which to filter queues.\n limit: Limit for the queue query.\n offset: Limit for the queue query.\n\n Returns:\n List of queues for the specified work pool.\n \"\"\"\n json = {\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n\n if work_pool_name:\n try:\n response = await self._client.post(\n 
f\"/work_pools/{work_pool_name}/queues/filter\",\n json=json,\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n else:\n response = await self._client.post(\"/work_queues/filter\", json=json)\n\n return pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n async def get_scheduled_flow_runs_for_deployments(\n self,\n deployment_ids: List[UUID],\n scheduled_before: Optional[datetime.datetime] = None,\n limit: Optional[int] = None,\n ):\n body: Dict[str, Any] = dict(deployment_ids=[str(id) for id in deployment_ids])\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n if limit:\n body[\"limit\"] = limit\n\n response = await self._client.post(\n \"/deployments/get_scheduled_flow_runs\",\n json=body,\n )\n\n return pydantic.parse_obj_as(List[FlowRunResponse], response.json())\n\n async def get_scheduled_flow_runs_for_work_pool(\n self,\n work_pool_name: str,\n work_queue_names: Optional[List[str]] = None,\n scheduled_before: Optional[datetime.datetime] = None,\n ) -> List[WorkerFlowRunResponse]:\n \"\"\"\n Retrieves scheduled flow runs for the provided set of work pool queues.\n\n Args:\n work_pool_name: The name of the work pool that the work pool\n queues are associated with.\n work_queue_names: The names of the work pool queues from which\n to get scheduled flow runs.\n scheduled_before: Datetime used to filter returned flow runs. Flow runs\n scheduled for after the given datetime string will not be returned.\n\n Returns:\n A list of worker flow run responses containing information about the\n retrieved flow runs.\n \"\"\"\n body: Dict[str, Any] = {}\n if work_queue_names is not None:\n body[\"work_queue_names\"] = list(work_queue_names)\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n json=body,\n )\n return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n async def create_artifact(\n self,\n artifact: ArtifactCreate,\n ) -> Artifact:\n \"\"\"\n Creates an artifact with the provided configuration.\n\n Args:\n artifact: Desired configuration for the new artifact.\n Returns:\n Information about the newly created artifact.\n \"\"\"\n\n response = await self._client.post(\n \"/artifacts/\",\n json=artifact.dict(json_compatible=True, exclude_unset=True),\n )\n\n return pydantic.parse_obj_as(Artifact, response.json())\n\n async def read_artifacts(\n self,\n *,\n artifact_filter: ArtifactFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[Artifact]:\n \"\"\"\n Query the Prefect API for artifacts. 
Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/filter\", json=body)\n return pydantic.parse_obj_as(List[Artifact], response.json())\n\n async def read_latest_artifacts(\n self,\n *,\n artifact_filter: ArtifactCollectionFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactCollectionSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[ArtifactCollection]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n async def delete_artifact(self, artifact_id: UUID) -> None:\n \"\"\"\n Deletes an artifact with the provided id.\n\n Args:\n artifact_id: The id of the artifact to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/artifacts/{artifact_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n \"\"\"Reads a variable by name. 
Returns None if no variable is found.\"\"\"\n try:\n response = await self._client.get(f\"/variables/name/{name}\")\n return pydantic.parse_obj_as(Variable, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n return None\n else:\n raise\n\n async def delete_variable_by_name(self, name: str):\n \"\"\"Deletes a variable by name.\"\"\"\n try:\n await self._client.delete(f\"/variables/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_variables(self, limit: int = None) -> List[Variable]:\n \"\"\"Reads all variables.\"\"\"\n response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n return pydantic.parse_obj_as(List[Variable], response.json())\n\n async def read_worker_metadata(self) -> Dict[str, Any]:\n \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n response.raise_for_status()\n return response.json()\n\n async def create_automation(self, automation: Automation) -> UUID:\n \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.post(\n \"/automations/\",\n json=automation.dict(json_compatible=True),\n )\n\n return UUID(response.json()[\"id\"])\n\n async def read_resource_related_automations(\n self, resource_id: str\n ) -> List[ExistingAutomation]:\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n response.raise_for_status()\n return pydantic.parse_obj_as(List[ExistingAutomation], response.json())\n\n async def delete_resource_owned_automations(self, resource_id: str):\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n async def increment_concurrency_slots(\n self, names: List[str], slots: int, mode: str\n ) -> httpx.Response:\n return await self._client.post(\n \"/v2/concurrency_limits/increment\",\n json={\"names\": names, \"slots\": slots, \"mode\": mode},\n )\n\n async def release_concurrency_slots(\n self, names: List[str], slots: int, occupancy_seconds: float\n ) -> httpx.Response:\n return await self._client.post(\n \"/v2/concurrency_limits/decrement\",\n json={\n \"names\": names,\n \"slots\": slots,\n \"occupancy_seconds\": occupancy_seconds,\n },\n )\n\n async def create_global_concurrency_limit(\n self, concurrency_limit: GlobalConcurrencyLimitCreate\n ) -> UUID:\n response = await self._client.post(\n \"/v2/concurrency_limits/\",\n json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n )\n return UUID(response.json()[\"id\"])\n\n async def update_global_concurrency_limit(\n self, name: str, concurrency_limit: GlobalConcurrencyLimitUpdate\n ) -> httpx.Response:\n try:\n response = await self._client.patch(\n f\"/v2/concurrency_limits/{name}\",\n json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n )\n return response\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n 
raise\n\n async def delete_global_concurrency_limit_by_name(\n self, name: str\n ) -> httpx.Response:\n try:\n response = await self._client.delete(f\"/v2/concurrency_limits/{name}\")\n return response\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_global_concurrency_limit_by_name(\n self, name: str\n ) -> Dict[str, object]:\n try:\n response = await self._client.get(f\"/v2/concurrency_limits/{name}\")\n return response.json()\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_global_concurrency_limits(\n self, limit: int = 10, offset: int = 0\n ) -> List[Dict[str, object]]:\n response = await self._client.post(\n \"/v2/concurrency_limits/filter\",\n json={\n \"limit\": limit,\n \"offset\": offset,\n },\n )\n return response.json()\n\n async def create_flow_run_input(\n self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n ):\n \"\"\"\n Creates a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n value: The input value.\n sender: The sender of the input.\n \"\"\"\n\n # Initialize the input to ensure that the key is valid.\n FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input\",\n json={\"key\": key, \"value\": value, \"sender\": sender},\n )\n response.raise_for_status()\n\n async def filter_flow_run_input(\n self, flow_run_id: UUID, key_prefix: str, limit: int, exclude_keys: Set[str]\n ) -> List[FlowRunInput]:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input/filter\",\n json={\n \"prefix\": key_prefix,\n \"limit\": limit,\n \"exclude_keys\": list(exclude_keys),\n },\n )\n response.raise_for_status()\n return pydantic.parse_obj_as(List[FlowRunInput], response.json())\n\n async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n \"\"\"\n Reads a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n return response.content.decode()\n\n async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n \"\"\"\n Deletes a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n\n async def __aenter__(self):\n \"\"\"\n Start the client.\n\n If the client is already started, this will raise an exception.\n\n If the client is already closed, this will raise an exception. Use a new client\n instance instead.\n \"\"\"\n if self._closed:\n # httpx.AsyncClient does not allow reuse so we will not either.\n raise RuntimeError(\n \"The client cannot be started again after closing. 
\"\n \"Retrieve a new client with `get_client()` instead.\"\n )\n\n if self._started:\n # httpx.AsyncClient does not allow reentrancy so we will not either.\n raise RuntimeError(\"The client cannot be started more than once.\")\n\n self._loop = asyncio.get_running_loop()\n await self._exit_stack.__aenter__()\n\n # Enter a lifespan context if using an ephemeral application.\n # See https://github.com/encode/httpx/issues/350\n if self._ephemeral_app and self.manage_lifespan:\n self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n app_lifespan_context(self._ephemeral_app)\n )\n\n if self._ephemeral_app:\n self.logger.debug(\n \"Using ephemeral application with database at \"\n f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n )\n else:\n self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n # Enter the httpx client's context\n await self._exit_stack.enter_async_context(self._client)\n\n self._started = True\n\n return self\n\n async def __aexit__(self, *exc_info):\n \"\"\"\n Shutdown the client.\n \"\"\"\n self._closed = True\n return await self._exit_stack.__aexit__(*exc_info)\n\n def __enter__(self):\n raise RuntimeError(\n \"The `PrefectClient` must be entered with an async context. Use 'async \"\n \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n )\n\n def __exit__(self, *_):\n assert False, \"This should never be called but must be defined for __enter__\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL
property
","text":"Get the base URL for the API.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck
async
","text":"Attempts to connect to the API and returns the encountered exception if not successful.
If successful, returns None
.
prefect/client/orchestration.py
async def api_healthcheck(self) -> Optional[Exception]:\n \"\"\"\n Attempts to connect to the API and returns the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n try:\n await self._client.get(\"/health\")\n return None\n except Exception as exc:\n return exc\n
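For illustration, a minimal health-check sketch (assumes a reachable API configured for the get_client helper):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        exc = await client.api_healthcheck()\n        print(\"API healthy\" if exc is None else f\"API unreachable: {exc}\")\n\nasyncio.run(main())\n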
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello
async
","text":"Send a GET request to /hello for testing purposes.
Source code inprefect/client/orchestration.py
async def hello(self) -> httpx.Response:\n \"\"\"\n Send a GET request to /hello for testing purposes.\n \"\"\"\n return await self._client.get(\"/hello\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow
async
","text":"Create a flow in the Prefect API.
Parameters:
Name Type Description Default
flow
Flow
a Flow object
required
Raises:
Type Description
RequestError
if a flow was not created for any reason
Returns:
Type Description
UUID
the ID of the flow in the backend
Source code inprefect/client/orchestration.py
async def create_flow(self, flow: \"FlowObject\") -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow: a [Flow][prefect.flows.Flow] object\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n return await self.create_flow_from_name(flow.name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name
async
","text":"Create a flow in the Prefect API.
Parameters:
Name Type Description Default
flow_name
str
the name of the new flow
required
Raises:
Type Description
RequestError
if a flow was not created for any reason
Returns:
Type Description
UUID
the ID of the flow in the backend
Source code inprefect/client/orchestration.py
async def create_flow_from_name(self, flow_name: str) -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow_name: the name of the new flow\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n flow_data = FlowCreate(name=flow_name)\n response = await self._client.post(\n \"/flows/\", json=flow_data.dict(json_compatible=True)\n )\n\n flow_id = response.json().get(\"id\")\n if not flow_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n # Return the id of the created flow\n return UUID(flow_id)\n
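A short usage sketch (the flow name \"my-etl-flow\" is a placeholder):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        flow_id = await client.create_flow_from_name(\"my-etl-flow\")  # placeholder name\n        print(flow_id)\n\nasyncio.run(main())\n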
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow
async
","text":"Query the Prefect API for a flow by id.
Parameters:
Name Type Description Default
flow_id
UUID
the flow ID of interest
required
Returns:
Type Description
Flow
a Flow model representation of the flow
Source code inprefect/client/orchestration.py
async def read_flow(self, flow_id: UUID) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by id.\n\n Args:\n flow_id: the flow ID of interest\n\n Returns:\n a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n \"\"\"\n response = await self._client.get(f\"/flows/{flow_id}\")\n return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows
async
","text":"Query the Prefect API for flows. Only flows matching all criteria will be returned.
Parameters:
Name Type Description Default
flow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
sort
FlowSort
sort criteria for the flows
None
limit
int
limit for the flow query
None
offset
int
offset for the flow query
0
Returns:
Type Description
List[Flow]
a list of Flow model representations of the flows
Source code inprefect/client/orchestration.py
async def read_flows(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[Flow]:\n \"\"\"\n Query the Prefect API for flows. Only flows matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flows\n limit: limit for the flow query\n offset: offset for the flow query\n\n Returns:\n a list of Flow model representations of the flows\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flows/filter\", json=body)\n return pydantic.parse_obj_as(List[Flow], response.json())\n
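As a sketch, filtering flows by name with a FlowFilter (the flow name is a placeholder):
import asyncio\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowFilter, FlowFilterName\n\nasync def main():\n    async with get_client() as client:\n        flows = await client.read_flows(\n            flow_filter=FlowFilter(name=FlowFilterName(any_=[\"my-etl-flow\"])),  # placeholder\n            limit=5,\n        )\n        for flow in flows:\n            print(flow.id, flow.name)\n\nasyncio.run(main())\n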
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name
async
","text":"Query the Prefect API for a flow by name.
Parameters:
Name Type Description Default
flow_name
str
the name of a flow
required
Returns:
Type Description
Flow
a fully hydrated Flow model
Source code inprefect/client/orchestration.py
async def read_flow_by_name(\n self,\n flow_name: str,\n) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by name.\n\n Args:\n flow_name: the name of a flow\n\n Returns:\n a fully hydrated Flow model\n \"\"\"\n response = await self._client.get(f\"/flows/name/{flow_name}\")\n return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment
async
","text":"Create a flow run for a deployment.
Parameters:
Name Type Description Default
deployment_id
UUID
The deployment ID to create the flow run from
required
parameters
Dict[str, Any]
Parameter overrides for this flow run. Merged with the deployment defaults
None
context
dict
Optional run context data
None
state
State
The initial state for the run. If not provided, defaults to Scheduled
for now. Should always be a Scheduled
type.
None
name
str
An optional name for the flow run. If not provided, the server will generate a name.
None
tags
Iterable[str]
An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.
None
idempotency_key
str
Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.
None
parent_task_run_id
UUID
if a subflow run is being created, the placeholder task run identifier in the parent flow
None
work_queue_name
str
An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.
None
job_variables
Optional[Dict[str, Any]]
Optional variables that will be supplied to the flow run job.
None
Raises:
Type Description
RequestError
if the Prefect API does not successfully create a run for any reason
Returns:
Type Description
FlowRun
The flow run model
Source code inprefect/client/orchestration.py
async def create_flow_run_from_deployment(\n self,\n deployment_id: UUID,\n *,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n state: prefect.states.State = None,\n name: str = None,\n tags: Iterable[str] = None,\n idempotency_key: str = None,\n parent_task_run_id: UUID = None,\n work_queue_name: str = None,\n job_variables: Optional[Dict[str, Any]] = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment.\n\n Args:\n deployment_id: The deployment ID to create the flow run from\n parameters: Parameter overrides for this flow run. Merged with the\n deployment defaults\n context: Optional run context data\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n name: An optional name for the flow run. If not provided, the server will\n generate a name.\n tags: An optional iterable of tags to apply to the flow run; these tags\n are merged with the deployment's tags.\n idempotency_key: Optional idempotency key for creation of the flow run.\n If the key matches the key of an existing flow run, the existing run will\n be returned instead of creating a new one.\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n work_queue_name: An optional work queue name to add this run to. If not provided,\n will default to the deployment's set work queue. If one is provided that does not\n exist, a new work queue will be created within the deployment's work pool.\n job_variables: Optional variables that will be supplied to the flow run job.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n parameters = parameters or {}\n context = context or {}\n state = state or prefect.states.Scheduled()\n tags = tags or []\n\n flow_run_create = DeploymentFlowRunCreate(\n parameters=parameters,\n context=context,\n state=state.to_state_create(),\n tags=tags,\n name=name,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n job_variables=job_variables,\n )\n\n # done separately to avoid including this field in payloads sent to older API versions\n if work_queue_name:\n flow_run_create.work_queue_name = work_queue_name\n\n response = await self._client.post(\n f\"/deployments/{deployment_id}/create_flow_run\",\n json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n )\n return FlowRun.parse_obj(response.json())\n
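An illustrative sketch (the deployment ID is a placeholder; parameters and tags are merged with the deployment's defaults as described above):
import asyncio\nfrom uuid import UUID\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        flow_run = await client.create_flow_run_from_deployment(\n            deployment_id=UUID(\"00000000-0000-0000-0000-000000000000\"),  # placeholder ID\n            parameters={\"batch_size\": 100},  # placeholder parameter override\n            tags=[\"ad-hoc\"],\n        )\n        print(flow_run.id, flow_run.state.type)\n\nasyncio.run(main())\n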
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run
async
","text":"Create a flow run for a flow.
Parameters:
Name Type Description Default
flow
Flow
The flow model to create the flow run for
required
name
str
An optional name for the flow run
None
parameters
Dict[str, Any]
Parameter overrides for this flow run.
None
context
dict
Optional run context data
None
tags
Iterable[str]
a list of tags to apply to this flow run
None
parent_task_run_id
UUID
if a subflow run is being created, the placeholder task run identifier in the parent flow
None
state
State
The initial state for the run. If not provided, defaults to Pending.
None
Raises:
Type Description
RequestError
if the Prefect API does not successfully create a run for any reason
Returns:
Type Description
FlowRun
The flow run model
Source code inprefect/client/orchestration.py
async def create_flow_run(\n self,\n flow: \"FlowObject\",\n name: str = None,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n tags: Iterable[str] = None,\n parent_task_run_id: UUID = None,\n state: \"prefect.states.State\" = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a flow.\n\n Args:\n flow: The flow model to create the flow run for\n name: An optional name for the flow run\n parameters: Parameter overrides for this flow run.\n context: Optional run context data\n tags: a list of tags to apply to this flow run\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n parameters = parameters or {}\n context = context or {}\n\n if state is None:\n state = prefect.states.Pending()\n\n # Retrieve the flow id\n flow_id = await self.create_flow(flow)\n\n flow_run_create = FlowRunCreate(\n flow_id=flow_id,\n flow_version=flow.version,\n name=name,\n parameters=parameters,\n context=context,\n tags=list(tags or []),\n parent_task_run_id=parent_task_run_id,\n state=state.to_state_create(),\n empirical_policy=FlowRunPolicy(\n retries=flow.retries,\n retry_delay=flow.retry_delay_seconds,\n ),\n )\n\n flow_run_create_json = flow_run_create.dict(json_compatible=True)\n response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n flow_run = FlowRun.parse_obj(response.json())\n\n # Restore the parameters to the local objects to retain expectations about\n # Python objects\n flow_run.parameters = parameters\n\n return flow_run\n
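A minimal sketch creating a run for a local flow object (note the run starts in a Pending state unless a state is supplied):
import asyncio\nfrom prefect import flow, get_client\n\n@flow\ndef my_flow(x: int = 1):\n    return x\n\nasync def main():\n    async with get_client() as client:\n        flow_run = await client.create_flow_run(my_flow, parameters={\"x\": 2})\n        print(flow_run.id, flow_run.state.type)\n\nasyncio.run(main())\n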
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run
async
","text":"Update a flow run's details.
Parameters:
Name Type Description Default
flow_run_id
UUID
The identifier for the flow run to update.
required
flow_version
Optional[str]
A new version string for the flow run.
None
parameters
Optional[dict]
A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.
None
name
Optional[str]
A new name for the flow run.
None
empirical_policy
Optional[FlowRunPolicy]
A new flow run orchestration policy. This will not be merged with any existing policy.
None
tags
Optional[Iterable[str]]
An iterable of new tags for the flow run. These will not be merged with any existing tags.
None
infrastructure_pid
Optional[str]
The id of the flow run as returned by an infrastructure block.
None
job_variables
Optional[dict]
Optional variables that will be supplied to the flow run job.
None
Returns:
Type Description
Response
an httpx.Response
object from the PATCH request
prefect/client/orchestration.py
async def update_flow_run(\n self,\n flow_run_id: UUID,\n flow_version: Optional[str] = None,\n parameters: Optional[dict] = None,\n name: Optional[str] = None,\n tags: Optional[Iterable[str]] = None,\n empirical_policy: Optional[FlowRunPolicy] = None,\n infrastructure_pid: Optional[str] = None,\n job_variables: Optional[dict] = None,\n) -> httpx.Response:\n \"\"\"\n Update a flow run's details.\n\n Args:\n flow_run_id: The identifier for the flow run to update.\n flow_version: A new version string for the flow run.\n parameters: A dictionary of parameter values for the flow run. This will not\n be merged with any existing parameters.\n name: A new name for the flow run.\n empirical_policy: A new flow run orchestration policy. This will not be\n merged with any existing policy.\n tags: An iterable of new tags for the flow run. These will not be merged with\n any existing tags.\n infrastructure_pid: The id of flow run as returned by an\n infrastructure block.\n\n Returns:\n an `httpx.Response` object from the PATCH request\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n params = {}\n if flow_version is not None:\n params[\"flow_version\"] = flow_version\n if parameters is not None:\n params[\"parameters\"] = parameters\n if name is not None:\n params[\"name\"] = name\n if tags is not None:\n params[\"tags\"] = tags\n if empirical_policy is not None:\n params[\"empirical_policy\"] = empirical_policy\n if infrastructure_pid:\n params[\"infrastructure_pid\"] = infrastructure_pid\n if job_variables is not None:\n params[\"job_variables\"] = job_variables\n\n flow_run_data = FlowRunUpdate(**params)\n\n return await self._client.patch(\n f\"/flow_runs/{flow_run_id}\",\n json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n
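A sketch renaming and re-tagging an existing run (the run is created first so the example is self-contained):
import asyncio\nfrom prefect import flow, get_client\n\n@flow\ndef my_flow():\n    pass\n\nasync def main():\n    async with get_client() as client:\n        run = await client.create_flow_run(my_flow)\n        response = await client.update_flow_run(\n            flow_run_id=run.id,\n            name=\"renamed-run\",\n            tags=[\"reviewed\"],  # replaces, not merges, any existing tags\n        )\n        print(response.status_code)\n\nasyncio.run(main())\n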
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by UUID.
Parameters:
Name Type Description Default
flow_run_id
UUID
The flow run UUID of interest.
required Source code inprefect/client/orchestration.py
async def delete_flow_run(\n self,\n flow_run_id: UUID,\n) -> None:\n \"\"\"\n Delete a flow run by UUID.\n\n Args:\n flow_run_id: The flow run UUID of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit
async
","text":"Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
concurrency_limit
int
the maximum number of concurrent task runs for a given tag
required
Raises:
Type Description
RequestError
if the concurrency limit was not created for any reason
Returns:
Type Description
UUID
the ID of the concurrency limit in the backend
Source code inprefect/client/orchestration.py
async def create_concurrency_limit(\n self,\n tag: str,\n concurrency_limit: int,\n) -> UUID:\n \"\"\"\n Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n running tasks.\n\n Args:\n tag: a tag the concurrency limit is applied to\n concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n Raises:\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the ID of the concurrency limit in the backend\n \"\"\"\n\n concurrency_limit_create = ConcurrencyLimitCreate(\n tag=tag,\n concurrency_limit=concurrency_limit,\n )\n response = await self._client.post(\n \"/concurrency_limits/\",\n json=concurrency_limit_create.dict(json_compatible=True),\n )\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(concurrency_limit_id)\n
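For example, capping concurrent task runs for a tag (the tag name and limit are placeholders):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        limit_id = await client.create_concurrency_limit(\n            tag=\"database\",  # applies to task runs tagged \"database\"\n            concurrency_limit=10,\n        )\n        print(limit_id)\n\nasyncio.run(main())\n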
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag
async
","text":"Read the concurrency limit set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
if the concurrency limit was not created for any reason
Returns:
Type Descriptionthe concurrency limit set on a specific tag
Source code inprefect/client/orchestration.py
async def read_concurrency_limit_by_tag(\n self,\n tag: str,\n):\n \"\"\"\n Read the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the concurrency limit set on a specific tag\n \"\"\"\n try:\n response = await self._client.get(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n return concurrency_limit\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits
async
","text":"Lists concurrency limits set on task run tags.
Parameters:
Name Type Description Default
limit
int
the maximum number of concurrency limits returned
required
offset
int
the concurrency limit query offset
required
Returns:
Type Descriptiona list of concurrency limits
Source code inprefect/client/orchestration.py
async def read_concurrency_limits(\n self,\n limit: int,\n offset: int,\n):\n \"\"\"\n Lists concurrency limits set on task run tags.\n\n Args:\n limit: the maximum number of concurrency limits returned\n offset: the concurrency limit query offset\n\n Returns:\n a list of concurrency limits\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag
async
","text":"Resets the concurrency limit slots set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
slot_override
Optional[List[Union[UUID, str]]]
a list of task run IDs that are currently using a concurrency slot. Please check that any task run IDs included in slot_override are currently running; otherwise those concurrency slots will never be released.
None
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Source code inprefect/client/orchestration.py
async def reset_concurrency_limit_by_tag(\n self,\n tag: str,\n slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n \"\"\"\n Resets the concurrency limit slots set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n slot_override: a list of task run IDs that are currently using a\n concurrency slot, please check that any task run IDs included in\n `slot_override` are currently running, otherwise those concurrency\n slots will never be released.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n if slot_override is not None:\n slot_override = [str(slot) for slot in slot_override]\n\n try:\n await self._client.post(\n f\"/concurrency_limits/tag/{tag}/reset\",\n json=dict(slot_override=slot_override),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
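A sketch clearing all occupied slots for a tag; pass slot_override only for task runs that are genuinely still running:
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        # Releases every slot held under the \"database\" tag (placeholder tag name)\n        await client.reset_concurrency_limit_by_tag(tag=\"database\")\n\nasyncio.run(main())\n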
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag
async
","text":"Delete the concurrency limit set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Source code inprefect/client/orchestration.py
async def delete_concurrency_limit_by_tag(\n self,\n tag: str,\n):\n \"\"\"\n Delete the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n try:\n await self._client.delete(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue
async
","text":"Create a work queue.
Parameters:
Name Type Description Default
name
str
a unique name for the work queue
required
tags
Optional[List[str]]
DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.
None
description
Optional[str]
An optional description for the work queue.
None
is_paused
Optional[bool]
Whether or not the work queue is paused.
None
concurrency_limit
Optional[int]
An optional concurrency limit for the work queue.
None
priority
Optional[int]
The queue's priority. Lower values are higher priority (1 is the highest).
None
work_pool_name
Optional[str]
The name of the work pool to use for this queue.
None
Raises:
Type Description
ObjectAlreadyExists
If request returns 409
RequestError
If request fails
Returns:
Type Description
WorkQueue
The created work queue
Source code inprefect/client/orchestration.py
async def create_work_queue(\n self,\n name: str,\n tags: Optional[List[str]] = None,\n description: Optional[str] = None,\n is_paused: Optional[bool] = None,\n concurrency_limit: Optional[int] = None,\n priority: Optional[int] = None,\n work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n \"\"\"\n Create a work queue.\n\n Args:\n name: a unique name for the work queue\n tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n will be included in the queue. This option will be removed on 2023-02-23.\n description: An optional description for the work queue.\n is_paused: Whether or not the work queue is paused.\n concurrency_limit: An optional concurrency limit for the work queue.\n priority: The queue's priority. Lower values are higher priority (1 is the highest).\n work_pool_name: The name of the work pool to use for this queue.\n\n Raises:\n prefect.exceptions.ObjectAlreadyExists: If request returns 409\n httpx.RequestError: If request fails\n\n Returns:\n The created work queue\n \"\"\"\n if tags:\n warnings.warn(\n (\n \"The use of tags for creating work queue filters is deprecated.\"\n \" This option will be removed on 2023-02-23.\"\n ),\n DeprecationWarning,\n )\n filter = QueueFilter(tags=tags)\n else:\n filter = None\n create_model = WorkQueueCreate(name=name, filter=filter)\n if description is not None:\n create_model.description = description\n if is_paused is not None:\n create_model.is_paused = is_paused\n if concurrency_limit is not None:\n create_model.concurrency_limit = concurrency_limit\n if priority is not None:\n create_model.priority = priority\n\n data = create_model.dict(json_compatible=True)\n try:\n if work_pool_name is not None:\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/queues\", json=data\n )\n else:\n response = await self._client.post(\"/work_queues/\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n
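An illustrative sketch (the work pool name is a placeholder and must already exist):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        queue = await client.create_work_queue(\n            name=\"high-priority\",\n            work_pool_name=\"my-pool\",  # placeholder; omit to create a standalone queue\n            priority=1,  # 1 is the highest priority\n        )\n        print(queue.id)\n\nasyncio.run(main())\n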
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name
async
","text":"Read a work queue by name.
Parameters:
Name Type Description Default
name
str
a unique name for the work queue
required
work_pool_name
str
the name of the work pool the queue belongs to.
None
Raises:
Type Description
ObjectNotFound
if no work queue is found
HTTPStatusError
other status errors
Returns:
Name Type Description
WorkQueue
WorkQueue
a work queue API object
Source code inprefect/client/orchestration.py
async def read_work_queue_by_name(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n \"\"\"\n Read a work queue by name.\n\n Args:\n name (str): a unique name for the work queue\n work_pool_name (str, optional): the name of the work pool\n the queue belongs to.\n\n Raises:\n prefect.exceptions.ObjectNotFound: if no work queue is found\n httpx.HTTPStatusError: other status errors\n\n Returns:\n WorkQueue: a work queue API object\n \"\"\"\n try:\n if work_pool_name is not None:\n response = await self._client.get(\n f\"/work_pools/{work_pool_name}/queues/{name}\"\n )\n else:\n response = await self._client.get(f\"/work_queues/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue
async
","text":"Update properties of a work queue.
Parameters:
Name Type Description Default
id
UUID
the ID of the work queue to update
required
**kwargs
the fields to update
{}
Raises:
Type Description
ValueError
if no kwargs are provided
ObjectNotFound
if request returns 404
RequestError
if the request fails
Source code inprefect/client/orchestration.py
async def update_work_queue(self, id: UUID, **kwargs):\n \"\"\"\n Update properties of a work queue.\n\n Args:\n id: the ID of the work queue to update\n **kwargs: the fields to update\n\n Raises:\n ValueError: if no kwargs are provided\n prefect.exceptions.ObjectNotFound: if request returns 404\n httpx.RequestError: if the request fails\n\n \"\"\"\n if not kwargs:\n raise ValueError(\"No fields provided to update.\")\n\n data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n try:\n await self._client.patch(f\"/work_queues/{id}\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue
async
","text":"Read flow runs off a work queue.
Parameters:
Name Type Description Default
id
UUID
the id of the work queue to read from
required
limit
int
a limit on the number of runs to return
10
scheduled_before
datetime
a timestamp; only runs scheduled before this time will be returned. Defaults to now.
None
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Type Description
List[FlowRun]
a list of FlowRun objects read from the queue
Source code inprefect/client/orchestration.py
async def get_runs_in_work_queue(\n self,\n id: UUID,\n limit: int = 10,\n scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n \"\"\"\n Read flow runs off a work queue.\n\n Args:\n id: the id of the work queue to read from\n limit: a limit on the number of runs to return\n scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n Defaults to now.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n List[FlowRun]: a list of FlowRun objects read from the queue\n \"\"\"\n if scheduled_before is None:\n scheduled_before = pendulum.now(\"UTC\")\n\n try:\n response = await self._client.post(\n f\"/work_queues/{id}/get_runs\",\n json={\n \"limit\": limit,\n \"scheduled_before\": scheduled_before.isoformat(),\n },\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n
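A sketch that looks up a queue by name and reads scheduled runs off it (the queue name is a placeholder):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        queue = await client.read_work_queue_by_name(\"high-priority\")  # placeholder name\n        runs = await client.get_runs_in_work_queue(id=queue.id, limit=5)\n        for run in runs:\n            print(run.id, run.name)\n\nasyncio.run(main())\n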
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue
async
","text":"Read a work queue.
Parameters:
Name Type Description Default
id
UUID
the id of the work queue to load
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Name Type Description
WorkQueue
WorkQueue
an instantiated WorkQueue object
Source code inprefect/client/orchestration.py
async def read_work_queue(\n self,\n id: UUID,\n) -> WorkQueue:\n \"\"\"\n Read a work queue.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueue: an instantiated WorkQueue object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_status","title":"read_work_queue_status
async
","text":"Read a work queue status.
Parameters:
Name Type Description Default
id
UUID
the id of the work queue to load
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Name Type Description
WorkQueueStatusDetail
WorkQueueStatusDetail
an instantiated WorkQueueStatusDetail object
Source code inprefect/client/orchestration.py
async def read_work_queue_status(\n self,\n id: UUID,\n) -> WorkQueueStatusDetail:\n \"\"\"\n Read a work queue status.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueueStatus: an instantiated WorkQueueStatus object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}/status\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueueStatusDetail.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues
async
","text":"Query the Prefect API for work queues with names with a specific prefix.
Parameters:
Name Type Description Default
prefixes
List[str]
a list of strings used to match work queue name prefixes
required
work_pool_name
Optional[str]
an optional work pool name to scope the query to
None
Returns:
Type Description
List[WorkQueue]
a list of WorkQueue model representations of the work queues
Source code inprefect/client/orchestration.py
async def match_work_queues(\n self,\n prefixes: List[str],\n work_pool_name: Optional[str] = None,\n) -> List[WorkQueue]:\n \"\"\"\n Query the Prefect API for work queues with names with a specific prefix.\n\n Args:\n prefixes: a list of strings used to match work queue name prefixes\n work_pool_name: an optional work pool name to scope the query to\n\n Returns:\n a list of WorkQueue model representations\n of the work queues\n \"\"\"\n page_length = 100\n current_page = 0\n work_queues = []\n\n while True:\n new_queues = await self.read_work_queues(\n work_pool_name=work_pool_name,\n offset=current_page * page_length,\n limit=page_length,\n work_queue_filter=WorkQueueFilter(\n name=WorkQueueFilterName(startswith_=prefixes)\n ),\n )\n if not new_queues:\n break\n work_queues += new_queues\n current_page += 1\n\n return work_queues\n
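For example, collecting every queue whose name starts with a given prefix (the prefix and pool name are placeholders):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        queues = await client.match_work_queues([\"prod-\"], work_pool_name=\"my-pool\")\n        print([q.name for q in queues])\n\nasyncio.run(main())\n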
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id
async
","text":"Delete a work queue by its ID.
Parameters:
Name Type Description Default
id
UUID
the id of the work queue to delete
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If the request fails
Source code inprefect/client/orchestration.py
async def delete_work_queue_by_id(\n self,\n id: UUID,\n):\n \"\"\"\n Delete a work queue by its ID.\n\n Args:\n id: the id of the work queue to delete\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(\n f\"/work_queues/{id}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type
async
","text":"Create a block type in the Prefect API.
Source code inprefect/client/orchestration.py
async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n \"\"\"\n Create a block type in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_types/\",\n json=block_type.dict(\n json_compatible=True, exclude_unset=True, exclude={\"id\"}\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema
async
","text":"Create a block schema in the Prefect API.
Source code inprefect/client/orchestration.py
async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n \"\"\"\n Create a block schema in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/\",\n json=block_schema.dict(\n json_compatible=True,\n exclude_unset=True,\n exclude={\"id\", \"block_type\", \"checksum\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document
async
","text":"Create a block document in the Prefect API. This data is used to configure a corresponding Block.
Parameters:
Name Type Description Default
include_secrets
bool
whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. Note Blocks may not work as expected if this is set to False
.
True
Source code in prefect/client/orchestration.py
async def create_block_document(\n self,\n block_document: Union[BlockDocument, BlockDocumentCreate],\n include_secrets: bool = True,\n) -> BlockDocument:\n \"\"\"\n Create a block document in the Prefect API. This data is used to configure a\n corresponding Block.\n\n Args:\n include_secrets (bool): whether to include secret values\n on the stored Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. Note Blocks may not work as expected if\n this is set to `False`.\n \"\"\"\n if isinstance(block_document, BlockDocument):\n block_document = BlockDocumentCreate.parse_obj(\n block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n\n try:\n response = await self._client.post(\n \"/block_documents/\",\n json=block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document
async
","text":"Update a block document in the Prefect API.
Source code inprefect/client/orchestration.py
async def update_block_document(\n self,\n block_document_id: UUID,\n block_document: BlockDocumentUpdate,\n):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_documents/{block_document_id}\",\n json=block_document.dict(\n json_compatible=True,\n exclude_unset=True,\n include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document
async
","text":"Delete a block document.
Source code inprefect/client/orchestration.py
async def delete_block_document(self, block_document_id: UUID):\n \"\"\"\n Delete a block document.\n \"\"\"\n try:\n await self._client.delete(f\"/block_documents/{block_document_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug
async
","text":"Read a block type by its slug.
Source code inprefect/client/orchestration.py
async def read_block_type_by_slug(self, slug: str) -> BlockType:\n \"\"\"\n Read a block type by its slug.\n \"\"\"\n try:\n response = await self._client.get(f\"/block_types/slug/{slug}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum
async
","text":"Look up a block schema checksum
Source code inprefect/client/orchestration.py
async def read_block_schema_by_checksum(\n self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n \"\"\"\n Look up a block schema checksum\n \"\"\"\n try:\n url = f\"/block_schemas/checksum/{checksum}\"\n if version is not None:\n url = f\"{url}?version={version}\"\n response = await self._client.get(url)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type
async
","text":"Update a block document in the Prefect API.
Source code inprefect/client/orchestration.py
async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_types/{block_type_id}\",\n json=block_type.dict(\n json_compatible=True,\n exclude_unset=True,\n include=BlockTypeUpdate.updatable_fields(),\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type
async
","text":"Delete a block type.
Source code inprefect/client/orchestration.py
async def delete_block_type(self, block_type_id: UUID):\n \"\"\"\n Delete a block type.\n \"\"\"\n try:\n await self._client.delete(f\"/block_types/{block_type_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n elif (\n e.response.status_code == status.HTTP_403_FORBIDDEN\n and e.response.json()[\"detail\"]\n == \"protected block types cannot be deleted.\"\n ):\n raise prefect.exceptions.ProtectedBlockError(\n \"Protected block types cannot be deleted.\"\n ) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types
async
","text":"Read all block types Raises: httpx.RequestError: if the block types were not found
Returns:
Type Description
List[BlockType]
List of BlockTypes.
Source code inprefect/client/orchestration.py
async def read_block_types(self) -> List[BlockType]:\n \"\"\"\n Read all block types\n Raises:\n httpx.RequestError: if the block types were not found\n\n Returns:\n List of BlockTypes.\n \"\"\"\n response = await self._client.post(\"/block_types/filter\", json={})\n return pydantic.parse_obj_as(List[BlockType], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas
async
","text":"Read all block schemas Raises: httpx.RequestError: if a valid block schema was not found
Returns:
Type Description
List[BlockSchema]
A list of BlockSchemas.
Source code inprefect/client/orchestration.py
async def read_block_schemas(self) -> List[BlockSchema]:\n \"\"\"\n Read all block schemas\n Raises:\n httpx.RequestError: if a valid block schema was not found\n\n Returns:\n A BlockSchema.\n \"\"\"\n response = await self._client.post(\"/block_schemas/filter\", json={})\n return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_most_recent_block_schema_for_block_type","title":"get_most_recent_block_schema_for_block_type
async
","text":"Fetches the most recent block schema for a specified block type ID.
Parameters:
Name Type Description Default
block_type_id
UUID
The ID of the block type.
required
Raises:
Type Description
RequestError
If the request fails for any reason.
Returns:
Type Description
Optional[BlockSchema]
The most recent block schema or None.
Source code inprefect/client/orchestration.py
async def get_most_recent_block_schema_for_block_type(\n self,\n block_type_id: UUID,\n) -> Optional[BlockSchema]:\n \"\"\"\n Fetches the most recent block schema for a specified block type ID.\n\n Args:\n block_type_id: The ID of the block type.\n\n Raises:\n httpx.RequestError: If the request fails for any reason.\n\n Returns:\n The most recent block schema or None.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/filter\",\n json={\n \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n \"limit\": 1,\n },\n )\n except httpx.HTTPStatusError:\n raise\n return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document
async
","text":"Read the block document with the specified ID.
Parameters:
Name Type Description Default
block_document_id
UUID
the block document id
required
include_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Raises:
Type Description
RequestError
if the block document was not found for any reason
Returns:
Type DescriptionA block document or None.
Source code inprefect/client/orchestration.py
async def read_block_document(\n self,\n block_document_id: UUID,\n include_secrets: bool = True,\n):\n \"\"\"\n Read the block document with the specified ID.\n\n Args:\n block_document_id: the block document id\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n assert (\n block_document_id is not None\n ), \"Unexpected ID on block document. Was it persisted?\"\n try:\n response = await self._client.get(\n f\"/block_documents/{block_document_id}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name
async
","text":"Read the block document with the specified name that corresponds to a specific block type name.
Parameters:
Name Type Description Default
name
str
The block document name.
required
block_type_slug
str
The block type slug.
required
include_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Raises:
Type Description
RequestError
if the block document was not found for any reason
Returns:
Type Description
BlockDocument
A block document or None.
Source code inprefect/client/orchestration.py
async def read_block_document_by_name(\n self,\n name: str,\n block_type_slug: str,\n include_secrets: bool = True,\n) -> BlockDocument:\n \"\"\"\n Read the block document with the specified name that corresponds to a\n specific block type name.\n\n Args:\n name: The block document name.\n block_type_slug: The block type slug.\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n try:\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
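A sketch reading a stored block document by name (the document name and block type slug are placeholders):
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        doc = await client.read_block_document_by_name(\n            name=\"my-credentials\",  # placeholder document name\n            block_type_slug=\"secret\",  # slug of the owning block type\n        )\n        print(doc.id)\n\nasyncio.run(main())\n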
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents
async
","text":"Read block documents
Parameters:
Name Type Description Default
block_schema_type
Optional[str]
an optional block schema type
None
offset
Optional[int]
an offset
None
limit
Optional[int]
the number of blocks to return
None
include_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Returns:
Type DescriptionA list of block documents
Source code inprefect/client/orchestration.py
async def read_block_documents(\n self,\n block_schema_type: Optional[str] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n):\n \"\"\"\n Read block documents\n\n Args:\n block_schema_type: an optional block schema type\n offset: an offset\n limit: the number of blocks to return\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.post(\n \"/block_documents/filter\",\n json=dict(\n block_schema_type=block_schema_type,\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents_by_type","title":"read_block_documents_by_type
async
","text":"Retrieve block documents by block type slug.
Parameters:
Name Type Description Default
block_type_slug
str
The block type slug.
required
offset
Optional[int]
an offset
None
limit
Optional[int]
the number of blocks to return
None
include_secrets
bool
whether to include secret values
True
Returns:
Type Description
List[BlockDocument]
A list of block documents
Source code inprefect/client/orchestration.py
async def read_block_documents_by_type(\n self,\n block_type_slug: str,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n) -> List[BlockDocument]:\n \"\"\"Retrieve block documents by block type slug.\n\n Args:\n block_type_slug: The block type slug.\n offset: an offset\n limit: the number of blocks to return\n include_secrets: whether to include secret values\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents\",\n params=dict(\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment
async
","text":"Create a deployment.
Parameters:
Name Type Description Defaultflow_id
UUID
the flow ID to create a deployment for
requiredname
str
the name of the deployment
requiredversion
str
an optional version string for the deployment
None
schedule
SCHEDULE_TYPES
an optional schedule to apply to the deployment
None
tags
List[str]
an optional list of tags to apply to the deployment
None
storage_document_id
UUID
a reference to the storage block document used for the deployed flow
None
infrastructure_document_id
UUID
a reference to the infrastructure block document to use for this deployment
None
Raises:
Type DescriptionRequestError
if the deployment was not created for any reason
Returns:
Type DescriptionUUID
the ID of the deployment in the backend
Source code inprefect/client/orchestration.py
async def create_deployment(\n self,\n flow_id: UUID,\n name: str,\n version: str = None,\n schedule: SCHEDULE_TYPES = None,\n schedules: List[DeploymentScheduleCreate] = None,\n parameters: Dict[str, Any] = None,\n description: str = None,\n work_queue_name: str = None,\n work_pool_name: str = None,\n tags: List[str] = None,\n storage_document_id: UUID = None,\n manifest_path: str = None,\n path: str = None,\n entrypoint: str = None,\n infrastructure_document_id: UUID = None,\n infra_overrides: Dict[str, Any] = None,\n parameter_openapi_schema: dict = None,\n is_schedule_active: Optional[bool] = None,\n paused: Optional[bool] = None,\n pull_steps: Optional[List[dict]] = None,\n enforce_parameter_schema: Optional[bool] = None,\n) -> UUID:\n \"\"\"\n Create a deployment.\n\n Args:\n flow_id: the flow ID to create a deployment for\n name: the name of the deployment\n version: an optional version string for the deployment\n schedule: an optional schedule to apply to the deployment\n tags: an optional list of tags to apply to the deployment\n storage_document_id: an reference to the storage block document\n used for the deployed flow\n infrastructure_document_id: an reference to the infrastructure block document\n to use for this deployment\n\n Raises:\n httpx.RequestError: if the deployment was not created for any reason\n\n Returns:\n the ID of the deployment in the backend\n \"\"\"\n\n deployment_create = DeploymentCreate(\n flow_id=flow_id,\n name=name,\n version=version,\n parameters=dict(parameters or {}),\n tags=list(tags or []),\n work_queue_name=work_queue_name,\n description=description,\n storage_document_id=storage_document_id,\n path=path,\n entrypoint=entrypoint,\n manifest_path=manifest_path, # for backwards compat\n infrastructure_document_id=infrastructure_document_id,\n infra_overrides=infra_overrides or {},\n parameter_openapi_schema=parameter_openapi_schema,\n is_schedule_active=is_schedule_active,\n paused=paused,\n schedule=schedule,\n schedules=schedules or [],\n pull_steps=pull_steps,\n enforce_parameter_schema=enforce_parameter_schema,\n )\n\n if work_pool_name is not None:\n deployment_create.work_pool_name = work_pool_name\n\n # Exclude newer fields that are not set to avoid compatibility issues\n exclude = {\n field\n for field in [\"work_pool_name\", \"work_queue_name\"]\n if field not in deployment_create.__fields_set__\n }\n\n if deployment_create.is_schedule_active is None:\n exclude.add(\"is_schedule_active\")\n\n if deployment_create.paused is None:\n exclude.add(\"paused\")\n\n if deployment_create.pull_steps is None:\n exclude.add(\"pull_steps\")\n\n if deployment_create.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n json = deployment_create.dict(json_compatible=True, exclude=exclude)\n response = await self._client.post(\n \"/deployments/\",\n json=json,\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n
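As a hedged end-to-end sketch: register a flow to obtain its backend ID, then create a deployment pointing at its entrypoint. The deployment name, work pool name, and entrypoint below are placeholders, not values from this document:

```python
import asyncio

from prefect import flow
from prefect.client.orchestration import get_client


@flow
def my_flow():
    pass


async def main():
    async with get_client() as client:
        # create_flow returns the flow ID that create_deployment requires.
        flow_id = await client.create_flow(my_flow)
        deployment_id = await client.create_deployment(
            flow_id=flow_id,
            name="my-deployment",           # placeholder deployment name
            work_pool_name="my-work-pool",  # placeholder, must already exist
            entrypoint="flows.py:my_flow",  # placeholder path:function
            tags=["example"],
        )
        print(deployment_id)


asyncio.run(main())
```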
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment
async
","text":"Query the Prefect API for a deployment by id.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID of interest
requiredReturns:
Type DescriptionDeploymentResponse
a Deployment model representation of the deployment
Source code inprefect/client/orchestration.py
async def read_deployment(\n self,\n deployment_id: UUID,\n) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by id.\n\n Args:\n deployment_id: the deployment ID of interest\n\n Returns:\n a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return DeploymentResponse.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Query the Prefect API for a deployment by name.
Parameters:
Name Type Description Defaultname
str
A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME> required
Raises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Type DescriptionDeploymentResponse
a Deployment model representation of the deployment
Source code inprefect/client/orchestration.py
async def read_deployment_by_name(\n self,\n name: str,\n) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by name.\n\n Args:\n name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n a Deployment model representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return DeploymentResponse.parse_obj(response.json())\n
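A small sketch of the name format, assuming hypothetical flow and deployment names:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound


async def main():
    async with get_client() as client:
        try:
            # The name argument is "<FLOW_NAME>/<DEPLOYMENT_NAME>".
            deployment = await client.read_deployment_by_name("my-flow/my-deployment")
            print(deployment.id)
        except ObjectNotFound:
            print("no such deployment")


asyncio.run(main())
```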
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments
async
","text":"Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
limit
int
a limit for the deployment query
None
offset
int
an offset for the deployment query
0
Returns:
Type DescriptionList[DeploymentResponse]
a list of Deployment model representations of the deployments
Source code inprefect/client/orchestration.py
async def read_deployments(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n limit: int = None,\n sort: DeploymentSort = None,\n offset: int = 0,\n) -> List[DeploymentResponse]:\n \"\"\"\n Query the Prefect API for deployments. Only deployments matching all\n the provided criteria will be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n limit: a limit for the deployment query\n offset: an offset for the deployment query\n\n Returns:\n a list of Deployment model representations\n of the deployments\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/deployments/filter\", json=body)\n return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment
async
","text":"Delete deployment by id.
Parameters:
Name Type Description Defaultdeployment_id
UUID
The deployment id of interest.
required Source code inprefect/client/orchestration.py
async def delete_deployment(\n self,\n deployment_id: UUID,\n):\n \"\"\"\n Delete deployment by id.\n\n Args:\n deployment_id: The deployment id of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment_schedules","title":"create_deployment_schedules
async
","text":"Create deployment schedules.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedules
List[Tuple[SCHEDULE_TYPES, bool]]
a list of tuples containing the schedule to create and whether or not it should be active.
requiredRaises:
Type DescriptionRequestError
if the schedules were not created for any reason
Returns:
Type DescriptionList[DeploymentSchedule]
the list of schedules created in the backend
Source code inprefect/client/orchestration.py
async def create_deployment_schedules(\n self,\n deployment_id: UUID,\n schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n) -> List[DeploymentSchedule]:\n \"\"\"\n Create deployment schedules.\n\n Args:\n deployment_id: the deployment ID\n schedules: a list of tuples containing the schedule to create\n and whether or not it should be active.\n\n Raises:\n httpx.RequestError: if the schedules were not created for any reason\n\n Returns:\n the list of schedules created in the backend\n \"\"\"\n deployment_schedule_create = [\n DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n for schedule in schedules\n ]\n\n json = [\n deployment_schedule_create.dict(json_compatible=True)\n for deployment_schedule_create in deployment_schedule_create\n ]\n response = await self._client.post(\n f\"/deployments/{deployment_id}/schedules\", json=json\n )\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
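A sketch of the (schedule, active) tuple format, assuming a placeholder deployment ID and cron expressions chosen only for illustration:

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.client.schemas.schedules import CronSchedule


async def main():
    async with get_client() as client:
        # Each tuple pairs a schedule with whether it starts active.
        created = await client.create_deployment_schedules(
            deployment_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
            schedules=[
                (CronSchedule(cron="0 9 * * *"), True),    # daily at 09:00, active
                (CronSchedule(cron="0 21 * * *"), False),  # daily at 21:00, inactive
            ],
        )
        print([s.id for s in created])


asyncio.run(main())
```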
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_schedules","title":"read_deployment_schedules
async
","text":"Query the Prefect API for a deployment's schedules.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredReturns:
Type DescriptionList[DeploymentSchedule]
a list of DeploymentSchedule model representations of the deployment schedules
Source code inprefect/client/orchestration.py
async def read_deployment_schedules(\n self,\n deployment_id: UUID,\n) -> List[DeploymentSchedule]:\n \"\"\"\n Query the Prefect API for a deployment's schedules.\n\n Args:\n deployment_id: the deployment ID\n\n Returns:\n a list of DeploymentSchedule model representations of the deployment schedules\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_deployment_schedule","title":"update_deployment_schedule
async
","text":"Update a deployment schedule by ID.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedule_id
UUID
the deployment schedule ID of interest
requiredactive
Optional[bool]
whether or not the schedule should be active
None
schedule
Optional[SCHEDULE_TYPES]
the cron, rrule, or interval schedule this deployment schedule should use
None
Source code in prefect/client/orchestration.py
async def update_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n active: Optional[bool] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n):\n \"\"\"\n Update a deployment schedule by ID.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the deployment schedule ID of interest\n active: whether or not the schedule should be active\n schedule: the cron, rrule, or interval schedule this deployment schedule should use\n \"\"\"\n kwargs = {}\n if active is not None:\n kwargs[\"active\"] = active\n elif schedule is not None:\n kwargs[\"schedule\"] = schedule\n\n deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n try:\n await self._client.patch(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment_schedule","title":"delete_deployment_schedule
async
","text":"Delete a deployment schedule.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedule_id
UUID
the ID of the deployment schedule to delete.
requiredRaises:
Type DescriptionRequestError
if the schedules were not deleted for any reason
Source code inprefect/client/orchestration.py
async def delete_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n) -> None:\n \"\"\"\n Delete a deployment schedule.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the ID of the deployment schedule to delete.\n\n Raises:\n httpx.RequestError: if the schedules were not deleted for any reason\n \"\"\"\n try:\n await self._client.delete(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run
async
","text":"Query the Prefect API for a flow run by id.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the flow run ID of interest
requiredReturns:
Type DescriptionFlowRun
a Flow Run model representation of the flow run
Source code inprefect/client/orchestration.py
async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n \"\"\"\n Query the Prefect API for a flow run by id.\n\n Args:\n flow_run_id: the flow run ID of interest\n\n Returns:\n a Flow Run model representation of the flow run\n \"\"\"\n try:\n response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return FlowRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run
async
","text":"Resumes a paused flow run.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the flow run ID of interest
requiredrun_input
Optional[Dict]
the input to resume the flow run with
None
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def resume_flow_run(\n self, flow_run_id: UUID, run_input: Optional[Dict] = None\n) -> OrchestrationResult:\n \"\"\"\n Resumes a paused flow run.\n\n Args:\n flow_run_id: the flow run ID of interest\n run_input: the input to resume the flow run with\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n )\n except httpx.HTTPStatusError:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n
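A hedged sketch of resuming a paused run; the flow run ID is a placeholder, and the run_input keys depend entirely on what the paused run is waiting for:

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        result = await client.resume_flow_run(
            flow_run_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
            run_input={"approved": True},  # hypothetical input schema
        )
        # SetStateStatus indicating whether the resume was accepted.
        print(result.status)


asyncio.run(main())
```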
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs
async
","text":"Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
sort
FlowRunSort
sort criteria for the flow runs
None
limit
int
limit for the flow run query
None
offset
int
offset for the flow run query
0
Returns:
Type DescriptionList[FlowRun]
a list of Flow Run model representations of the flow runs
Source code inprefect/client/orchestration.py
async def read_flow_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowRunSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[FlowRun]:\n \"\"\"\n Query the Prefect API for flow runs. Only flow runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flow runs\n limit: limit for the flow run query\n offset: offset for the flow run query\n\n Returns:\n a list of Flow Run model representations\n of the flow runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flow_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n
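A sketch combining a state filter with sorting to fetch recent failures, a common polling pattern with the 2.x client schema modules:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
)
from prefect.client.schemas.objects import StateType
from prefect.client.schemas.sorting import FlowRunSort


async def main():
    async with get_client() as client:
        # Ten most recently started failed flow runs.
        runs = await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state=FlowRunFilterState(
                    type=FlowRunFilterStateType(any_=[StateType.FAILED])
                )
            ),
            sort=FlowRunSort.START_TIME_DESC,
            limit=10,
        )
        for run in runs:
            print(run.name, run.state_type)


asyncio.run(main())
```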
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state
async
","text":"Set the state of a flow run.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the id of the flow run
requiredstate
State
the state to set
requiredforce
bool
if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state
False
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def set_flow_run_state(\n self,\n flow_run_id: UUID,\n state: \"prefect.states.State\",\n force: bool = False,\n) -> OrchestrationResult:\n \"\"\"\n Set the state of a flow run.\n\n Args:\n flow_run_id: the id of the flow run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.flow_run_id = flow_run_id\n state_create.state_details.transition_id = uuid4()\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n
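A minimal sketch forcing a run into a Cancelled state; the flow run ID is a placeholder:

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.states import Cancelled


async def main():
    async with get_client() as client:
        result = await client.set_flow_run_state(
            flow_run_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
            state=Cancelled(message="cancelled by operator"),
            force=True,  # skip orchestration rules for this transition
        )
        print(result.status)


asyncio.run(main())
```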
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states
async
","text":"Query for the states of a flow run
Parameters:
Name Type Description Defaultflow_run_id
UUID
the id of the flow run
requiredReturns:
Type DescriptionList[State]
a list of State model representations of the flow run states
Source code inprefect/client/orchestration.py
async def read_flow_run_states(\n self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a flow run\n\n Args:\n flow_run_id: the id of the flow run\n\n Returns:\n a list of State model representations\n of the flow run states\n \"\"\"\n response = await self._client.get(\n \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run
async
","text":"Create a task run
Parameters:
Name Type Description Defaulttask
Task
The Task to run
requiredflow_run_id
Optional[UUID]
The flow run id with which to associate the task run
requireddynamic_key
str
A key unique to this particular run of a Task within the flow
requiredname
str
An optional name for the task run
None
extra_tags
Iterable[str]
an optional list of extra tags to apply to the task run in addition to task.tags
None
state
State
The initial state for the run. If not provided, defaults to Pending
for now. Should always be a Scheduled
type.
None
task_inputs
Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]
the set of inputs passed to the task
None
Returns:
Type DescriptionTaskRun
The created task run.
Source code inprefect/client/orchestration.py
async def create_task_run(\n self,\n task: \"TaskObject\",\n flow_run_id: Optional[UUID],\n dynamic_key: str,\n name: str = None,\n extra_tags: Iterable[str] = None,\n state: prefect.states.State = None,\n task_inputs: Dict[\n str,\n List[\n Union[\n TaskRunResult,\n Parameter,\n Constant,\n ]\n ],\n ] = None,\n) -> TaskRun:\n \"\"\"\n Create a task run\n\n Args:\n task: The Task to run\n flow_run_id: The flow run id with which to associate the task run\n dynamic_key: A key unique to this particular run of a Task within the flow\n name: An optional name for the task run\n extra_tags: an optional list of extra tags to apply to the task run in\n addition to `task.tags`\n state: The initial state for the run. If not provided, defaults to\n `Pending` for now. Should always be a `Scheduled` type.\n task_inputs: the set of inputs passed to the task\n\n Returns:\n The created task run.\n \"\"\"\n tags = set(task.tags).union(extra_tags or [])\n\n if state is None:\n state = prefect.states.Pending()\n\n task_run_data = TaskRunCreate(\n name=name,\n flow_run_id=flow_run_id,\n task_key=task.task_key,\n dynamic_key=dynamic_key,\n tags=list(tags),\n task_version=task.version,\n empirical_policy=TaskRunPolicy(\n retries=task.retries,\n retry_delay=task.retry_delay_seconds,\n retry_jitter_factor=task.retry_jitter_factor,\n ),\n state=state.to_state_create(),\n task_inputs=task_inputs or {},\n )\n\n response = await self._client.post(\n \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n )\n return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run
async
","text":"Query the Prefect API for a task run by id.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the task run ID of interest
requiredReturns:
Type DescriptionTaskRun
a Task Run model representation of the task run
Source code inprefect/client/orchestration.py
async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n \"\"\"\n Query the Prefect API for a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n\n Returns:\n a Task Run model representation of the task run\n \"\"\"\n response = await self._client.get(f\"/task_runs/{task_run_id}\")\n return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs
async
","text":"Query the Prefect API for task runs. Only task runs matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
sort
TaskRunSort
sort criteria for the task runs
None
limit
int
a limit for the task run query
None
offset
int
an offset for the task run query
0
Returns:
Type DescriptionList[TaskRun]
a list of Task Run model representations of the task runs
Source code inprefect/client/orchestration.py
async def read_task_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n sort: TaskRunSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[TaskRun]:\n \"\"\"\n Query the Prefect API for task runs. Only task runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n sort: sort criteria for the task runs\n limit: a limit for the task run query\n offset: an offset for the task run query\n\n Returns:\n a list of Task Run model representations\n of the task runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/task_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[TaskRun], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the task run ID of interest
required Source code inprefect/client/orchestration.py
async def delete_task_run(self, task_run_id: UUID) -> None:\n \"\"\"\n Delete a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/task_runs/{task_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state
async
","text":"Set the state of a task run.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the id of the task run
requiredstate
State
the state to set
requiredforce
bool
if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state
False
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def set_task_run_state(\n self,\n task_run_id: UUID,\n state: prefect.states.State,\n force: bool = False,\n) -> OrchestrationResult:\n \"\"\"\n Set the state of a task run.\n\n Args:\n task_run_id: the id of the task run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.task_run_id = task_run_id\n response = await self._client.post(\n f\"/task_runs/{task_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states
async
","text":"Query for the states of a task run
Parameters:
Name Type Description Defaulttask_run_id
UUID
the id of the task run
requiredReturns:
Type DescriptionList[State]
a list of State model representations of the task run states
Source code inprefect/client/orchestration.py
async def read_task_run_states(\n self, task_run_id: UUID\n) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a task run\n\n Args:\n task_run_id: the id of the task run\n\n Returns:\n a list of State model representations of the task run states\n \"\"\"\n response = await self._client.get(\n \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs
async
","text":"Create logs for a flow or task run
Parameters:
Name Type Description Defaultlogs
Iterable[Union[LogCreate, dict]]
An iterable of LogCreate
objects or already json-compatible dicts
prefect/client/orchestration.py
async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n \"\"\"\n Create logs for a flow or task run\n\n Args:\n logs: An iterable of `LogCreate` objects or already json-compatible dicts\n \"\"\"\n serialized_logs = [\n log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n for log in logs\n ]\n await self._client.post(\"/logs/\", json=serialized_logs)\n
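A sketch using LogCreate models (dicts with the same keys should also work, per the docstring); the logger name and flow run ID are placeholders, and the LogCreate import path is assumed from the 2.x client schemas:

```python
import asyncio
from uuid import UUID

import pendulum

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import LogCreate


async def main():
    async with get_client() as client:
        # Attach a custom log line to an existing flow run.
        await client.create_logs(
            [
                LogCreate(
                    name="my.custom.logger",  # placeholder logger name
                    level=20,                 # stdlib logging.INFO
                    message="hello from the client",
                    timestamp=pendulum.now("UTC"),
                    flow_run_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
                )
            ]
        )


asyncio.run(main())
```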
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy
async
","text":"Create a notification policy for flow runs
Parameters:
Name Type Description Defaultblock_document_id
UUID
The block document UUID
requiredis_active
bool
Whether the notification policy is active
True
tags
List[str]
List of flow tags
None
state_names
List[str]
List of state names
None
message_template
Optional[str]
Notification message template
None
Source code in prefect/client/orchestration.py
async def create_flow_run_notification_policy(\n self,\n block_document_id: UUID,\n is_active: bool = True,\n tags: List[str] = None,\n state_names: List[str] = None,\n message_template: Optional[str] = None,\n) -> UUID:\n \"\"\"\n Create a notification policy for flow runs\n\n Args:\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n \"\"\"\n if tags is None:\n tags = []\n if state_names is None:\n state_names = []\n\n policy = FlowRunNotificationPolicyCreate(\n block_document_id=block_document_id,\n is_active=is_active,\n tags=tags,\n state_names=state_names,\n message_template=message_template,\n )\n response = await self._client.post(\n \"/flow_run_notification_policies/\",\n json=policy.dict(json_compatible=True),\n )\n\n policy_id = response.json().get(\"id\")\n if not policy_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(policy_id)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_notification_policy","title":"delete_flow_run_notification_policy
async
","text":"Delete a flow run notification policy by id.
Parameters:
Name Type Description Defaultid
UUID
UUID of the flow run notification policy to delete.
required Source code inprefect/client/orchestration.py
async def delete_flow_run_notification_policy(\n self,\n id: UUID,\n) -> None:\n \"\"\"\n Delete a flow run notification policy by id.\n\n Args:\n id: UUID of the flow run notification policy to delete.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run_notification_policy","title":"update_flow_run_notification_policy
async
","text":"Update a notification policy for flow runs
Parameters:
Name Type Description Defaultid
UUID
UUID of the notification policy
requiredblock_document_id
Optional[UUID]
The block document UUID
None
is_active
Optional[bool]
Whether the notification policy is active
None
tags
Optional[List[str]]
List of flow tags
None
state_names
Optional[List[str]]
List of state names
None
message_template
Optional[str]
Notification message template
None
Source code in prefect/client/orchestration.py
async def update_flow_run_notification_policy(\n self,\n id: UUID,\n block_document_id: Optional[UUID] = None,\n is_active: Optional[bool] = None,\n tags: Optional[List[str]] = None,\n state_names: Optional[List[str]] = None,\n message_template: Optional[str] = None,\n) -> None:\n \"\"\"\n Update a notification policy for flow runs\n\n Args:\n id: UUID of the notification policy\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n params = {}\n if block_document_id is not None:\n params[\"block_document_id\"] = block_document_id\n if is_active is not None:\n params[\"is_active\"] = is_active\n if tags is not None:\n params[\"tags\"] = tags\n if state_names is not None:\n params[\"state_names\"] = state_names\n if message_template is not None:\n params[\"message_template\"] = message_template\n\n policy = FlowRunNotificationPolicyUpdate(**params)\n\n try:\n await self._client.patch(\n f\"/flow_run_notification_policies/{id}\",\n json=policy.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies
async
","text":"Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_run_notification_policy_filter
FlowRunNotificationPolicyFilter
filter criteria for notification policies
requiredlimit
Optional[int]
a limit for the notification policies query
None
offset
int
an offset for the notification policies query
0
Returns:
Type DescriptionList[FlowRunNotificationPolicy]
a list of FlowRunNotificationPolicy model representations of the notification policies
Source code inprefect/client/orchestration.py
async def read_flow_run_notification_policies(\n self,\n flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n limit: Optional[int] = None,\n offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n \"\"\"\n Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n be returned.\n\n Args:\n flow_run_notification_policy_filter: filter criteria for notification policies\n limit: a limit for the notification policies query\n offset: an offset for the notification policies query\n\n Returns:\n a list of FlowRunNotificationPolicy model representations\n of the notification policies\n \"\"\"\n body = {\n \"flow_run_notification_policy_filter\": (\n flow_run_notification_policy_filter.dict(json_compatible=True)\n if flow_run_notification_policy_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\n \"/flow_run_notification_policies/filter\", json=body\n )\n return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs
async
","text":"Read flow and task run logs.
Source code inprefect/client/orchestration.py
async def read_logs(\n self,\n log_filter: LogFilter = None,\n limit: int = None,\n offset: int = None,\n sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> List[Log]:\n \"\"\"\n Read flow and task run logs.\n \"\"\"\n body = {\n \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/logs/filter\", json=body)\n return pydantic.parse_obj_as(List[Log], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc
async
","text":"Recursively decode possibly nested data documents.
\"server\" encoded documents will be retrieved from the server.
Parameters:
Name Type Description Defaultdatadoc
DataDocument
The data document to resolve
requiredReturns:
Type DescriptionAny
a decoded object, the innermost data
Source code inprefect/client/orchestration.py
async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n \"\"\"\n Recursively decode possibly nested data documents.\n\n \"server\" encoded documents will be retrieved from the server.\n\n Args:\n datadoc: The data document to resolve\n\n Returns:\n a decoded object, the innermost data\n \"\"\"\n if not isinstance(datadoc, DataDocument):\n raise TypeError(\n f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n )\n\n async def resolve_inner(data):\n if isinstance(data, bytes):\n try:\n data = DataDocument.parse_raw(data)\n except pydantic.ValidationError:\n return data\n\n if isinstance(data, DataDocument):\n return await resolve_inner(data.decode())\n\n return data\n\n return await resolve_inner(datadoc)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat
async
","text":"Sends a worker heartbeat for a given work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool to heartbeat against.
requiredworker_name
str
The name of the worker sending the heartbeat.
required Source code inprefect/client/orchestration.py
async def send_worker_heartbeat(\n self,\n work_pool_name: str,\n worker_name: str,\n heartbeat_interval_seconds: Optional[float] = None,\n):\n \"\"\"\n Sends a worker heartbeat for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool to heartbeat against.\n worker_name: The name of the worker sending the heartbeat.\n \"\"\"\n await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n json={\n \"name\": worker_name,\n \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n },\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool
async
","text":"Reads workers for a given work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool for which to get member workers.
requiredworker_filter
Optional[WorkerFilter]
Criteria by which to filter workers.
None
limit
Optional[int]
Limit for the worker query.
None
offset
Optional[int]
Offset for the worker query.
None
Source code in prefect/client/orchestration.py
async def read_workers_for_work_pool(\n self,\n work_pool_name: str,\n worker_filter: Optional[WorkerFilter] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n) -> List[Worker]:\n \"\"\"\n Reads workers for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool for which to get\n member workers.\n worker_filter: Criteria by which to filter workers.\n limit: Limit for the worker query.\n offset: Offset for the worker query.\n \"\"\"\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/filter\",\n json={\n \"worker_filter\": (\n worker_filter.dict(json_compatible=True, exclude_unset=True)\n if worker_filter\n else None\n ),\n \"offset\": offset,\n \"limit\": limit,\n },\n )\n\n return pydantic.parse_obj_as(List[Worker], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool
async
","text":"Reads information for a given work pool
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool for which to get information.
requiredReturns:
Type DescriptionWorkPool
Information about the requested work pool.
Source code inprefect/client/orchestration.py
async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n \"\"\"\n Reads information for a given work pool\n\n Args:\n work_pool_name: The name of the work pool for which to get\n information.\n\n Returns:\n Information about the requested work pool.\n \"\"\"\n try:\n response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n return pydantic.parse_obj_as(WorkPool, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools
async
","text":"Reads work pools.
Parameters:
Name Type Description Defaultlimit
Optional[int]
Limit for the work pool query.
None
offset
int
Offset for the work pool query.
0
work_pool_filter
Optional[WorkPoolFilter]
Criteria by which to filter work pools.
None
Returns:
Type DescriptionList[WorkPool]
A list of work pools.
Source code inprefect/client/orchestration.py
async def read_work_pools(\n self,\n limit: Optional[int] = None,\n offset: int = 0,\n work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n \"\"\"\n Reads work pools.\n\n Args:\n limit: Limit for the work pool query.\n offset: Offset for the work pool query.\n work_pool_filter: Criteria by which to filter work pools.\n\n Returns:\n A list of work pools.\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n }\n response = await self._client.post(\"/work_pools/filter\", json=body)\n return pydantic.parse_obj_as(List[WorkPool], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool
async
","text":"Creates a work pool with the provided configuration.
Parameters:
Name Type Description Defaultwork_pool
WorkPoolCreate
Desired configuration for the new work pool.
requiredReturns:
Type DescriptionWorkPool
Information about the newly created work pool.
Source code inprefect/client/orchestration.py
async def create_work_pool(\n self,\n work_pool: WorkPoolCreate,\n) -> WorkPool:\n \"\"\"\n Creates a work pool with the provided configuration.\n\n Args:\n work_pool: Desired configuration for the new work pool.\n\n Returns:\n Information about the newly created work pool.\n \"\"\"\n try:\n response = await self._client.post(\n \"/work_pools/\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n\n return pydantic.parse_obj_as(WorkPool, response.json())\n
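A sketch creating a process-type pool; the pool name is a placeholder:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolCreate
from prefect.exceptions import ObjectAlreadyExists


async def main():
    async with get_client() as client:
        try:
            pool = await client.create_work_pool(
                work_pool=WorkPoolCreate(
                    name="my-process-pool",  # placeholder pool name
                    type="process",          # worker type this pool targets
                )
            )
            print(pool.id)
        except ObjectAlreadyExists:
            print("work pool already exists")


asyncio.run(main())
```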
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_pool","title":"update_work_pool
async
","text":"Updates a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
Name of the work pool to update.
requiredwork_pool
WorkPoolUpdate
Fields to update in the work pool.
required Source code inprefect/client/orchestration.py
async def update_work_pool(\n self,\n work_pool_name: str,\n work_pool: WorkPoolUpdate,\n):\n \"\"\"\n Updates a work pool.\n\n Args:\n work_pool_name: Name of the work pool to update.\n work_pool: Fields to update in the work pool.\n \"\"\"\n try:\n await self._client.patch(\n f\"/work_pools/{work_pool_name}\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool
async
","text":"Deletes a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
Name of the work pool to delete.
required Source code inprefect/client/orchestration.py
async def delete_work_pool(\n self,\n work_pool_name: str,\n):\n \"\"\"\n Deletes a work pool.\n\n Args:\n work_pool_name: Name of the work pool to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/work_pools/{work_pool_name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues
async
","text":"Retrieves queues for a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
Optional[str]
Name of the work pool for which to get queues.
None
work_queue_filter
Optional[WorkQueueFilter]
Criteria by which to filter queues.
None
limit
Optional[int]
Limit for the queue query.
None
offset
Optional[int]
Offset for the queue query.
None
Returns:
Type DescriptionList[WorkQueue]
List of queues for the specified work pool.
Source code inprefect/client/orchestration.py
async def read_work_queues(\n self,\n work_pool_name: Optional[str] = None,\n work_queue_filter: Optional[WorkQueueFilter] = None,\n limit: Optional[int] = None,\n offset: Optional[int] = None,\n) -> List[WorkQueue]:\n \"\"\"\n Retrieves queues for a work pool.\n\n Args:\n work_pool_name: Name of the work pool for which to get queues.\n work_queue_filter: Criteria by which to filter queues.\n limit: Limit for the queue query.\n offset: Offset for the queue query.\n\n Returns:\n List of queues for the specified work pool.\n \"\"\"\n json = {\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n\n if work_pool_name:\n try:\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/queues/filter\",\n json=json,\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n else:\n response = await self._client.post(\"/work_queues/filter\", json=json)\n\n return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool
async
","text":"Retrieves scheduled flow runs for the provided set of work pool queues.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool that the work pool queues are associated with.
requiredwork_queue_names
Optional[List[str]]
The names of the work pool queues from which to get scheduled flow runs.
None
scheduled_before
Optional[datetime]
Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned.
None
Returns:
Type DescriptionList[WorkerFlowRunResponse]
A list of worker flow run responses containing information about the
List[WorkerFlowRunResponse]
retrieved flow runs.
Source code inprefect/client/orchestration.py
async def get_scheduled_flow_runs_for_work_pool(\n self,\n work_pool_name: str,\n work_queue_names: Optional[List[str]] = None,\n scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n \"\"\"\n Retrieves scheduled flow runs for the provided set of work pool queues.\n\n Args:\n work_pool_name: The name of the work pool that the work pool\n queues are associated with.\n work_queue_names: The names of the work pool queues from which\n to get scheduled flow runs.\n scheduled_before: Datetime used to filter returned flow runs. Flow runs\n scheduled for after the given datetime string will not be returned.\n\n Returns:\n A list of worker flow run responses containing information about the\n retrieved flow runs.\n \"\"\"\n body: Dict[str, Any] = {}\n if work_queue_names is not None:\n body[\"work_queue_names\"] = list(work_queue_names)\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n json=body,\n )\n return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
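A polling sketch in the spirit of what a worker does; the pool and queue names are placeholders:

```python
import asyncio
import datetime

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # Runs scheduled to start within the next hour on one queue.
        responses = await client.get_scheduled_flow_runs_for_work_pool(
            work_pool_name="my-process-pool",  # placeholder pool name
            work_queue_names=["default"],
            scheduled_before=datetime.datetime.now(datetime.timezone.utc)
            + datetime.timedelta(hours=1),
        )
        for r in responses:
            print(r.work_queue_id, r.flow_run.id)


asyncio.run(main())
```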
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact
async
","text":"Creates an artifact with the provided configuration.
Parameters:
Name Type Description Defaultartifact
ArtifactCreate
Desired configuration for the new artifact.
required Source code inprefect/client/orchestration.py
async def create_artifact(\n self,\n artifact: ArtifactCreate,\n) -> Artifact:\n \"\"\"\n Creates an artifact with the provided configuration.\n\n Args:\n artifact: Desired configuration for the new artifact.\n Returns:\n Information about the newly created artifact.\n \"\"\"\n\n response = await self._client.post(\n \"/artifacts/\",\n json=artifact.dict(json_compatible=True, exclude_unset=True),\n )\n\n return pydantic.parse_obj_as(Artifact, response.json())\n
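A sketch creating a markdown artifact; the key and contents are placeholders, and the ArtifactCreate import path is assumed from the 2.x client schemas:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import ArtifactCreate


async def main():
    async with get_client() as client:
        artifact = await client.create_artifact(
            artifact=ArtifactCreate(
                key="daily-report",  # placeholder artifact key
                type="markdown",     # rendered as markdown in the UI
                description="Example artifact",
                data="# Report\nAll systems nominal.",
            )
        )
        print(artifact.id)


asyncio.run(main())
```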
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts
async
","text":"Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned. Args: artifact_filter: filter criteria for artifacts flow_run_filter: filter criteria for flow runs task_run_filter: filter criteria for task runs sort: sort criteria for the artifacts limit: limit for the artifact query offset: offset for the artifact query Returns: a list of Artifact model representations of the artifacts
Source code inprefect/client/orchestration.py
async def read_artifacts(\n self,\n *,\n artifact_filter: ArtifactFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[Artifact]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/filter\", json=body)\n return pydantic.parse_obj_as(List[Artifact], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts
async
","text":"Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned. Args: artifact_filter: filter criteria for artifacts flow_run_filter: filter criteria for flow runs task_run_filter: filter criteria for task runs sort: sort criteria for the artifacts limit: limit for the artifact query offset: offset for the artifact query Returns: a list of Artifact model representations of the artifacts
Source code inprefect/client/orchestration.py
async def read_latest_artifacts(\n self,\n *,\n artifact_filter: ArtifactCollectionFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactCollectionSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[ArtifactCollection]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact
async
","text":"Deletes an artifact with the provided id.
Parameters:
Name Type Description Defaultartifact_id
UUID
The id of the artifact to delete.
required Source code inprefect/client/orchestration.py
async def delete_artifact(self, artifact_id: UUID) -> None:\n \"\"\"\n Deletes an artifact with the provided id.\n\n Args:\n artifact_id: The id of the artifact to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/artifacts/{artifact_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
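Because a 404 response is re-raised as ObjectNotFound, deletion can be made idempotent (a sketch):
from uuid import UUID\nfrom prefect.client.orchestration import get_client\nfrom prefect.exceptions import ObjectNotFound\n\nasync def delete_artifact_if_present(artifact_id: UUID) -> None:\n    async with get_client() as client:\n        try:\n            await client.delete_artifact(artifact_id)\n        except ObjectNotFound:\n            pass  # already gone; the API's 404 surfaces as ObjectNotFound\n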
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name
async
","text":"Reads a variable by name. Returns None if no variable is found.
Source code inprefect/client/orchestration.py
async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n \"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n try:\n response = await self._client.get(f\"/variables/name/{name}\")\n return pydantic.parse_obj_as(Variable, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n return None\n else:\n raise\n
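Since a missing variable yields None rather than an exception, callers can fall back to a default (the variable name and fallback value below are hypothetical):
from prefect.client.orchestration import get_client\n\nasync def get_batch_size() -> int:\n    async with get_client() as client:\n        variable = await client.read_variable_by_name(\"batch_size\")  # hypothetical name\n        # None means no such variable exists, so guard before reading .value.\n        return int(variable.value) if variable is not None else 100\n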
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name
async
","text":"Deletes a variable by name.
Source code inprefect/client/orchestration.py
async def delete_variable_by_name(self, name: str):\n \"\"\"Deletes a variable by name.\"\"\"\n try:\n await self._client.delete(f\"/variables/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables
async
","text":"Reads all variables.
Source code inprefect/client/orchestration.py
async def read_variables(self, limit: int = None) -> List[Variable]:\n \"\"\"Reads all variables.\"\"\"\n response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n return pydantic.parse_obj_as(List[Variable], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata
async
","text":"Reads worker metadata stored in Prefect collection registry.
Source code inprefect/client/orchestration.py
async def read_worker_metadata(self) -> Dict[str, Any]:\n \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n response.raise_for_status()\n return response.json()\n
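A quick inspection sketch (only the call itself is documented here; the shape of the returned mapping is an assumption):
from prefect.client.orchestration import get_client\n\nasync def list_worker_collections() -> list:\n    async with get_client() as client:\n        metadata = await client.read_worker_metadata()\n        # Assumed shape: top-level keys name the collections that provide workers.\n        return sorted(metadata)\n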
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation
async
","text":"Creates an automation in Prefect Cloud.
Source code inprefect/client/orchestration.py
async def create_automation(self, automation: Automation) -> UUID:\n \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.post(\n \"/automations/\",\n json=automation.dict(json_compatible=True),\n )\n\n return UUID(response.json()[\"id\"])\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_input","title":"create_flow_run_input
async
","text":"Creates a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
requiredvalue
str
The input value.
requiredsender
Optional[str]
The sender of the input.
None
Source code in prefect/client/orchestration.py
async def create_flow_run_input(\n self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n):\n \"\"\"\n Creates a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n value: The input value.\n sender: The sender of the input.\n \"\"\"\n\n # Initialize the input to ensure that the key is valid.\n FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input\",\n json={\"key\": key, \"value\": value, \"sender\": sender},\n )\n response.raise_for_status()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_input","title":"read_flow_run_input
async
","text":"Reads a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
required Source code inprefect/client/orchestration.py
async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n \"\"\"\n Reads a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n return response.content.decode()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Deletes a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
required Source code inprefect/client/orchestration.py
async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n \"\"\"\n Deletes a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n
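Together, the three flow run input methods support a simple round trip (the key and value are hypothetical; the key must satisfy FlowRunInput's validation):
from uuid import UUID\nfrom prefect.client.orchestration import get_client\n\nasync def input_round_trip(flow_run_id: UUID) -> str:\n    async with get_client() as client:\n        await client.create_flow_run_input(flow_run_id, key=\"approval\", value=\"granted\")\n        value = await client.read_flow_run_input(flow_run_id, key=\"approval\")  # \"granted\"\n        await client.delete_flow_run_input(flow_run_id, key=\"approval\")\n        return value\n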
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client
","text":"Retrieve a HTTP client for communicating with the Prefect REST API.
The client must be context managed; for example:
async with get_client() as client:\n await client.hello()\n
Source code in prefect/client/orchestration.py
def get_client(httpx_settings: Optional[dict] = None) -> \"PrefectClient\":\n \"\"\"\n Retrieve a HTTP client for communicating with the Prefect REST API.\n\n The client must be context managed; for example:\n\n ```python\n async with get_client() as client:\n await client.hello()\n ```\n \"\"\"\n ctx = prefect.context.get_settings_context()\n api = PREFECT_API_URL.value()\n\n if not api:\n # create an ephemeral API if none was provided\n from prefect.server.api.server import create_app\n\n api = create_app(ctx.settings, ephemeral=True)\n\n return PrefectClient(\n api,\n api_key=PREFECT_API_KEY.value(),\n httpx_settings=httpx_settings,\n )\n
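Any httpx_settings are passed through to the underlying httpx client, so transport options such as timeouts can be tuned (the value below is illustrative):
from prefect.client.orchestration import get_client\n\nasync def check_api() -> None:\n    # \"timeout\" is a standard httpx client option; 30 seconds is an illustrative value.\n    async with get_client(httpx_settings={\"timeout\": 30}) as client:\n        response = await client.hello()\n        print(response.json())\n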
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_1","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions","title":"prefect.client.schemas.actions
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.StateCreate","title":"StateCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a new state.
Source code inprefect/client/schemas/actions.py
class StateCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n type: StateType\n name: Optional[str] = Field(default=None)\n message: Optional[str] = Field(default=None, example=\"Run started\")\n state_details: StateDetails = Field(default_factory=StateDetails)\n data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n default=None,\n )\n
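A minimal construction sketch (it assumes StateType is importable from prefect.client.schemas.objects):
from prefect.client.schemas.actions import StateCreate\nfrom prefect.client.schemas.objects import StateType\n\n# name and message are optional; state_details falls back to an empty StateDetails.\nstate = StateCreate(type=StateType.COMPLETED, message=\"Run started\")\n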
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowCreate","title":"FlowCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n name: str = FieldFrom(objects.Flow)\n tags: List[str] = FieldFrom(objects.Flow)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowUpdate","title":"FlowUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n tags: List[str] = FieldFrom(objects.Flow)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate","title":"DeploymentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a deployment.
Source code inprefect/client/schemas/actions.py
@experimental_field(\n \"work_pool_name\",\n group=\"work_pools\",\n when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n @root_validator(pre=True)\n def remove_old_fields(cls, values):\n # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n # and work_queue_name. This validator removes old fields provided\n # by older clients to avoid 422 errors.\n values_copy = copy(values)\n worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n if worker_pool_queue_id:\n warnings.warn(\n (\n \"`worker_pool_queue_id` is no longer supported for creating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n warnings.warn(\n (\n \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n \"`work_pool_queue_name` are \"\n \"no longer supported for creating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n return values_copy\n\n name: str = FieldFrom(objects.Deployment)\n flow_id: UUID = FieldFrom(objects.Deployment)\n is_schedule_active: Optional[bool] = FieldFrom(objects.Deployment)\n paused: Optional[bool] = FieldFrom(objects.Deployment)\n schedules: List[DeploymentScheduleCreate] = Field(\n default_factory=list,\n description=\"A list of schedules for the deployment.\",\n )\n enforce_parameter_schema: Optional[bool] = Field(\n default=None,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n parameters: Dict[str, Any] = FieldFrom(objects.Deployment)\n tags: List[str] = FieldFrom(objects.Deployment)\n pull_steps: Optional[List[dict]] = FieldFrom(objects.Deployment)\n\n manifest_path: Optional[str] = FieldFrom(objects.Deployment)\n work_queue_name: Optional[str] = FieldFrom(objects.Deployment)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the deployment's work pool.\",\n example=\"my-work-pool\",\n )\n storage_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n schedule: Optional[SCHEDULE_TYPES] = FieldFrom(objects.Deployment)\n description: Optional[str] = FieldFrom(objects.Deployment)\n path: Optional[str] = FieldFrom(objects.Deployment)\n version: Optional[str] = FieldFrom(objects.Deployment)\n entrypoint: Optional[str] = FieldFrom(objects.Deployment)\n infra_overrides: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n\n def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n
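The default-stripping step can be seen in isolation with a toy schema: a required field that carries a default no longer blocks validation (all values below are illustrative):
import jsonschema\n\n# Toy variables schema: \"image\" is required but has a default.\nvariables_schema = {\n    \"type\": \"object\",\n    \"required\": [\"image\"],\n    \"properties\": {\"image\": {\"type\": \"string\", \"default\": \"prefecthq/prefect:2-latest\"}},\n}\nfor key, prop in variables_schema[\"properties\"].items():\n    if \"default\" in prop and key in variables_schema[\"required\"]:\n        variables_schema[\"required\"].remove(key)\n\njsonschema.validate({}, variables_schema)  # passes: the default satisfies \"image\"\n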
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a deployment.
Source code inprefect/client/schemas/actions.py
@experimental_field(\n \"work_pool_name\",\n group=\"work_pools\",\n when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n @root_validator(pre=True)\n def remove_old_fields(cls, values):\n # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n # and work_queue_name. This validator removes old fields provided\n # by older clients to avoid 422 errors.\n values_copy = copy(values)\n worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n if worker_pool_queue_id:\n warnings.warn(\n (\n \"`worker_pool_queue_id` is no longer supported for updating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n warnings.warn(\n (\n \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n \"`work_pool_queue_name` are \"\n \"no longer supported for updating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n return values_copy\n\n @validator(\"schedule\")\n def return_none_schedule(cls, v):\n if isinstance(v, NoSchedule):\n return None\n return v\n\n version: Optional[str] = FieldFrom(objects.Deployment)\n schedule: Optional[SCHEDULE_TYPES] = FieldFrom(objects.Deployment)\n description: Optional[str] = FieldFrom(objects.Deployment)\n is_schedule_active: bool = FieldFrom(objects.Deployment)\n parameters: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n tags: List[str] = FieldFrom(objects.Deployment)\n work_queue_name: Optional[str] = FieldFrom(objects.Deployment)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the deployment's work pool.\",\n example=\"my-work-pool\",\n )\n path: Optional[str] = FieldFrom(objects.Deployment)\n infra_overrides: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n entrypoint: Optional[str] = FieldFrom(objects.Deployment)\n manifest_path: Optional[str] = FieldFrom(objects.Deployment)\n storage_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n enforce_parameter_schema: Optional[bool] = Field(\n default=None,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunCreate","title":"TaskRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a task run
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n # TaskRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the task run to create\"\n )\n\n name: str = FieldFrom(objects.TaskRun)\n flow_run_id: Optional[UUID] = FieldFrom(objects.TaskRun)\n task_key: str = FieldFrom(objects.TaskRun)\n dynamic_key: str = FieldFrom(objects.TaskRun)\n cache_key: Optional[str] = FieldFrom(objects.TaskRun)\n cache_expiration: Optional[objects.DateTimeTZ] = FieldFrom(objects.TaskRun)\n task_version: Optional[str] = FieldFrom(objects.TaskRun)\n empirical_policy: objects.TaskRunPolicy = FieldFrom(objects.TaskRun)\n tags: List[str] = FieldFrom(objects.TaskRun)\n task_inputs: Dict[\n str,\n List[\n Union[\n objects.TaskRunResult,\n objects.Parameter,\n objects.Constant,\n ]\n ],\n ] = FieldFrom(objects.TaskRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a task run
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n name: str = FieldFrom(objects.TaskRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunCreate","title":"FlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: str = FieldFrom(objects.FlowRun)\n flow_id: UUID = FieldFrom(objects.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n\n class Config(ActionBaseModel.Config):\n json_dumps = orjson_dumps_extra_compatible\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run from a deployment.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a saved search.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n name: str = FieldFrom(objects.SavedSearch)\n filters: List[objects.SavedSearchFilter] = FieldFrom(objects.SavedSearch)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a concurrency limit.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n tag: str = FieldFrom(objects.ConcurrencyLimit)\n concurrency_limit: int = FieldFrom(objects.ConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block type.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n name: str = FieldFrom(objects.BlockType)\n slug: str = FieldFrom(objects.BlockType)\n logo_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n documentation_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n description: Optional[str] = FieldFrom(objects.BlockType)\n code_example: Optional[str] = FieldFrom(objects.BlockType)\n\n # validators\n _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n validate_block_type_slug\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block type.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n logo_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n documentation_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n description: Optional[str] = FieldFrom(objects.BlockType)\n code_example: Optional[str] = FieldFrom(objects.BlockType)\n\n @classmethod\n def updatable_fields(cls) -> set:\n return get_class_fields_only(cls)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block schema.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n fields: dict = FieldFrom(objects.BlockSchema)\n block_type_id: Optional[UUID] = FieldFrom(objects.BlockSchema)\n capabilities: List[str] = FieldFrom(objects.BlockSchema)\n version: str = FieldFrom(objects.BlockSchema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block document.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.BlockDocument)\n data: dict = FieldFrom(objects.BlockDocument)\n block_schema_id: UUID = FieldFrom(objects.BlockDocument)\n block_type_id: UUID = FieldFrom(objects.BlockDocument)\n is_anonymous: bool = FieldFrom(objects.BlockDocument)\n\n _validate_name_format = validator(\"name\", allow_reuse=True)(\n validate_block_document_name\n )\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # TODO: We should find an elegant way to reuse this logic from the origin model\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block document.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n block_schema_id: Optional[UUID] = Field(\n default=None, description=\"A block schema ID\"\n )\n data: dict = FieldFrom(objects.BlockDocument)\n merge_existing_data: bool = True\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate
","text":" Bases: ActionBaseModel
Data used to create block document reference.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n \"\"\"Data used to create block document reference.\"\"\"\n\n id: UUID = FieldFrom(objects.BlockDocumentReference)\n parent_block_document_id: UUID = FieldFrom(objects.BlockDocumentReference)\n reference_block_document_id: UUID = FieldFrom(objects.BlockDocumentReference)\n name: str = FieldFrom(objects.BlockDocumentReference)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.LogCreate","title":"LogCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a log.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n name: str = FieldFrom(objects.Log)\n level: int = FieldFrom(objects.Log)\n message: str = FieldFrom(objects.Log)\n timestamp: objects.DateTimeTZ = FieldFrom(objects.Log)\n flow_run_id: Optional[UUID] = FieldFrom(objects.Log)\n task_run_id: Optional[UUID] = FieldFrom(objects.Log)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work pool.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n name: str = FieldFrom(objects.WorkPool)\n description: Optional[str] = FieldFrom(objects.WorkPool)\n type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n base_job_template: Dict[str, Any] = FieldFrom(objects.WorkPool)\n is_paused: bool = FieldFrom(objects.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkPool)\n
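A creation sketch pairing this model with the client (it assumes PrefectClient.create_work_pool accepts the model; the pool name is a placeholder):
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import WorkPoolCreate\n\nasync def make_process_pool():\n    async with get_client() as client:\n        return await client.create_work_pool(\n            work_pool=WorkPoolCreate(name=\"my-process-pool\", type=\"process\")\n        )\n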
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work pool.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n description: Optional[str] = FieldFrom(objects.WorkPool)\n is_paused: Optional[bool] = FieldFrom(objects.WorkPool)\n base_job_template: Optional[Dict[str, Any]] = FieldFrom(objects.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkPool)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work queue.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n name: str = FieldFrom(objects.WorkQueue)\n description: Optional[str] = FieldFrom(objects.WorkQueue)\n is_paused: bool = FieldFrom(objects.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkQueue)\n priority: Optional[int] = Field(\n default=None,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n\n # DEPRECATED\n\n filter: Optional[objects.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work queue.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n name: str = FieldFrom(objects.WorkQueue)\n description: Optional[str] = FieldFrom(objects.WorkQueue)\n is_paused: bool = FieldFrom(objects.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkQueue)\n priority: Optional[int] = FieldFrom(objects.WorkQueue)\n last_polled: Optional[DateTimeTZ] = FieldFrom(objects.WorkQueue)\n\n # DEPRECATED\n\n filter: Optional[objects.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run notification policy.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n is_active: bool = FieldFrom(objects.FlowRunNotificationPolicy)\n state_names: List[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n tags: List[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n block_document_id: UUID = FieldFrom(objects.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run notification policy.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n is_active: Optional[bool] = FieldFrom(objects.FlowRunNotificationPolicy)\n state_names: Optional[List[str]] = FieldFrom(objects.FlowRunNotificationPolicy)\n tags: Optional[List[str]] = FieldFrom(objects.FlowRunNotificationPolicy)\n block_document_id: Optional[UUID] = FieldFrom(objects.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactCreate","title":"ArtifactCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create an artifact.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n key: Optional[str] = FieldFrom(objects.Artifact)\n type: Optional[str] = FieldFrom(objects.Artifact)\n description: Optional[str] = FieldFrom(objects.Artifact)\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(objects.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(objects.Artifact)\n flow_run_id: Optional[UUID] = FieldFrom(objects.Artifact)\n task_run_id: Optional[UUID] = FieldFrom(objects.Artifact)\n\n _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n validate_artifact_key\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update an artifact.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(objects.Artifact)\n description: Optional[str] = FieldFrom(objects.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(objects.Artifact)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableCreate","title":"VariableCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a Variable.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n name: str = FieldFrom(objects.Variable)\n value: str = FieldFrom(objects.Variable)\n tags: Optional[List[str]] = FieldFrom(objects.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
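A creation sketch (it assumes PrefectClient.create_variable accepts this model; the name must pass validate_variable_name, and both values are placeholders):
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import VariableCreate\n\nasync def set_default_region():\n    async with get_client() as client:\n        return await client.create_variable(\n            variable=VariableCreate(name=\"default_region\", value=\"us-east-1\")\n        )\n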
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableUpdate","title":"VariableUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a Variable.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=\"The name of the variable\",\n example=\"my_variable\",\n max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n )\n value: Optional[str] = Field(\n default=None,\n description=\"The value of the variable\",\n example=\"my-value\",\n max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n )\n tags: Optional[List[str]] = FieldFrom(objects.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitCreate","title":"GlobalConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a global concurrency limit.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass GlobalConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a global concurrency limit.\"\"\"\n\n name: str = FieldFrom(objects.GlobalConcurrencyLimit)\n limit: int = FieldFrom(objects.GlobalConcurrencyLimit)\n active: Optional[bool] = FieldFrom(objects.GlobalConcurrencyLimit)\n active_slots: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n slot_decay_per_second: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitUpdate","title":"GlobalConcurrencyLimitUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a global concurrency limit.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass GlobalConcurrencyLimitUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a global concurrency limit.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.GlobalConcurrencyLimit)\n limit: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n active: Optional[bool] = FieldFrom(objects.GlobalConcurrencyLimit)\n active_slots: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n slot_decay_per_second: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_2","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters","title":"prefect.client.schemas.filters
","text":"Schemas that define Prefect REST API filtering operations.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.Operator","title":"Operator
","text":" Bases: AutoEnum
Operators for combining filter criteria.
Source code inprefect/client/schemas/filters.py
class Operator(AutoEnum):\n \"\"\"Operators for combining filter criteria.\"\"\"\n\n and_ = AutoEnum.auto()\n or_ = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.OperatorMixin","title":"OperatorMixin
","text":"Base model for Prefect filters that combines criteria with a user-provided operator
Source code inprefect/client/schemas/filters.py
class OperatorMixin:\n \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n operator: Operator = Field(\n default=Operator.and_,\n description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterId","title":"FlowFilterId
","text":" Bases: PrefectBaseModel
Filter by Flow.id
.
prefect/client/schemas/filters.py
class FlowFilterId(PrefectBaseModel):\n \"\"\"Filter by `Flow.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterName","title":"FlowFilterName
","text":" Bases: PrefectBaseModel
Filter by Flow.name
.
prefect/client/schemas/filters.py
class FlowFilterName(PrefectBaseModel):\n \"\"\"Filter by `Flow.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow names to include\",\n example=[\"my-flow-1\", \"my-flow-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterTags","title":"FlowFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Flow.tags
.
prefect/client/schemas/filters.py
class FlowFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Flow.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flows will be returned only if their tags are a superset\"\n \" of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flows without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilter","title":"FlowFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for flows. Only flows matching all criteria will be returned.
Source code inprefect/client/schemas/filters.py
class FlowFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n id: Optional[FlowFilterId] = Field(\n default=None, description=\"Filter criteria for `Flow.id`\"\n )\n name: Optional[FlowFilterName] = Field(\n default=None, description=\"Filter criteria for `Flow.name`\"\n )\n tags: Optional[FlowFilterTags] = Field(\n default=None, description=\"Filter criteria for `Flow.tags`\"\n )\n
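Filters compose with the client's read methods (a sketch; it assumes PrefectClient.read_flows accepts a flow_filter argument, and the name fragment is illustrative):
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import FlowFilter, FlowFilterName\n\nasync def find_etl_flows():\n    async with get_client() as client:\n        # like_ performs the case-insensitive partial match described above.\n        return await client.read_flows(\n            flow_filter=FlowFilter(name=FlowFilterName(like_=\"etl\"))\n        )\n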
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId
","text":" Bases: PrefectBaseModel
Filter by FlowRun.id.
Source code inprefect/client/schemas/filters.py
class FlowRunFilterId(PrefectBaseModel):\n \"\"\"Filter by FlowRun.id.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n not_any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName
","text":" Bases: PrefectBaseModel
Filter by FlowRun.name
.
prefect/client/schemas/filters.py
class FlowRunFilterName(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow run names to include\",\n example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.tags
.
prefect/client/schemas/filters.py
class FlowRunFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flow runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flow runs without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.deployment_id
.
prefect/client/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run deployment ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without deployment ids\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.work_queue_name
.
prefect/client/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without work queue names\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType
","text":" Bases: PrefectBaseModel
Filter by FlowRun.state_type
.
prefect/client/schemas/filters.py
class FlowRunFilterStateType(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n any_: Optional[List[StateType]] = Field(\n default=None, description=\"A list of flow run state types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion
","text":" Bases: PrefectBaseModel
Filter by FlowRun.flow_version
.
prefect/client/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run flow_versions to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.start_time
.
prefect/client/schemas/filters.py
class FlowRunFilterStartTime(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return flow runs without a start time\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.expected_start_time
.
prefect/client/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or after this time\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.next_scheduled_start_time
.
prefect/client/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time or before this\"\n \" time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time at or after this\"\n \" time\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for subflows of the given flow runs
Source code inprefect/client/schemas/filters.py
class FlowRunFilterParentFlowRunId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for subflows of the given flow runs\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parents to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.parent_task_run_id
.
prefect/client/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parent_task_run_ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without parent_task_run_id\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey
","text":" Bases: PrefectBaseModel
Filter by FlowRun.idempotency_key.
Source code inprefect/client/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectBaseModel):\n \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilter","title":"FlowRunFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter flow runs. Only flow runs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class FlowRunFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n id: Optional[FlowRunFilterId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.id`\"\n )\n name: Optional[FlowRunFilterName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.name`\"\n )\n tags: Optional[FlowRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `FlowRun.tags`\"\n )\n deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n )\n work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n )\n state: Optional[FlowRunFilterState] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state`\"\n )\n flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n )\n start_time: Optional[FlowRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n )\n expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n )\n next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n default=None,\n description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n )\n parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n default=None, description=\"Filter criteria for subflows of the given flow runs\"\n )\n parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n )\n idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n )\n
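Criteria on different fields are combined with AND; for example, matching on both name and tags (a sketch; it assumes PrefectClient.read_flow_runs accepts a flow_run_filter argument, and the values are illustrative):
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import (\n    FlowRunFilter,\n    FlowRunFilterName,\n    FlowRunFilterTags,\n)\n\nasync def recent_prod_etl_runs():\n    async with get_client() as client:\n        return await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                name=FlowRunFilterName(like_=\"etl\"),\n                tags=FlowRunFilterTags(all_=[\"prod\"]),\n            ),\n            limit=5,\n        )\n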
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by TaskRun.flow_run_id
.
prefect/client/schemas/filters.py
class TaskRunFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n\n is_null_: bool = Field(\n default=False,\n description=\"If true, only include task runs without a flow run id\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId
","text":" Bases: PrefectBaseModel
Filter by TaskRun.id
.
prefect/client/schemas/filters.py
class TaskRunFilterId(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName
","text":" Bases: PrefectBaseModel
Filter by TaskRun.name
.
prefect/client/schemas/filters.py
class TaskRunFilterName(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of task run names to include\",\n example=[\"my-task-run-1\", \"my-task-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by TaskRun.tags
.
prefect/client/schemas/filters.py
class TaskRunFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Task runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include task runs without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType
","text":" Bases: PrefectBaseModel
Filter by TaskRun.state_type
.
prefect/client/schemas/filters.py
class TaskRunFilterStateType(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n any_: Optional[List[StateType]] = Field(\n default=None, description=\"A list of task run state types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns
","text":" Bases: PrefectBaseModel
Filter by TaskRun.subflow_run
.
prefect/client/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If true, only include task runs that are subflow run parents; if false,\"\n \" exclude parent task runs\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime
","text":" Bases: PrefectBaseModel
Filter by TaskRun.start_time
.
prefect/client/schemas/filters.py
class TaskRunFilterStartTime(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return task runs without a start time\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilter","title":"TaskRunFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter task runs. Only task runs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class TaskRunFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n id: Optional[TaskRunFilterId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.id`\"\n )\n name: Optional[TaskRunFilterName] = Field(\n default=None, description=\"Filter criteria for `TaskRun.name`\"\n )\n tags: Optional[TaskRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `TaskRun.tags`\"\n )\n state: Optional[TaskRunFilterState] = Field(\n default=None, description=\"Filter criteria for `TaskRun.state`\"\n )\n start_time: Optional[TaskRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n )\n subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n )\n flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n )\n
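As a similar hedged sketch (the helper name and the flow run id are placeholders), the task runs attached to a known flow run can be fetched by combining TaskRunFilter with TaskRunFilterFlowRunId:
from uuid import UUID\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import TaskRunFilter, TaskRunFilterFlowRunId\n\nasync def task_runs_for(flow_run_id: UUID):  # hypothetical helper\n    # Restrict results to task runs belonging to the given flow run\n    task_run_filter = TaskRunFilter(\n        flow_run_id=TaskRunFilterFlowRunId(any_=[flow_run_id])\n    )\n    async with get_client() as client:\n        return await client.read_task_runs(task_run_filter=task_run_filter)\n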
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId
","text":" Bases: PrefectBaseModel
Filter by Deployment.id
.
prefect/client/schemas/filters.py
class DeploymentFilterId(PrefectBaseModel):\n \"\"\"Filter by `Deployment.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of deployment ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName
","text":" Bases: PrefectBaseModel
Filter by Deployment.name
.
prefect/client/schemas/filters.py
class DeploymentFilterName(PrefectBaseModel):\n \"\"\"Filter by `Deployment.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of deployment names to include\",\n example=[\"my-deployment-1\", \"my-deployment-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName
","text":" Bases: PrefectBaseModel
Filter by Deployment.work_queue_name
.
prefect/client/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectBaseModel):\n \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive
","text":" Bases: PrefectBaseModel
Filter by Deployment.is_schedule_active
.
prefect/client/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectBaseModel):\n \"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Deployment.tags
.
prefect/client/schemas/filters.py
class DeploymentFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Deployments will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include deployments without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilter","title":"DeploymentFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for deployments. Only deployments matching all criteria will be returned.
Source code inprefect/client/schemas/filters.py
class DeploymentFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n id: Optional[DeploymentFilterId] = Field(\n default=None, description=\"Filter criteria for `Deployment.id`\"\n )\n name: Optional[DeploymentFilterName] = Field(\n default=None, description=\"Filter criteria for `Deployment.name`\"\n )\n is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n )\n tags: Optional[DeploymentFilterTags] = Field(\n default=None, description=\"Filter criteria for `Deployment.tags`\"\n )\n work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n )\n
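For example (a sketch, with 'etl' as a placeholder pattern), deployments with active schedules whose names contain a substring can be selected like so:
from prefect import get_client\nfrom prefect.client.schemas.filters import (\n    DeploymentFilter,\n    DeploymentFilterIsScheduleActive,\n    DeploymentFilterName,\n)\n\nasync def active_etl_deployments():  # hypothetical helper\n    # like_ is a case-insensitive partial match\n    deployment_filter = DeploymentFilter(\n        name=DeploymentFilterName(like_='etl'),\n        is_schedule_active=DeploymentFilterIsScheduleActive(eq_=True),\n    )\n    async with get_client() as client:\n        return await client.read_deployments(deployment_filter=deployment_filter)\n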
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterName","title":"LogFilterName
","text":" Bases: PrefectBaseModel
Filter by Log.name
.
prefect/client/schemas/filters.py
class LogFilterName(PrefectBaseModel):\n \"\"\"Filter by `Log.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of log names to include\",\n example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterLevel","title":"LogFilterLevel
","text":" Bases: PrefectBaseModel
Filter by Log.level
.
prefect/client/schemas/filters.py
class LogFilterLevel(PrefectBaseModel):\n \"\"\"Filter by `Log.level`.\"\"\"\n\n ge_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level greater than or equal to this level\",\n example=20,\n )\n\n le_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level less than or equal to this level\",\n example=50,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp
","text":" Bases: PrefectBaseModel
Filter by Log.timestamp
.
prefect/client/schemas/filters.py
class LogFilterTimestamp(PrefectBaseModel):\n \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or after this time\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by Log.flow_run_id
.
prefect/client/schemas/filters.py
class LogFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by Log.task_run_id
.
prefect/client/schemas/filters.py
class LogFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilter","title":"LogFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter logs. Only logs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class LogFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n level: Optional[LogFilterLevel] = Field(\n default=None, description=\"Filter criteria for `Log.level`\"\n )\n timestamp: Optional[LogFilterTimestamp] = Field(\n default=None, description=\"Filter criteria for `Log.timestamp`\"\n )\n flow_run_id: Optional[LogFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n )\n task_run_id: Optional[LogFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Log.task_run_id`\"\n )\n
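A hedged example tying the pieces together (the helper name is a placeholder): level values follow Python's standard numeric scale, so logging.WARNING (30) and above can be requested for a single flow run:
import logging\nfrom uuid import UUID\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n    LogFilter,\n    LogFilterFlowRunId,\n    LogFilterLevel,\n)\n\nasync def warning_logs_for(flow_run_id: UUID):  # hypothetical helper\n    log_filter = LogFilter(\n        level=LogFilterLevel(ge_=logging.WARNING),  # 30 and above\n        flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]),\n    )\n    async with get_client() as client:\n        return await client.read_logs(log_filter=log_filter)\n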
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FilterSet","title":"FilterSet
","text":" Bases: PrefectBaseModel
A collection of filters for common objects
Source code inprefect/client/schemas/filters.py
class FilterSet(PrefectBaseModel):\n \"\"\"A collection of filters for common objects\"\"\"\n\n flows: FlowFilter = Field(\n default_factory=FlowFilter, description=\"Filters that apply to flows\"\n )\n flow_runs: FlowRunFilter = Field(\n default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n )\n task_runs: TaskRunFilter = Field(\n default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n )\n deployments: DeploymentFilter = Field(\n default_factory=DeploymentFilter,\n description=\"Filters that apply to deployments\",\n )\n
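Because each field has a default_factory, a FilterSet only needs the criteria of interest; unset object types stay unconstrained. A minimal sketch (the tag is a placeholder):
from prefect.client.schemas.filters import (\n    FilterSet,\n    FlowRunFilter,\n    FlowRunFilterTags,\n)\n\n# Unset fields fall back to empty filters via default_factory\nfilter_set = FilterSet(\n    flow_runs=FlowRunFilter(tags=FlowRunFilterTags(all_=['prod']))\n)\n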
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName
","text":" Bases: PrefectBaseModel
Filter by BlockType.name
prefect/client/schemas/filters.py
class BlockTypeFilterName(PrefectBaseModel):\n \"\"\"Filter by `BlockType.name`\"\"\"\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug
","text":" Bases: PrefectBaseModel
Filter by BlockType.slug
prefect/client/schemas/filters.py
class BlockTypeFilterSlug(PrefectBaseModel):\n \"\"\"Filter by `BlockType.slug`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of slugs to match\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter
","text":" Bases: PrefectBaseModel
Filter BlockTypes
Source code inprefect/client/schemas/filters.py
class BlockTypeFilter(PrefectBaseModel):\n \"\"\"Filter BlockTypes\"\"\"\n\n name: Optional[BlockTypeFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockType.name`\"\n )\n\n slug: Optional[BlockTypeFilterSlug] = Field(\n default=None, description=\"Filter criteria for `BlockType.slug`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.block_type_id
.
prefect/client/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectBaseModel):\n \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.id
Source code inprefect/client/schemas/filters.py
class BlockSchemaFilterId(PrefectBaseModel):\n \"\"\"Filter by BlockSchema.id\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.capabilities
prefect/client/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectBaseModel):\n \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"write-storage\", \"read-storage\"],\n description=(\n \"A list of block capabilities. Block entities will be returned only if an\"\n \" associated block schema has a superset of the defined capabilities.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.version
prefect/client/schemas/filters.py
class BlockSchemaFilterVersion(PrefectBaseModel):\n \"\"\"Filter by `BlockSchema.version`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n example=[\"2.0.0\", \"2.1.0\"],\n description=\"A list of block schema versions.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter BlockSchemas
Source code inprefect/client/schemas/filters.py
class BlockSchemaFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter BlockSchemas\"\"\"\n\n block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n )\n block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n )\n id: Optional[BlockSchemaFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.id`\"\n )\n version: Optional[BlockSchemaFilterVersion] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.version`\"\n )\n
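For example (a sketch reusing the capability names from the example values above), schemas that can both read and write storage could be selected with:
from prefect.client.schemas.filters import (\n    BlockSchemaFilter,\n    BlockSchemaFilterCapabilities,\n)\n\n# Matches schemas whose capabilities are a superset of the listed ones\nstorage_schemas = BlockSchemaFilter(\n    block_capabilities=BlockSchemaFilterCapabilities(\n        all_=['read-storage', 'write-storage']\n    )\n)\n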
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.is_anonymous
.
prefect/client/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter block documents for only those that are or are not anonymous.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.block_type_id
.
prefect/client/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.id
.
prefect/client/schemas/filters.py
class BlockDocumentFilterId(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.name
.
prefect/client/schemas/filters.py
class BlockDocumentFilterName(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of block names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match block names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-block%\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class BlockDocumentFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n id: Optional[BlockDocumentFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.id`\"\n )\n is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n # default is to exclude anonymous blocks\n BlockDocumentFilterIsAnonymous(eq_=False),\n description=(\n \"Filter criteria for `BlockDocument.is_anonymous`. \"\n \"Defaults to excluding anonymous blocks.\"\n ),\n )\n block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n )\n name: Optional[BlockDocumentFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.name`\"\n )\n
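Note the non-None default on is_anonymous above: anonymous block documents are excluded unless the caller overrides it. A hedged sketch (assuming, as with the other operators, that an unset eq_ leaves the criterion unconstrained):
from prefect.client.schemas.filters import (\n    BlockDocumentFilter,\n    BlockDocumentFilterIsAnonymous,\n)\n\n# eq_=None removes the anonymity constraint, overriding the default\n# that excludes anonymous block documents (assumption noted above)\ninclude_anonymous = BlockDocumentFilter(\n    is_anonymous=BlockDocumentFilterIsAnonymous(eq_=None)\n)\n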
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive
","text":" Bases: PrefectBaseModel
Filter by FlowRunNotificationPolicy.is_active
.
prefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectBaseModel):\n \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter notification policies for only those that are or are not active.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter
","text":" Bases: PrefectBaseModel
Filter FlowRunNotificationPolicies.
Source code inprefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectBaseModel):\n \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId
","text":" Bases: PrefectBaseModel
Filter by WorkQueue.id
.
prefect/client/schemas/filters.py
class WorkQueueFilterId(PrefectBaseModel):\n \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None,\n description=\"A list of work queue ids to include\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName
","text":" Bases: PrefectBaseModel
Filter by WorkQueue.name
.
prefect/client/schemas/filters.py
class WorkQueueFilterName(PrefectBaseModel):\n \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"wq-1\", \"wq-2\"],\n )\n\n startswith_: Optional[List[str]] = Field(\n default=None,\n description=(\n \"A list of case-insensitive starts-with matches. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n ),\n example=[\"marvin\", \"Marvin-robot\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter work queues. Only work queues matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class WorkQueueFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter work queues. Only work queues matching all criteria will be\n returned\"\"\"\n\n id: Optional[WorkQueueFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.id`\"\n )\n\n name: Optional[WorkQueueFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.name`\"\n )\n
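A minimal construction sketch (the 'prod-' prefix is a placeholder; only the filter shape is being illustrated):
from prefect.client.schemas.filters import WorkQueueFilter, WorkQueueFilterName\n\n# startswith_ takes a list of case-insensitive prefixes\nprod_queues = WorkQueueFilter(\n    name=WorkQueueFilterName(startswith_=['prod-'])\n)\n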
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId
","text":" Bases: PrefectBaseModel
Filter by WorkPool.id
.
prefect/client/schemas/filters.py
class WorkPoolFilterId(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName
","text":" Bases: PrefectBaseModel
Filter by WorkPool.name
.
prefect/client/schemas/filters.py
class WorkPoolFilterName(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool names to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType
","text":" Bases: PrefectBaseModel
Filter by WorkPool.type
.
prefect/client/schemas/filters.py
class WorkPoolFilterType(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId
","text":" Bases: PrefectBaseModel
Filter by Worker.worker_config_id
.
prefect/client/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectBaseModel):\n \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime
","text":" Bases: PrefectBaseModel
Filter by Worker.last_heartbeat_time
.
prefect/client/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or before this time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or after this time\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId
","text":" Bases: PrefectBaseModel
Filter by Artifact.id
.
prefect/client/schemas/filters.py
class ArtifactFilterId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey
","text":" Bases: PrefectBaseModel
Filter by Artifact.key
.
prefect/client/schemas/filters.py
class ArtifactFilterKey(PrefectBaseModel):\n \"\"\"Filter by `Artifact.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by Artifact.flow_run_id
.
prefect/client/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by Artifact.task_run_id
.
prefect/client/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType
","text":" Bases: PrefectBaseModel
Filter by Artifact.type
.
prefect/client/schemas/filters.py
class ArtifactFilterType(PrefectBaseModel):\n \"\"\"Filter by `Artifact.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilter","title":"ArtifactFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter artifacts. Only artifacts matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class ArtifactFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n id: Optional[ArtifactFilterId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n
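As a sketch (the key pattern is a placeholder), the like_ operator accepts SQL wildcards:
from prefect.client.schemas.filters import ArtifactFilter, ArtifactFilterKey\n\n# `%` matches any run of characters, so keys such as 'report-2024-01' qualify\nreport_artifacts = ArtifactFilter(\n    key=ArtifactFilterKey(like_='report-%')\n)\n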
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.latest_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.key
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key. Should return all rows in \"\n \"the ArtifactCollection table if specified.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.flow_run_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.task_run_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.type
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterType(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter artifact collections. Only artifact collections matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class ArtifactCollectionFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactCollectionFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactCollectionFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterId","title":"VariableFilterId
","text":" Bases: PrefectBaseModel
Filter by Variable.id
.
prefect/client/schemas/filters.py
class VariableFilterId(PrefectBaseModel):\n \"\"\"Filter by `Variable.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of variable ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterName","title":"VariableFilterName
","text":" Bases: PrefectBaseModel
Filter by Variable.name
.
prefect/client/schemas/filters.py
class VariableFilterName(PrefectBaseModel):\n \"\"\"Filter by `Variable.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variable names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my_variable_%\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterValue","title":"VariableFilterValue
","text":" Bases: PrefectBaseModel
Filter by Variable.value
.
prefect/client/schemas/filters.py
class VariableFilterValue(PrefectBaseModel):\n \"\"\"Filter by `Variable.value`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variable values to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable values against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-value-%\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterTags","title":"VariableFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Variable.tags
.
prefect/client/schemas/filters.py
class VariableFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Variable.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Variables will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include Variables without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilter","title":"VariableFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter variables. Only variables matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class VariableFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n id: Optional[VariableFilterId] = Field(\n default=None, description=\"Filter criteria for `Variable.id`\"\n )\n name: Optional[VariableFilterName] = Field(\n default=None, description=\"Filter criteria for `Variable.name`\"\n )\n value: Optional[VariableFilterValue] = Field(\n default=None, description=\"Filter criteria for `Variable.value`\"\n )\n tags: Optional[VariableFilterTags] = Field(\n default=None, description=\"Filter criteria for `Variable.tags`\"\n )\n
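A short sketch combining two of the sub-filters (the name pattern and tag are placeholders; note that `_` is itself a single-character SQL wildcard):
from prefect.client.schemas.filters import (\n    VariableFilter,\n    VariableFilterName,\n    VariableFilterTags,\n)\n\n# Variables named like 'db_...' that also carry every listed tag\nvariable_filter = VariableFilter(\n    name=VariableFilterName(like_='db_%'),\n    tags=VariableFilterTags(all_=['config']),\n)\n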
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_3","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects","title":"prefect.client.schemas.objects
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.StateType","title":"StateType
","text":" Bases: AutoEnum
Enumeration of state types.
Source code inprefect/client/schemas/objects.py
class StateType(AutoEnum):\n \"\"\"Enumeration of state types.\"\"\"\n\n SCHEDULED = AutoEnum.auto()\n PENDING = AutoEnum.auto()\n RUNNING = AutoEnum.auto()\n COMPLETED = AutoEnum.auto()\n FAILED = AutoEnum.auto()\n CANCELLED = AutoEnum.auto()\n CRASHED = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n CANCELLING = AutoEnum.auto()\n
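A small sketch grounded in the definitions on this page: AutoEnum members take their own names as values, and the terminal set below mirrors State.is_final() further down:
from prefect.client.schemas.objects import StateType\n\n# AutoEnum.auto() uses the member name as the value\nassert StateType.COMPLETED.value == 'COMPLETED'\n\n# Terminal types, matching State.is_final()\nTERMINAL_TYPES = {\n    StateType.COMPLETED,\n    StateType.FAILED,\n    StateType.CANCELLED,\n    StateType.CRASHED,\n}\n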
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPoolStatus","title":"WorkPoolStatus
","text":" Bases: AutoEnum
Enumeration of work pool statuses.
Source code inprefect/client/schemas/objects.py
class WorkPoolStatus(AutoEnum):\n \"\"\"Enumeration of work pool statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkerStatus","title":"WorkerStatus
","text":" Bases: AutoEnum
Enumeration of worker statuses.
Source code inprefect/client/schemas/objects.py
class WorkerStatus(AutoEnum):\n \"\"\"Enumeration of worker statuses.\"\"\"\n\n ONLINE = AutoEnum.auto()\n OFFLINE = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.DeploymentStatus","title":"DeploymentStatus
","text":" Bases: AutoEnum
Enumeration of deployment statuses.
Source code inprefect/client/schemas/objects.py
class DeploymentStatus(AutoEnum):\n \"\"\"Enumeration of deployment statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueStatus","title":"WorkQueueStatus
","text":" Bases: AutoEnum
Enumeration of work queue statuses.
Source code inprefect/client/schemas/objects.py
class WorkQueueStatus(AutoEnum):\n \"\"\"Enumeration of work queue statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State","title":"State
","text":" Bases: ObjectBaseModel
, Generic[R]
The state of a run.
Source code inprefect/client/schemas/objects.py
class State(ObjectBaseModel, Generic[R]):\n \"\"\"\n The state of a run.\n \"\"\"\n\n type: StateType\n name: Optional[str] = Field(default=None)\n timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n message: Optional[str] = Field(default=None, example=\"Run started\")\n state_details: StateDetails = Field(default_factory=StateDetails)\n data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n default=None,\n )\n\n @overload\n def result(self: \"State[R]\", raise_on_failure: bool = True) -> R:\n ...\n\n @overload\n def result(self: \"State[R]\", raise_on_failure: bool = False) -> Union[R, Exception]:\n ...\n\n def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n \"\"\"\n Retrieve the result attached to this state.\n\n Args:\n raise_on_failure: a boolean specifying whether to raise an exception\n if the state is of type `FAILED` and the underlying data is an exception\n fetch: a boolean specifying whether to resolve references to persisted\n results into data. For synchronous users, this defaults to `True`.\n For asynchronous users, this defaults to `False` for backwards\n compatibility.\n\n Raises:\n TypeError: If the state is failed but the result is not an exception.\n\n Returns:\n The result of the run\n\n Examples:\n >>> from prefect import flow, task\n >>> @task\n >>> def my_task(x):\n >>> return x\n\n Get the result from a task future in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task(\"hello\")\n >>> state = future.wait()\n >>> result = state.result()\n >>> print(result)\n >>> my_flow()\n hello\n\n Get the result from a flow state\n\n >>> @flow\n >>> def my_flow():\n >>> return \"hello\"\n >>> my_flow(return_state=True).result()\n hello\n\n Get the result from a failed state\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n >>> state.result() # Raises `ValueError`\n\n Get the result from a failed state without erroring\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True)\n >>> result = state.result(raise_on_failure=False)\n >>> print(result)\n ValueError(\"oh no!\")\n\n\n Get the result from a flow state in an async context\n\n >>> @flow\n >>> async def my_flow():\n >>> return \"hello\"\n >>> state = await my_flow(return_state=True)\n >>> await state.result()\n hello\n \"\"\"\n from prefect.states import get_state_result\n\n return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n\n def to_state_create(self):\n \"\"\"\n Convert this state to a `StateCreate` type which can be used to set the state of\n a run in the API.\n\n This method will drop this state's `data` if it is not a result type. Only\n results should be sent to the API. 
Other data is only available locally.\n \"\"\"\n from prefect.client.schemas.actions import StateCreate\n from prefect.results import BaseResult\n\n return StateCreate(\n type=self.type,\n name=self.name,\n message=self.message,\n data=self.data if isinstance(self.data, BaseResult) else None,\n state_details=self.state_details,\n )\n\n @validator(\"name\", always=True)\n def default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n\n @root_validator\n def default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n\n def is_scheduled(self) -> bool:\n return self.type == StateType.SCHEDULED\n\n def is_pending(self) -> bool:\n return self.type == StateType.PENDING\n\n def is_running(self) -> bool:\n return self.type == StateType.RUNNING\n\n def is_completed(self) -> bool:\n return self.type == StateType.COMPLETED\n\n def is_failed(self) -> bool:\n return self.type == StateType.FAILED\n\n def is_crashed(self) -> bool:\n return self.type == StateType.CRASHED\n\n def is_cancelled(self) -> bool:\n return self.type == StateType.CANCELLED\n\n def is_cancelling(self) -> bool:\n return self.type == StateType.CANCELLING\n\n def is_final(self) -> bool:\n return self.type in {\n StateType.CANCELLED,\n StateType.FAILED,\n StateType.COMPLETED,\n StateType.CRASHED,\n }\n\n def is_paused(self) -> bool:\n return self.type == StateType.PAUSED\n\n def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n \"\"\"\n Copying API models should return an object that could be inserted into the\n database again. 
The 'timestamp' is reset using the default factory.\n \"\"\"\n update = update or {}\n update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n def __repr__(self) -> str:\n \"\"\"\n Generates a complete state representation appropriate for introspection\n and debugging, including the result:\n\n `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n \"\"\"\n from prefect.deprecated.data_documents import DataDocument\n\n if isinstance(self.data, DataDocument):\n result = self.data.decode()\n else:\n result = self.data\n\n display = dict(\n message=repr(self.message),\n type=str(self.type.value),\n result=repr(result),\n )\n\n return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n def __str__(self) -> str:\n \"\"\"\n Generates a simple state representation appropriate for logging:\n\n `MyCompletedState(\"my message\", type=COMPLETED)`\n \"\"\"\n\n display = []\n\n if self.message:\n display.append(repr(self.message))\n\n if self.type.value.lower() != self.name.lower():\n display.append(f\"type={self.type.value}\")\n\n return f\"{self.name}({', '.join(display)})\"\n\n def __hash__(self) -> int:\n return hash(\n (\n getattr(self.state_details, \"flow_run_id\", None),\n getattr(self.state_details, \"task_run_id\", None),\n self.timestamp,\n self.type,\n )\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.result","title":"result
","text":"Retrieve the result attached to this state.
Parameters:
Name | Type | Description | Default
raise_on_failure | bool | a boolean specifying whether to raise an exception if the state is of type FAILED and the underlying data is an exception | True
fetch | Optional[bool] | a boolean specifying whether to resolve references to persisted results into data. For synchronous users, this defaults to True. For asynchronous users, this defaults to False for backwards compatibility. | None
Raises:
Type | Description
TypeError | If the state is failed but the result is not an exception.
Returns:
The result of the run
Examples:
>>> from prefect import flow, task\n>>> @task\n>>> def my_task(x):\n>>> return x\n
Get the result from a task future in a flow
>>> @flow\n>>> def my_flow():\n>>> future = my_task(\"hello\")\n>>> state = future.wait()\n>>> result = state.result()\n>>> print(result)\n>>> my_flow()\nhello\n
Get the result from a flow state
>>> @flow\n>>> def my_flow():\n>>> return \"hello\"\n>>> my_flow(return_state=True).result()\nhello\n
Get the result from a failed state
>>> @flow\n>>> def my_flow():\n>>> raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n>>> state.result() # Raises `ValueError`\n
Get the result from a failed state without erroring
>>> @flow\n>>> def my_flow():\n>>> raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)\n>>> result = state.result(raise_on_failure=False)\n>>> print(result)\nValueError(\"oh no!\")\n
Get the result from a flow state in an async context
>>> @flow\n>>> async def my_flow():\n>>> return \"hello\"\n>>> state = await my_flow(return_state=True)\n>>> await state.result()\nhello\n
Source code in prefect/client/schemas/objects.py
def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n \"\"\"\n Retrieve the result attached to this state.\n\n Args:\n raise_on_failure: a boolean specifying whether to raise an exception\n if the state is of type `FAILED` and the underlying data is an exception\n fetch: a boolean specifying whether to resolve references to persisted\n results into data. For synchronous users, this defaults to `True`.\n For asynchronous users, this defaults to `False` for backwards\n compatibility.\n\n Raises:\n TypeError: If the state is failed but the result is not an exception.\n\n Returns:\n The result of the run\n\n Examples:\n >>> from prefect import flow, task\n >>> @task\n >>> def my_task(x):\n >>> return x\n\n Get the result from a task future in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task(\"hello\")\n >>> state = future.wait()\n >>> result = state.result()\n >>> print(result)\n >>> my_flow()\n hello\n\n Get the result from a flow state\n\n >>> @flow\n >>> def my_flow():\n >>> return \"hello\"\n >>> my_flow(return_state=True).result()\n hello\n\n Get the result from a failed state\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n >>> state.result() # Raises `ValueError`\n\n Get the result from a failed state without erroring\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True)\n >>> result = state.result(raise_on_failure=False)\n >>> print(result)\n ValueError(\"oh no!\")\n\n\n Get the result from a flow state in an async context\n\n >>> @flow\n >>> async def my_flow():\n >>> return \"hello\"\n >>> state = await my_flow(return_state=True)\n >>> await state.result()\n hello\n \"\"\"\n from prefect.states import get_state_result\n\n return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.to_state_create","title":"to_state_create
","text":"Convert this state to a StateCreate
type which can be used to set the state of a run in the API.
This method will drop this state's data
if it is not a result type. Only results should be sent to the API. Other data is only available locally.
prefect/client/schemas/objects.py
def to_state_create(self):\n \"\"\"\n Convert this state to a `StateCreate` type which can be used to set the state of\n a run in the API.\n\n This method will drop this state's `data` if it is not a result type. Only\n results should be sent to the API. Other data is only available locally.\n \"\"\"\n from prefect.client.schemas.actions import StateCreate\n from prefect.results import BaseResult\n\n return StateCreate(\n type=self.type,\n name=self.name,\n message=self.message,\n data=self.data if isinstance(self.data, BaseResult) else None,\n state_details=self.state_details,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_name_from_type","title":"default_name_from_type
","text":"If a name is not provided, use the type
Source code inprefect/client/schemas/objects.py
@validator(\"name\", always=True)\ndef default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_scheduled_start_time","title":"default_scheduled_start_time
","text":"This should throw an error instead of setting a default but is out of scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled into work refactoring state initialization
Source code inprefect/client/schemas/objects.py
@root_validator\ndef default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy","title":"FlowRunPolicy
","text":" Bases: PrefectBaseModel
Defines how a flow run should be orchestrated.
Source code inprefect/client/schemas/objects.py
class FlowRunPolicy(PrefectBaseModel):\n \"\"\"Defines how a flow run should be orchestrated.\"\"\"\n\n max_retries: int = Field(\n default=0,\n description=(\n \"The maximum number of retries. Field is not used. Please use `retries`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retry_delay_seconds: float = Field(\n default=0,\n description=(\n \"The delay between retries. Field is not used. Please use `retry_delay`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n retry_delay: Optional[int] = Field(\n default=None, description=\"The delay time between retries, in seconds.\"\n )\n pause_keys: Optional[set] = Field(\n default_factory=set, description=\"Tracks pauses this run has observed.\"\n )\n resuming: Optional[bool] = Field(\n default=False, description=\"Indicates if this run is resuming from a pause.\"\n )\n\n @root_validator\n def populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code in prefect/client/schemas/objects.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
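A short sketch of how the deprecated fields are carried over by this validator:

from prefect.client.schemas.objects import FlowRunPolicy

# Only the deprecated fields are supplied; the validator copies them forward
policy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)
print(policy.retries)      # 3
print(policy.retry_delay)  # 10 seconds, carried over from retry_delay_seconds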
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRun","title":"FlowRun
","text":" Bases: ObjectBaseModel
Source code in prefect/client/schemas/objects.py
class FlowRun(ObjectBaseModel):\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the flow run. Defaults to a random slug if not specified.\"\n ),\n example=\"my-flow-run\",\n )\n flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the flow run's current state.\"\n )\n deployment_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"The id of the deployment associated with this flow run, if available.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None, description=\"The work queue that handled this flow run.\"\n )\n flow_version: Optional[str] = Field(\n default=None,\n description=\"The version of the flow executed in this flow run.\",\n example=\"1.0\",\n )\n parameters: dict = Field(\n default_factory=dict, description=\"Parameters for the flow run.\"\n )\n idempotency_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n \" run is not created multiple times.\"\n ),\n )\n context: dict = Field(\n default_factory=dict,\n description=\"Additional context for the flow run.\",\n example={\"my_var\": \"my_val\"},\n )\n empirical_policy: FlowRunPolicy = Field(\n default_factory=FlowRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags on the flow run\",\n example=[\"tag-1\", \"tag-2\"],\n )\n parent_task_run_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n \" flow used to track subflow state.\"\n ),\n )\n run_count: int = Field(\n default=0, description=\"The number of times the flow run was executed.\"\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The flow run's expected start time.\",\n )\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the flow run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. If the flow run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of the total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n auto_scheduled: bool = Field(\n default=False,\n description=\"Whether or not the flow run was automatically scheduled.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use this flow run.\",\n )\n infrastructure_pid: Optional[str] = Field(\n default=None,\n description=\"The id of the flow run as returned by an infrastructure block.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this flow run.\",\n )\n work_queue_id: Optional[UUID] = Field(\n default=None, description=\"The id of the run's work pool queue.\"\n )\n\n work_pool_id: Optional[UUID] = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[State] = Field(\n default=None,\n description=\"The state of the flow run.\",\n example=State(type=StateType.COMPLETED),\n )\n job_variables: Optional[dict] = Field(\n default=None, description=\"Job variables for the flow run.\"\n )\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRun):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n\n @validator(\"name\", pre=True)\n def set_default_name(cls, name):\n return name or generate_slug(2)\n\n # These are server-side optimizations and should not be present on client models\n # TODO: Deprecate these fields\n\n state_type: Optional[StateType] = Field(\n default=None, description=\"The type of the current flow run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current flow run state.\"\n )\n
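The custom __eq__ above can be exercised directly; a sketch (assuming ObjectBaseModel supplies defaults for its bookkeeping fields, as it does in the source):

import datetime
from uuid import uuid4
from prefect.client.schemas.objects import FlowRun

run = FlowRun(name="demo-run", flow_id=uuid4())
# Rolling estimate fields are excluded from equality checks
other = run.copy(update={"estimated_run_time": datetime.timedelta(seconds=5)})
print(run == other)  # True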
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy","title":"TaskRunPolicy
","text":" Bases: PrefectBaseModel
Defines how a task run should retry.
Source code in prefect/client/schemas/objects.py
class TaskRunPolicy(PrefectBaseModel):\n \"\"\"Defines of how a task run should retry.\"\"\"\n\n max_retries: int = Field(\n default=0,\n description=(\n \"The maximum number of retries. Field is not used. Please use `retries`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retry_delay_seconds: float = Field(\n default=0,\n description=(\n \"The delay between retries. Field is not used. Please use `retry_delay`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n retry_delay: Union[None, int, List[int]] = Field(\n default=None,\n description=\"A delay time or list of delay times between retries, in seconds.\",\n )\n retry_jitter_factor: Optional[float] = Field(\n default=None, description=\"Determines the amount a retry should jitter\"\n )\n\n @root_validator\n def populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n\n return values\n\n @validator(\"retry_delay\")\n def validate_configured_retry_delays(cls, v):\n if isinstance(v, list) and (len(v) > 50):\n raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n return v\n\n @validator(\"retry_jitter_factor\")\n def validate_jitter_factor(cls, v):\n if v is not None and v < 0:\n raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n return v\n
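A short sketch of the retry-delay validation above (pydantic wraps the ValueError in a ValidationError):

from prefect.client.schemas.objects import TaskRunPolicy

policy = TaskRunPolicy(retries=3, retry_delay=[1, 10, 60])  # a backoff-style list is allowed
try:
    TaskRunPolicy(retry_delay=list(range(51)))  # 51 delays: over the limit
except Exception as exc:
    print(exc)  # "Can not configure more than 50 retry delays per task."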
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code in prefect/client/schemas/objects.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunInput","title":"TaskRunInput
","text":" Bases: PrefectBaseModel
Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs.
Source code in prefect/client/schemas/objects.py
class TaskRunInput(PrefectBaseModel):\n \"\"\"\n Base class for classes that represent inputs to task runs, which\n could include, constants, parameters, or other task runs.\n \"\"\"\n\n # freeze TaskRunInputs to allow them to be placed in sets\n class Config:\n frozen = True\n\n input_type: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunResult","title":"TaskRunResult
","text":" Bases: TaskRunInput
Represents a task run result input to another task run.
Source code in prefect/client/schemas/objects.py
class TaskRunResult(TaskRunInput):\n \"\"\"Represents a task run result input to another task run.\"\"\"\n\n input_type: Literal[\"task_run\"] = \"task_run\"\n id: UUID\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Parameter","title":"Parameter
","text":" Bases: TaskRunInput
Represents a parameter input to a task run.
Source code in prefect/client/schemas/objects.py
class Parameter(TaskRunInput):\n \"\"\"Represents a parameter input to a task run.\"\"\"\n\n input_type: Literal[\"parameter\"] = \"parameter\"\n name: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Constant","title":"Constant
","text":" Bases: TaskRunInput
Represents a constant input value to a task run.
Source code in prefect/client/schemas/objects.py
class Constant(TaskRunInput):\n \"\"\"Represents constant input value to a task run.\"\"\"\n\n input_type: Literal[\"constant\"] = \"constant\"\n type: str\n
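Because TaskRunInput is frozen, these models are hashable; a sketch of the set behavior the comment in TaskRunInput calls out:

from uuid import uuid4
from prefect.client.schemas.objects import Parameter, TaskRunResult

# Config.frozen = True makes instances hashable, so they can be collected in sets
inputs = {TaskRunResult(id=uuid4()), Parameter(name="x"), Parameter(name="x")}
print(len(inputs))  # 2 -- the duplicate Parameter collapses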
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace","title":"Workspace
","text":" Bases: PrefectBaseModel
A Prefect Cloud workspace.
Expected payload for each workspace returned by the me/workspaces route.
Source code in prefect/client/schemas/objects.py
class Workspace(PrefectBaseModel):\n \"\"\"\n A Prefect Cloud workspace.\n\n Expected payload for each workspace returned by the `me/workspaces` route.\n \"\"\"\n\n account_id: UUID = Field(..., description=\"The account id of the workspace.\")\n account_name: str = Field(..., description=\"The account name.\")\n account_handle: str = Field(..., description=\"The account's unique handle.\")\n workspace_id: UUID = Field(..., description=\"The workspace id.\")\n workspace_name: str = Field(..., description=\"The workspace name.\")\n workspace_description: str = Field(..., description=\"Description of the workspace.\")\n workspace_handle: str = Field(..., description=\"The workspace's unique handle.\")\n\n class Config:\n extra = \"ignore\"\n\n @property\n def handle(self) -> str:\n \"\"\"\n The full handle of the workspace as `account_handle` / `workspace_handle`\n \"\"\"\n return self.account_handle + \"/\" + self.workspace_handle\n\n def api_url(self) -> str:\n \"\"\"\n Generate the API URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_API_URL.value()}\"\n f\"/accounts/{self.account_id}\"\n f\"/workspaces/{self.workspace_id}\"\n )\n\n def ui_url(self) -> str:\n \"\"\"\n Generate the UI URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_UI_URL.value()}\"\n f\"/account/{self.account_id}\"\n f\"/workspace/{self.workspace_id}\"\n )\n\n def __hash__(self):\n return hash(self.handle)\n
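A sketch of the handle, api_url, and ui_url helpers (all field values below are placeholders):

from uuid import uuid4
from prefect.client.schemas.objects import Workspace

ws = Workspace(
    account_id=uuid4(),
    account_name="Acme",
    account_handle="acme",
    workspace_id=uuid4(),
    workspace_name="production",
    workspace_description="",
    workspace_handle="prod",
)
print(ws.handle)     # "acme/prod"
print(ws.api_url())  # .../accounts/<account_id>/workspaces/<workspace_id>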
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.handle","title":"handle: str
property
","text":"The full handle of the workspace as account_handle
/ workspace_handle
api_url
","text":"Generate the API URL for accessing this workspace
Source code in prefect/client/schemas/objects.py
def api_url(self) -> str:\n \"\"\"\n Generate the API URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_API_URL.value()}\"\n f\"/accounts/{self.account_id}\"\n f\"/workspaces/{self.workspace_id}\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.ui_url","title":"ui_url
","text":"Generate the UI URL for accessing this workspace
Source code in prefect/client/schemas/objects.py
def ui_url(self) -> str:\n \"\"\"\n Generate the UI URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_UI_URL.value()}\"\n f\"/account/{self.account_id}\"\n f\"/workspace/{self.workspace_id}\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockType","title":"BlockType
","text":" Bases: ObjectBaseModel
An ORM representation of a block type
Source code in prefect/client/schemas/objects.py
class BlockType(ObjectBaseModel):\n \"\"\"An ORM representation of a block type\"\"\"\n\n name: str = Field(default=..., description=\"A block type's name\")\n slug: str = Field(default=..., description=\"A block type's slug\")\n logo_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's logo\"\n )\n documentation_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's documentation\"\n )\n description: Optional[str] = Field(\n default=None,\n description=\"A short blurb about the corresponding block's intended use\",\n )\n code_example: Optional[str] = Field(\n default=None,\n description=\"A code snippet demonstrating use of the corresponding block\",\n )\n is_protected: bool = Field(\n default=False, description=\"Protected block types cannot be modified via API.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocument","title":"BlockDocument
","text":" Bases: ObjectBaseModel
An ORM representation of a block document.
Source code in prefect/client/schemas/objects.py
class BlockDocument(ObjectBaseModel):\n \"\"\"An ORM representation of a block document.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=(\n \"The block document's name. Not required for anonymous block documents.\"\n ),\n )\n data: dict = Field(default_factory=dict, description=\"The block document's data\")\n block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The associated block schema\"\n )\n block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n block_type_name: Optional[str] = Field(None, description=\"A block type name\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n block_document_references: Dict[str, Dict[str, Any]] = Field(\n default_factory=dict, description=\"Record of the block document's references\"\n )\n is_anonymous: bool = Field(\n default=False,\n description=(\n \"Whether the block is anonymous (anonymous blocks are usually created by\"\n \" Prefect automatically)\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n # the BlockDocumentCreate subclass allows name=None\n # and will inherit this validator\n if v is not None:\n raise_on_name_with_banned_characters(v)\n return v\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # anonymous blocks may have no name prior to actually being\n # stored in the database\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Flow","title":"Flow
","text":" Bases: ObjectBaseModel
An ORM representation of flow data.
Source code in prefect/client/schemas/objects.py
class Flow(ObjectBaseModel):\n \"\"\"An ORM representation of flow data.\"\"\"\n\n name: str = Field(\n default=..., description=\"The name of the flow\", example=\"my-flow\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of flow tags\",\n example=[\"tag-1\", \"tag-2\"],\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunnerSettings","title":"FlowRunnerSettings
","text":" Bases: PrefectBaseModel
An API schema for passing details about the flow runner.
This schema is agnostic to the types and configuration provided by clients
Source code in prefect/client/schemas/objects.py
class FlowRunnerSettings(PrefectBaseModel):\n \"\"\"\n An API schema for passing details about the flow runner.\n\n This schema is agnostic to the types and configuration provided by clients\n \"\"\"\n\n type: Optional[str] = Field(\n default=None,\n description=(\n \"The type of the flow runner which can be used by the client for\"\n \" dispatching.\"\n ),\n )\n config: Optional[dict] = Field(\n default=None, description=\"The configuration for the given flow runner type.\"\n )\n\n # The following is required for composite compatibility in the ORM\n\n def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n # Pydantic does not support positional arguments so they must be converted to\n # keyword arguments\n super().__init__(type=type, config=config, **kwargs)\n\n def __composite_values__(self):\n return self.type, self.config\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Deployment","title":"Deployment
","text":" Bases: ObjectBaseModel
An ORM representation of deployment data.
Source code in prefect/client/schemas/objects.py
class Deployment(ObjectBaseModel):\n \"\"\"An ORM representation of deployment data.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the deployment.\")\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"A description for the deployment.\"\n )\n flow_id: UUID = Field(\n default=..., description=\"The flow id associated with the deployment.\"\n )\n schedule: Optional[SCHEDULE_TYPES] = Field(\n default=None, description=\"A schedule for the deployment.\"\n )\n is_schedule_active: bool = Field(\n default=True, description=\"Whether or not the deployment schedule is active.\"\n )\n paused: bool = Field(\n default=False, description=\"Whether or not the deployment is paused.\"\n )\n schedules: List[DeploymentSchedule] = Field(\n default_factory=list, description=\"A list of schedules for the deployment.\"\n )\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n parameters: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n pull_steps: Optional[List[dict]] = Field(\n default=None,\n description=\"Pull steps for cloning and running this deployment.\",\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the deployment\",\n example=[\"tag-1\", \"tag-2\"],\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The work queue for the deployment. If no work queue is set, work will not\"\n \" be scheduled.\"\n ),\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The last time the deployment was polled for status updates.\",\n )\n parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n storage_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining storage used for this flow.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use for flow runs.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this deployment.\",\n )\n updated_by: Optional[UpdatedBy] = Field(\n default=None,\n description=\"Optional information about the updater of this deployment.\",\n )\n work_queue_id: UUID = Field(\n default=None,\n description=(\n \"The id of the work pool queue to which this deployment is assigned.\"\n ),\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.ConcurrencyLimit","title":"ConcurrencyLimit
","text":" Bases: ObjectBaseModel
An ORM representation of a concurrency limit.
Source code in prefect/client/schemas/objects.py
class ConcurrencyLimit(ObjectBaseModel):\n \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n tag: str = Field(\n default=..., description=\"A tag the concurrency limit is applied to.\"\n )\n concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: List[UUID] = Field(\n default_factory=list,\n description=\"A list of active run ids using a concurrency slot\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchema","title":"BlockSchema
","text":" Bases: ObjectBaseModel
An ORM representation of a block schema.
Source code in prefect/client/schemas/objects.py
class BlockSchema(ObjectBaseModel):\n \"\"\"An ORM representation of a block schema.\"\"\"\n\n checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n fields: dict = Field(\n default_factory=dict, description=\"The block schema's field schema\"\n )\n block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n capabilities: List[str] = Field(\n default_factory=list,\n description=\"A list of Block capabilities\",\n )\n version: str = Field(\n default=DEFAULT_BLOCK_SCHEMA_VERSION,\n description=\"Human readable identifier for the block schema\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchemaReference","title":"BlockSchemaReference
","text":" Bases: ObjectBaseModel
An ORM representation of a block schema reference.
Source code in prefect/client/schemas/objects.py
class BlockSchemaReference(ObjectBaseModel):\n \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n parent_block_schema_id: UUID = Field(\n default=..., description=\"ID of block schema the reference is nested within\"\n )\n parent_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The block schema the reference is nested within\"\n )\n reference_block_schema_id: UUID = Field(\n default=..., description=\"ID of the nested block schema\"\n )\n reference_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The nested block schema\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocumentReference","title":"BlockDocumentReference
","text":" Bases: ObjectBaseModel
An ORM representation of a block document reference.
Source code in prefect/client/schemas/objects.py
class BlockDocumentReference(ObjectBaseModel):\n \"\"\"An ORM representation of a block document reference.\"\"\"\n\n parent_block_document_id: UUID = Field(\n default=..., description=\"ID of block document the reference is nested within\"\n )\n parent_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The block document the reference is nested within\"\n )\n reference_block_document_id: UUID = Field(\n default=..., description=\"ID of the nested block document\"\n )\n reference_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The nested block document\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n\n @root_validator\n def validate_parent_and_ref_are_different(cls, values):\n parent_id = values.get(\"parent_block_document_id\")\n ref_id = values.get(\"reference_block_document_id\")\n if parent_id and ref_id and parent_id == ref_id:\n raise ValueError(\n \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n \" the same\"\n )\n return values\n
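A sketch of the self-reference guard in validate_parent_and_ref_are_different:

from uuid import uuid4
from prefect.client.schemas.objects import BlockDocumentReference

same_id = uuid4()
try:
    BlockDocumentReference(
        parent_block_document_id=same_id,
        reference_block_document_id=same_id,
        name="self-reference",
    )
except Exception as exc:
    print(exc)  # the parent and reference ids cannot be the same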
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearchFilter","title":"SavedSearchFilter
","text":" Bases: PrefectBaseModel
A filter for a saved search model. Intended for use by the Prefect UI.
Source code in prefect/client/schemas/objects.py
class SavedSearchFilter(PrefectBaseModel):\n \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n object: str = Field(default=..., description=\"The object over which to filter.\")\n property: str = Field(\n default=..., description=\"The property of the object on which to filter.\"\n )\n type: str = Field(default=..., description=\"The type of the property.\")\n operation: str = Field(\n default=...,\n description=\"The operator to apply to the object. For example, `equals`.\",\n )\n value: Any = Field(\n default=..., description=\"A JSON-compatible value for the filter.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearch","title":"SavedSearch
","text":" Bases: ObjectBaseModel
An ORM representation of saved search data. Represents a set of filter criteria.
Source code in prefect/client/schemas/objects.py
class SavedSearch(ObjectBaseModel):\n \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the saved search.\")\n filters: List[SavedSearchFilter] = Field(\n default_factory=list, description=\"The filter set for the saved search.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Log","title":"Log
","text":" Bases: ObjectBaseModel
An ORM representation of log data.
Source code in prefect/client/schemas/objects.py
class Log(ObjectBaseModel):\n \"\"\"An ORM representation of log data.\"\"\"\n\n name: str = Field(default=..., description=\"The logger name.\")\n level: int = Field(default=..., description=\"The log level.\")\n message: str = Field(default=..., description=\"The log message.\")\n timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run ID associated with the log.\"\n )\n task_run_id: Optional[UUID] = Field(\n default=None, description=\"The task run ID associated with the log.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.QueueFilter","title":"QueueFilter
","text":" Bases: PrefectBaseModel
Filter criteria definition for a work queue.
Source code in prefect/client/schemas/objects.py
class QueueFilter(PrefectBaseModel):\n \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n tags: Optional[List[str]] = Field(\n default=None,\n description=\"Only include flow runs with these tags in the work queue.\",\n )\n deployment_ids: Optional[List[UUID]] = Field(\n default=None,\n description=\"Only include flow runs from these deployments in the work queue.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueue","title":"WorkQueue
","text":" Bases: ObjectBaseModel
An ORM representation of a work queue
Source code in prefect/client/schemas/objects.py
class WorkQueue(ObjectBaseModel):\n \"\"\"An ORM representation of a work queue\"\"\"\n\n name: str = Field(default=..., description=\"The name of the work queue.\")\n description: Optional[str] = Field(\n default=\"\", description=\"An optional description for the work queue.\"\n )\n is_paused: bool = Field(\n default=False, description=\"Whether or not the work queue is paused.\"\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"An optional concurrency limit for the work queue.\"\n )\n priority: conint(ge=1) = Field(\n default=1,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n work_pool_name: Optional[str] = Field(default=None)\n # Will be required after a future migration\n work_pool_id: Optional[UUID] = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n filter: Optional[QueueFilter] = Field(\n default=None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time an agent polled this queue for work.\"\n )\n status: Optional[WorkQueueStatus] = Field(\n default=None, description=\"The queue status.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy
","text":" Bases: PrefectBaseModel
Source code in prefect/client/schemas/objects.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n maximum_late_runs: Optional[int] = Field(\n default=0,\n description=(\n \"The maximum number of late runs in the work queue before it is deemed\"\n \" unhealthy. Defaults to `0`.\"\n ),\n )\n maximum_seconds_since_last_polled: Optional[int] = Field(\n default=60,\n description=(\n \"The maximum number of time in seconds elapsed since work queue has been\"\n \" polled before it is deemed unhealthy. Defaults to `60`.\"\n ),\n )\n\n def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n ) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status
","text":"Given empirical information about the state of the work queue, evaluate its health status.
Parameters:
Name Type Description Default
late_runs
the count of late runs for the work queue.
required
last_polled
Optional[DateTimeTZ]
the last time the work queue was polled, if available.
None
Returns:
Name Type Description
bool
bool
whether or not the work queue is healthy.
Source code in prefect/client/schemas/objects.py
def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
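Usage sketch (the thresholds and timestamps below are illustrative):

import pendulum
from prefect.client.schemas.objects import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)
print(policy.evaluate_health_status(late_runs_count=0, last_polled=pendulum.now("UTC")))  # True
print(policy.evaluate_health_status(late_runs_count=3))  # False: late runs, and no recent poll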
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy
","text":" Bases: ObjectBaseModel
An ORM representation of a flow run notification.
Source code in prefect/client/schemas/objects.py
class FlowRunNotificationPolicy(ObjectBaseModel):\n \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n is_active: bool = Field(\n default=True, description=\"Whether the policy is currently active\"\n )\n state_names: List[str] = Field(\n default=..., description=\"The flow run states that trigger notifications\"\n )\n tags: List[str] = Field(\n default=...,\n description=\"The flow run tags that trigger notifications (set [] to disable)\",\n )\n block_document_id: UUID = Field(\n default=..., description=\"The block document ID used for sending notifications\"\n )\n message_template: Optional[str] = Field(\n default=None,\n description=(\n \"A templatable notification message. Use {braces} to add variables.\"\n \" Valid variables include:\"\n f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n ),\n example=(\n \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n \" {flow_run_state_name}.\"\n ),\n )\n\n @validator(\"message_template\")\n def validate_message_template_variables(cls, v):\n if v is not None:\n try:\n v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n except KeyError as exc:\n raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n return v\n
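A sketch of the template validation (the block document id is a placeholder):

from uuid import uuid4
from prefect.client.schemas.objects import FlowRunNotificationPolicy

policy = FlowRunNotificationPolicy(
    state_names=["Failed"],
    tags=[],
    block_document_id=uuid4(),
    message_template="Flow run {flow_run_name} entered state {flow_run_state_name}.",
)
# A template referencing an unknown variable, such as {nope}, fails validation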
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Agent","title":"Agent
","text":" Bases: ObjectBaseModel
An ORM representation of an agent
Source code in prefect/client/schemas/objects.py
class Agent(ObjectBaseModel):\n \"\"\"An ORM representation of an agent\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the agent. If a name is not provided, it will be\"\n \" auto-generated.\"\n ),\n )\n work_queue_id: UUID = Field(\n default=..., description=\"The work queue with which the agent is associated.\"\n )\n last_activity_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time this agent polled for work.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool","title":"WorkPool
","text":" Bases: ObjectBaseModel
An ORM representation of a work pool
Source code in prefect/client/schemas/objects.py
class WorkPool(ObjectBaseModel):\n \"\"\"An ORM representation of a work pool\"\"\"\n\n name: str = Field(\n description=\"The name of the work pool.\",\n )\n description: Optional[str] = Field(\n default=None, description=\"A description of the work pool.\"\n )\n type: str = Field(description=\"The work pool type.\")\n base_job_template: Dict[str, Any] = Field(\n default_factory=dict, description=\"The work pool's base job template.\"\n )\n is_paused: bool = Field(\n default=False,\n description=\"Pausing the work pool stops the delivery of all work.\",\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"A concurrency limit for the work pool.\"\n )\n status: Optional[WorkPoolStatus] = Field(\n default=None, description=\"The current status of the work pool.\"\n )\n\n # this required field has a default of None so that the custom validator\n # below will be called and produce a more helpful error message\n default_queue_id: UUID = Field(\n None, description=\"The id of the pool's default queue.\"\n )\n\n @property\n def is_push_pool(self) -> bool:\n return self.type.endswith(\":push\")\n\n @property\n def is_managed_pool(self) -> bool:\n return self.type.endswith(\":managed\")\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n\n @validator(\"default_queue_id\", always=True)\n def helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id
","text":"Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate
model in that case.
Source code in prefect/client/schemas/objects.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Worker","title":"Worker
","text":" Bases: ObjectBaseModel
An ORM representation of a worker
Source code in prefect/client/schemas/objects.py
class Worker(ObjectBaseModel):\n \"\"\"An ORM representation of a worker\"\"\"\n\n name: str = Field(description=\"The name of the worker.\")\n work_pool_id: UUID = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n last_heartbeat_time: datetime.datetime = Field(\n None, description=\"The last time the worker process sent a heartbeat.\"\n )\n heartbeat_interval_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to expect between heartbeats sent by the worker.\"\n ),\n )\n status: WorkerStatus = Field(\n WorkerStatus.OFFLINE,\n description=\"Current status of the worker.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput","title":"FlowRunInput
","text":" Bases: ObjectBaseModel
Source code in prefect/client/schemas/objects.py
class FlowRunInput(ObjectBaseModel):\n flow_run_id: UUID = Field(description=\"The flow run ID associated with the input.\")\n key: str = Field(description=\"The key of the input.\")\n value: str = Field(description=\"The value of the input.\")\n sender: Optional[str] = Field(description=\"The sender of the input.\")\n\n @property\n def decoded_value(self) -> Any:\n \"\"\"\n Decode the value of the input.\n\n Returns:\n Any: the decoded value\n \"\"\"\n return orjson.loads(self.value)\n\n @validator(\"key\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_alphanumeric_dashes_only(v)\n return v\n
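A sketch of decoded_value (the key must be alphanumeric plus dashes, per the validator above; the payload is a placeholder):

import orjson
from uuid import uuid4
from prefect.client.schemas.objects import FlowRunInput

run_input = FlowRunInput(
    flow_run_id=uuid4(),
    key="approval",
    value=orjson.dumps({"approved": True}).decode(),
)
print(run_input.decoded_value)  # {'approved': True}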
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput.decoded_value","title":"decoded_value: Any
property
","text":"Decode the value of the input.
Returns:
Name Type Description
Any
Any
the decoded value
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.GlobalConcurrencyLimit","title":"GlobalConcurrencyLimit
","text":" Bases: ObjectBaseModel
An ORM representation of a global concurrency limit
Source code in prefect/client/schemas/objects.py
class GlobalConcurrencyLimit(ObjectBaseModel):\n \"\"\"An ORM representation of a global concurrency limit\"\"\"\n\n name: str = Field(description=\"The name of the global concurrency limit.\")\n limit: int = Field(\n description=(\n \"The maximum number of slots that can be occupied on this concurrency\"\n \" limit.\"\n )\n )\n active: Optional[bool] = Field(\n default=True,\n description=\"Whether or not the concurrency limit is in an active state.\",\n )\n active_slots: Optional[int] = Field(\n default=0,\n description=\"Number of tasks currently using a concurrency slot.\",\n )\n slot_decay_per_second: Optional[int] = Field(\n default=0,\n description=(\n \"Controls the rate at which slots are released when the concurrency limit\"\n \" is used as a rate limit.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_4","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses","title":"prefect.client.schemas.responses
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.SetStateStatus","title":"SetStateStatus
","text":" Bases: AutoEnum
Enumerates return statuses for setting run states.
Source code in prefect/client/schemas/responses.py
class SetStateStatus(AutoEnum):\n \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n ACCEPT = AutoEnum.auto()\n REJECT = AutoEnum.auto()\n ABORT = AutoEnum.auto()\n WAIT = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails
","text":" Bases: PrefectBaseModel
Details associated with an ACCEPT state transition.
Source code in prefect/client/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n type: Literal[\"accept_details\"] = Field(\n default=\"accept_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateRejectDetails","title":"StateRejectDetails
","text":" Bases: PrefectBaseModel
Details associated with a REJECT state transition.
Source code in prefect/client/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n type: Literal[\"reject_details\"] = Field(\n default=\"reject_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was rejected.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAbortDetails","title":"StateAbortDetails
","text":" Bases: PrefectBaseModel
Details associated with an ABORT state transition.
Source code in prefect/client/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n type: Literal[\"abort_details\"] = Field(\n default=\"abort_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was aborted.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateWaitDetails","title":"StateWaitDetails
","text":" Bases: PrefectBaseModel
Details associated with a WAIT state transition.
Source code in prefect/client/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n type: Literal[\"wait_details\"] = Field(\n default=\"wait_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n delay_seconds: int = Field(\n default=...,\n description=(\n \"The length of time in seconds the client should wait before transitioning\"\n \" states.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition should wait.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponseState","title":"HistoryResponseState
","text":" Bases: PrefectBaseModel
Represents a single state's history over an interval.
Source code in prefect/client/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n \"\"\"Represents a single state's history over an interval.\"\"\"\n\n state_type: objects.StateType = Field(default=..., description=\"The state type.\")\n state_name: str = Field(default=..., description=\"The state name.\")\n count_runs: int = Field(\n default=...,\n description=\"The number of runs in the specified state during the interval.\",\n )\n sum_estimated_run_time: datetime.timedelta = Field(\n default=...,\n description=\"The total estimated run time of all runs during the interval.\",\n )\n sum_estimated_lateness: datetime.timedelta = Field(\n default=...,\n description=(\n \"The sum of differences between actual and expected start time during the\"\n \" interval.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponse","title":"HistoryResponse
","text":" Bases: PrefectBaseModel
Represents a history of aggregation states over an interval
Source code in prefect/client/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n interval_start: DateTimeTZ = Field(\n default=..., description=\"The start date of the interval.\"\n )\n interval_end: DateTimeTZ = Field(\n default=..., description=\"The end date of the interval.\"\n )\n states: List[HistoryResponseState] = Field(\n default=..., description=\"A list of state histories during the interval.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.OrchestrationResult","title":"OrchestrationResult
","text":" Bases: PrefectBaseModel
A container for the output of state orchestration.
Source code in prefect/client/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n \"\"\"\n A container for the output of state orchestration.\n \"\"\"\n\n state: Optional[objects.State]\n status: SetStateStatus\n details: StateResponseDetails\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.FlowRunResponse","title":"FlowRunResponse
","text":" Bases: ObjectBaseModel
Source code in prefect/client/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ObjectBaseModel):\n name: str = FieldFrom(objects.FlowRun)\n flow_id: UUID = FieldFrom(objects.FlowRun)\n state_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n state_type: Optional[objects.StateType] = FieldFrom(objects.FlowRun)\n state_name: Optional[str] = FieldFrom(objects.FlowRun)\n run_count: int = FieldFrom(objects.FlowRun)\n expected_start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n end_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n total_run_time: datetime.timedelta = FieldFrom(objects.FlowRun)\n estimated_run_time: datetime.timedelta = FieldFrom(objects.FlowRun)\n estimated_start_time_delta: datetime.timedelta = FieldFrom(objects.FlowRun)\n auto_scheduled: bool = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(objects.FlowRun)\n created_by: Optional[CreatedBy] = FieldFrom(objects.FlowRun)\n work_pool_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[objects.State] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRunResponse):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_5","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules","title":"prefect.client.schemas.schedules
","text":"Schedule schemas
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.IntervalSchedule","title":"IntervalSchedule
","text":" Bases: PrefectBaseModel
A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.
NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
Parameters:
Name Type Description Default
interval
timedelta
an interval to schedule on
required
anchor_date
DateTimeTZ
an anchor date to schedule increments against; if not provided, the current timestamp will be used
required
timezone
str
a valid timezone string
required
Source code in prefect/client/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n \"\"\"\n A schedule formed by adding `interval` increments to an `anchor_date`. If no\n `anchor_date` is supplied, the current UTC time is used. If a\n timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n anchor dates are always stored as UTC offsets, so a `timezone` can be\n provided to determine localization behaviors like DST boundary handling. If\n none is provided it will be inferred from the anchor date.\n\n NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n DST-observing timezone, then the schedule will adjust itself appropriately.\n Intervals greater than 24 hours will follow DST conventions, while intervals\n of less than 24 hours will follow UTC intervals. For example, an hourly\n schedule will fire every UTC hour, even across DST boundaries. When clocks\n are set back, this will result in two runs that *appear* to both be\n scheduled for 1am local time, even though they are an hour apart in UTC\n time. For longer intervals, like a daily schedule, the interval schedule\n will adjust for DST boundaries so that the clock-hour remains constant. This\n means that a daily schedule that always fires at 9am will observe DST and\n continue to fire at 9am in the local time zone.\n\n Args:\n interval (datetime.timedelta): an interval to schedule on\n anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n if not provided, the current timestamp will be used\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n exclude_none = True\n\n interval: datetime.timedelta\n anchor_date: DateTimeTZ = None\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"interval\")\n def interval_must_be_positive(cls, v):\n if v.total_seconds() <= 0:\n raise ValueError(\"The interval must be positive\")\n return v\n\n @validator(\"anchor_date\", always=True)\n def default_anchor_date(cls, v):\n if v is None:\n return pendulum.now(\"UTC\")\n return pendulum.instance(v)\n\n @validator(\"timezone\", always=True)\n def default_timezone(cls, v, *, values, **kwargs):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n # if was provided, make sure its a valid IANA string\n if v and v not in timezones:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n\n # otherwise infer the timezone from the anchor date\n elif v is None and values.get(\"anchor_date\"):\n tz = values[\"anchor_date\"].tz.name\n if tz in timezones:\n return tz\n # sometimes anchor dates have \"timezones\" that are UTC offsets\n # like \"-04:00\". This happens when parsing ISO8601 strings.\n # In this case we, the correct inferred localization is \"UTC\".\n else:\n return \"UTC\"\n\n return v\n
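Usage sketch of the anchoring and timezone inference described above (dates are placeholders):

from datetime import timedelta
import pendulum
from prefect.client.schemas.schedules import IntervalSchedule

schedule = IntervalSchedule(
    interval=timedelta(hours=1),
    anchor_date=pendulum.datetime(2024, 1, 1, 9, 0, tz="America/New_York"),
)
print(schedule.timezone)  # "America/New_York", inferred from the anchor date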
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.CronSchedule","title":"CronSchedule
","text":" Bases: PrefectBaseModel
Cron schedule
NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.
Parameters:
Name Type Description Defaultcron
str
a valid cron string
requiredtimezone
str
a valid timezone string in IANA tzdata format (for example, America/New_York).
requiredday_or
bool
Control how croniter handles day
and day_of_week
entries. Defaults to True, matching cron, which connects those values using OR. If set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on the second Friday of each month by setting both the days of the month and the weekday.
prefect/client/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n \"\"\"\n Cron schedule\n\n NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n itself appropriately. Cron's rules for DST are based on schedule times, not\n intervals. This means that an hourly cron schedule will fire on every new\n schedule hour, not every elapsed hour; for example, when clocks are set back\n this will result in a two-hour pause as the schedule will fire *the first\n time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n Longer schedules, such as one that fires at 9am every morning, will\n automatically adjust for DST.\n\n Args:\n cron (str): a valid cron string\n timezone (str): a valid timezone string in IANA tzdata format (for example,\n America/New_York).\n day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n entries. Defaults to True, matching cron which connects those values using\n OR. If the switch is set to False, the values are connected using AND. This\n behaves like fcron and enables you to e.g. define a job that executes each\n 2nd friday of a month by setting the days of month and the weekday.\n\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n cron: str = Field(default=..., example=\"0 0 * * *\")\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n day_or: bool = Field(\n default=True,\n description=(\n \"Control croniter behavior for handling day and day_of_week entries.\"\n ),\n )\n\n @validator(\"timezone\")\n def valid_timezone(cls, v):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n if v and v not in timezones:\n raise ValueError(\n f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n \" America/New_York)\"\n )\n return v\n\n @validator(\"cron\")\n def valid_cron_string(cls, v):\n # croniter allows \"random\" and \"hashed\" expressions\n # which we do not support https://github.com/kiorky/croniter\n if not croniter.is_valid(v):\n raise ValueError(f'Invalid cron string: \"{v}\"')\n elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n raise ValueError(\n f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n )\n return v\n
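A brief sketch of the two `day_or` behaviors described above; the cron strings are arbitrary examples:

```python
from prefect.client.schemas.schedules import CronSchedule

# Fires at 9am every day in New York local time, adjusting across DST.
morning = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

# With day_or=False, day-of-month and day-of-week are ANDed: this fires
# only when a day between the 8th and 14th is also a Friday, i.e. the
# second Friday of each month.
second_friday = CronSchedule(cron="0 9 8-14 * 5", day_or=False)
```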
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule","title":"RRuleSchedule
","text":" Bases: PrefectBaseModel
RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.
RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.
Note that as a calendar-oriented standard, RRuleSchedules
are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.
Parameters:
Name Type Description Defaultrrule
str
a valid RRule string
requiredtimezone
str
a valid timezone string
required Source code inprefect/client/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n \"\"\"\n RRule schedule, based on the iCalendar standard\n ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n implemented in `dateutils.rrule`.\n\n RRules are appropriate for any kind of calendar-date manipulation, including\n irregular intervals, repetition, exclusions, week day or day-of-month\n adjustments, and more.\n\n Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n to the initial timezone provided. A 9am daily schedule with a daylight saving\n time-aware start date will maintain a local 9am time through DST boundaries;\n a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n Args:\n rrule (str): a valid RRule string\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n rrule: str\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"rrule\")\n def validate_rrule_str(cls, v):\n # attempt to parse the rrule string as an rrule object\n # this will error if the string is invalid\n try:\n dateutil.rrule.rrulestr(v, cache=True)\n except ValueError as exc:\n # rrules errors are a mix of cryptic and informative\n # so reraise to be clear that the string was invalid\n raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n if len(v) > MAX_RRULE_LENGTH:\n raise ValueError(\n f'Invalid RRule string \"{v[:40]}...\"\\n'\n f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n )\n return v\n\n @classmethod\n def from_rrule(cls, rrule: dateutil.rrule.rrule):\n if isinstance(rrule, dateutil.rrule.rrule):\n if rrule._dtstart.tzinfo is not None:\n timezone = rrule._dtstart.tzinfo.name\n else:\n timezone = \"UTC\"\n return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n if len(unique_timezones) > 1:\n raise ValueError(\n f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n )\n\n if len(unique_dstarts) > 1:\n raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n if unique_dstarts and unique_timezones:\n timezone = dtstarts[0].tzinfo.name\n else:\n timezone = \"UTC\"\n\n rruleset_string = \"\"\n if rrule._rrule:\n rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n if rrule._exrule:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n \"RRULE\", \"EXRULE\"\n )\n if rrule._rdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"RDATE:\" + \",\".join(\n rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n )\n if rrule._exdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"EXDATE:\" + \",\".join(\n exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n )\n return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n else:\n raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n 
kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n\n @validator(\"timezone\", always=True)\n def valid_timezone(cls, v):\n if v and v not in pytz.all_timezones_set:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n elif v is None:\n return \"UTC\"\n return v\n
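A minimal sketch of the two construction paths shown in the source above, using an arbitrary example rule:

```python
import dateutil.rrule

from prefect.client.schemas.schedules import RRuleSchedule

# From a raw RRule string; the timezone drives DST-aware localization.
daily_9am = RRuleSchedule(
    rrule="FREQ=DAILY;BYHOUR=9;BYMINUTE=0;BYSECOND=0",
    timezone="America/New_York",
)

# From an existing dateutil rrule object via the from_rrule classmethod.
rule = dateutil.rrule.rrule(dateutil.rrule.WEEKLY, byweekday=dateutil.rrule.FR)
weekly = RRuleSchedule.from_rrule(rule)
```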
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule
","text":"Since rrule doesn't properly serialize/deserialize timezones, we localize dates here
Source code inprefect/client/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.construct_schedule","title":"construct_schedule
","text":"Construct a schedule from the provided arguments.
Parameters:
Name Type Description Defaultinterval
Optional[Union[int, float, timedelta]]
An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
anchor_date
Optional[Union[datetime, str]]
The start date for an interval schedule.
None
cron
Optional[str]
A cron schedule for runs.
None
rrule
Optional[str]
An rrule schedule of when to execute runs of this flow.
None
timezone
Optional[str]
A timezone to use for the schedule. Defaults to UTC.
None
Source code in prefect/client/schemas/schedules.py
def construct_schedule(\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n anchor_date: Optional[Union[datetime.datetime, str]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n timezone: Optional[str] = None,\n) -> SCHEDULE_TYPES:\n \"\"\"\n Construct a schedule from the provided arguments.\n\n Args:\n interval: An interval on which to schedule runs. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n anchor_date: The start date for an interval schedule.\n cron: A cron schedule for runs.\n rrule: An rrule schedule of when to execute runs of this flow.\n timezone: A timezone to use for the schedule. Defaults to UTC.\n \"\"\"\n num_schedules = sum(1 for entry in (interval, cron, rrule) if entry is not None)\n if num_schedules > 1:\n raise ValueError(\"Only one of interval, cron, or rrule can be provided.\")\n\n if anchor_date and not interval:\n raise ValueError(\n \"An anchor date can only be provided with an interval schedule\"\n )\n\n if timezone and not (interval or cron or rrule):\n raise ValueError(\n \"A timezone can only be provided with interval, cron, or rrule\"\n )\n\n schedule = None\n if interval:\n if isinstance(interval, (int, float)):\n interval = datetime.timedelta(seconds=interval)\n schedule = IntervalSchedule(\n interval=interval, anchor_date=anchor_date, timezone=timezone\n )\n elif cron:\n schedule = CronSchedule(cron=cron, timezone=timezone)\n elif rrule:\n schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n if schedule is None:\n raise ValueError(\"Either interval, cron, or rrule must be provided\")\n\n return schedule\n
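A quick sketch of the dispatch behavior implemented above; the values are arbitrary examples:

```python
from datetime import timedelta

import pendulum

from prefect.client.schemas.schedules import construct_schedule

# Numeric intervals are interpreted as seconds.
hourly = construct_schedule(interval=3600)

# Exactly one of interval, cron, or rrule may be given; a timezone may
# accompany any of them.
cron = construct_schedule(cron="0 9 * * *", timezone="America/New_York")

# An anchor_date is only valid together with an interval.
daily = construct_schedule(
    interval=timedelta(days=1),
    anchor_date=pendulum.datetime(2024, 1, 1, 9, tz="America/New_York"),
)
```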
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_6","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting","title":"prefect.client.schemas.sorting
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowRunSort","title":"FlowRunSort
","text":" Bases: AutoEnum
Defines flow run sorting options.
Source code inprefect/client/schemas/sorting.py
class FlowRunSort(AutoEnum):\n \"\"\"Defines flow run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n START_TIME_ASC = AutoEnum.auto()\n START_TIME_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n
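As a usage sketch, sort options are typically passed to the corresponding client read methods; this assumes `PrefectClient.read_flow_runs` accepts `sort` and `limit` arguments, as in recent Prefect releases:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.sorting import FlowRunSort


async def latest_runs():
    async with get_client() as client:
        # Most recently started flow runs first.
        return await client.read_flow_runs(
            sort=FlowRunSort.START_TIME_DESC, limit=5
        )


runs = asyncio.run(latest_runs())
```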
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.TaskRunSort","title":"TaskRunSort
","text":" Bases: AutoEnum
Defines task run sorting options.
Source code inprefect/client/schemas/sorting.py
class TaskRunSort(AutoEnum):\n \"\"\"Defines task run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.LogSort","title":"LogSort
","text":" Bases: AutoEnum
Defines log sorting options.
Source code inprefect/client/schemas/sorting.py
class LogSort(AutoEnum):\n \"\"\"Defines log sorting options.\"\"\"\n\n TIMESTAMP_ASC = AutoEnum.auto()\n TIMESTAMP_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowSort","title":"FlowSort
","text":" Bases: AutoEnum
Defines flow sorting options.
Source code inprefect/client/schemas/sorting.py
class FlowSort(AutoEnum):\n \"\"\"Defines flow sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.DeploymentSort","title":"DeploymentSort
","text":" Bases: AutoEnum
Defines deployment sorting options.
Source code inprefect/client/schemas/sorting.py
class DeploymentSort(AutoEnum):\n \"\"\"Defines deployment sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactSort","title":"ArtifactSort
","text":" Bases: AutoEnum
Defines artifact sorting options.
Source code inprefect/client/schemas/sorting.py
class ArtifactSort(AutoEnum):\n \"\"\"Defines artifact sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort
","text":" Bases: AutoEnum
Defines artifact collection sorting options.
Source code inprefect/client/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n \"\"\"Defines artifact collection sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.VariableSort","title":"VariableSort
","text":" Bases: AutoEnum
Defines variables sorting options.
Source code inprefect/client/schemas/sorting.py
class VariableSort(AutoEnum):\n \"\"\"Defines variables sorting options.\"\"\"\n\n CREATED_DESC = \"CREATED_DESC\"\n UPDATED_DESC = \"UPDATED_DESC\"\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort
","text":" Bases: AutoEnum
Defines block document sorting options.
Source code inprefect/client/schemas/sorting.py
class BlockDocumentSort(AutoEnum):\n \"\"\"Defines block document sorting options.\"\"\"\n\n NAME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n BLOCK_TYPE_AND_NAME_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities
","text":"Utilities for working with clients.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.get_or_create_client","title":"get_or_create_client
","text":"Returns provided client, infers a client from context if available, or creates a new client.
Parameters:
Name Type Description Defaultclient
Optional[PrefectClient]
an optional client to use; if provided, it is returned as-is
None
Returns:
Type DescriptionTuple[PrefectClient, bool]
a tuple of the client and a boolean indicating whether the client was inferred from context
Source code in
prefect/client/utilities.py
def get_or_create_client(\n client: Optional[\"PrefectClient\"] = None,\n) -> Tuple[\"PrefectClient\", bool]:\n \"\"\"\n Returns provided client, infers a client from context if available, or creates a new client.\n\n Args:\n - client (PrefectClient, optional): an optional client to use\n\n Returns:\n - tuple: a tuple of the client and a boolean indicating if the client was inferred from context\n \"\"\"\n if client is not None:\n return client, True\n from prefect._internal.concurrency.event_loop import get_running_loop\n from prefect.context import FlowRunContext, TaskRunContext\n\n flow_run_context = FlowRunContext.get()\n task_run_context = TaskRunContext.get()\n\n if (\n flow_run_context\n and getattr(flow_run_context.client, \"_loop\") == get_running_loop()\n ):\n return flow_run_context.client, True\n elif (\n task_run_context\n and getattr(task_run_context.client, \"_loop\") == get_running_loop()\n ):\n return task_run_context.client, True\n else:\n from prefect.client.orchestration import get_client as get_httpx_client\n\n return get_httpx_client(), False\n
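A short usage sketch of the behavior described above:

```python
from prefect.client.utilities import get_or_create_client

# Outside any flow or task run context, a new client is created and the
# second element is False, signalling the client was not inferred.
client, inferred = get_or_create_client()

# Passing an existing client returns it unchanged with the flag True;
# the caller is expected to manage its context.
same_client, inferred = get_or_create_client(client)
```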
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client
","text":"Simple helper to provide a context managed client to a asynchronous function.
The decorated function must take a client
kwarg; if a client is passed when called, it will be used instead of creating a new one, but it will not be context managed, as it is assumed that the caller is managing the context.
prefect/client/utilities.py
def inject_client(\n fn: Callable[P, Coroutine[Any, Any, Any]],\n) -> Callable[P, Coroutine[Any, Any, Any]]:\n \"\"\"\n Simple helper to provide a context managed client to a asynchronous function.\n\n The decorated function _must_ take a `client` kwarg and if a client is passed when\n called it will be used instead of creating a new one, but it will not be context\n managed as it is assumed that the caller is managing the context.\n \"\"\"\n\n @wraps(fn)\n async def with_injected_client(*args: P.args, **kwargs: P.kwargs) -> Any:\n client = cast(Optional[\"PrefectClient\"], kwargs.pop(\"client\", None))\n client, inferred = get_or_create_client(client)\n if not inferred:\n context = client\n else:\n from prefect.utilities.asyncutils import asyncnullcontext\n\n context = asyncnullcontext()\n async with context as new_client:\n kwargs.setdefault(\"client\", new_client or client)\n return await fn(*args, **kwargs)\n\n return with_injected_client\n
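A hedged sketch of a decorated function; `count_flows` is a hypothetical name:

```python
from prefect.client.utilities import inject_client


@inject_client
async def count_flows(client=None):
    # If the caller did not supply a client, one is created and
    # context-managed for the duration of this call.
    flows = await client.read_flows()
    return len(flows)
```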
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/concurrency/asyncio/","title":"asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio","title":"prefect.concurrency.asyncio
","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.ConcurrencySlotAcquisitionError","title":"ConcurrencySlotAcquisitionError
","text":" Bases: Exception
Raised when an unhandlable error occurs while acquiring concurrency slots.
Source code inprefect/concurrency/asyncio.py
class ConcurrencySlotAcquisitionError(Exception):\n \"\"\"Raised when an unhandlable error occurs while acquiring concurrency slots.\"\"\"\n
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.rate_limit","title":"rate_limit
async
","text":"Block execution until an occupy
number of slots of the concurrency limits given in names
are acquired. Requires that all given concurrency limits have a slot decay.
Parameters:
Name Type Description Defaultnames
Union[str, List[str]]
The names of the concurrency limits to acquire slots from.
requiredoccupy
int
The number of slots to acquire and hold from each limit.
1
Source code in prefect/concurrency/asyncio.py
async def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n \"\"\"Block execution until an `occupy` number of slots of the concurrency\n limits given in `names` are acquired. Requires that all given concurrency\n limits have a slot decay.\n\n Args:\n names: The names of the concurrency limits to acquire slots from.\n occupy: The number of slots to acquire and hold from each limit.\n \"\"\"\n names = names if isinstance(names, list) else [names]\n limits = await _acquire_concurrency_slots(names, occupy, mode=\"rate_limit\")\n _emit_concurrency_acquisition_events(limits, occupy)\n
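A sketch of throttling calls inside an async flow; the limit name "api-calls" is hypothetical and must already exist with a slot decay configured:

```python
from prefect import flow
from prefect.concurrency.asyncio import rate_limit


@flow
async def throttled():
    for _ in range(10):
        # Blocks until a slot of the "api-calls" limit is acquired.
        await rate_limit("api-calls")
        ...  # perform the rate-limited work here
```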
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/common/","title":"common","text":"","tags":["Python API","concurrency","common"]},{"location":"api-ref/prefect/concurrency/common/#prefect.concurrency.common","title":"prefect.concurrency.common
","text":"","tags":["Python API","concurrency","common"]},{"location":"api-ref/prefect/concurrency/events/","title":"events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/events/#prefect.concurrency.events","title":"prefect.concurrency.events
","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/","title":"services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/#prefect.concurrency.services","title":"prefect.concurrency.services
","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/sync/","title":"sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync","title":"prefect.concurrency.sync
","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync.rate_limit","title":"rate_limit
","text":"Block execution until an occupy
number of slots of the concurrency limits given in names
are acquired. Requires that all given concurrency limits have a slot decay.
Parameters:
Name Type Description Defaultnames
Union[str, List[str]]
The names of the concurrency limits to acquire slots from.
requiredoccupy
int
The number of slots to acquire and hold from each limit.
1
Source code in prefect/concurrency/sync.py
def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n \"\"\"Block execution until an `occupy` number of slots of the concurrency\n limits given in `names` are acquired. Requires that all given concurrency\n limits have a slot decay.\n\n Args:\n names: The names of the concurrency limits to acquire slots from.\n occupy: The number of slots to acquire and hold from each limit.\n \"\"\"\n names = names if isinstance(names, list) else [names]\n limits = _call_async_function_from_sync(\n _acquire_concurrency_slots, names, occupy, mode=\"rate_limit\"\n )\n _emit_concurrency_acquisition_events(limits, occupy)\n
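The synchronous counterpart mirrors the async sketch above; again, "api-calls" is a hypothetical limit with slot decay:

```python
from prefect import flow
from prefect.concurrency.sync import rate_limit


@flow
def throttled():
    for _ in range(10):
        rate_limit("api-calls")  # blocks until a slot is acquired
        ...  # perform the rate-limited work here
```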
","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/deployments/base/","title":"base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base
","text":"Core primitives for managing Prefect projects. Projects provide a minimally opinionated build system for managing flows and deployments.
To get started, follow along with the deployments tutorial.
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe
","text":"Given a recipe name, returns a dictionary representing base configuration options.
Parameters:
Name Type Description Defaultrecipe
str
the name of the recipe to use
requiredformatting_kwargs
dict
additional keyword arguments to format the recipe
{}
Raises:
Type DescriptionValueError
if provided recipe name does not exist.
Source code inprefect/deployments/base.py
def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n \"\"\"\n Given a recipe name, returns a dictionary representing base configuration options.\n\n Args:\n recipe (str): the name of the recipe to use\n formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n Raises:\n ValueError: if provided recipe name does not exist.\n \"\"\"\n # load the recipe\n recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n if not recipe_path.exists():\n raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n with recipe_path.open(mode=\"r\") as f:\n config = yaml.safe_load(f)\n\n config = apply_values(\n template=config, values=formatting_kwargs, remove_notset=False\n )\n\n return config\n
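A minimal sketch; the repository and branch values are hypothetical formatting kwargs used to fill the recipe's template:

```python
from prefect.deployments.base import configure_project_by_recipe

config = configure_project_by_recipe(
    recipe="git",
    repository="https://github.com/org/repo",  # hypothetical value
    branch="main",  # hypothetical value
)
# recipes define the build/push/pull sections of a prefect.yaml
pull_steps = config["pull"]
```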
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml
","text":"Creates default prefect.yaml
file in the provided path if one does not already exist; returns boolean specifying whether a file was created.
Parameters:
Name Type Description Defaultpath
str
the path in which to create the prefect.yaml file
required
name
str
the name of the project; if not provided, the current directory name will be used
None
contents
dict
a dictionary of contents to write to the file; if not provided, defaults will be used
None
Source code in prefect/deployments/base.py
def create_default_prefect_yaml(\n path: str, name: str = None, contents: dict = None\n) -> bool:\n \"\"\"\n Creates default `prefect.yaml` file in the provided path if one does not already exist;\n returns boolean specifying whether a file was created.\n\n Args:\n name (str, optional): the name of the project; if not provided, the current directory name\n will be used\n contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n defaults will be used\n \"\"\"\n path = Path(path)\n prefect_file = path / \"prefect.yaml\"\n if prefect_file.exists():\n return False\n default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n with default_file.open(mode=\"r\") as df:\n default_contents = yaml.safe_load(df)\n\n import prefect\n\n contents[\"prefect-version\"] = prefect.__version__\n contents[\"name\"] = name\n\n with prefect_file.open(mode=\"w\") as f:\n # write header\n f.write(\n \"# Welcome to your prefect.yaml file! You can use this file for storing and\"\n \" managing\\n# configuration for deploying your flows. We recommend\"\n \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n )\n\n f.write(\"# Generic metadata about this project\\n\")\n yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n f.write(\"\\n\")\n\n # build\n f.write(\"# build section allows you to manage and build docker images\\n\")\n yaml.dump(\n {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # push\n f.write(\n \"# push section allows you to manage if and how this project is uploaded to\"\n \" remote locations\\n\"\n )\n yaml.dump(\n {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # pull\n f.write(\n \"# pull section allows you to provide instructions for cloning this project\"\n \" in remote locations\\n\"\n )\n yaml.dump(\n {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # deployments\n f.write(\n \"# the deployments section allows you to provide configuration for\"\n \" deploying flows\\n\"\n )\n yaml.dump(\n {\n \"deployments\": contents.get(\n \"deployments\", default_contents.get(\"deployments\")\n )\n },\n f,\n sort_keys=False,\n )\n return True\n
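A sketch of pairing this with configure_project_by_recipe, mirroring how initialize_project uses it; the project name is hypothetical:

```python
from prefect.deployments.base import (
    configure_project_by_recipe,
    create_default_prefect_yaml,
)

contents = configure_project_by_recipe(recipe="local")
created = create_default_prefect_yaml(".", name="my-project", contents=contents)
# created is False if a prefect.yaml already exists at that path
```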
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.find_prefect_directory","title":"find_prefect_directory
","text":"Given a path, recurses upward looking for .prefect/ directories.
Once found, returns the absolute path to the .prefect/ directory, which is assumed to reside within the root of the current project.
If one is never found, None
is returned.
prefect/deployments/base.py
def find_prefect_directory(path: Path = None) -> Optional[Path]:\n \"\"\"\n Given a path, recurses upward looking for .prefect/ directories.\n\n Once found, returns absolute path to the ./prefect directory, which is assumed to reside within the\n root for the current project.\n\n If one is never found, `None` is returned.\n \"\"\"\n path = Path(path or \".\").resolve()\n parent = path.parent.resolve()\n while path != parent:\n prefect_dir = path.joinpath(\".prefect\")\n if prefect_dir.is_dir():\n return prefect_dir\n\n path = parent.resolve()\n parent = path.parent.resolve()\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project
","text":"Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.
Parameters:
Name Type Description Defaultname
str
the name of the project; if not provided, the current directory name
None
recipe
str
the name of the recipe to use; if not provided, one is inferred
None
inputs
dict
a dictionary of inputs to use when formatting the recipe
None
Returns:
Type DescriptionList[str]
List[str]: a list of files / directories that were created
Source code inprefect/deployments/base.py
def initialize_project(\n name: str = None, recipe: str = None, inputs: dict = None\n) -> List[str]:\n \"\"\"\n Initializes a basic project structure with base files. If no name is provided, the name\n of the current directory is used. If no recipe is provided, one is inferred.\n\n Args:\n name (str, optional): the name of the project; if not provided, the current directory name\n recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n Returns:\n List[str]: a list of files / directories that were created\n \"\"\"\n # determine if in git repo or use directory name as a default\n is_git_based = False\n formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n dir_name = os.path.basename(os.getcwd())\n\n remote_url = _get_git_remote_origin_url()\n if remote_url:\n formatting_kwargs[\"repository\"] = remote_url\n is_git_based = True\n branch = _get_git_branch()\n formatting_kwargs[\"branch\"] = branch or \"main\"\n\n formatting_kwargs[\"name\"] = dir_name\n\n has_dockerfile = Path(\"Dockerfile\").exists()\n\n if has_dockerfile:\n formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n elif recipe is not None and \"docker\" in recipe:\n formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n # hand craft a pull step\n if is_git_based and recipe is None:\n if has_dockerfile:\n recipe = \"docker-git\"\n else:\n recipe = \"git\"\n elif recipe is None and has_dockerfile:\n recipe = \"docker\"\n elif recipe is None:\n recipe = \"local\"\n\n formatting_kwargs.update(inputs or {})\n configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n project_name = name or dir_name\n\n files = []\n if create_default_ignore_file(\".\"):\n files.append(\".prefectignore\")\n if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n files.append(\"prefect.yaml\")\n if set_prefect_hidden_dir():\n files.append(\".prefect/\")\n\n return files\n
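A short sketch; the project name is hypothetical, and the recipe, if omitted, is inferred from the directory contents as shown in the source above:

```python
from prefect.deployments.base import initialize_project

files = initialize_project(name="my-project")
# e.g. [".prefectignore", "prefect.yaml", ".prefect/"], depending on
# which of these already existed in the directory
print(files)
```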
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.register_flow","title":"register_flow
async
","text":"Register a flow with this project from an entrypoint.
Parameters:
Name Type Description Defaultentrypoint
str
the entrypoint to the flow to register
requiredforce
bool
whether or not to overwrite an existing flow with the same name
False
Raises:
Type DescriptionValueError
if force
is False
and registration would overwrite an existing flow
prefect/deployments/base.py
async def register_flow(entrypoint: str, force: bool = False):\n \"\"\"\n Register a flow with this project from an entrypoint.\n\n Args:\n entrypoint (str): the entrypoint to the flow to register\n force (bool, optional): whether or not to overwrite an existing flow with the same name\n\n Raises:\n ValueError: if `force` is `False` and registration would overwrite an existing flow\n \"\"\"\n try:\n fpath, obj_name = entrypoint.rsplit(\":\", 1)\n except ValueError as exc:\n if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n missing_flow_name_msg = (\n \"Your flow entrypoint must include the name of the function that is\"\n f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name> as your\"\n f\" entrypoint. If you meant to specify '{entrypoint}' as the deployment\"\n f\" name, try `prefect deploy -n {entrypoint}`.\"\n )\n raise ValueError(missing_flow_name_msg)\n else:\n raise exc\n\n flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n\n fpath = Path(fpath).absolute()\n prefect_dir = find_prefect_directory()\n if not prefect_dir:\n raise FileNotFoundError(\n \"No .prefect directory could be found - run `prefect project\"\n \" init` to create one.\"\n )\n\n entrypoint = f\"{fpath.relative_to(prefect_dir.parent)!s}:{obj_name}\"\n\n flows_file = prefect_dir / \"flows.json\"\n if flows_file.exists():\n with flows_file.open(mode=\"r\") as f:\n flows = json.load(f)\n else:\n flows = {}\n\n ## quality control\n if flow.name in flows and flows[flow.name] != entrypoint:\n if not force:\n raise ValueError(\n f\"Conflicting entry found for flow with name {flow.name!r}.\\nExisting\"\n f\" entrypoint: {flows[flow.name]}\\nAttempted entrypoint:\"\n f\" {entrypoint}\\n\\nYou can try removing the existing entry for\"\n f\" {flow.name!r} from your [yellow]~/.prefect/flows.json[/yellow].\"\n )\n\n flows[flow.name] = entrypoint\n\n with flows_file.open(mode=\"w\") as f:\n json.dump(flows, f, sort_keys=True, indent=2)\n\n return flow\n
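A sketch of registering a flow from a script; the entrypoint path and flow name are hypothetical:

```python
import asyncio

from prefect.deployments.base import register_flow

# Entrypoints take the form "<path/to/file.py>:<flow_function_name>".
# A .prefect/ directory must exist (run `prefect project init` first).
flow = asyncio.run(register_flow("flows/etl.py:my_flow", force=True))
```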
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.set_prefect_hidden_dir","title":"set_prefect_hidden_dir
","text":"Creates default .prefect/
directory if one does not already exist. Returns boolean specifying whether or not a directory was created.
If a path is provided, the directory will be created in that location.
Source code inprefect/deployments/base.py
def set_prefect_hidden_dir(path: str = None) -> bool:\n \"\"\"\n Creates default `.prefect/` directory if one does not already exist.\n Returns boolean specifying whether or not a directory was created.\n\n If a path is provided, the directory will be created in that location.\n \"\"\"\n path = Path(path or \".\") / \".prefect\"\n\n # use exists so that we dont accidentally overwrite a file\n if path.exists():\n return False\n path.mkdir(mode=0o0700)\n return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments
","text":"Objects for specifying deployments and utilities for loading flows from deployments.
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment
","text":" Bases: BaseModel
DEPRECATION WARNING:
This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by flow.deploy
, which offers enhanced functionality and better a better user experience. For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.
A Prefect Deployment definition, used for specifying and building deployments.
Parameters:
Name Type Description Defaultname
A name for the deployment (required).
requiredversion
An optional version for the deployment; defaults to the flow's version
requireddescription
An optional description of the deployment; defaults to the flow's description
requiredtags
An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name
.
schedule
A schedule to run this deployment on, once registered (deprecated)
requiredis_schedule_active
Whether or not the schedule is active (deprecated)
requiredschedules
A list of schedules to run this deployment on
requiredwork_queue_name
The work queue that will handle this deployment's runs
requiredwork_pool_name
The work pool for the deployment
requiredflow_name
The name of the flow this deployment encapsulates
requiredparameters
A dictionary of parameter values to pass to runs created from this deployment
requiredinfrastructure
An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses
requiredinfra_overrides
A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value
or namespace='prefect'
storage
An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path
requiredpath
The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
requiredentrypoint
The path to the entrypoint for the workflow, always relative to the path
parameter_openapi_schema
The parameter schema of the flow, including defaults.
requiredenforce_parameter_schema
Whether or not the Prefect API should enforce the parameter schema for this deployment.
requiredCreate a new deployment using configuration defaults for an imported flow:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n... flow=my_flow,\n... name=\"example\",\n... version=\"1\",\n... tags=[\"demo\"],\n>>> )\n>>> deployment.apply()\n\nCreate a new deployment with custom storage and an infrastructure override:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n\n>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n... flow=my_flow,\n... name=\"s3-example\",\n... version=\"2\",\n... tags=[\"aws\"],\n... storage=storage,\n... infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n>>> )\n>>> deployment.apply()\n
Source code in prefect/deployments/deployments.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use `flow.deploy` to deploy your flows instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Deployment(BaseModel):\n \"\"\"\n DEPRECATION WARNING:\n\n This class is deprecated as of March 2024 and will not be available after September 2024.\n It has been replaced by `flow.deploy`, which offers enhanced functionality and better a better user experience.\n For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\n\n A Prefect Deployment definition, used for specifying and building deployments.\n\n Args:\n name: A name for the deployment (required).\n version: An optional version for the deployment; defaults to the flow's version\n description: An optional description of the deployment; defaults to the flow's\n description\n tags: An optional list of tags to associate with this deployment; note that tags\n are used only for organizational purposes. For delegating work to agents,\n see `work_queue_name`.\n schedule: A schedule to run this deployment on, once registered (deprecated)\n is_schedule_active: Whether or not the schedule is active (deprecated)\n schedules: A list of schedules to run this deployment on\n work_queue_name: The work queue that will handle this deployment's runs\n work_pool_name: The work pool for the deployment\n flow_name: The name of the flow this deployment encapsulates\n parameters: A dictionary of parameter values to pass to runs created from this\n deployment\n infrastructure: An optional infrastructure block used to configure\n infrastructure for runs; if not provided, will default to running this\n deployment in Agent subprocesses\n infra_overrides: A dictionary of dot delimited infrastructure overrides that\n will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n `namespace='prefect'`\n storage: An optional remote storage block used to store and retrieve this\n workflow; if not provided, will default to referencing this flow by its\n local path\n path: The path to the working directory for the workflow, relative to remote\n storage or, if stored on a local filesystem, an absolute path\n entrypoint: The path to the entrypoint for the workflow, always relative to the\n `path`\n parameter_openapi_schema: The parameter schema of the flow, including defaults.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n\n Examples:\n\n Create a new deployment using configuration defaults for an imported flow:\n\n >>> from my_project.flows import my_flow\n >>> from prefect.deployments import Deployment\n >>>\n >>> deployment = Deployment.build_from_flow(\n ... flow=my_flow,\n ... name=\"example\",\n ... version=\"1\",\n ... tags=[\"demo\"],\n >>> )\n >>> deployment.apply()\n\n Create a new deployment with custom storage and an infrastructure override:\n\n >>> from my_project.flows import my_flow\n >>> from prefect.deployments import Deployment\n >>> from prefect.filesystems import S3\n\n >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n >>> deployment = Deployment.build_from_flow(\n ... flow=my_flow,\n ... name=\"s3-example\",\n ... version=\"2\",\n ... tags=[\"aws\"],\n ... storage=storage,\n ... 
infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n >>> )\n >>> deployment.apply()\n\n \"\"\"\n\n class Config:\n json_encoders = {SecretDict: lambda v: v.dict()}\n validate_assignment = True\n extra = \"forbid\"\n\n @property\n def _editable_fields(self) -> List[str]:\n editable_fields = [\n \"name\",\n \"description\",\n \"version\",\n \"work_queue_name\",\n \"work_pool_name\",\n \"tags\",\n \"parameters\",\n \"schedule\",\n \"schedules\",\n \"is_schedule_active\",\n \"infra_overrides\",\n ]\n\n # if infrastructure is baked as a pre-saved block, then\n # editing its fields will not update anything\n if self.infrastructure._block_document_id:\n return editable_fields\n else:\n return editable_fields + [\"infrastructure\"]\n\n @property\n def location(self) -> str:\n \"\"\"\n The 'location' that this deployment points to is given by `path` alone\n in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n The underlying flow entrypoint is interpreted relative to this location.\n \"\"\"\n location = \"\"\n if self.storage:\n location = (\n self.storage.basepath + \"/\"\n if not self.storage.basepath.endswith(\"/\")\n else \"\"\n )\n if self.path:\n location += self.path\n return location\n\n @sync_compatible\n async def to_yaml(self, path: Path) -> None:\n yaml_dict = self._yaml_dict()\n schema = self.schema()\n\n with open(path, \"w\") as f:\n # write header\n f.write(\n \"###\\n### A complete description of a Prefect Deployment for flow\"\n f\" {self.flow_name!r}\\n###\\n\"\n )\n\n # write editable fields\n for field in self._editable_fields:\n # write any comments\n if schema[\"properties\"][field].get(\"yaml_comment\"):\n f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n # write the field\n yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n # write non-editable fields\n f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n yaml.dump(\n {k: v for k, v in yaml_dict.items() if k not in self._editable_fields},\n f,\n sort_keys=False,\n )\n\n def _yaml_dict(self) -> dict:\n \"\"\"\n Returns a YAML-compatible representation of this deployment as a dictionary.\n \"\"\"\n # avoids issues with UUIDs showing up in YAML\n all_fields = json.loads(\n self.json(\n exclude={\n \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n }\n )\n )\n if all_fields[\"storage\"]:\n all_fields[\"storage\"][\n \"_block_type_slug\"\n ] = self.storage.get_block_type_slug()\n if all_fields[\"infrastructure\"]:\n all_fields[\"infrastructure\"][\n \"_block_type_slug\"\n ] = self.infrastructure.get_block_type_slug()\n return all_fields\n\n @classmethod\n def _validate_schedule(cls, value):\n \"\"\"We do not support COUNT-based (# of occurrences) RRule schedules for deployments.\"\"\"\n if value:\n rrule_value = getattr(value, \"rrule\", None)\n if rrule_value and \"COUNT\" in rrule_value.upper():\n raise ValueError(\n \"RRule schedules with `COUNT` are not supported. 
Please use `UNTIL`\"\n \" or the `/deployments/{id}/schedule` endpoint to schedule a fixed\"\n \" number of flow runs.\"\n )\n\n # top level metadata\n name: str = Field(..., description=\"The name of the deployment.\")\n description: Optional[str] = Field(\n default=None, description=\"An optional description of the deployment.\"\n )\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"One of more tags to apply to this deployment.\",\n )\n schedule: Optional[SCHEDULE_TYPES] = Field(default=None)\n schedules: List[MinimalDeploymentSchedule] = Field(\n default_factory=list,\n description=\"The schedules to run this deployment on.\",\n )\n is_schedule_active: Optional[bool] = Field(\n default=None, description=\"Whether or not the schedule is active.\"\n )\n flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n work_queue_name: Optional[str] = Field(\n \"default\",\n description=\"The work queue for the deployment.\",\n yaml_comment=\"The work queue that will handle this deployment's runs\",\n )\n work_pool_name: Optional[str] = Field(\n default=None, description=\"The work pool for the deployment\"\n )\n # flow data\n parameters: Dict[str, Any] = Field(default_factory=dict)\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n infrastructure: Infrastructure = Field(default_factory=Process)\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n storage: Optional[Block] = Field(\n None,\n help=\"The remote storage to use for this workflow.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n parameter_openapi_schema: ParameterSchema = Field(\n default_factory=ParameterSchema,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n triggers: List[DeploymentTrigger] = Field(\n default_factory=list,\n description=\"The triggers that should cause this deployment to run.\",\n )\n # defaults to None to allow for backwards compatibility\n enforce_parameter_schema: Optional[bool] = Field(\n default=None,\n description=(\n \"Whether or not the Prefect API should enforce the parameter schema for\"\n \" this deployment.\"\n ),\n )\n\n @validator(\"infrastructure\", pre=True)\n def infrastructure_must_have_capabilities(cls, value):\n if isinstance(value, dict):\n if \"_block_type_slug\" in value:\n # Replace private attribute with public for dispatch\n value[\"block_type_slug\"] = value.pop(\"_block_type_slug\")\n block = Block(**value)\n elif value is None:\n return value\n else:\n block = value\n\n if \"run-infrastructure\" not in block.get_block_capabilities():\n raise ValueError(\n \"Infrastructure block must have 'run-infrastructure' capabilities.\"\n )\n return block\n\n @validator(\"storage\", pre=True)\n def storage_must_have_capabilities(cls, value):\n if isinstance(value, dict):\n block_type = 
Block.get_block_class_from_key(value.pop(\"_block_type_slug\"))\n block = block_type(**value)\n elif value is None:\n return value\n else:\n block = value\n\n capabilities = block.get_block_capabilities()\n if \"get-directory\" not in capabilities:\n raise ValueError(\n \"Remote Storage block must have 'get-directory' capabilities.\"\n )\n return block\n\n @validator(\"parameter_openapi_schema\", pre=True)\n def handle_openapi_schema(cls, value):\n \"\"\"\n This method ensures setting a value of `None` is handled gracefully.\n \"\"\"\n if value is None:\n return ParameterSchema()\n return value\n\n @validator(\"triggers\")\n def validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n\n @root_validator(pre=True)\n def validate_deprecated_schedule_fields(cls, values):\n if values.get(\"schedule\") and not values.get(\"schedules\"):\n logger.warning(\n \"The field 'schedule' in 'Deployment' has been deprecated. It will not be \"\n \"available after Sep 2024. Define schedules in the `schedules` list instead.\"\n )\n elif values.get(\"is_schedule_active\") and not values.get(\"schedules\"):\n logger.warning(\n \"The field 'is_schedule_active' in 'Deployment' has been deprecated. It will \"\n \"not be available after Sep 2024. Use the `active` flag within a schedule in \"\n \"the `schedules` list instead and the `pause` flag in 'Deployment' to pause \"\n \"all schedules.\"\n )\n return values\n\n @root_validator(pre=True)\n def reconcile_schedules(cls, values):\n schedule = values.get(\"schedule\", NotSet)\n schedules = values.get(\"schedules\", NotSet)\n\n if schedules is not NotSet:\n values[\"schedules\"] = normalize_to_minimal_deployment_schedules(schedules)\n elif schedule is not NotSet:\n values[\"schedule\"] = None\n\n if schedule is None:\n values[\"schedules\"] = []\n else:\n values[\"schedules\"] = [\n create_minimal_deployment_schedule(\n schedule=schedule, active=values.get(\"is_schedule_active\")\n )\n ]\n\n for schedule in values.get(\"schedules\", []):\n cls._validate_schedule(schedule.schedule)\n\n return values\n\n @classmethod\n @sync_compatible\n async def load_from_yaml(cls, path: str):\n data = yaml.safe_load(await anyio.Path(path).read_bytes())\n # load blocks from server to ensure secret values are properly hydrated\n if data.get(\"storage\"):\n block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n # if no doc name, this block is not stored on the server\n if block_doc_name:\n block_slug = data[\"storage\"][\"_block_type_slug\"]\n block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n data[\"storage\"] = block\n\n if data.get(\"infrastructure\"):\n block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n # if no doc name, this block is not stored on the server\n if block_doc_name:\n block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n data[\"infrastructure\"] = block\n\n return cls(**data)\n\n @sync_compatible\n async def load(self) -> bool:\n \"\"\"\n Queries the API for a deployment with this name for this flow, and if found,\n prepopulates any settings that were not set at initialization.\n\n Returns a boolean specifying whether a load was successful or not.\n\n Raises:\n - ValueError: if both name and 
flow name are not set\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be provided.\")\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(\n f\"{self.flow_name}/{self.name}\"\n )\n if deployment.storage_document_id:\n Block._from_block_document(\n await client.read_block_document(deployment.storage_document_id)\n )\n\n excluded_fields = self.__fields_set__.union(\n {\n \"infrastructure\",\n \"storage\",\n \"timestamp\",\n \"triggers\",\n \"enforce_parameter_schema\",\n \"schedules\",\n \"schedule\",\n \"is_schedule_active\",\n }\n )\n for field in set(self.__fields__.keys()) - excluded_fields:\n new_value = getattr(deployment, field)\n setattr(self, field, new_value)\n\n if \"schedules\" not in self.__fields_set__:\n self.schedules = [\n MinimalDeploymentSchedule(\n **schedule.dict(include={\"schedule\", \"active\"})\n )\n for schedule in deployment.schedules\n ]\n\n # The API server generates the \"schedule\" field from the\n # current list of schedules, so if the user has locally set\n # \"schedules\" to anything, we should avoid sending \"schedule\"\n # and let the API server generate a new value if necessary.\n if \"schedules\" in self.__fields_set__:\n self.schedule = None\n self.is_schedule_active = None\n else:\n # The user isn't using \"schedules,\" so we should\n # populate \"schedule\" and \"is_schedule_active\" from the\n # API's version of the deployment, unless the user gave\n # us these fields in __init__().\n if \"schedule\" not in self.__fields_set__:\n self.schedule = deployment.schedule\n if \"is_schedule_active\" not in self.__fields_set__:\n self.is_schedule_active = deployment.is_schedule_active\n\n if \"infrastructure\" not in self.__fields_set__:\n if deployment.infrastructure_document_id:\n self.infrastructure = Block._from_block_document(\n await client.read_block_document(\n deployment.infrastructure_document_id\n )\n )\n if \"storage\" not in self.__fields_set__:\n if deployment.storage_document_id:\n self.storage = Block._from_block_document(\n await client.read_block_document(\n deployment.storage_document_id\n )\n )\n except ObjectNotFound:\n return False\n return True\n\n @sync_compatible\n async def update(self, ignore_none: bool = False, **kwargs):\n \"\"\"\n Performs an in-place update with the provided settings.\n\n Args:\n ignore_none: if True, all `None` values are ignored when performing the\n update\n \"\"\"\n unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n if unknown_keys:\n raise ValueError(\n f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n )\n for key, value in kwargs.items():\n if ignore_none and value is None:\n continue\n setattr(self, key, value)\n\n @sync_compatible\n async def upload_to_storage(\n self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n ) -> Optional[int]:\n \"\"\"\n Uploads the workflow this deployment represents using a provided storage block;\n if no block is provided, defaults to configuring self for local storage.\n\n Args:\n storage_block: a string reference a remote storage block slug `$type/$name`;\n if provided, used to upload the workflow's project\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n \"\"\"\n file_count = None\n if storage_block:\n storage = await Block.load(storage_block)\n\n if 
\"put-directory\" not in storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {storage!r} missing 'put-directory' capability.\"\n )\n\n self.storage = storage\n\n # upload current directory to storage location\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n elif self.storage:\n if \"put-directory\" not in self.storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {self.storage!r} missing 'put-directory'\"\n \" capability.\"\n )\n\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n\n # persists storage now in case it contains secret values\n if self.storage and not self.storage._block_document_id:\n await self.storage._save(is_anonymous=True)\n\n return file_count\n\n @sync_compatible\n async def apply(\n self, upload: bool = False, work_queue_concurrency: int = None\n ) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n upload: if True, deployment files are automatically uploaded to remote\n storage\n work_queue_concurrency: If provided, sets the concurrency limit on the\n deployment's work queue\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be set.\")\n async with get_client() as client:\n # prep IDs\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n infrastructure_document_id = self.infrastructure._block_document_id\n if not infrastructure_document_id:\n # if not building off a block, will create an anonymous block\n self.infrastructure = self.infrastructure.copy()\n infrastructure_document_id = await self.infrastructure._save(\n is_anonymous=True,\n )\n\n if upload:\n await self.upload_to_storage()\n\n if self.work_queue_name and work_queue_concurrency is not None:\n try:\n res = await client.create_work_queue(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n except ObjectAlreadyExists:\n res = await client.read_work_queue_by_name(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n await client.update_work_queue(\n res.id, concurrency_limit=work_queue_concurrency\n )\n\n if self.schedule:\n logger.info(\n \"Interpreting the deprecated `schedule` field as an entry in \"\n \"`schedules`.\"\n )\n schedules = [\n DeploymentScheduleCreate(\n schedule=self.schedule, active=self.is_schedule_active\n )\n ]\n elif self.schedules:\n schedules = [\n DeploymentScheduleCreate(**schedule.dict())\n for schedule in self.schedules\n ]\n else:\n schedules = None\n\n # we assume storage was already saved\n storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n deployment_id = await client.create_deployment(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=self.work_pool_name,\n version=self.version,\n schedules=schedules,\n is_schedule_active=self.is_schedule_active,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n manifest_path=self.manifest_path, # allows for backwards YAML compat\n path=self.path,\n entrypoint=self.entrypoint,\n infra_overrides=self.infra_overrides,\n storage_document_id=storage_document_id,\n infrastructure_document_id=infrastructure_document_id,\n parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the 
deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n\n @classmethod\n @sync_compatible\n async def build_from_flow(\n cls,\n flow: Flow,\n name: str,\n output: str = None,\n skip_upload: bool = False,\n ignore_file: str = \".prefectignore\",\n apply: bool = False,\n load_existing: bool = True,\n schedules: Optional[FlexibleScheduleList] = None,\n **kwargs,\n ) -> \"Deployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n output (optional): if provided, the full deployment specification will be\n written as a YAML file in the location specified by `output`\n skip_upload: if True, deployment files are not automatically uploaded to\n remote storage\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n apply: if True, the deployment is automatically registered with the API\n load_existing: if True, load any settings that may already be configured for\n the named deployment server-side (e.g., schedules, default parameter\n values, etc.)\n schedules: An optional list of schedules. Each item in the list can be:\n - An instance of `MinimalDeploymentSchedule`.\n - A dictionary with a `schedule` key, and optionally, an\n `active` key. 
The `schedule` key should correspond to a\n schedule type, and `active` is a boolean indicating whether\n the schedule is active or not.\n - An instance of one of the predefined schedule types:\n `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n **kwargs: other keyword arguments to pass to the constructor for the\n `Deployment` class\n \"\"\"\n if not name:\n raise ValueError(\"A deployment name must be provided.\")\n\n # note that `deployment.load` only updates settings that were *not*\n # provided at initialization\n\n deployment_args = {\n \"name\": name,\n \"flow_name\": flow.name,\n **kwargs,\n }\n\n if schedules is not None:\n deployment_args[\"schedules\"] = schedules\n\n deployment = cls(**deployment_args)\n deployment.flow_name = flow.name\n if not deployment.entrypoint:\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if not flow_file:\n if not mod_name:\n # todo, check if the file location was manually set already\n raise ValueError(\"Could not determine flow's file location.\")\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n if not flow_file:\n raise ValueError(\"Could not determine flow's file location.\")\n\n # set entrypoint\n entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if load_existing:\n await deployment.load()\n\n # set a few attributes for this flow object\n deployment.parameter_openapi_schema = parameter_schema(flow)\n\n # ensure the ignore file exists\n if not Path(ignore_file).exists():\n Path(ignore_file).touch()\n\n if not deployment.version:\n deployment.version = flow.version\n if not deployment.description:\n deployment.description = flow.description\n\n # proxy for whether infra is docker-based\n is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n if not deployment.storage and not is_docker_based and not deployment.path:\n deployment.path = str(Path(\".\").absolute())\n elif not deployment.storage and is_docker_based:\n # only update if a path is not already set\n if not deployment.path:\n deployment.path = \"/opt/prefect/flows\"\n\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n await deployment.upload_to_storage(ignore_file=ignore_file)\n\n if output:\n await deployment.to_yaml(output)\n\n if apply:\n await deployment.apply()\n\n return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str
property
","text":"The 'location' that this deployment points to is given by path
alone in the case of no remote storage, and otherwise by storage.basepath / path
.
The underlying flow entrypoint is interpreted relative to this location.
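As a minimal sketch (the S3 block, bucket, and paths are hypothetical), the resolved location combines the storage basepath with the deployment path:
from prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\n# hypothetical storage block and paths, for illustration only\nstorage = S3(bucket_path=\"my-bucket\")\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\", storage=storage, path=\"flows\")\n\nprint(deployment.location)  # e.g. \"s3://my-bucket/flows\"\n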
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply
async
","text":"Registers this deployment with the API and returns the deployment's ID.
Parameters:
- upload (bool, default: False): if True, deployment files are automatically uploaded to remote storage
- work_queue_concurrency (int, default: None): If provided, sets the concurrency limit on the deployment's work queue
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def apply(\n self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n upload: if True, deployment files are automatically uploaded to remote\n storage\n work_queue_concurrency: If provided, sets the concurrency limit on the\n deployment's work queue\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be set.\")\n async with get_client() as client:\n # prep IDs\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n infrastructure_document_id = self.infrastructure._block_document_id\n if not infrastructure_document_id:\n # if not building off a block, will create an anonymous block\n self.infrastructure = self.infrastructure.copy()\n infrastructure_document_id = await self.infrastructure._save(\n is_anonymous=True,\n )\n\n if upload:\n await self.upload_to_storage()\n\n if self.work_queue_name and work_queue_concurrency is not None:\n try:\n res = await client.create_work_queue(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n except ObjectAlreadyExists:\n res = await client.read_work_queue_by_name(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n await client.update_work_queue(\n res.id, concurrency_limit=work_queue_concurrency\n )\n\n if self.schedule:\n logger.info(\n \"Interpreting the deprecated `schedule` field as an entry in \"\n \"`schedules`.\"\n )\n schedules = [\n DeploymentScheduleCreate(\n schedule=self.schedule, active=self.is_schedule_active\n )\n ]\n elif self.schedules:\n schedules = [\n DeploymentScheduleCreate(**schedule.dict())\n for schedule in self.schedules\n ]\n else:\n schedules = None\n\n # we assume storage was already saved\n storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n deployment_id = await client.create_deployment(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=self.work_pool_name,\n version=self.version,\n schedules=schedules,\n is_schedule_active=self.is_schedule_active,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n manifest_path=self.manifest_path, # allows for backwards YAML compat\n path=self.path,\n entrypoint=self.entrypoint,\n infra_overrides=self.infra_overrides,\n storage_document_id=storage_document_id,\n infrastructure_document_id=infrastructure_document_id,\n parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n
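A hedged usage sketch (names are illustrative; assumes a reachable Prefect API): build a deployment, then register it while capping its work queue at two concurrent runs:
from prefect import flow\nfrom prefect.deployments import Deployment\n\n\n@flow\ndef my_flow():\n    pass\n\n\ndeployment = Deployment.build_from_flow(flow=my_flow, name=\"example\", work_queue_name=\"default\")\n# register with the API and cap the work queue at 2 concurrent runs\ndeployment_id = deployment.apply(work_queue_concurrency=2)\n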
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow
async
classmethod
","text":"Configure a deployment for a given flow.
Parameters:
- flow (Flow, required): A flow function to deploy
- name (str, required): A name for the deployment
- output (optional, default: None): if provided, the full deployment specification will be written as a YAML file in the location specified by output
- skip_upload (bool, default: False): if True, deployment files are not automatically uploaded to remote storage
- ignore_file (str, default: '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory
- apply (bool, default: False): if True, the deployment is automatically registered with the API
- load_existing (bool, default: True): if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)
- schedules (Optional[FlexibleScheduleList], default: None): An optional list of schedules. Each item in the list can be: an instance of MinimalDeploymentSchedule; a dictionary with a schedule key and, optionally, an active key, where schedule corresponds to a schedule type and active is a boolean indicating whether the schedule is active or not; or an instance of one of the predefined schedule types: IntervalSchedule, CronSchedule, or RRuleSchedule
- **kwargs (default: {}): other keyword arguments to pass to the constructor for the Deployment class
Source code in prefect/deployments/deployments.py
@classmethod\n@sync_compatible\nasync def build_from_flow(\n cls,\n flow: Flow,\n name: str,\n output: str = None,\n skip_upload: bool = False,\n ignore_file: str = \".prefectignore\",\n apply: bool = False,\n load_existing: bool = True,\n schedules: Optional[FlexibleScheduleList] = None,\n **kwargs,\n) -> \"Deployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n output (optional): if provided, the full deployment specification will be\n written as a YAML file in the location specified by `output`\n skip_upload: if True, deployment files are not automatically uploaded to\n remote storage\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n apply: if True, the deployment is automatically registered with the API\n load_existing: if True, load any settings that may already be configured for\n the named deployment server-side (e.g., schedules, default parameter\n values, etc.)\n schedules: An optional list of schedules. Each item in the list can be:\n - An instance of `MinimalDeploymentSchedule`.\n - A dictionary with a `schedule` key, and optionally, an\n `active` key. The `schedule` key should correspond to a\n schedule type, and `active` is a boolean indicating whether\n the schedule is active or not.\n - An instance of one of the predefined schedule types:\n `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n **kwargs: other keyword arguments to pass to the constructor for the\n `Deployment` class\n \"\"\"\n if not name:\n raise ValueError(\"A deployment name must be provided.\")\n\n # note that `deployment.load` only updates settings that were *not*\n # provided at initialization\n\n deployment_args = {\n \"name\": name,\n \"flow_name\": flow.name,\n **kwargs,\n }\n\n if schedules is not None:\n deployment_args[\"schedules\"] = schedules\n\n deployment = cls(**deployment_args)\n deployment.flow_name = flow.name\n if not deployment.entrypoint:\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if not flow_file:\n if not mod_name:\n # todo, check if the file location was manually set already\n raise ValueError(\"Could not determine flow's file location.\")\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n if not flow_file:\n raise ValueError(\"Could not determine flow's file location.\")\n\n # set entrypoint\n entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if load_existing:\n await deployment.load()\n\n # set a few attributes for this flow object\n deployment.parameter_openapi_schema = parameter_schema(flow)\n\n # ensure the ignore file exists\n if not Path(ignore_file).exists():\n Path(ignore_file).touch()\n\n if not deployment.version:\n deployment.version = flow.version\n if not deployment.description:\n deployment.description = flow.description\n\n # proxy for whether infra is docker-based\n is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n if not deployment.storage and not is_docker_based and not deployment.path:\n deployment.path = str(Path(\".\").absolute())\n elif not deployment.storage and is_docker_based:\n # only update if a path is not already set\n 
if not deployment.path:\n deployment.path = \"/opt/prefect/flows\"\n\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n await deployment.upload_to_storage(ignore_file=ignore_file)\n\n if output:\n await deployment.to_yaml(output)\n\n if apply:\n await deployment.apply()\n\n return deployment\n
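A hedged sketch combining these options (the names, cron expression, and output path are illustrative):
from prefect import flow\nfrom prefect.client.schemas.schedules import CronSchedule\nfrom prefect.deployments import Deployment\n\n\n@flow\ndef my_flow():\n    pass\n\n\n# build, write a YAML spec, and register in one call\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"nightly\",\n    schedules=[{\"schedule\": CronSchedule(cron=\"0 2 * * *\"), \"active\": True}],\n    output=\"deployment.yaml\",\n    apply=True,\n)\n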
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.handle_openapi_schema","title":"handle_openapi_schema
","text":"This method ensures setting a value of None
is handled gracefully.
prefect/deployments/deployments.py
@validator(\"parameter_openapi_schema\", pre=True)\ndef handle_openapi_schema(cls, value):\n \"\"\"\n This method ensures setting a value of `None` is handled gracefully.\n \"\"\"\n if value is None:\n return ParameterSchema()\n return value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load
async
","text":"Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.
Returns a boolean specifying whether a load was successful or not.
Raises:
- ValueError: if either the deployment name or the flow name is not set
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def load(self) -> bool:\n \"\"\"\n Queries the API for a deployment with this name for this flow, and if found,\n prepopulates any settings that were not set at initialization.\n\n Returns a boolean specifying whether a load was successful or not.\n\n Raises:\n - ValueError: if both name and flow name are not set\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be provided.\")\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(\n f\"{self.flow_name}/{self.name}\"\n )\n if deployment.storage_document_id:\n Block._from_block_document(\n await client.read_block_document(deployment.storage_document_id)\n )\n\n excluded_fields = self.__fields_set__.union(\n {\n \"infrastructure\",\n \"storage\",\n \"timestamp\",\n \"triggers\",\n \"enforce_parameter_schema\",\n \"schedules\",\n \"schedule\",\n \"is_schedule_active\",\n }\n )\n for field in set(self.__fields__.keys()) - excluded_fields:\n new_value = getattr(deployment, field)\n setattr(self, field, new_value)\n\n if \"schedules\" not in self.__fields_set__:\n self.schedules = [\n MinimalDeploymentSchedule(\n **schedule.dict(include={\"schedule\", \"active\"})\n )\n for schedule in deployment.schedules\n ]\n\n # The API server generates the \"schedule\" field from the\n # current list of schedules, so if the user has locally set\n # \"schedules\" to anything, we should avoid sending \"schedule\"\n # and let the API server generate a new value if necessary.\n if \"schedules\" in self.__fields_set__:\n self.schedule = None\n self.is_schedule_active = None\n else:\n # The user isn't using \"schedules,\" so we should\n # populate \"schedule\" and \"is_schedule_active\" from the\n # API's version of the deployment, unless the user gave\n # us these fields in __init__().\n if \"schedule\" not in self.__fields_set__:\n self.schedule = deployment.schedule\n if \"is_schedule_active\" not in self.__fields_set__:\n self.is_schedule_active = deployment.is_schedule_active\n\n if \"infrastructure\" not in self.__fields_set__:\n if deployment.infrastructure_document_id:\n self.infrastructure = Block._from_block_document(\n await client.read_block_document(\n deployment.infrastructure_document_id\n )\n )\n if \"storage\" not in self.__fields_set__:\n if deployment.storage_document_id:\n self.storage = Block._from_block_document(\n await client.read_block_document(\n deployment.storage_document_id\n )\n )\n except ObjectNotFound:\n return False\n return True\n
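A minimal usage sketch (the deployment and flow names are hypothetical):
from prefect.deployments import Deployment\n\n# both name and flow_name must be set before calling load()\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\nfound = deployment.load()  # sync-compatible; returns False if nothing matches\nif found:\n    print(deployment.parameters)\n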
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update
async
","text":"Performs an in-place update with the provided settings.
Parameters:
- ignore_none (bool, default: False): if True, all None values are ignored when performing the update
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n \"\"\"\n Performs an in-place update with the provided settings.\n\n Args:\n ignore_none: if True, all `None` values are ignored when performing the\n update\n \"\"\"\n unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n if unknown_keys:\n raise ValueError(\n f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n )\n for key, value in kwargs.items():\n if ignore_none and value is None:\n continue\n setattr(self, key, value)\n
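For example, given an existing Deployment object (the values are illustrative):
# \"deployment\" is a previously created Deployment object\ndeployment.update(description=\"Nightly ETL run\", version=None, ignore_none=True)\n# version is left unchanged because None values are ignored\n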
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage
async
","text":"Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.
Parameters:
- storage_block (str, default: None): a string reference to a remote storage block slug $type/$name; if provided, used to upload the workflow's project
- ignore_file (str, default: '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def upload_to_storage(\n self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n \"\"\"\n Uploads the workflow this deployment represents using a provided storage block;\n if no block is provided, defaults to configuring self for local storage.\n\n Args:\n storage_block: a string reference a remote storage block slug `$type/$name`;\n if provided, used to upload the workflow's project\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n \"\"\"\n file_count = None\n if storage_block:\n storage = await Block.load(storage_block)\n\n if \"put-directory\" not in storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {storage!r} missing 'put-directory' capability.\"\n )\n\n self.storage = storage\n\n # upload current directory to storage location\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n elif self.storage:\n if \"put-directory\" not in self.storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {self.storage!r} missing 'put-directory'\"\n \" capability.\"\n )\n\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n\n # persists storage now in case it contains secret values\n if self.storage and not self.storage._block_document_id:\n await self.storage._save(is_anonymous=True)\n\n return file_count\n
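For example, assuming an S3 storage block was previously saved under the illustrative name my-bucket:
# assumes an S3 block was saved earlier under the name \"my-bucket\"\nfile_count = deployment.upload_to_storage(storage_block=\"s3/my-bucket\")\nprint(f\"Uploaded {file_count} files\")\n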
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.validate_automation_names","title":"validate_automation_names
","text":"Ensure that each trigger has a name for its automation if none is provided.
Source code in prefect/deployments/deployments.py
@validator(\"triggers\")\ndef validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml
","text":"Load deployments from a yaml file.
Source code in prefect/deployments/deployments.py
@deprecated_callable(start_date=\"Mar 2024\")\ndef load_deployments_from_yaml(\n path: str,\n) -> PrefectObjectRegistry:\n \"\"\"\n Load deployments from a yaml file.\n \"\"\"\n with open(path, \"r\") as f:\n contents = f.read()\n\n # Parse into a yaml tree to retrieve separate documents\n nodes = yaml.compose_all(contents)\n\n with PrefectObjectRegistry(capture_failures=True) as registry:\n for node in nodes:\n with tmpchdir(path):\n deployment_dict = yaml.safe_load(yaml.serialize(node))\n # The return value is not necessary, just instantiating the Deployment\n # is enough to get it recorded on the registry\n parse_obj_as(Deployment, deployment_dict)\n\n return registry\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run
async
","text":"Load a flow from the location/script provided in a deployment's storage document.
If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.
Source code in prefect/deployments/deployments.py
@inject_client\nasync def load_flow_from_flow_run(\n flow_run: FlowRun,\n client: PrefectClient,\n ignore_storage: bool = False,\n storage_base_path: Optional[str] = None,\n) -> Flow:\n \"\"\"\n Load a flow from the location/script provided in a deployment's storage document.\n\n If `ignore_storage=True` is provided, no pull from remote storage occurs. This flag\n is largely for testing, and assumes the flow is already available locally.\n \"\"\"\n deployment = await client.read_deployment(flow_run.deployment_id)\n\n if deployment.entrypoint is None:\n raise ValueError(\n f\"Deployment {deployment.id} does not have an entrypoint and can not be run.\"\n )\n\n run_logger = flow_run_logger(flow_run)\n\n runner_storage_base_path = storage_base_path or os.environ.get(\n \"PREFECT__STORAGE_BASE_PATH\"\n )\n\n # If there's no colon, assume it's a module path\n if \":\" not in deployment.entrypoint:\n run_logger.debug(\n f\"Importing flow code from module path {deployment.entrypoint}\"\n )\n flow = await run_sync_in_worker_thread(\n load_flow_from_entrypoint, deployment.entrypoint\n )\n return flow\n\n if not ignore_storage and not deployment.pull_steps:\n sys.path.insert(0, \".\")\n if deployment.storage_document_id:\n storage_document = await client.read_block_document(\n deployment.storage_document_id\n )\n storage_block = Block._from_block_document(storage_document)\n else:\n basepath = deployment.path or Path(deployment.manifest_path).parent\n if runner_storage_base_path:\n basepath = str(basepath).replace(\n \"$STORAGE_BASE_PATH\", runner_storage_base_path\n )\n storage_block = LocalFileSystem(basepath=basepath)\n\n from_path = (\n str(deployment.path).replace(\"$STORAGE_BASE_PATH\", runner_storage_base_path)\n if runner_storage_base_path and deployment.path\n else deployment.path\n )\n run_logger.info(f\"Downloading flow code from storage at {from_path!r}\")\n await storage_block.get_directory(from_path=from_path, local_path=\".\")\n\n if deployment.pull_steps:\n run_logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n output = await run_steps(deployment.pull_steps)\n if output.get(\"directory\"):\n run_logger.debug(f\"Changing working directory to {output['directory']!r}\")\n os.chdir(output[\"directory\"])\n\n import_path = relative_path_to_current_platform(deployment.entrypoint)\n # for backwards compat\n if deployment.manifest_path:\n with open(deployment.manifest_path, \"r\") as f:\n import_path = json.load(f)[\"import_path\"]\n import_path = (\n Path(deployment.manifest_path).parent / import_path\n ).absolute()\n run_logger.debug(f\"Importing flow code from '{import_path}'\")\n\n flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n\n return flow\n
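A hedged sketch of calling this from custom async code (flow_run_id is a hypothetical identifier of an existing flow run):
from prefect.client.orchestration import get_client\nfrom prefect.deployments import load_flow_from_flow_run\n\n\nasync def fetch_flow(flow_run_id):\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        # pulls the deployment's code, then imports and returns the flow\n        return await load_flow_from_flow_run(flow_run, client=client)\n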
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment
async
","text":"Create a flow run for a deployment and return it after completion or a timeout.
By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set timeout=0.
Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing.
If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing as_subflow=False.
Parameters:
- name (Union[str, UUID], required): The deployment id or deployment name in the form: <slugified-flow-name>/<slugified-deployment-name>
- parameters (Optional[dict], default: None): Parameter overrides for this flow run. Merged with the deployment defaults.
- scheduled_time (Optional[datetime], default: None): The time to schedule the flow run for; defaults to scheduling the flow run to start now.
- flow_run_name (Optional[str], default: None): A name for the created flow run
- timeout (Optional[float], default: None): The amount of time to wait (in seconds) for the flow run to complete before returning. Setting timeout to 0 will return the flow run metadata immediately. Setting timeout to None will allow this function to poll indefinitely. Defaults to None.
- poll_interval (Optional[float], default: 5): The number of seconds between polls
- tags (Optional[Iterable[str]], default: None): A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes.
- idempotency_key (Optional[str], default: None): A unique value to recognize retries of the same run, and prevent creating multiple flow runs.
- work_queue_name (Optional[str], default: None): The name of a work queue to use for this run. Defaults to the default work queue for the deployment.
- as_subflow (Optional[bool], default: True): Whether to link the flow run as a subflow of the current flow or task run.
Source code in prefect/deployments/deployments.py
@sync_compatible\n@inject_client\nasync def run_deployment(\n name: Union[str, UUID],\n client: Optional[PrefectClient] = None,\n parameters: Optional[dict] = None,\n scheduled_time: Optional[datetime] = None,\n flow_run_name: Optional[str] = None,\n timeout: Optional[float] = None,\n poll_interval: Optional[float] = 5,\n tags: Optional[Iterable[str]] = None,\n idempotency_key: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n as_subflow: Optional[bool] = True,\n job_variables: Optional[dict] = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment and return it after completion or a timeout.\n\n By default, this function blocks until the flow run finishes executing.\n Specify a timeout (in seconds) to wait for the flow run to execute before\n returning flow run metadata. To return immediately, without waiting for the\n flow run to execute, set `timeout=0`.\n\n Note that if you specify a timeout, this function will return the flow run\n metadata whether or not the flow run finished executing.\n\n If called within a flow or task, the flow run this function creates will\n be linked to the current flow run as a subflow. Disable this behavior by\n passing `as_subflow=False`.\n\n Args:\n name: The deployment id or deployment name in the form:\n `<slugified-flow-name>/<slugified-deployment-name>`\n parameters: Parameter overrides for this flow run. Merged with the deployment\n defaults.\n scheduled_time: The time to schedule the flow run for, defaults to scheduling\n the flow run to start now.\n flow_run_name: A name for the created flow run\n timeout: The amount of time to wait (in seconds) for the flow run to\n complete before returning. Setting `timeout` to 0 will return the flow\n run metadata immediately. Setting `timeout` to None will allow this\n function to poll indefinitely. Defaults to None.\n poll_interval: The number of seconds between polls\n tags: A list of tags to associate with this flow run; tags can be used in\n automations and for organizational purposes.\n idempotency_key: A unique value to recognize retries of the same run, and\n prevent creating multiple flow runs.\n work_queue_name: The name of a work queue to use for this run. Defaults to\n the default work queue for the deployment.\n as_subflow: Whether to link the flow run as a subflow of the current\n flow or task run.\n \"\"\"\n if timeout is not None and timeout < 0:\n raise ValueError(\"`timeout` cannot be negative\")\n\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n\n parameters = parameters or {}\n\n deployment_id = None\n\n if isinstance(name, UUID):\n deployment_id = name\n else:\n try:\n deployment_id = UUID(name)\n except ValueError:\n pass\n\n if deployment_id:\n deployment = await client.read_deployment(deployment_id=deployment_id)\n else:\n deployment = await client.read_deployment_by_name(name)\n\n flow_run_ctx = FlowRunContext.get()\n task_run_ctx = TaskRunContext.get()\n if as_subflow and (flow_run_ctx or task_run_ctx):\n # This was called from a flow. 
Link the flow run as a subflow.\n from prefect.engine import (\n Pending,\n _dynamic_key_for_task_run,\n collect_task_run_inputs,\n )\n\n task_inputs = {\n k: await collect_task_run_inputs(v) for k, v in parameters.items()\n }\n\n if deployment_id:\n flow = await client.read_flow(deployment.flow_id)\n deployment_name = f\"{flow.name}/{deployment.name}\"\n else:\n deployment_name = name\n\n # Generate a task in the parent flow run to represent the result of the subflow\n dummy_task = Task(\n name=deployment_name,\n fn=lambda: None,\n version=deployment.version,\n )\n # Override the default task key to include the deployment name\n dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n flow_run_id = (\n flow_run_ctx.flow_run.id\n if flow_run_ctx\n else task_run_ctx.task_run.flow_run_id\n )\n dynamic_key = (\n _dynamic_key_for_task_run(flow_run_ctx, dummy_task)\n if flow_run_ctx\n else task_run_ctx.task_run.dynamic_key\n )\n parent_task_run = await client.create_task_run(\n task=dummy_task,\n flow_run_id=flow_run_id,\n dynamic_key=dynamic_key,\n task_inputs=task_inputs,\n state=Pending(),\n )\n parent_task_run_id = parent_task_run.id\n else:\n parent_task_run_id = None\n\n flow_run = await client.create_flow_run_from_deployment(\n deployment.id,\n parameters=parameters,\n state=Scheduled(scheduled_time=scheduled_time),\n name=flow_run_name,\n tags=tags,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n flow_run_id = flow_run.id\n\n if timeout == 0:\n return flow_run\n\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n\n return flow_run\n
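For example, to trigger a run and return immediately (the deployment name and parameters are illustrative):
from prefect.deployments import run_deployment\n\n# fire-and-forget: create the run and return immediately\nflow_run = run_deployment(\n    name=\"my-flow/my-deployment\",\n    parameters={\"x\": 1},\n    timeout=0,\n)\nprint(flow_run.id)\n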
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/runner/","title":"runner","text":"","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner","title":"prefect.deployments.runner
","text":"Objects for creating and configuring deployments for flows using serve
functionality.
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n # to_deployment creates RunnerDeployment instances\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n serve(slow_deploy, fast_deploy)\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentApplyError","title":"DeploymentApplyError
","text":" Bases: RuntimeError
Raised when an error occurs while applying a deployment.
Source code in prefect/deployments/runner.py
class DeploymentApplyError(RuntimeError):\n \"\"\"\n Raised when an error occurs while applying a deployment.\n \"\"\"\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentImage","title":"DeploymentImage
","text":"Configuration used to build and push a Docker image for a deployment.
Attributes:
- name: The name of the Docker image to build, including the registry and repository.
- tag: The tag to apply to the built image.
- dockerfile: The path to the Dockerfile to use for building the image. If not provided, a default Dockerfile will be generated.
- **build_kwargs: Additional keyword arguments to pass to the Docker build request. See the docker-py documentation for more information.
prefect/deployments/runner.py
class DeploymentImage:\n \"\"\"\n Configuration used to build and push a Docker image for a deployment.\n\n Attributes:\n name: The name of the Docker image to build, including the registry and\n repository.\n tag: The tag to apply to the built image.\n dockerfile: The path to the Dockerfile to use for building the image. If\n not provided, a default Dockerfile will be generated.\n **build_kwargs: Additional keyword arguments to pass to the Docker build request.\n See the [`docker-py` documentation](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build)\n for more information.\n\n \"\"\"\n\n def __init__(self, name, tag=None, dockerfile=\"auto\", **build_kwargs):\n image_name, image_tag = parse_image_tag(name)\n if tag and image_tag:\n raise ValueError(\n f\"Only one tag can be provided - both {image_tag!r} and {tag!r} were\"\n \" provided as tags.\"\n )\n namespace, repository = split_repository_path(image_name)\n # if the provided image name does not include a namespace (registry URL or user/org name),\n # use the default namespace\n if not namespace:\n namespace = PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE.value()\n # join the namespace and repository to create the full image name\n # ignore namespace if it is None\n self.name = \"/\".join(filter(None, [namespace, repository]))\n self.tag = tag or image_tag or slugify(pendulum.now(\"utc\").isoformat())\n self.dockerfile = dockerfile\n self.build_kwargs = build_kwargs\n\n @property\n def reference(self):\n return f\"{self.name}:{self.tag}\"\n\n def build(self):\n full_image_name = self.reference\n build_kwargs = self.build_kwargs.copy()\n build_kwargs[\"context\"] = Path.cwd()\n build_kwargs[\"tag\"] = full_image_name\n build_kwargs[\"pull\"] = build_kwargs.get(\"pull\", True)\n\n if self.dockerfile == \"auto\":\n with generate_default_dockerfile():\n build_image(**build_kwargs)\n else:\n build_kwargs[\"dockerfile\"] = self.dockerfile\n build_image(**build_kwargs)\n\n def push(self):\n with docker_client() as client:\n events = client.api.push(\n repository=self.name, tag=self.tag, stream=True, decode=True\n )\n for event in events:\n if \"error\" in event:\n raise PushError(event[\"error\"])\n
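A hedged usage sketch (the registry, repository, and tag are illustrative; building requires a local Docker daemon):
from prefect.deployments.runner import DeploymentImage\n\nimage = DeploymentImage(name=\"registry.example.com/team/my-image\", tag=\"v1\")\nimage.build()  # uses a generated Dockerfile because dockerfile defaults to \"auto\"\nimage.push()\n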
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.EntrypointType","title":"EntrypointType
","text":" Bases: Enum
Enum representing an entrypoint type.
File path entrypoints are in the format: path/to/file.py:function_name. Module path entrypoints are in the format: path.to.module.function_name.
prefect/deployments/runner.py
class EntrypointType(enum.Enum):\n \"\"\"\n Enum representing a entrypoint type.\n\n File path entrypoints are in the format: `path/to/file.py:function_name`.\n Module path entrypoints are in the format: `path.to.module.function_name`.\n \"\"\"\n\n FILE_PATH = \"file_path\"\n MODULE_PATH = \"module_path\"\n
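As a sketch, the two entrypoint formats look like this (the paths and module names are illustrative):
# file-path entrypoint: \"<path to file>:<flow function name>\"\nfile_entrypoint = \"flows/etl.py:my_flow\"\n\n# module-path entrypoint: \"<importable module>.<flow function name>\"\nmodule_entrypoint = \"my_package.flows.my_flow\"\n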
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment","title":"RunnerDeployment
","text":" Bases: BaseModel
A Prefect RunnerDeployment definition, used for specifying and building deployments.
Attributes:
- name (str): A name for the deployment (required).
- version (Optional[str]): An optional version for the deployment; defaults to the flow's version
- description (Optional[str]): An optional description of the deployment; defaults to the flow's description
- tags (List[str]): An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.
- schedule (Optional[SCHEDULE_TYPES]): A schedule to run this deployment on, once registered
- is_schedule_active (Optional[bool]): Whether or not the schedule is active
- parameters (Dict[str, Any]): A dictionary of parameter values to pass to runs created from this deployment
- path (Optional[str]): The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
- entrypoint (Optional[str]): The path to the entrypoint for the workflow, always relative to the path
- parameter_openapi_schema (ParameterSchema): The parameter schema of the flow, including defaults.
- enforce_parameter_schema (bool): Whether or not the Prefect API should enforce the parameter schema for this deployment.
- work_pool_name (Optional[str]): The name of the work pool to use for this deployment.
- work_queue_name (Optional[str]): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
- job_variables (Dict[str, Any]): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
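A minimal usage sketch (the flow, pool name, and interval are illustrative):
from prefect import flow\nfrom prefect.deployments.runner import RunnerDeployment\n\n\n@flow\ndef my_flow():\n    pass\n\n\ndeployment = RunnerDeployment.from_flow(\n    flow=my_flow,\n    name=\"example\",\n    interval=3600,  # seconds between scheduled runs\n    work_pool_name=\"my-pool\",\n)\ndeployment_id = deployment.apply()\n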
Source code in prefect/deployments/runner.py
class RunnerDeployment(BaseModel):\n \"\"\"\n A Prefect RunnerDeployment definition, used for specifying and building deployments.\n\n Attributes:\n name: A name for the deployment (required).\n version: An optional version for the deployment; defaults to the flow's version\n description: An optional description of the deployment; defaults to the flow's\n description\n tags: An optional list of tags to associate with this deployment; note that tags\n are used only for organizational purposes. For delegating work to agents,\n see `work_queue_name`.\n schedule: A schedule to run this deployment on, once registered\n is_schedule_active: Whether or not the schedule is active\n parameters: A dictionary of parameter values to pass to runs created from this\n deployment\n path: The path to the working directory for the workflow, relative to remote\n storage or, if stored on a local filesystem, an absolute path\n entrypoint: The path to the entrypoint for the workflow, always relative to the\n `path`\n parameter_openapi_schema: The parameter schema of the flow, including defaults.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n\n class Config:\n arbitrary_types_allowed = True\n\n name: str = Field(..., description=\"The name of the deployment.\")\n flow_name: Optional[str] = Field(\n None, description=\"The name of the underlying flow; typically inferred.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"An optional description of the deployment.\"\n )\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"One of more tags to apply to this deployment.\",\n )\n schedules: Optional[List[MinimalDeploymentSchedule]] = Field(\n default=None,\n description=\"The schedules that should cause this deployment to run.\",\n )\n schedule: Optional[SCHEDULE_TYPES] = None\n paused: Optional[bool] = Field(\n default=None, description=\"Whether or not the deployment is paused.\"\n )\n is_schedule_active: Optional[bool] = Field(\n default=None, description=\"DEPRECATED: Whether or not the schedule is active.\"\n )\n parameters: Dict[str, Any] = Field(default_factory=dict)\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n triggers: List[DeploymentTrigger] = Field(\n default_factory=list,\n description=\"The triggers that should cause this deployment to run.\",\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the Prefect API should enforce the parameter schema for\"\n \" this deployment.\"\n ),\n )\n storage: Optional[RunnerStorage] = Field(\n default=None,\n description=(\n \"The storage object used to retrieve flow code for this deployment.\"\n ),\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=(\n \"The name of the work pool to use for this deployment. 
Only used when\"\n \" the deployment is registered with a built runner.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The name of the work queue to use for this deployment. Only used when\"\n \" the deployment is registered with a built runner.\"\n ),\n )\n job_variables: Dict[str, Any] = Field(\n default_factory=dict,\n description=(\n \"Job variables used to override the default values of a work pool\"\n \" base job template. Only used when the deployment is registered with\"\n \" a built runner.\"\n ),\n )\n _entrypoint_type: EntrypointType = PrivateAttr(\n default=EntrypointType.FILE_PATH,\n )\n _path: Optional[str] = PrivateAttr(\n default=None,\n )\n _parameter_openapi_schema: ParameterSchema = PrivateAttr(\n default_factory=ParameterSchema,\n )\n\n @property\n def entrypoint_type(self) -> EntrypointType:\n return self._entrypoint_type\n\n @validator(\"triggers\", allow_reuse=True)\n def validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n\n @root_validator(pre=True)\n def reconcile_paused(cls, values):\n paused = values.get(\"paused\")\n is_schedule_active = values.get(\"is_schedule_active\")\n\n if paused is not None:\n values[\"paused\"] = paused\n values[\"is_schedule_active\"] = not paused\n elif is_schedule_active is not None:\n values[\"paused\"] = not is_schedule_active\n values[\"is_schedule_active\"] = is_schedule_active\n else:\n values[\"paused\"] = False\n values[\"is_schedule_active\"] = True\n\n return values\n\n @root_validator(pre=True)\n def reconcile_schedules(cls, values):\n schedule = values.get(\"schedule\")\n schedules = values.get(\"schedules\")\n\n if schedules is None and schedule is not None:\n values[\"schedules\"] = [create_minimal_deployment_schedule(schedule)]\n elif schedules is not None and len(schedules) > 0:\n values[\"schedules\"] = normalize_to_minimal_deployment_schedules(schedules)\n\n return values\n\n @sync_compatible\n async def apply(\n self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n ) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n work_pool_name: The name of the work pool to use for this\n deployment.\n image: The registry, name, and tag of the Docker image to\n use for this deployment. 
Only used when the deployment is\n deployed to a work pool.\n\n Returns:\n The ID of the created deployment.\n \"\"\"\n\n work_pool_name = work_pool_name or self.work_pool_name\n\n if image and not work_pool_name:\n raise ValueError(\n \"An image can only be provided when registering a deployment with a\"\n \" work pool.\"\n )\n\n if self.work_queue_name and not work_pool_name:\n raise ValueError(\n \"A work queue can only be provided when registering a deployment with\"\n \" a work pool.\"\n )\n\n if self.job_variables and not work_pool_name:\n raise ValueError(\n \"Job variables can only be provided when registering a deployment\"\n \" with a work pool.\"\n )\n\n async with get_client() as client:\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n create_payload = dict(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=work_pool_name,\n version=self.version,\n paused=self.paused,\n schedules=self.schedules,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n path=self._path,\n entrypoint=self.entrypoint,\n storage_document_id=None,\n infrastructure_document_id=None,\n parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if work_pool_name:\n create_payload[\"infra_overrides\"] = self.job_variables\n if image:\n create_payload[\"infra_overrides\"][\"image\"] = image\n create_payload[\"path\"] = None if self.storage else self._path\n create_payload[\"pull_steps\"] = (\n [self.storage.to_pull_step()] if self.storage else []\n )\n\n try:\n deployment_id = await client.create_deployment(**create_payload)\n except Exception as exc:\n if isinstance(exc, PrefectHTTPStatusError):\n detail = exc.response.json().get(\"detail\")\n if detail:\n raise DeploymentApplyError(detail) from exc\n raise DeploymentApplyError(\n f\"Error while applying deployment: {str(exc)}\"\n ) from exc\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n\n @staticmethod\n def _construct_deployment_schedules(\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n anchor_date: Optional[Union[datetime, str]] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n timezone: Optional[str] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n ) -> Union[List[MinimalDeploymentSchedule], FlexibleScheduleList]:\n \"\"\"\n Construct a schedule or schedules from the provided arguments.\n\n This method serves as a unified interface for creating deployment\n schedules. If `schedules` is provided, it is directly returned. If\n `schedule` is provided, it is encapsulated in a list and returned. 
If\n `interval`, `cron`, or `rrule` are provided, they are used to construct\n schedule objects.\n\n Args:\n interval: An interval on which to schedule runs, either as a single\n value or as a list of values. Accepts numbers (interpreted as\n seconds) or `timedelta` objects. Each value defines a separate\n scheduling interval.\n anchor_date: The anchor date from which interval schedules should\n start. This applies to all intervals if a list is provided.\n cron: A cron expression or a list of cron expressions defining cron\n schedules. Each expression defines a separate cron schedule.\n rrule: An rrule string or a list of rrule strings for scheduling.\n Each string defines a separate recurrence rule.\n timezone: The timezone to apply to the cron or rrule schedules.\n This is a single value applied uniformly to all schedules.\n schedule: A singular schedule object, used for advanced scheduling\n options like specifying a timezone. This is returned as a list\n containing this single schedule.\n schedules: A pre-defined list of schedule objects. If provided,\n this list is returned as-is, bypassing other schedule construction\n logic.\n \"\"\"\n\n num_schedules = sum(\n 1\n for entry in (interval, cron, rrule, schedule, schedules)\n if entry is not None\n )\n if num_schedules > 1:\n raise ValueError(\n \"Only one of interval, cron, rrule, schedule, or schedules can be provided.\"\n )\n elif num_schedules == 0:\n return []\n\n if schedules is not None:\n return schedules\n elif interval or cron or rrule:\n # `interval`, `cron`, and `rrule` can be lists of values. This\n # block figures out which one is not None and uses that to\n # construct the list of schedules via `construct_schedule`.\n parameters = [(\"interval\", interval), (\"cron\", cron), (\"rrule\", rrule)]\n schedule_type, value = [\n param for param in parameters if param[1] is not None\n ][0]\n\n if not isiterable(value):\n value = [value]\n\n return [\n create_minimal_deployment_schedule(\n construct_schedule(\n **{\n schedule_type: v,\n \"timezone\": timezone,\n \"anchor_date\": anchor_date,\n }\n )\n )\n for v in value\n ]\n else:\n return [create_minimal_deployment_schedule(schedule)]\n\n def _set_defaults_from_flow(self, flow: \"Flow\"):\n self._parameter_openapi_schema = parameter_schema(flow)\n\n if not self.version:\n self.version = flow.version\n if not self.description:\n self.description = flow.description\n\n @classmethod\n def from_flow(\n cls,\n flow: \"Flow\",\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n interval: An interval on which to execute the current flow. 
Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n is_schedule_active=is_schedule_active,\n paused=paused,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n if not deployment.entrypoint:\n no_file_location_error = (\n \"Flows defined interactively cannot be deployed. Check out the\"\n \" quickstart guide for help getting started:\"\n \" https://docs.prefect.io/latest/getting-started/quickstart\"\n )\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if entrypoint_type == EntrypointType.MODULE_PATH:\n if mod_name:\n deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n else:\n raise ValueError(\n \"Unable to determine module path for provided flow.\"\n )\n else:\n if not flow_file:\n if not mod_name:\n raise ValueError(no_file_location_error)\n try:\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n except ModuleNotFoundError as exc:\n if \"__prefect_loader__\" in str(exc):\n raise ValueError(\n \"Cannot create a RunnerDeployment from a flow that has been\"\n \" loaded from an entrypoint. 
To deploy a flow via\"\n \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n )\n raise ValueError(no_file_location_error)\n if not flow_file:\n raise ValueError(no_file_location_error)\n\n # set entrypoint\n entry_path = (\n Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n )\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n deployment._path = \".\"\n\n deployment._entrypoint_type = entrypoint_type\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n\n @classmethod\n def from_entrypoint(\n cls,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow located at a given entrypoint.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n job_variables = job_variables or {}\n flow = load_flow_from_entrypoint(entrypoint)\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(Path.cwd())\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n\n @classmethod\n @sync_compatible\n async def from_storage(\n cls,\n storage: RunnerStorage,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"\n Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n local storage location.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n storage: A storage object to use for retrieving flow code. If not provided, a\n URL must be provided.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. 
Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n storage=storage,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(storage.destination).replace(\n tmpdir, \"$STORAGE_BASE_PATH\"\n )\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
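A minimal sketch of how the schedule arguments above translate into deployment schedules (the flow and deployment names are hypothetical): passing a list to interval yields one deployment schedule per value, with plain numbers interpreted as seconds.

from datetime import timedelta

from prefect import flow


@flow
def my_flow():
    ...

# One schedule runs hourly (3600 seconds), the other daily.
deployment = my_flow.to_deployment(
    name="multi-schedule",
    interval=[3600, timedelta(days=1)],
)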
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.apply","title":"apply
async
","text":"Registers this deployment with the API and returns the deployment's ID.
Parameters:
Name Type Description Default
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
image
Optional[str]
The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.
None
Returns:
Type Description
UUID
The ID of the created deployment.
Source code in prefect/deployments/runner.py
@sync_compatible\nasync def apply(\n self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n work_pool_name: The name of the work pool to use for this\n deployment.\n image: The registry, name, and tag of the Docker image to\n use for this deployment. Only used when the deployment is\n deployed to a work pool.\n\n Returns:\n The ID of the created deployment.\n \"\"\"\n\n work_pool_name = work_pool_name or self.work_pool_name\n\n if image and not work_pool_name:\n raise ValueError(\n \"An image can only be provided when registering a deployment with a\"\n \" work pool.\"\n )\n\n if self.work_queue_name and not work_pool_name:\n raise ValueError(\n \"A work queue can only be provided when registering a deployment with\"\n \" a work pool.\"\n )\n\n if self.job_variables and not work_pool_name:\n raise ValueError(\n \"Job variables can only be provided when registering a deployment\"\n \" with a work pool.\"\n )\n\n async with get_client() as client:\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n create_payload = dict(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=work_pool_name,\n version=self.version,\n paused=self.paused,\n schedules=self.schedules,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n path=self._path,\n entrypoint=self.entrypoint,\n storage_document_id=None,\n infrastructure_document_id=None,\n parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if work_pool_name:\n create_payload[\"infra_overrides\"] = self.job_variables\n if image:\n create_payload[\"infra_overrides\"][\"image\"] = image\n create_payload[\"path\"] = None if self.storage else self._path\n create_payload[\"pull_steps\"] = (\n [self.storage.to_pull_step()] if self.storage else []\n )\n\n try:\n deployment_id = await client.create_deployment(**create_payload)\n except Exception as exc:\n if isinstance(exc, PrefectHTTPStatusError):\n detail = exc.response.json().get(\"detail\")\n if detail:\n raise DeploymentApplyError(detail) from exc\n raise DeploymentApplyError(\n f\"Error while applying deployment: {str(exc)}\"\n ) from exc\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n
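A minimal usage sketch, assuming a locally defined flow and an existing work pool named "my-docker-pool" (both names are hypothetical); apply is sync-compatible, so it can be called directly outside an event loop.

from prefect import flow


@flow
def my_flow():
    ...

deployment = my_flow.to_deployment(name="my-deployment")
deployment_id = deployment.apply(work_pool_name="my-docker-pool")
print(deployment_id)  # UUID of the created deployment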
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_entrypoint","title":"from_entrypoint
classmethod
","text":"Configure a deployment for a given flow located at a given entrypoint.
Parameters:
Name Type Description Default
entrypoint
str
The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
name
str
A name for the deployment
required
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
paused
Optional[bool]
Whether or not to set this deployment as paused.
None
schedules
Optional[FlexibleScheduleList]
A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\ndef from_entrypoint(\n cls,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow located at a given entrypoint.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n job_variables = job_variables or {}\n flow = load_flow_from_entrypoint(entrypoint)\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(Path.cwd())\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
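A minimal usage sketch, assuming ./flows/etl.py defines a flow function named etl (the path, flow name, and cron expression are hypothetical):

from prefect.deployments.runner import RunnerDeployment

deployment = RunnerDeployment.from_entrypoint(
    entrypoint="./flows/etl.py:etl",
    name="nightly-etl",
    cron="0 2 * * *",
)
deployment.apply()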
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_flow","title":"from_flow
classmethod
","text":"Configure a deployment for a given flow.
Parameters:
Name Type Description Default
flow
Flow
A flow function to deploy
required
name
str
A name for the deployment
required
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
paused
Optional[bool]
Whether or not to set this deployment as paused.
None
schedules
Optional[FlexibleScheduleList]
A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\ndef from_flow(\n cls,\n flow: \"Flow\",\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n is_schedule_active=is_schedule_active,\n paused=paused,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n if not deployment.entrypoint:\n no_file_location_error = (\n \"Flows defined interactively cannot be deployed. Check out the\"\n \" quickstart guide for help getting started:\"\n \" https://docs.prefect.io/latest/getting-started/quickstart\"\n )\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if entrypoint_type == EntrypointType.MODULE_PATH:\n if mod_name:\n deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n else:\n raise ValueError(\n \"Unable to determine module path for provided flow.\"\n )\n else:\n if not flow_file:\n if not mod_name:\n raise ValueError(no_file_location_error)\n try:\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n except ModuleNotFoundError as exc:\n if \"__prefect_loader__\" in str(exc):\n raise ValueError(\n \"Cannot create a RunnerDeployment from a flow that has been\"\n \" loaded from an entrypoint. To deploy a flow via\"\n \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n )\n raise ValueError(no_file_location_error)\n if not flow_file:\n raise ValueError(no_file_location_error)\n\n # set entrypoint\n entry_path = (\n Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n )\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n deployment._path = \".\"\n\n deployment._entrypoint_type = entrypoint_type\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
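A minimal usage sketch with a locally defined flow (the names, tag, and schedule are hypothetical):

from datetime import timedelta

from prefect import flow
from prefect.deployments.runner import RunnerDeployment


@flow
def daily_report():
    ...

deployment = RunnerDeployment.from_flow(
    flow=daily_report,
    name="daily-report",
    interval=timedelta(days=1),
    tags=["reporting"],
)
deployment.apply()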
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_storage","title":"from_storage
async
classmethod
","text":"Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.
Parameters:
Name Type Description Default
entrypoint
str
The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
name
str
A name for the deployment
required
storage
RunnerStorage
A storage object to use for retrieving flow code. If not provided, a URL must be provided.
required
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\n@sync_compatible\nasync def from_storage(\n cls,\n storage: RunnerStorage,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n):\n \"\"\"\n Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n local storage location.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n storage: A storage object to use for retrieving flow code. If not provided, a\n URL must be provided.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n storage=storage,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(storage.destination).replace(\n tmpdir, \"$STORAGE_BASE_PATH\"\n )\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
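A minimal usage sketch (the repository URL and entrypoint are placeholders for your own); the code is pulled into a temporary directory so the flow can be loaded before the deployment is created:

from prefect.deployments.runner import RunnerDeployment
from prefect.runner.storage import GitRepository

deployment = RunnerDeployment.from_storage(
    storage=GitRepository(url="https://github.com/org/repo.git"),
    entrypoint="flows.py:my_flow",
    name="my-deployment",
)
deployment.apply()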
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.validate_automation_names","title":"validate_automation_names
","text":"Ensure that each trigger has a name for its automation if none is provided.
Source code in prefect/deployments/runner.py
@validator(\"triggers\", allow_reuse=True)\ndef validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.deploy","title":"deploy
async
","text":"Deploy the provided list of deployments to dynamic infrastructure via a work pool.
By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.
If you want to use an existing image, you can pass build=False to skip building and pushing an image, as shown in the sketch below.
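A minimal sketch that reuses a prebuilt image (the flow, work pool, and image reference are hypothetical, and the flow code is assumed to be baked into the image):

from prefect import deploy, flow


@flow
def my_flow():
    ...

deploy(
    my_flow.to_deployment(name="prebuilt-image-deploy"),
    work_pool_name="my-work-pool",
    image="my-registry/my-image:dev",
    build=False,
    push=False,
)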
Parameters:
Name Type Description Default
*deployments
RunnerDeployment
A list of deployments to deploy.
()
work_pool_name
Optional[str]
The name of the work pool to use for these deployments. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
None
image
Optional[Union[str, DeploymentImage]]
The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
None
build
bool
Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
True
push
bool
Whether or not to push the built image to a registry.
True
print_next_steps_message
bool
Whether or not to print a message with next steps after deploying the deployments.
True
Returns:
Type Description
List[UUID]
A list of deployment IDs for the created/updated deployments.
Examples:
Deploy a group of flows to a work pool:
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef local_flow():\n print(\"I'm a locally defined flow!\")\n\nif __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n
Source code in prefect/deployments/runner.py
@sync_compatible\nasync def deploy(\n *deployments: RunnerDeployment,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n print_next_steps_message: bool = True,\n ignore_warnings: bool = False,\n) -> List[UUID]:\n \"\"\"\n Deploy the provided list of deployments to dynamic infrastructure via a\n work pool.\n\n By default, calling this function will build a Docker image for the deployments, push it to a\n registry, and create each deployment via the Prefect API that will run the corresponding\n flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n *deployments: A list of deployments to deploy.\n work_pool_name: The name of the work pool to use for these deployments. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n A list of deployment IDs for the created/updated deployments.\n\n Examples:\n Deploy a group of flows to a work pool:\n\n ```python\n from prefect import deploy, flow\n\n @flow(log_prints=True)\n def local_flow():\n print(\"I'm a locally defined flow!\")\n\n if __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n if not image and not all(\n d.storage or d.entrypoint_type == EntrypointType.MODULE_PATH\n for d in deployments\n ):\n raise ValueError(\n \"Either an image or remote storage location must be provided when deploying\"\n \" a deployment.\"\n )\n\n if not work_pool_name:\n raise ValueError(\n \"A work pool name must be provided when deploying a deployment. Either\"\n \" provide a work pool name when calling `deploy` or set\"\n \" `PREFECT_DEFAULT_WORK_POOL_NAME` in your profile.\"\n )\n\n if image and isinstance(image, str):\n image_name, image_tag = parse_image_tag(image)\n image = DeploymentImage(name=image_name, tag=image_tag)\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n is_docker_based_work_pool = get_from_dict(\n work_pool.base_job_template, \"variables.properties.image\", False\n )\n is_block_based_work_pool = get_from_dict(\n work_pool.base_job_template, \"variables.properties.block\", False\n )\n # carve out an exception for block based work pools that only have a block in their base job template\n console = Console()\n if not is_docker_based_work_pool and not is_block_based_work_pool:\n if image:\n raise ValueError(\n f\"Work pool {work_pool_name!r} does not support custom Docker images.\"\n \" Please use a work pool with an `image` variable in its base job template\"\n \" or specify a remote storage location for the flow with `.from_source`.\"\n \" If you are attempting to deploy a flow to a local process work pool,\"\n \" consider using `flow.serve` instead. See the documentation for more\"\n \" information: https://docs.prefect.io/latest/concepts/flows/#serving-a-flow\"\n )\n elif work_pool.type == \"process\" and not ignore_warnings:\n console.print(\n \"Looks like you're deploying to a process work pool. If you're creating a\"\n \" deployment for local development, calling `.serve` on your flow is a great\"\n \" way to get started. See the documentation for more information:\"\n \" https://docs.prefect.io/latest/concepts/flows/#serving-a-flow. \"\n \" Set `ignore_warnings=True` to suppress this message.\",\n style=\"yellow\",\n )\n\n is_managed_pool = work_pool.is_managed_pool\n if is_managed_pool:\n build = False\n push = False\n\n if image and build:\n with Progress(\n SpinnerColumn(),\n TextColumn(f\"Building image {image.reference}...\"),\n transient=True,\n console=console,\n ) as progress:\n docker_build_task = progress.add_task(\"docker_build\", total=1)\n image.build()\n\n progress.update(docker_build_task, completed=1)\n console.print(\n f\"Successfully built image {image.reference!r}\", style=\"green\"\n )\n\n if image and build and push:\n with Progress(\n SpinnerColumn(),\n TextColumn(\"Pushing image...\"),\n transient=True,\n console=console,\n ) as progress:\n docker_push_task = progress.add_task(\"docker_push\", total=1)\n\n image.push()\n\n progress.update(docker_push_task, completed=1)\n\n console.print(f\"Successfully pushed image {image.reference!r}\", style=\"green\")\n\n deployment_exceptions = []\n deployment_ids = []\n image_ref = image.reference if image else None\n for deployment in track(\n deployments,\n description=\"Creating/updating deployments...\",\n console=console,\n transient=True,\n ):\n try:\n deployment_ids.append(\n await deployment.apply(image=image_ref, work_pool_name=work_pool_name)\n )\n except Exception as exc:\n if len(deployments) == 1:\n raise\n deployment_exceptions.append({\"deployment\": deployment, \"exc\": exc})\n\n if deployment_exceptions:\n console.print(\n \"Encountered errors while creating/updating deployments:\\n\",\n style=\"orange_red1\",\n )\n else:\n console.print(\"Successfully created/updated all deployments!\\n\", style=\"green\")\n\n complete_failure = len(deployment_exceptions) == len(deployments)\n\n table = Table(\n title=\"Deployments\",\n show_lines=True,\n )\n\n table.add_column(header=\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(header=\"Status\", style=\"blue\", no_wrap=True)\n table.add_column(header=\"Details\", style=\"blue\")\n\n for deployment in deployments:\n errored_deployment = next(\n (d for d in deployment_exceptions if d[\"deployment\"] == deployment),\n None,\n )\n if 
errored_deployment:\n table.add_row(\n f\"{deployment.flow_name}/{deployment.name}\",\n \"failed\",\n str(errored_deployment[\"exc\"]),\n style=\"red\",\n )\n else:\n table.add_row(f\"{deployment.flow_name}/{deployment.name}\", \"applied\")\n console.print(table)\n\n if print_next_steps_message and not complete_failure:\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from these deployments, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo trigger any of these deployments, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n \" [DEPLOYMENT_NAME]\\n[/]\"\n )\n\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also trigger your deployments via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n )\n\n return deployment_ids\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/schedules/","title":"schedules","text":"","tags":["Python API","flow runs","deployments","schedules"]},{"location":"api-ref/prefect/deployments/schedules/#prefect.deployments.schedules","title":"prefect.deployments.schedules
","text":"","tags":["Python API","flow runs","deployments","schedules"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core
","text":"Core primitives for running Prefect project steps.
Project steps are YAML representations of Python functions along with their inputs.
Whenever a step is run, the following actions are taken: the step's inputs and block / variable references are resolved; the step's function is imported, and if it cannot be found, the special requires keyword is used to install the necessary packages; the step's function is called with the resolved inputs; and the step's output is returned and used to resolve inputs for subsequent steps.
StepExecutionError
","text":" Bases: Exception
Raised when a step fails to execute.
Source code in prefect/deployments/steps/core.py
class StepExecutionError(Exception):\n \"\"\"\n Raised when a step fails to execute.\n \"\"\"\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step
async
","text":"Runs a step, returns the step's output.
Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.
The 'id' and 'requires' keywords are reserved for specific purposes and will be removed from the inputs before passing to the step function: 'id' identifies the step's outputs so that later steps can reference them, and 'requires' specifies packages that should be installed before running the step.
Source code in prefect/deployments/steps/core.py
async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n \"\"\"\n Runs a step, returns the step's output.\n\n Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n inputs before passing to the step function:\n\n This keyword is used to specify packages that should be installed before running the step.\n \"\"\"\n fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n upstream_outputs = upstream_outputs or {}\n\n if len(step.keys()) > 1:\n raise ValueError(\n f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n )\n\n keywords = {\n keyword: inputs.pop(keyword)\n for keyword in RESERVED_KEYWORDS\n if keyword in inputs\n }\n\n inputs = apply_values(inputs, upstream_outputs)\n inputs = await resolve_block_document_references(inputs)\n inputs = await resolve_variables(inputs)\n inputs = apply_values(inputs, os.environ)\n step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n result = await from_async.call_soon_in_new_thread(\n Call.new(step_func, **inputs)\n ).aresult()\n return result\n
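A minimal usage sketch that runs the built-in run_shell_script utility step programmatically (the id value is arbitrary, and the step is assumed to succeed):

import asyncio

from prefect.deployments.steps.core import run_step

step = {
    "prefect.deployments.steps.run_shell_script": {
        "id": "say-hello",
        "script": "echo hello",
        "stream_output": False,
    }
}
# run_step strips the reserved "id" keyword before calling the function
# and returns the step's output dictionary.
output = asyncio.run(run_step(step))
print(output["stdout"])  # hello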
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull
","text":"Core set of steps for specifying a Prefect project pull step.
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone
async
","text":"Clones a git repository into the current working directory.
Parameters:
Name Type Description Default
repository
str
the URL of the repository to clone
required
branch
Optional[str]
the branch to clone; if not provided, the default branch will be used
None
include_submodules
bool
whether to include git submodules when cloning the repository
False
access_token
Optional[str]
an access token to use for cloning the repository; if not provided, the repository will be cloned using the default git credentials
None
credentials
Optional[Block]
a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.
None
Returns:
Name Type Description
dict
dict
a dictionary containing a directory key of the new directory that was created
Raises:
Type Description
CalledProcessError
if the git clone command fails for any reason
Examples:
Clone a public repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n
Clone a branch of a public repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n branch: my-branch\n
Clone a private repository using a GitHubCredentials block:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n
Clone a private repository using an access token:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.
Clone a repository with submodules:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n include_submodules: true\n
Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):
pull:\n - prefect.deployments.steps.git_clone:\n repository: git@github.com:org/repo.git\n
Source code in prefect/deployments/steps/pull.py
@sync_compatible\nasync def git_clone(\n repository: str,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n access_token: Optional[str] = None,\n credentials: Optional[Block] = None,\n) -> dict:\n \"\"\"\n Clones a git repository into the current working directory.\n\n Args:\n repository: the URL of the repository to clone\n branch: the branch to clone; if not provided, the default branch will be used\n include_submodules (bool): whether to include git submodules when cloning the repository\n access_token: an access token to use for cloning the repository; if not provided\n the repository will be cloned using the default git credentials\n credentials: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n credentials to use for cloning the repository.\n\n Returns:\n dict: a dictionary containing a `directory` key of the new directory that was created\n\n Raises:\n subprocess.CalledProcessError: if the git clone command fails for any reason\n\n Examples:\n Clone a public repository:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n ```\n\n Clone a branch of a public repository:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n branch: my-branch\n ```\n\n Clone a private repository using a GitHubCredentials block:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n ```\n\n Clone a private repository using an access token:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n ```\n Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n in your secret block. Refer to your git providers documentation for the correct authentication schema.\n\n Clone a repository with submodules:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n include_submodules: true\n ```\n\n Clone a repository with an SSH key (note that the SSH key must be added to the worker\n before executing flows):\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: git@github.com:org/repo.git\n ```\n \"\"\"\n if access_token and credentials:\n raise ValueError(\n \"Please provide either an access token or credentials but not both.\"\n )\n\n credentials = {\"access_token\": access_token} if access_token else credentials\n\n storage = GitRepository(\n url=repository,\n credentials=credentials,\n branch=branch,\n include_submodules=include_submodules,\n )\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project
async
","text":"Deprecated. Use git_clone
instead.
prefect/deployments/steps/pull.py
@deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\n@sync_compatible\nasync def git_clone_project(\n repository: str,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n access_token: Optional[str] = None,\n) -> dict:\n \"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n return await git_clone(\n repository=repository,\n branch=branch,\n include_submodules=include_submodules,\n access_token=access_token,\n )\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_from_remote_storage","title":"pull_from_remote_storage
async
","text":"Pulls code from a remote storage location into the current working directory.
Works with protocols supported by fsspec.
Parameters:
Name Type Description Default
url
str
the URL of the remote storage location. Should be a valid fsspec URL. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.
**settings
Any
any additional settings to pass to the fsspec filesystem class.
{}
Returns:
Name Type Description
dict
a dictionary containing a directory key of the new directory that was created
Examples:
Pull code from a remote storage location:
pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n
Pull code from a remote storage location with additional settings:
pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n key: {{ prefect.blocks.secret.my-aws-access-key }}}\n secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n
Source code in prefect/deployments/steps/pull.py
async def pull_from_remote_storage(url: str, **settings: Any):\n \"\"\"\n Pulls code from a remote storage location into the current working directory.\n\n Works with protocols supported by `fsspec`.\n\n Args:\n url (str): the URL of the remote storage location. Should be a valid `fsspec` URL.\n Some protocols may require an additional `fsspec` dependency to be installed.\n Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n for more details.\n **settings (Any): any additional settings to pass the `fsspec` filesystem class.\n\n Returns:\n dict: a dictionary containing a `directory` key of the new directory that was created\n\n Examples:\n Pull code from a remote storage location:\n ```yaml\n pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n ```\n\n Pull code from a remote storage location with additional settings:\n ```yaml\n pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n key: {{ prefect.blocks.secret.my-aws-access-key }}}\n secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n ```\n \"\"\"\n storage = RemoteStorage(url, **settings)\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(f\"Pulled code from {url!r} into {directory!r}\")\n return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_with_block","title":"pull_with_block
async
","text":"Pulls code using a block.
Parameters:
Name Type Description Defaultblock_document_name
str
The name of the block document to use
requiredblock_type_slug
str
The slug of the type of block to use
required Source code inprefect/deployments/steps/pull.py
async def pull_with_block(block_document_name: str, block_type_slug: str):\n \"\"\"\n Pulls code using a block.\n\n Args:\n block_document_name: The name of the block document to use\n block_type_slug: The slug of the type of block to use\n \"\"\"\n full_slug = f\"{block_type_slug}/{block_document_name}\"\n try:\n block = await Block.load(full_slug)\n except Exception:\n deployment_logger.exception(\"Unable to load block '%s'\", full_slug)\n raise\n\n try:\n storage = BlockStorageAdapter(block)\n except Exception:\n deployment_logger.exception(\n \"Unable to create storage adapter for block '%s'\", full_slug\n )\n raise\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(\n \"Pulled code using block '%s' into '%s'\", full_slug, directory\n )\n return {\"directory\": directory}\n
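This step has no YAML example above, so here is a rough Python sketch of invoking it directly; the block document name and type slug are hypothetical placeholders for a block you have already created:
import asyncio\nfrom prefect.deployments.steps.pull import pull_with_block\n\nasync def main():\n    # Loads the block document \"s3-bucket/my-code-storage\" and pulls code with it\n    result = await pull_with_block(\n        block_document_name=\"my-code-storage\",  # hypothetical block document\n        block_type_slug=\"s3-bucket\",\n    )\n    print(result[\"directory\"])\n\nasyncio.run(main())\n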
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory
","text":"Sets the working directory; works with both absolute and relative paths.
Parameters:
Name Type Description Defaultdirectory
str
the directory to set as the working directory
requiredReturns:
Name Type Descriptiondict
dict
a dictionary containing a directory
key of the directory that was set
prefect/deployments/steps/pull.py
def set_working_directory(directory: str) -> dict:\n \"\"\"\n Sets the working directory; works with both absolute and relative paths.\n\n Args:\n directory (str): the directory to set as the working directory\n\n Returns:\n dict: a dictionary containing a `directory` key of the\n directory that was set\n \"\"\"\n os.chdir(directory)\n return dict(directory=directory)\n
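For illustration only, a minimal Python sketch of calling the step function directly (the path below is a hypothetical placeholder):
from prefect.deployments.steps.pull import set_working_directory\n\n# Changes the process's working directory and echoes it back\nresult = set_working_directory(\"/opt/prefect/flows\")  # hypothetical path\nprint(result[\"directory\"])\n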
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility
","text":"Utility project steps that are useful for managing a project's deployment lifecycle.
Steps within this module can be used within a build
, push
, or pull
deployment action.
Use the run_shell_script
step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult
","text":" Bases: TypedDict
The result of a run_shell_script
step.
Attributes:
Name Type Descriptionstdout
str
The captured standard output of the script.
stderr
str
The captured standard error of the script.
Source code inprefect/deployments/steps/utility.py
class RunShellScriptResult(TypedDict):\n \"\"\"\n The result of a `run_shell_script` step.\n\n Attributes:\n stdout: The captured standard output of the script.\n stderr: The captured standard error of the script.\n \"\"\"\n\n stdout: str\n stderr: str\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements
async
","text":"Installs dependencies from a requirements.txt file.
Parameters:
Name Type Description Defaultrequirements_file
str
The requirements.txt to use for installation.
'requirements.txt'
directory
Optional[str]
The directory the requirements.txt file is in. Defaults to the current working directory.
None
stream_output
bool
Whether to stream the output from the pip install command to the console
True
Returns:
Type DescriptionA dictionary with the keys stdout
and stderr
containing the output the pip install
command
Raises:
Type DescriptionCalledProcessError
if the pip install command fails for any reason
Examplepull:\n - prefect.deployments.steps.git_clone:\n id: clone-step\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: {{ clone-step.directory }}\n requirements_file: requirements.txt\n stream_output: False\n
Source code in prefect/deployments/steps/utility.py
async def pip_install_requirements(\n directory: Optional[str] = None,\n requirements_file: str = \"requirements.txt\",\n stream_output: bool = True,\n):\n \"\"\"\n Installs dependencies from a requirements.txt file.\n\n Args:\n requirements_file: The requirements.txt to use for installation.\n directory: The directory the requirements.txt file is in. Defaults to\n the current working directory.\n stream_output: Whether to stream the output from pip install should be\n streamed to the console\n\n Returns:\n A dictionary with the keys `stdout` and `stderr` containing the output\n the `pip install` command\n\n Raises:\n subprocess.CalledProcessError: if the pip install command fails for any reason\n\n Example:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n id: clone-step\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: {{ clone-step.directory }}\n requirements_file: requirements.txt\n stream_output: False\n ```\n \"\"\"\n stdout_sink = io.StringIO()\n stderr_sink = io.StringIO()\n\n async with open_process(\n [get_sys_executable(), \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n cwd=directory,\n ) as process:\n await _stream_capture_process_output(\n process,\n stdout_sink=stdout_sink,\n stderr_sink=stderr_sink,\n stream_output=stream_output,\n )\n await process.wait()\n\n if process.returncode != 0:\n raise RuntimeError(\n f\"pip_install_requirements failed with error code {process.returncode}:\"\n f\" {stderr_sink.getvalue()}\"\n )\n\n return {\n \"stdout\": stdout_sink.getvalue().strip(),\n \"stderr\": stderr_sink.getvalue().strip(),\n }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script
async
","text":"Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.
Parameters:
Name Type Description Defaultscript
str
The script to run
requireddirectory
Optional[str]
The directory to run the script in. Defaults to the current working directory.
None
env
Optional[Dict[str, str]]
A dictionary of environment variables to set for the script
None
stream_output
bool
Whether to stream the output of the script to stdout/stderr
True
expand_env_vars
bool
Whether to expand environment variables in the script before running it
False
Returns:
Type DescriptionRunShellScriptResult
A dictionary with the keys stdout
and stderr
containing the output of the script
Examples:
Retrieve the short Git commit hash of the current repository to use as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Run a multi-line shell script:
build:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"Hello\"\n echo \"World\"\n
Run a shell script with environment variables:
build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello $NAME\"\n env:\n NAME: World\n
Run a shell script with environment variables expanded from the current environment:
pull:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"User: $USER\"\n echo \"Home Directory: $HOME\"\n stream_output: true\n expand_env_vars: true\n
Run a shell script in a specific directory:
build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello\"\n directory: /path/to/directory\n
Run a script stored in a file:
build:\n - prefect.deployments.steps.run_shell_script:\n script: \"bash path/to/script.sh\"\n
Source code in prefect/deployments/steps/utility.py
async def run_shell_script(\n script: str,\n directory: Optional[str] = None,\n env: Optional[Dict[str, str]] = None,\n stream_output: bool = True,\n expand_env_vars: bool = False,\n) -> RunShellScriptResult:\n \"\"\"\n Runs one or more shell commands in a subprocess. Returns the standard\n output and standard error of the script.\n\n Args:\n script: The script to run\n directory: The directory to run the script in. Defaults to the current\n working directory.\n env: A dictionary of environment variables to set for the script\n stream_output: Whether to stream the output of the script to\n stdout/stderr\n expand_env_vars: Whether to expand environment variables in the script\n before running it\n\n Returns:\n A dictionary with the keys `stdout` and `stderr` containing the output\n of the script\n\n Examples:\n Retrieve the short Git commit hash of the current repository to use as\n a Docker image tag:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n ```\n\n Run a multi-line shell script:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"Hello\"\n echo \"World\"\n ```\n\n Run a shell script with environment variables:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello $NAME\"\n env:\n NAME: World\n ```\n\n Run a shell script with environment variables expanded\n from the current environment:\n ```yaml\n pull:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"User: $USER\"\n echo \"Home Directory: $HOME\"\n stream_output: true\n expand_env_vars: true\n ```\n\n Run a shell script in a specific directory:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello\"\n directory: /path/to/directory\n ```\n\n Run a script stored in a file:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: \"bash path/to/script.sh\"\n ```\n \"\"\"\n current_env = os.environ.copy()\n current_env.update(env or {})\n\n commands = script.splitlines()\n stdout_sink = io.StringIO()\n stderr_sink = io.StringIO()\n\n for command in commands:\n if expand_env_vars:\n # Expand environment variables in command and provided environment\n command = string.Template(command).safe_substitute(current_env)\n split_command = shlex.split(command, posix=sys.platform != \"win32\")\n if not split_command:\n continue\n async with open_process(\n split_command,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n cwd=directory,\n env=current_env,\n ) as process:\n await _stream_capture_process_output(\n process,\n stdout_sink=stdout_sink,\n stderr_sink=stderr_sink,\n stream_output=stream_output,\n )\n\n await process.wait()\n\n if process.returncode != 0:\n raise RuntimeError(\n f\"`run_shell_script` failed with error code {process.returncode}:\"\n f\" {stderr_sink.getvalue()}\"\n )\n\n return {\n \"stdout\": stdout_sink.getvalue().strip(),\n \"stderr\": stderr_sink.getvalue().strip(),\n }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/input/actions/","title":"actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions","title":"prefect.input.actions
","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.create_flow_run_input","title":"create_flow_run_input
async
","text":"Create a new flow run input. The given value
will be serialized to JSON and stored as a flow run input value.
Parameters:
Name Type Description Default-
key (str
the flow run input key
required-
value (Any
the flow run input value
required-
flow_run_id (UUID
the optional flow run ID; if not given, this defaults to the flow run ID from the current context.
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def create_flow_run_input(\n key: str,\n value: Any,\n flow_run_id: Optional[UUID] = None,\n sender: Optional[str] = None,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Create a new flow run input. The given `value` will be serialized to JSON\n and stored as a flow run input value.\n\n Args:\n - key (str): the flow run input key\n - value (Any): the flow run input value\n - flow_run_id (UUID): the, optional, flow run ID. If not given will\n default to pulling the flow run ID from the current context.\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n await client.create_flow_run_input(\n flow_run_id=flow_run_id,\n key=key,\n sender=sender,\n value=orjson.dumps(value).decode(),\n )\n
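As a rough sketch (assuming imports from prefect.input.actions, where this source lives), a flow can write an input and read it back without passing an explicit flow run ID:
from prefect import flow\nfrom prefect.input.actions import create_flow_run_input, read_flow_run_input\n\n@flow\nasync def record_threshold():\n    # The flow run ID is pulled from the current run context\n    await create_flow_run_input(key=\"threshold\", value=42)\n    assert await read_flow_run_input(key=\"threshold\") == 42\n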
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Delete a flow run input.
Parameters:
Name Type Description Default-
flow_run_id (UUID
the flow run ID
required-
key (str
the flow run input key
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def delete_flow_run_input(\n key: str, flow_run_id: Optional[UUID] = None, client: \"PrefectClient\" = None\n):\n \"\"\"Delete a flow run input.\n\n Args:\n - flow_run_id (UUID): the flow run ID\n - key (str): the flow run input key\n \"\"\"\n\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n await client.delete_flow_run_input(flow_run_id=flow_run_id, key=key)\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.read_flow_run_input","title":"read_flow_run_input
async
","text":"Read a flow run input.
Parameters:
Name Type Description Default-
key (str
the flow run input key
required-
flow_run_id (UUID
the flow run ID
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def read_flow_run_input(\n key: str, flow_run_id: Optional[UUID] = None, client: \"PrefectClient\" = None\n) -> Any:\n \"\"\"Read a flow run input.\n\n Args:\n - key (str): the flow run input key\n - flow_run_id (UUID): the flow run ID\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n try:\n value = await client.read_flow_run_input(flow_run_id=flow_run_id, key=key)\n except PrefectHTTPStatusError as exc:\n if exc.response.status_code == 404:\n return None\n raise\n else:\n return orjson.loads(value)\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/run_input/","title":"run_input","text":"","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input","title":"prefect.input.run_input
","text":"This module contains functions that allow sending type-checked RunInput
data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for systems that require ongoing data transfer or need to react to input quickly, enabling real-time interaction and efficient data handling within distributed or microservices-oriented systems.
The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back. The sender flow then prints the result.
Sender flow:
import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n number: int\n\n\n@flow\nasync def sender_flow(receiver_flow_run_id: UUID):\n logger = get_run_logger()\n\n the_number = random.randint(1, 100)\n\n await NumberData(number=the_number).send_to(receiver_flow_run_id)\n\n receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)\n squared = await receiver.next()\n\n logger.info(f\"{the_number} squared is {squared.number}\")\n
Receiver flow:
from prefect import flow\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n    number: int\n\n\n@flow\nasync def receiver_flow():\n    async for data in NumberData.receive():\n        squared = data.number ** 2\n        await data.respond(NumberData(number=squared))\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput","title":"AutomaticRunInput
","text":" Bases: RunInput
, Generic[T]
prefect/input/run_input.py
class AutomaticRunInput(RunInput, Generic[T]):\n value: T\n\n @classmethod\n @sync_compatible\n async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n instance = await super().load(keyset, flow_run_id=flow_run_id)\n return instance.value\n\n @classmethod\n def subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n \"\"\"\n Create a new `AutomaticRunInput` subclass from the given type.\n \"\"\"\n fields = {\"value\": (_type, ...)}\n\n # Sending a value to a flow run that relies on an AutomaticRunInput will\n # produce a key prefix that includes the type name. For example, if the\n # value is a list, the key will include \"list\" as the type. If the user\n # then tries to receive the value with a type annotation like List[int],\n # we need to find the key we saved with \"list\" as the type (not\n # \"List[int]\"). Calling __name__.lower() on a type annotation like\n # List[int] produces the string \"list\", which is what we need.\n if hasattr(_type, \"__name__\"):\n type_prefix = _type.__name__.lower()\n elif hasattr(_type, \"_name\"):\n # On Python 3.9 and earlier, type annotation values don't have a\n # __name__ attribute, but they do have a _name.\n type_prefix = _type._name.lower()\n else:\n # If we can't identify a type name that we can use as a key\n # prefix that will match an input, we'll have to use\n # \"AutomaticRunInput\" as the generic name. This will match all\n # automatic inputs sent to the flow run, rather than a specific\n # type.\n type_prefix = \"\"\n class_name = f\"{type_prefix}AutomaticRunInput\"\n\n new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n class_name, **fields, __base__=AutomaticRunInput\n )\n return new_cls\n\n @classmethod\n def receive(cls, *args, **kwargs):\n if kwargs.get(\"key_prefix\") is None:\n kwargs[\"key_prefix\"] = f\"{cls.__name__.lower()}-auto\"\n\n return GetAutomaticInputHandler(run_input_cls=cls, *args, **kwargs)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.load","title":"load
async
classmethod
","text":"Load the run input response from the given key.
Parameters:
Name Type Description Default-
keyset (Keyset
the keyset to load the input for
required-
flow_run_id (UUID
the flow run ID to load the input for
required Source code inprefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n instance = await super().load(keyset, flow_run_id=flow_run_id)\n return instance.value\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.subclass_from_type","title":"subclass_from_type
classmethod
","text":"Create a new AutomaticRunInput
subclass from the given type.
prefect/input/run_input.py
@classmethod\ndef subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n \"\"\"\n Create a new `AutomaticRunInput` subclass from the given type.\n \"\"\"\n fields = {\"value\": (_type, ...)}\n\n # Sending a value to a flow run that relies on an AutomaticRunInput will\n # produce a key prefix that includes the type name. For example, if the\n # value is a list, the key will include \"list\" as the type. If the user\n # then tries to receive the value with a type annotation like List[int],\n # we need to find the key we saved with \"list\" as the type (not\n # \"List[int]\"). Calling __name__.lower() on a type annotation like\n # List[int] produces the string \"list\", which is what we need.\n if hasattr(_type, \"__name__\"):\n type_prefix = _type.__name__.lower()\n elif hasattr(_type, \"_name\"):\n # On Python 3.9 and earlier, type annotation values don't have a\n # __name__ attribute, but they do have a _name.\n type_prefix = _type._name.lower()\n else:\n # If we can't identify a type name that we can use as a key\n # prefix that will match an input, we'll have to use\n # \"AutomaticRunInput\" as the generic name. This will match all\n # automatic inputs sent to the flow run, rather than a specific\n # type.\n type_prefix = \"\"\n class_name = f\"{type_prefix}AutomaticRunInput\"\n\n new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n class_name, **fields, __base__=AutomaticRunInput\n )\n return new_cls\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput","title":"RunInput
","text":" Bases: BaseModel
prefect/input/run_input.py
class RunInput(pydantic.BaseModel):\n class Config:\n extra = \"forbid\"\n\n _description: Optional[str] = pydantic.PrivateAttr(default=None)\n _metadata: RunInputMetadata = pydantic.PrivateAttr()\n\n @property\n def metadata(self) -> RunInputMetadata:\n return self._metadata\n\n @classmethod\n def keyset_from_type(cls) -> Keyset:\n return keyset_from_base_key(cls.__name__.lower())\n\n @classmethod\n @sync_compatible\n async def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Save the run input response to the given key.\n\n Args:\n - keyset (Keyset): the keyset to save the input for\n - flow_run_id (UUID, optional): the flow run ID to save the input for\n \"\"\"\n\n if HAS_PYDANTIC_V2:\n schema = create_v2_schema(cls.__name__, model_base=cls)\n else:\n schema = cls.schema(by_alias=True)\n\n await create_flow_run_input(\n key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n )\n\n description = cls._description if isinstance(cls._description, str) else None\n if description:\n await create_flow_run_input(\n key=keyset[\"description\"],\n value=description,\n flow_run_id=flow_run_id,\n )\n\n @classmethod\n @sync_compatible\n async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n if value:\n instance = cls(**value)\n else:\n instance = cls()\n instance._metadata = RunInputMetadata(\n key=keyset[\"response\"], sender=None, receiver=flow_run_id\n )\n return instance\n\n @classmethod\n def load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n \"\"\"\n Load the run input from a FlowRunInput object.\n\n Args:\n - flow_run_input (FlowRunInput): the flow run input to load the input for\n \"\"\"\n instance = cls(**flow_run_input.decoded_value)\n instance._metadata = RunInputMetadata(\n key=flow_run_input.key,\n sender=flow_run_input.sender,\n receiver=flow_run_input.flow_run_id,\n )\n return instance\n\n @classmethod\n def with_initial_data(\n cls: Type[R], description: Optional[str] = None, **kwargs: Any\n ) -> Type[R]:\n \"\"\"\n Create a new `RunInput` subclass with the given initial data as field\n defaults.\n\n Args:\n - description (str, optional): a description to show when resuming\n a flow run that requires input\n - kwargs (Any): the initial data to populate the subclass\n \"\"\"\n fields = {}\n for key, value in kwargs.items():\n fields[key] = (type(value), value)\n model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n if description is not None:\n model._description = description\n\n return model\n\n @sync_compatible\n async def respond(\n self,\n run_input: \"RunInput\",\n sender: Optional[str] = None,\n key_prefix: Optional[str] = None,\n ):\n flow_run_id = None\n if self.metadata.sender and self.metadata.sender.startswith(\"prefect.flow-run\"):\n _, _, id = self.metadata.sender.rpartition(\".\")\n flow_run_id = UUID(id)\n\n if not flow_run_id:\n raise RuntimeError(\n \"Cannot respond to an input that was not sent by a flow run.\"\n )\n\n await _send_input(\n flow_run_id=flow_run_id,\n run_input=run_input,\n sender=sender,\n key_prefix=key_prefix,\n )\n\n @sync_compatible\n async def send_to(\n self,\n flow_run_id: UUID,\n sender: Optional[str] = None,\n key_prefix: 
Optional[str] = None,\n ):\n await _send_input(\n flow_run_id=flow_run_id,\n run_input=self,\n sender=sender,\n key_prefix=key_prefix,\n )\n\n @classmethod\n def receive(\n cls,\n timeout: Optional[float] = 3600,\n poll_interval: float = 10,\n raise_timeout_error: bool = False,\n exclude_keys: Optional[Set[str]] = None,\n key_prefix: Optional[str] = None,\n flow_run_id: Optional[UUID] = None,\n ):\n if key_prefix is None:\n key_prefix = f\"{cls.__name__.lower()}-auto\"\n\n return GetInputHandler(\n run_input_cls=cls,\n key_prefix=key_prefix,\n timeout=timeout,\n poll_interval=poll_interval,\n raise_timeout_error=raise_timeout_error,\n exclude_keys=exclude_keys,\n flow_run_id=flow_run_id,\n )\n\n @classmethod\n def subclass_from_base_model_type(\n cls, model_cls: Type[pydantic.BaseModel]\n ) -> Type[\"RunInput\"]:\n \"\"\"\n Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n subclass.\n\n Args:\n - model_cls (pydantic.BaseModel subclass): the class from which\n to create the new `RunInput` subclass\n \"\"\"\n return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {}) # type: ignore\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load","title":"load
async
classmethod
","text":"Load the run input response from the given key.
Parameters:
Name Type Description Default-
keyset (Keyset
the keyset to load the input for
required-
flow_run_id (UUID
the flow run ID to load the input for
required Source code inprefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n if value:\n instance = cls(**value)\n else:\n instance = cls()\n instance._metadata = RunInputMetadata(\n key=keyset[\"response\"], sender=None, receiver=flow_run_id\n )\n return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load_from_flow_run_input","title":"load_from_flow_run_input
classmethod
","text":"Load the run input from a FlowRunInput object.
Parameters:
Name Type Description Default-
flow_run_input (FlowRunInput
the flow run input to load the input for
required Source code inprefect/input/run_input.py
@classmethod\ndef load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n \"\"\"\n Load the run input from a FlowRunInput object.\n\n Args:\n - flow_run_input (FlowRunInput): the flow run input to load the input for\n \"\"\"\n instance = cls(**flow_run_input.decoded_value)\n instance._metadata = RunInputMetadata(\n key=flow_run_input.key,\n sender=flow_run_input.sender,\n receiver=flow_run_input.flow_run_id,\n )\n return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.save","title":"save
async
classmethod
","text":"Save the run input response to the given key.
Parameters:
Name Type Description Default-
keyset (Keyset
the keyset to save the input for
required-
flow_run_id (UUID
the flow run ID to save the input for
required Source code inprefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Save the run input response to the given key.\n\n Args:\n - keyset (Keyset): the keyset to save the input for\n - flow_run_id (UUID, optional): the flow run ID to save the input for\n \"\"\"\n\n if HAS_PYDANTIC_V2:\n schema = create_v2_schema(cls.__name__, model_base=cls)\n else:\n schema = cls.schema(by_alias=True)\n\n await create_flow_run_input(\n key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n )\n\n description = cls._description if isinstance(cls._description, str) else None\n if description:\n await create_flow_run_input(\n key=keyset[\"description\"],\n value=description,\n flow_run_id=flow_run_id,\n )\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.subclass_from_base_model_type","title":"subclass_from_base_model_type
classmethod
","text":"Create a new RunInput
subclass from the given pydantic.BaseModel
subclass.
Parameters:
Name Type Description Default-
model_cls (pydantic.BaseModel subclass
the class from which to create the new RunInput
subclass
prefect/input/run_input.py
@classmethod\ndef subclass_from_base_model_type(\n cls, model_cls: Type[pydantic.BaseModel]\n) -> Type[\"RunInput\"]:\n \"\"\"\n Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n subclass.\n\n Args:\n - model_cls (pydantic.BaseModel subclass): the class from which\n to create the new `RunInput` subclass\n \"\"\"\n return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {}) # type: ignore\n
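For example, an existing Pydantic model can be turned into a run input class (a minimal sketch):
import pydantic\nfrom prefect.input import RunInput\n\nclass Person(pydantic.BaseModel):\n    name: str\n\n# Produces a \"PersonRunInput\" class that validates like `Person`\nPersonInput = RunInput.subclass_from_base_model_type(Person)\n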
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.with_initial_data","title":"with_initial_data
classmethod
","text":"Create a new RunInput
subclass with the given initial data as field defaults.
Parameters:
Name Type Description Default-
description (str
a description to show when resuming a flow run that requires input
required-
kwargs (Any
the initial data to populate the subclass
required Source code inprefect/input/run_input.py
@classmethod\ndef with_initial_data(\n cls: Type[R], description: Optional[str] = None, **kwargs: Any\n) -> Type[R]:\n \"\"\"\n Create a new `RunInput` subclass with the given initial data as field\n defaults.\n\n Args:\n - description (str, optional): a description to show when resuming\n a flow run that requires input\n - kwargs (Any): the initial data to populate the subclass\n \"\"\"\n fields = {}\n for key, value in kwargs.items():\n fields[key] = (type(value), value)\n model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n if description is not None:\n model._description = description\n\n return model\n
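A common use is pre-filling defaults for a pause that waits for input; this sketch assumes pause_flow_run is importable from the top-level prefect package and accepts a wait_for_input class, as in the flow run input workflow:
from prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n@flow\nasync def greet():\n    user = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(\n            description=\"Enter your details.\",\n            name=\"anonymous\",  # becomes the default for the `name` field\n        )\n    )\n    print(f\"Hello, {user.name} ({user.age})!\")\n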
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_base_key","title":"keyset_from_base_key
","text":"Get the keyset for the given base key.
Parameters:
Name Type Description Default-
base_key (str
the base key to get the keyset for
requiredReturns:
Type DescriptionKeyset
prefect/input/run_input.py
def keyset_from_base_key(base_key: str) -> Keyset:\n \"\"\"\n Get the keyset for the given base key.\n\n Args:\n - base_key (str): the base key to get the keyset for\n\n Returns:\n - Dict[str, str]: the keyset\n \"\"\"\n return {\n \"description\": f\"{base_key}-description\",\n \"response\": f\"{base_key}-response\",\n \"schema\": f\"{base_key}-schema\",\n }\n
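For example, given the source above, a base key expands to three related keys:
from prefect.input.run_input import keyset_from_base_key\n\nkeyset = keyset_from_base_key(\"numberdata\")\n# {'description': 'numberdata-description',\n#  'response': 'numberdata-response',\n#  'schema': 'numberdata-schema'}\n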
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_paused_state","title":"keyset_from_paused_state
","text":"Get the keyset for the given Paused state.
Parameters:
Name Type Description Default-
state (State
the state to get the keyset for
required Source code inprefect/input/run_input.py
def keyset_from_paused_state(state: \"State\") -> Keyset:\n \"\"\"\n Get the keyset for the given Paused state.\n\n Args:\n - state (State): the state to get the keyset for\n \"\"\"\n\n if not state.is_paused():\n raise RuntimeError(f\"{state.type.value!r} is unsupported.\")\n\n base_key = f\"{state.name.lower()}-{str(state.state_details.pause_key)}\"\n return keyset_from_base_key(base_key)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.run_input_subclass_from_type","title":"run_input_subclass_from_type
","text":"Create a new RunInput
subclass from the given type.
prefect/input/run_input.py
def run_input_subclass_from_type(\n _type: Union[Type[R], Type[T], pydantic.BaseModel],\n) -> Union[Type[AutomaticRunInput[T]], Type[R]]:\n \"\"\"\n Create a new `RunInput` subclass from the given type.\n \"\"\"\n try:\n if issubclass(_type, RunInput):\n return cast(Type[R], _type)\n elif issubclass(_type, pydantic.BaseModel):\n return cast(Type[R], RunInput.subclass_from_base_model_type(_type))\n except TypeError:\n pass\n\n # Could be something like a typing._GenericAlias or any other type that\n # isn't a `RunInput` subclass or `pydantic.BaseModel` subclass. Try passing\n # it to AutomaticRunInput to see if we can create a model from it.\n return cast(Type[AutomaticRunInput[T]], AutomaticRunInput.subclass_from_type(_type))\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/logging/configuration/","title":"configuration","text":"\"\"\"
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration","title":"prefect.logging.configuration
","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.load_logging_config","title":"load_logging_config
","text":"Loads logging configuration from a path allowing override from the environment
Source code inprefect/logging/configuration.py
def load_logging_config(path: Path) -> dict:\n \"\"\"\n Loads logging configuration from a path allowing override from the environment\n \"\"\"\n template = string.Template(path.read_text())\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n config = yaml.safe_load(\n # Substitute settings into the template in format $SETTING / ${SETTING}\n template.substitute(\n {\n setting.name: str(setting.value())\n for setting in SETTING_VARIABLES.values()\n if setting.value() is not None\n }\n )\n )\n\n # Load overrides from the environment\n flat_config = dict_to_flatdict(config)\n\n for key_tup, val in flat_config.items():\n env_val = os.environ.get(\n # Generate a valid environment variable with nesting indicated with '_'\n to_envvar(\"PREFECT_LOGGING_\" + \"_\".join(key_tup)).upper()\n )\n if env_val:\n val = env_val\n\n # reassign the updated value\n flat_config[key_tup] = val\n\n return flatdict_to_dict(flat_config)\n
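A short sketch of the environment override described above; the key shown is illustrative and assumes a logging.yml with a loggers.prefect.level entry:
import os\nfrom pathlib import Path\nfrom prefect.logging.configuration import load_logging_config\n\n# Nested keys are flattened, joined with \"_\", and prefixed with PREFECT_LOGGING_\nos.environ[\"PREFECT_LOGGING_LOGGERS_PREFECT_LEVEL\"] = \"DEBUG\"\n\nconfig = load_logging_config(Path(\"logging.yml\"))  # assumes this file exists\n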
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.setup_logging","title":"setup_logging
","text":"Sets up logging.
Returns the config used.
Source code inprefect/logging/configuration.py
def setup_logging(incremental: Optional[bool] = None) -> dict:\n \"\"\"\n Sets up logging.\n\n Returns the config used.\n \"\"\"\n global PROCESS_LOGGING_CONFIG\n\n # If the user has specified a logging path and it exists we will ignore the\n # default entirely rather than dealing with complex merging\n config = load_logging_config(\n (\n PREFECT_LOGGING_SETTINGS_PATH.value()\n if PREFECT_LOGGING_SETTINGS_PATH.value().exists()\n else DEFAULT_LOGGING_SETTINGS_PATH\n )\n )\n\n incremental = (\n incremental if incremental is not None else bool(PROCESS_LOGGING_CONFIG)\n )\n\n # Perform an incremental update if setup has already been run\n config.setdefault(\"incremental\", incremental)\n\n try:\n logging.config.dictConfig(config)\n except ValueError:\n if incremental:\n setup_logging(incremental=False)\n\n # Copy configuration of the 'prefect.extra' logger to the extra loggers\n extra_config = logging.getLogger(\"prefect.extra\")\n\n for logger_name in PREFECT_LOGGING_EXTRA_LOGGERS.value():\n logger = logging.getLogger(logger_name)\n for handler in extra_config.handlers:\n if not config[\"incremental\"]:\n logger.addHandler(handler)\n if logger.level == logging.NOTSET:\n logger.setLevel(extra_config.level)\n logger.propagate = extra_config.propagate\n\n PROCESS_LOGGING_CONFIG = config\n\n return config\n
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/formatters/","title":"formatters","text":"\"\"\"
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters","title":"prefect.logging.formatters
","text":"","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.JsonFormatter","title":"JsonFormatter
","text":" Bases: Formatter
Formats log records as a JSON string.
The format may be specified as \"pretty\" to format the JSON with indents and newlines.
Source code inprefect/logging/formatters.py
class JsonFormatter(logging.Formatter):\n \"\"\"\n Formats log records as a JSON string.\n\n The format may be specified as \"pretty\" to format the JSON with indents and\n newlines.\n \"\"\"\n\n def __init__(self, fmt, dmft, style) -> None: # noqa\n super().__init__()\n\n if fmt not in [\"pretty\", \"default\"]:\n raise ValueError(\"Format must be either 'pretty' or 'default'.\")\n\n self.serializer = JSONSerializer(\n jsonlib=\"orjson\",\n object_encoder=\"pydantic.json.pydantic_encoder\",\n dumps_kwargs={\"option\": orjson.OPT_INDENT_2} if fmt == \"pretty\" else {},\n )\n\n def format(self, record: logging.LogRecord) -> str:\n record_dict = record.__dict__.copy()\n\n # GCP severity detection compatibility\n record_dict.setdefault(\"severity\", record.levelname)\n\n # replace any exception tuples returned by `sys.exc_info()`\n # with a JSON-serializable `dict`.\n if record.exc_info:\n record_dict[\"exc_info\"] = format_exception_info(record.exc_info)\n\n log_json_bytes = self.serializer.dumps(record_dict)\n\n # JSONSerializer returns bytes; decode to string to conform to\n # the `logging.Formatter.format` interface\n return log_json_bytes.decode()\n
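A minimal sketch of wiring the formatter onto a standard handler; per the __init__ shown above only the first argument is consulted, so the other two are placeholders:
import logging\nfrom prefect.logging.formatters import JsonFormatter\n\nhandler = logging.StreamHandler()\n# \"pretty\" indents the JSON output; \"default\" emits it compactly\nhandler.setFormatter(JsonFormatter(\"pretty\", None, \"%\"))\n\nlogger = logging.getLogger(\"my-app\")\nlogger.addHandler(handler)\nlogger.warning(\"something happened\")  # emitted as a JSON document\n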
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.PrefectFormatter","title":"PrefectFormatter
","text":" Bases: Formatter
prefect/logging/formatters.py
class PrefectFormatter(logging.Formatter):\n def __init__(\n self,\n format=None,\n datefmt=None,\n style=\"%\",\n validate=True,\n *,\n defaults=None,\n task_run_fmt: str = None,\n flow_run_fmt: str = None,\n ) -> None:\n \"\"\"\n Implementation of the standard Python formatter with support for multiple\n message formats.\n\n \"\"\"\n # See https://github.com/python/cpython/blob/c8c6113398ee9a7867fe9b08bc539cceb61e2aaa/Lib/logging/__init__.py#L546\n # for implementation details\n\n init_kwargs = {}\n style_kwargs = {}\n\n # defaults added in 3.10\n if sys.version_info >= (3, 10):\n init_kwargs[\"defaults\"] = defaults\n style_kwargs[\"defaults\"] = defaults\n\n # validate added in 3.8\n if sys.version_info >= (3, 8):\n init_kwargs[\"validate\"] = validate\n else:\n validate = False\n\n super().__init__(format, datefmt, style, **init_kwargs)\n\n self.flow_run_fmt = flow_run_fmt\n self.task_run_fmt = task_run_fmt\n\n # Retrieve the style class from the base class to avoid importing private\n # `_STYLES` mapping\n style_class = type(self._style)\n\n self._flow_run_style = (\n style_class(flow_run_fmt, **style_kwargs) if flow_run_fmt else self._style\n )\n self._task_run_style = (\n style_class(task_run_fmt, **style_kwargs) if task_run_fmt else self._style\n )\n if validate:\n self._flow_run_style.validate()\n self._task_run_style.validate()\n\n def formatMessage(self, record: logging.LogRecord):\n if record.name == \"prefect.flow_runs\":\n style = self._flow_run_style\n elif record.name == \"prefect.task_runs\":\n style = self._task_run_style\n else:\n style = self._style\n\n return style.format(record)\n
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/handlers/","title":"handlers","text":"\"\"\"
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers","title":"prefect.logging.handlers
","text":"","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler","title":"APILogHandler
","text":" Bases: Handler
A logging handler that sends logs to the Prefect API.
Sends log records to the APILogWorker
which manages sending batches of logs in the background.
prefect/logging/handlers.py
class APILogHandler(logging.Handler):\n \"\"\"\n A logging handler that sends logs to the Prefect API.\n\n Sends log records to the `APILogWorker` which manages sending batches of logs in\n the background.\n \"\"\"\n\n @classmethod\n def flush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n Use `aflush` from async contexts instead.\n \"\"\"\n loop = get_running_loop()\n if loop:\n if in_global_loop(): # Guard against internal misuse\n raise RuntimeError(\n \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n \" would block the event loop and cause a deadlock. Use\"\n \" `APILogWorker.aflush` instead.\"\n )\n\n # Not ideal, but this method is called by the stdlib and cannot return a\n # coroutine so we just schedule the drain in a new thread and continue\n from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n return None\n else:\n # We set a timeout of 5s because we don't want to block forever if the worker\n # is stuck. This can occur when the handler is being shutdown and the\n # `logging._lock` is held but the worker is attempting to emit logs resulting\n # in a deadlock.\n return APILogWorker.drain_all(timeout=5)\n\n @classmethod\n def aflush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n If called in a synchronous context, will only block up to 5s before returning.\n \"\"\"\n\n if not get_running_loop():\n raise RuntimeError(\n \"`aflush` cannot be used from a synchronous context; use `flush`\"\n \" instead.\"\n )\n\n return APILogWorker.drain_all()\n\n def emit(self, record: logging.LogRecord):\n \"\"\"\n Send a log to the `APILogWorker`\n \"\"\"\n try:\n profile = prefect.context.get_settings_context()\n\n if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n return # Respect the global settings toggle\n if not getattr(record, \"send_to_api\", True):\n return # Do not send records that have opted out\n if not getattr(record, \"send_to_orion\", True):\n return # Backwards compatibility\n\n log = self.prepare(record)\n APILogWorker.instance().send(log)\n\n except Exception:\n self.handleError(record)\n\n def handleError(self, record: logging.LogRecord) -> None:\n _, exc, _ = sys.exc_info()\n\n if isinstance(exc, MissingContextError):\n log_handling_when_missing_flow = (\n PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW.value()\n )\n if log_handling_when_missing_flow == \"warn\":\n # Warn when a logger is used outside of a run context, the stack level here\n # gets us to the user logging call\n warnings.warn(str(exc), stacklevel=8)\n return\n elif log_handling_when_missing_flow == \"ignore\":\n return\n else:\n raise exc\n\n # Display a longer traceback for other errors\n return super().handleError(record)\n\n def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n \"\"\"\n Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n This infers the linked flow or task run from the log record or the current\n run context.\n\n If a flow run id cannot be found, the log will be dropped.\n\n Logs exceeding the maximum size will be dropped.\n \"\"\"\n flow_run_id = getattr(record, \"flow_run_id\", None)\n task_run_id = getattr(record, \"task_run_id\", None)\n\n if not flow_run_id:\n try:\n context = prefect.context.get_run_context()\n except MissingContextError:\n raise MissingContextError(\n f\"Logger {record.name!r} attempted to send logs to the API without\"\n \" a flow run id. 
The API log handler can only send logs within\"\n \" flow run contexts unless the flow run id is manually provided.\"\n ) from None\n\n if hasattr(context, \"flow_run\"):\n flow_run_id = context.flow_run.id\n elif hasattr(context, \"task_run\"):\n flow_run_id = context.task_run.flow_run_id\n task_run_id = task_run_id or context.task_run.id\n else:\n raise ValueError(\n \"Encountered malformed run context. Does not contain flow or task \"\n \"run information.\"\n )\n\n # Parsing to a `LogCreate` object here gives us nice parsing error messages\n # from the standard lib `handleError` method if something goes wrong and\n # prevents malformed logs from entering the queue\n try:\n is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n )\n except ValueError:\n is_uuid_like = False\n\n log = LogCreate(\n flow_run_id=flow_run_id if is_uuid_like else None,\n task_run_id=task_run_id,\n name=record.name,\n level=record.levelno,\n timestamp=pendulum.from_timestamp(\n getattr(record, \"created\", None) or time.time()\n ),\n message=self.format(record),\n ).dict(json_compatible=True)\n\n log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n raise ValueError(\n f\"Log of size {log_size} is greater than the max size of \"\n f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n )\n\n return log\n\n def _get_payload_size(self, log: Dict[str, Any]) -> int:\n return len(json.dumps(log).encode())\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.aflush","title":"aflush
classmethod
","text":"Tell the APILogWorker
to send any currently enqueued logs and block until completion.
If called in a synchronous context, will only block up to 5s before returning.
Source code inprefect/logging/handlers.py
@classmethod\ndef aflush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n If called in a synchronous context, will only block up to 5s before returning.\n \"\"\"\n\n if not get_running_loop():\n raise RuntimeError(\n \"`aflush` cannot be used from a synchronous context; use `flush`\"\n \" instead.\"\n )\n\n return APILogWorker.drain_all()\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.emit","title":"emit
","text":"Send a log to the APILogWorker
prefect/logging/handlers.py
def emit(self, record: logging.LogRecord):\n \"\"\"\n Send a log to the `APILogWorker`\n \"\"\"\n try:\n profile = prefect.context.get_settings_context()\n\n if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n return # Respect the global settings toggle\n if not getattr(record, \"send_to_api\", True):\n return # Do not send records that have opted out\n if not getattr(record, \"send_to_orion\", True):\n return # Backwards compatibility\n\n log = self.prepare(record)\n APILogWorker.instance().send(log)\n\n except Exception:\n self.handleError(record)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.flush","title":"flush
classmethod
","text":"Tell the APILogWorker
to send any currently enqueued logs and block until completion.
Use aflush
from async contexts instead.
prefect/logging/handlers.py
@classmethod\ndef flush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n Use `aflush` from async contexts instead.\n \"\"\"\n loop = get_running_loop()\n if loop:\n if in_global_loop(): # Guard against internal misuse\n raise RuntimeError(\n \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n \" would block the event loop and cause a deadlock. Use\"\n \" `APILogWorker.aflush` instead.\"\n )\n\n # Not ideal, but this method is called by the stdlib and cannot return a\n # coroutine so we just schedule the drain in a new thread and continue\n from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n return None\n else:\n # We set a timeout of 5s because we don't want to block forever if the worker\n # is stuck. This can occur when the handler is being shutdown and the\n # `logging._lock` is held but the worker is attempting to emit logs resulting\n # in a deadlock.\n return APILogWorker.drain_all(timeout=5)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.prepare","title":"prepare
","text":"Convert a logging.LogRecord
to the API LogCreate
schema and serialize.
This infers the linked flow or task run from the log record or the current run context.
If a flow run id cannot be found, the log will be dropped.
Logs exceeding the maximum size will be dropped.
Source code inprefect/logging/handlers.py
def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n \"\"\"\n Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n This infers the linked flow or task run from the log record or the current\n run context.\n\n If a flow run id cannot be found, the log will be dropped.\n\n Logs exceeding the maximum size will be dropped.\n \"\"\"\n flow_run_id = getattr(record, \"flow_run_id\", None)\n task_run_id = getattr(record, \"task_run_id\", None)\n\n if not flow_run_id:\n try:\n context = prefect.context.get_run_context()\n except MissingContextError:\n raise MissingContextError(\n f\"Logger {record.name!r} attempted to send logs to the API without\"\n \" a flow run id. The API log handler can only send logs within\"\n \" flow run contexts unless the flow run id is manually provided.\"\n ) from None\n\n if hasattr(context, \"flow_run\"):\n flow_run_id = context.flow_run.id\n elif hasattr(context, \"task_run\"):\n flow_run_id = context.task_run.flow_run_id\n task_run_id = task_run_id or context.task_run.id\n else:\n raise ValueError(\n \"Encountered malformed run context. Does not contain flow or task \"\n \"run information.\"\n )\n\n # Parsing to a `LogCreate` object here gives us nice parsing error messages\n # from the standard lib `handleError` method if something goes wrong and\n # prevents malformed logs from entering the queue\n try:\n is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n )\n except ValueError:\n is_uuid_like = False\n\n log = LogCreate(\n flow_run_id=flow_run_id if is_uuid_like else None,\n task_run_id=task_run_id,\n name=record.name,\n level=record.levelno,\n timestamp=pendulum.from_timestamp(\n getattr(record, \"created\", None) or time.time()\n ),\n message=self.format(record),\n ).dict(json_compatible=True)\n\n log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n raise ValueError(\n f\"Log of size {log_size} is greater than the max size of \"\n f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n )\n\n return log\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.PrefectConsoleHandler","title":"PrefectConsoleHandler
","text":" Bases: StreamHandler
prefect/logging/handlers.py
class PrefectConsoleHandler(logging.StreamHandler):\n def __init__(\n self,\n stream=None,\n highlighter: Highlighter = PrefectConsoleHighlighter,\n styles: Dict[str, str] = None,\n level: Union[int, str] = logging.NOTSET,\n ):\n \"\"\"\n The default console handler for Prefect, which highlights log levels,\n web and file URLs, flow and task (run) names, and state types in the\n local console (terminal).\n\n Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting.\n For finer control, use logging.yml to add or remove styles, and/or\n adjust colors.\n \"\"\"\n super().__init__(stream=stream)\n\n styled_console = PREFECT_LOGGING_COLORS.value()\n markup_console = PREFECT_LOGGING_MARKUP.value()\n if styled_console:\n highlighter = highlighter()\n theme = Theme(styles, inherit=False)\n else:\n highlighter = NullHighlighter()\n theme = Theme(inherit=False)\n\n self.level = level\n self.console = Console(\n highlighter=highlighter,\n theme=theme,\n file=self.stream,\n markup=markup_console,\n )\n\n def emit(self, record: logging.LogRecord):\n try:\n message = self.format(record)\n self.console.print(message, soft_wrap=True)\n except RecursionError:\n # This was copied over from logging.StreamHandler().emit()\n # https://bugs.python.org/issue36272\n raise\n except Exception:\n self.handleError(record)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/highlighters/","title":"highlighters","text":"\"\"\"
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers","title":"prefect.logging.loggers
","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter
","text":" Bases: LoggerAdapter
Adapter that ensures extra kwargs are passed through correctly; without this the extra
fields set on the adapter would overshadow any provided on a log-by-log basis.
See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.
Source code inprefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n from prefect._internal.compatibility.deprecated import (\n PrefectDeprecationWarning,\n generate_deprecation_message,\n )\n\n if \"send_to_orion\" in kwargs[\"extra\"]:\n warnings.warn(\n generate_deprecation_message(\n 'The \"send_to_orion\" option',\n start_date=\"May 2023\",\n help='Use \"send_to_api\" instead.',\n ),\n PrefectDeprecationWarning,\n stacklevel=4,\n )\n\n return (msg, kwargs)\n\n def getChild(\n self, suffix: str, extra: Optional[Dict[str, str]] = None\n ) -> \"PrefectLogAdapter\":\n if extra is None:\n extra = {}\n\n return PrefectLogAdapter(\n self.logger.getChild(suffix),\n extra={\n **self.extra,\n **extra,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_logger","title":"disable_logger
","text":"Get a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.
Source code inprefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n \"\"\"\n Get a logger by name and disables it within the context manager.\n Upon exiting the context manager, the logger is returned to its\n original state.\n \"\"\"\n logger = logging.getLogger(name=name)\n\n # determine if it's already disabled\n base_state = logger.disabled\n try:\n # disable the logger\n logger.disabled = True\n yield\n finally:\n # return to base state\n logger.disabled = base_state\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger
","text":"Gets both prefect.flow_run
and prefect.task_run
and disables them within the context manager. Upon exiting the context manager, both loggers are returned to its original state.
prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n \"\"\"\n Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n within the context manager. Upon exiting the context manager, both loggers\n are returned to its original state.\n \"\"\"\n with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n yield\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger
","text":"Create a flow run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the flow run context is available, see get_run_logger
instead.
prefect/logging/loggers.py
def flow_run_logger(\n flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n flow: Optional[\"Flow\"] = None,\n **kwargs: str,\n):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the flow run context is available, see `get_run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Defaultcontext
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type DescriptionRuntimeError
If no context can be found
Source code inprefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.patch_print","title":"patch_print
","text":"Patches the Python builtin print
method to use print_as_log
prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n \"\"\"\n Patches the Python builtin `print` method to use `print_as_log`\n \"\"\"\n import builtins\n\n original = builtins.print\n\n try:\n builtins.print = print_as_log\n yield\n finally:\n builtins.print = original\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.print_as_log","title":"print_as_log
","text":"A patch for print
to send printed messages to the Prefect run logger.
If no run is active, print
will behave as if it were not patched.
If print
sends data to a file other than sys.stdout
or sys.stderr
, it will not be forwarded to the Prefect logger either.
prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n \"\"\"\n A patch for `print` to send printed messages to the Prefect run logger.\n\n If no run is active, `print` will behave as if it were not patched.\n\n If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n not be forwarded to the Prefect logger either.\n \"\"\"\n from prefect.context import FlowRunContext, TaskRunContext\n\n context = TaskRunContext.get() or FlowRunContext.get()\n if (\n not context\n or not context.log_prints\n or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n ):\n return print(*args, **kwargs)\n\n logger = get_run_logger()\n\n # Print to an in-memory buffer; so we do not need to implement `print`\n buffer = io.StringIO()\n kwargs[\"file\"] = buffer\n print(*args, **kwargs)\n\n # Remove trailing whitespace to prevent duplicates\n logger.info(buffer.getvalue().rstrip())\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.task_run_logger","title":"task_run_logger
","text":"Create a task run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the task run context is available, see get_run_logger
instead.
If only the flow run context is available, it will be used for default values of flow_run
and flow
.
prefect/logging/loggers.py
def task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the task run context is available, see `get_run_logger` instead.\n\n If only the flow run context is available, it will be used for default values\n of `flow_run` and `flow`.\n \"\"\"\n if not flow_run or not flow:\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run or flow_run_context.flow_run\n flow = flow or flow_run_context.flow\n\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/","title":"loggers","text":"\"\"\"
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers","title":"prefect.logging.loggers
","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter
","text":" Bases: LoggerAdapter
Adapter that ensures extra kwargs are passed through correctly; without this the extra
fields set on the adapter would overshadow any provided on a log-by-log basis.
See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.
Source code in prefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n from prefect._internal.compatibility.deprecated import (\n PrefectDeprecationWarning,\n generate_deprecation_message,\n )\n\n if \"send_to_orion\" in kwargs[\"extra\"]:\n warnings.warn(\n generate_deprecation_message(\n 'The \"send_to_orion\" option',\n start_date=\"May 2023\",\n help='Use \"send_to_api\" instead.',\n ),\n PrefectDeprecationWarning,\n stacklevel=4,\n )\n\n return (msg, kwargs)\n\n def getChild(\n self, suffix: str, extra: Optional[Dict[str, str]] = None\n ) -> \"PrefectLogAdapter\":\n if extra is None:\n extra = {}\n\n return PrefectLogAdapter(\n self.logger.getChild(suffix),\n extra={\n **self.extra,\n **extra,\n },\n )\n
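A small sketch of the merge behavior this adapter guarantees (the logger and field names are illustrative):

import logging

from prefect.logging.loggers import PrefectLogAdapter

adapter = PrefectLogAdapter(
    logging.getLogger("prefect.example"), extra={"flow_name": "etl"}
)

# Unlike the stock LoggerAdapter, the per-call `extra` is merged with the
# adapter-level `extra`, so the emitted record carries both fields.
adapter.info("loaded rows", extra={"row_count": 1000})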
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_logger","title":"disable_logger
","text":"Get a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.
Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n \"\"\"\n Get a logger by name and disables it within the context manager.\n Upon exiting the context manager, the logger is returned to its\n original state.\n \"\"\"\n logger = logging.getLogger(name=name)\n\n # determine if it's already disabled\n base_state = logger.disabled\n try:\n # disable the logger\n logger.disabled = True\n yield\n finally:\n # return to base state\n logger.disabled = base_state\n
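For example, to silence one named logger for the duration of a block (the logger name is illustrative):

import logging

from prefect.logging.loggers import disable_logger

logging.basicConfig(level=logging.INFO)
noisy = logging.getLogger("noisy.dependency")

with disable_logger("noisy.dependency"):
    noisy.info("suppressed")  # emits nothing while disabled

noisy.info("visible again")  # the original state is restored on exit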
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger
","text":"Gets both prefect.flow_run
and prefect.task_run
and disables them within the context manager. Upon exit, both loggers are returned to their original states.
Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n \"\"\"\n Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n within the context manager. Upon exiting the context manager, both loggers\n are returned to its original state.\n \"\"\"\n with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n yield\n
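This is useful in unit tests that call a task's underlying function directly, where no run context exists; a sketch:

from prefect import task, get_run_logger
from prefect.logging.loggers import disable_run_logger

@task
def add(x: int, y: int) -> int:
    get_run_logger().info("adding %s + %s", x, y)
    return x + y

# Outside a run context, get_run_logger() would raise; with both run
# loggers disabled it falls back to the "null" logger instead.
with disable_run_logger():
    assert add.fn(2, 3) == 5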
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger
","text":"Create a flow run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the flow run context is available, see get_run_logger
instead.
Source code in prefect/logging/loggers.py
def flow_run_logger(\n flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n flow: Optional[\"Flow\"] = None,\n **kwargs: str,\n):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the flow run context is available, see `get_run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
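A sketch of direct use with a FlowRun object fetched from the API (the extra field name is illustrative):

from prefect.client.orchestration import get_client
from prefect.logging.loggers import flow_run_logger

async def annotate(flow_run_id):
    async with get_client() as client:
        flow_run = await client.read_flow_run(flow_run_id)
        # keyword arguments become fields on every record from this logger
        logger = flow_run_logger(flow_run, trigger="manual-inspection")
        logger.info("inspected outside of the run context")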
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
Source code in prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
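A short sketch of the namespacing rule:

import logging

from prefect.logging.loggers import get_logger

child = get_logger("my_module")             # becomes "prefect.my_module"
explicit = get_logger("prefect.my_module")  # full names are not re-prefixed

# Both calls resolve to the same underlying stdlib logger object
assert child is explicit is logging.getLogger("prefect.my_module")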
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Defaultcontext
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type DescriptionRuntimeError
If no context can be found
Source code in prefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
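Typical use inside a flow and a task:

from prefect import flow, task, get_run_logger

@task
def greet(name: str):
    get_run_logger().info("Hello, %s!", name)  # named "prefect.task_runs"

@flow
def hello(name: str = "world"):
    get_run_logger().info("Starting hello")  # named "prefect.flow_runs"
    greet(name)

if __name__ == "__main__":
    hello()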
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.patch_print","title":"patch_print
","text":"Patches the Python builtin print
method to use print_as_log.
Source code in prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n \"\"\"\n Patches the Python builtin `print` method to use `print_as_log`\n \"\"\"\n import builtins\n\n original = builtins.print\n\n try:\n builtins.print = print_as_log\n yield\n finally:\n builtins.print = original\n
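A sketch of the swap-and-restore mechanics:

import builtins

from prefect.logging.loggers import patch_print, print_as_log

with patch_print():
    assert builtins.print is print_as_log  # print is rerouted inside the block

assert builtins.print is not print_as_log  # the original print is restored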
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.print_as_log","title":"print_as_log
","text":"A patch for print
to send printed messages to the Prefect run logger.
If no run is active, print
will behave as if it were not patched.
If print
sends data to a file other than sys.stdout
or sys.stderr
, it will not be forwarded to the Prefect logger either.
Source code in prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n \"\"\"\n A patch for `print` to send printed messages to the Prefect run logger.\n\n If no run is active, `print` will behave as if it were not patched.\n\n If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n not be forwarded to the Prefect logger either.\n \"\"\"\n from prefect.context import FlowRunContext, TaskRunContext\n\n context = TaskRunContext.get() or FlowRunContext.get()\n if (\n not context\n or not context.log_prints\n or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n ):\n return print(*args, **kwargs)\n\n logger = get_run_logger()\n\n # Print to an in-memory buffer; so we do not need to implement `print`\n buffer = io.StringIO()\n kwargs[\"file\"] = buffer\n print(*args, **kwargs)\n\n # Remove trailing whitespace to prevent duplicates\n logger.info(buffer.getvalue().rstrip())\n
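In everyday use this behavior is reached through the log_prints flag rather than by calling the function directly; a sketch:

from prefect import flow

@flow(log_prints=True)
def my_flow():
    # routed through print_as_log to the run logger at INFO level
    print("hello from a flow run")

if __name__ == "__main__":
    my_flow()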
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.task_run_logger","title":"task_run_logger
","text":"Create a task run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the task run context is available, see get_run_logger
instead.
If only the flow run context is available, it will be used for default values of flow_run
and flow
.
Source code in prefect/logging/loggers.py
def task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the task run context is available, see `get_run_logger` instead.\n\n If only the flow run context is available, it will be used for default values\n of `flow_run` and `flow`.\n \"\"\"\n if not flow_run or not flow:\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run or flow_run_context.flow_run\n flow = flow or flow_run_context.flow\n\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/runner/runner/","title":"runner","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner","title":"prefect.runner.runner
","text":"Runners are responsible for managing the execution of deployments created and managed by either flow.serve
or the serve
utility.
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n # serve generates a Runner instance\n serve(slow_deploy, fast_deploy)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner","title":"Runner
","text":"Source code in prefect/runner/runner.py
class Runner:\n def __init__(\n self,\n name: Optional[str] = None,\n query_seconds: Optional[float] = None,\n prefetch_seconds: float = 10,\n limit: Optional[int] = None,\n pause_on_shutdown: bool = True,\n webserver: bool = False,\n ):\n \"\"\"\n Responsible for managing the execution of remotely initiated flow runs.\n\n Args:\n name: The name of the runner. If not provided, a random one\n will be generated. If provided, it cannot contain '/' or '%'.\n query_seconds: The number of seconds to wait between querying for\n scheduled flow runs; defaults to `PREFECT_RUNNER_POLL_FREQUENCY`\n prefetch_seconds: The number of seconds to prefetch flow runs for.\n limit: The maximum number of flow runs this runner should be running at\n pause_on_shutdown: A boolean for whether or not to automatically pause\n deployment schedules on shutdown; defaults to `True`\n webserver: a boolean flag for whether to start a webserver for this runner\n\n Examples:\n Set up a Runner to manage the execute of scheduled flow runs for two flows:\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\"\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n if name and (\"/\" in name or \"%\" in name):\n raise ValueError(\"Runner name cannot contain '/' or '%'\")\n self.name = Path(name).stem if name is not None else f\"runner-{uuid4()}\"\n self._logger = get_logger(\"runner\")\n\n self.started = False\n self.stopping = False\n self.pause_on_shutdown = pause_on_shutdown\n self.limit = limit or PREFECT_RUNNER_PROCESS_LIMIT.value()\n self.webserver = webserver\n\n self.query_seconds = query_seconds or PREFECT_RUNNER_POLL_FREQUENCY.value()\n self._prefetch_seconds = prefetch_seconds\n\n self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n self._loops_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n\n self._limiter: Optional[anyio.CapacityLimiter] = anyio.CapacityLimiter(\n self.limit\n )\n self._client = get_client()\n self._submitting_flow_run_ids = set()\n self._cancelling_flow_run_ids = set()\n self._scheduled_task_scopes = set()\n self._deployment_ids: Set[UUID] = set()\n self._flow_run_process_map = dict()\n\n self._tmp_dir: Path = (\n Path(tempfile.gettempdir()) / \"runner_storage\" / str(uuid4())\n )\n self._storage_objs: List[RunnerStorage] = []\n self._deployment_storage_map: Dict[UUID, RunnerStorage] = {}\n self._loop = asyncio.get_event_loop()\n\n @sync_compatible\n async def add_deployment(\n self,\n deployment: RunnerDeployment,\n ) -> UUID:\n \"\"\"\n Registers the deployment with the Prefect API and will monitor for work once\n the runner is started.\n\n Args:\n deployment: A deployment for the runner to register.\n \"\"\"\n deployment_id = await deployment.apply()\n storage = deployment.storage\n if storage is not None:\n storage = await self._add_storage(storage)\n self._deployment_storage_map[deployment_id] = storage\n self._deployment_ids.add(deployment_id)\n\n return deployment_id\n\n @sync_compatible\n async def add_flow(\n self,\n flow: Flow,\n name: str = None,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = 
None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> UUID:\n \"\"\"\n Provides a flow to the runner to be run based on the provided configuration.\n\n Will create a deployment for the provided flow and register the deployment\n with the runner.\n\n Args:\n flow: A flow for the runner to run.\n name: The name to give the created deployment. Will default to the name\n of the runner.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n \"\"\"\n api = PREFECT_API_URL.value()\n if any([interval, cron, rrule]) and not api:\n self._logger.warning(\n \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n \" start` to start the scheduler.\"\n )\n name = self.name if name is None else name\n\n deployment = await flow.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n schedule=schedule,\n paused=paused,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n return await self.add_deployment(deployment)\n\n @sync_compatible\n async def _add_storage(self, storage: RunnerStorage) -> RunnerStorage:\n \"\"\"\n Adds a storage object to the runner. 
The storage object will be used to pull\n code to the runner's working directory before the runner starts.\n\n Args:\n storage: The storage object to add to the runner.\n Returns:\n The updated storage object that was added to the runner.\n \"\"\"\n if storage not in self._storage_objs:\n storage_copy = deepcopy(storage)\n storage_copy.set_base_path(self._tmp_dir)\n\n self._logger.debug(\n f\"Adding storage {storage_copy!r} to runner at\"\n f\" {str(storage_copy.destination)!r}\"\n )\n self._storage_objs.append(storage_copy)\n\n return storage_copy\n else:\n return next(s for s in self._storage_objs if s == storage)\n\n def handle_sigterm(self, signum, frame):\n \"\"\"\n Gracefully shuts down the runner when a SIGTERM is received.\n \"\"\"\n self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n from_sync.call_in_loop_thread(create_call(self.stop))\n\n sys.exit(0)\n\n @sync_compatible\n async def start(\n self, run_once: bool = False, webserver: Optional[bool] = None\n ) -> None:\n \"\"\"\n Starts a runner.\n\n The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n Args:\n run_once: If True, the runner will through one query loop and then exit.\n webserver: a boolean for whether to start a webserver for this runner. If provided,\n overrides the default on the runner\n\n Examples:\n Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\"\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n webserver = webserver if webserver is not None else self.webserver\n\n if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"runner-server-thread\",\n target=partial(\n start_webserver,\n runner=self,\n ),\n daemon=True,\n )\n server_thread.start()\n\n async with self as runner:\n async with self._loops_task_group as tg:\n for storage in self._storage_objs:\n if storage.pull_interval:\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=storage.pull_code,\n interval=storage.pull_interval,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n else:\n tg.start_soon(storage.pull_code)\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._get_and_submit_flow_runs,\n interval=self.query_seconds,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._check_for_cancelled_flow_runs,\n interval=self.query_seconds * 2,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n\n def execute_in_background(self, func, *args, **kwargs):\n \"\"\"\n Executes a function in the background.\n \"\"\"\n\n return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n\n async def cancel_all(self):\n runs_to_cancel = []\n\n # done to avoid dictionary size changing during iteration\n for info in self._flow_run_process_map.values():\n runs_to_cancel.append(info[\"flow_run\"])\n if runs_to_cancel:\n for run in runs_to_cancel:\n try:\n await self._cancel_run(run, 
state_msg=\"Runner is shutting down.\")\n except Exception:\n self._logger.exception(\n f\"Exception encountered while cancelling {run.id}\",\n exc_info=True,\n )\n\n @sync_compatible\n async def stop(self):\n \"\"\"Stops the runner's polling cycle.\"\"\"\n if not self.started:\n raise RuntimeError(\n \"Runner has not yet started. Please start the runner by calling\"\n \" .start()\"\n )\n\n self.started = False\n self.stopping = True\n await self.cancel_all()\n try:\n self._loops_task_group.cancel_scope.cancel()\n except Exception:\n self._logger.exception(\n \"Exception encountered while shutting down\", exc_info=True\n )\n\n async def execute_flow_run(\n self, flow_run_id: UUID, entrypoint: Optional[str] = None\n ):\n \"\"\"\n Executes a single flow run with the given ID.\n\n Execution will wait to monitor for cancellation requests. Exits once\n the flow run process has exited.\n \"\"\"\n self.pause_on_shutdown = False\n context = self if not self.started else asyncnullcontext()\n\n async with context:\n if not self._acquire_limit_slot(flow_run_id):\n return\n\n async with anyio.create_task_group() as tg:\n with anyio.CancelScope():\n self._submitting_flow_run_ids.add(flow_run_id)\n flow_run = await self._client.read_flow_run(flow_run_id)\n\n pid = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n self._flow_run_process_map[flow_run.id] = dict(\n pid=pid, flow_run=flow_run\n )\n\n # We want this loop to stop when the flow run process exits\n # so we'll check if the flow run process is still alive on\n # each iteration and cancel the task group if it is not.\n workload = partial(\n self._check_for_cancelled_flow_runs,\n should_stop=lambda: not self._flow_run_process_map,\n on_stop=tg.cancel_scope.cancel,\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=workload,\n interval=self.query_seconds,\n jitter_range=0.3,\n )\n )\n\n def _get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n return flow_run_logger(flow_run=flow_run).getChild(\n \"runner\",\n extra={\n \"runner_name\": self.name,\n },\n )\n\n async def _run_process(\n self,\n flow_run: \"FlowRun\",\n task_status: Optional[anyio.abc.TaskStatus] = None,\n entrypoint: Optional[str] = None,\n ):\n \"\"\"\n Runs the given flow run in a subprocess.\n\n Args:\n flow_run: Flow run to execute via process. 
The ID of this flow run\n is stored in the PREFECT__FLOW_RUN_ID environment variable to\n allow the engine to retrieve the corresponding flow's code and\n begin execution.\n task_status: anyio task status used to send a message to the caller\n than the flow run process has started.\n \"\"\"\n command = f\"{shlex.quote(sys.executable)} -m prefect.engine\"\n\n flow_run_logger = self._get_flow_run_logger(flow_run)\n\n # We must add creationflags to a dict so it is only passed as a function\n # parameter on Windows, because the presence of creationflags causes\n # errors on Unix even if set to None\n kwargs: Dict[str, object] = {}\n if sys.platform == \"win32\":\n kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n _use_threaded_child_watcher()\n flow_run_logger.info(\"Opening process...\")\n\n env = get_current_settings().to_environment_variables(exclude_unset=True)\n env.update(\n {\n **{\n \"PREFECT__FLOW_RUN_ID\": str(flow_run.id),\n \"PREFECT__STORAGE_BASE_PATH\": str(self._tmp_dir),\n \"PREFECT__ENABLE_CANCELLATION_AND_CRASHED_HOOKS\": \"false\",\n },\n **({\"PREFECT__FLOW_ENTRYPOINT\": entrypoint} if entrypoint else {}),\n }\n )\n env.update(**os.environ) # is this really necessary??\n\n storage = self._deployment_storage_map.get(flow_run.deployment_id)\n if storage and storage.pull_interval:\n # perform an adhoc pull of code before running the flow if an\n # adhoc pull hasn't been performed in the last pull_interval\n # TODO: Explore integrating this behavior with global concurrency.\n last_adhoc_pull = getattr(storage, \"last_adhoc_pull\", None)\n if (\n last_adhoc_pull is None\n or last_adhoc_pull\n < datetime.datetime.now()\n - datetime.timedelta(seconds=storage.pull_interval)\n ):\n self._logger.debug(\n \"Performing adhoc pull of code for flow run %s with storage %r\",\n flow_run.id,\n storage,\n )\n await storage.pull_code()\n setattr(storage, \"last_adhoc_pull\", datetime.datetime.now())\n\n process = await run_process(\n shlex.split(command),\n stream_output=True,\n task_status=task_status,\n env=env,\n **kwargs,\n cwd=storage.destination if storage else None,\n )\n\n # Use the pid for display if no name was given\n\n if process.returncode:\n help_message = None\n level = logging.ERROR\n if process.returncode == -9:\n level = logging.INFO\n help_message = (\n \"This indicates that the process exited due to a SIGKILL signal. \"\n \"Typically, this is either caused by manual cancellation or \"\n \"high memory usage causing the operating system to \"\n \"terminate the process.\"\n )\n if process.returncode == -15:\n level = logging.INFO\n help_message = (\n \"This indicates that the process exited due to a SIGTERM signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n elif process.returncode == 247:\n help_message = (\n \"This indicates that the process was terminated due to high \"\n \"memory usage.\"\n )\n elif (\n sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n ):\n level = logging.INFO\n help_message = (\n \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. 
\"\n \"Typically, this is caused by manual cancellation.\"\n )\n\n flow_run_logger.log(\n level,\n f\"Process for flow run {flow_run.name!r} exited with status code:\"\n f\" {process.returncode}\"\n + (f\"; {help_message}\" if help_message else \"\"),\n )\n else:\n flow_run_logger.info(\n f\"Process for flow run {flow_run.name!r} exited cleanly.\"\n )\n\n return process.returncode\n\n async def _kill_process(\n self,\n pid: int,\n grace_seconds: int = 30,\n ):\n \"\"\"\n Kills a given flow run process.\n\n Args:\n pid: ID of the process to kill\n grace_seconds: Number of seconds to wait for the process to end.\n \"\"\"\n # In a non-windows environment first send a SIGTERM, then, after\n # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n if sys.platform == \"win32\":\n try:\n os.kill(pid, signal.CTRL_BREAK_EVENT)\n except (ProcessLookupError, WindowsError):\n raise RuntimeError(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n else:\n try:\n os.kill(pid, signal.SIGTERM)\n except ProcessLookupError:\n raise RuntimeError(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n\n # Throttle how often we check if the process is still alive to keep\n # from making too many system calls in a short period of time.\n check_interval = max(grace_seconds / 10, 1)\n\n with anyio.move_on_after(grace_seconds):\n while True:\n await anyio.sleep(check_interval)\n\n # Detect if the process is still alive. If not do an early\n # return as the process respected the SIGTERM from above.\n try:\n os.kill(pid, 0)\n except ProcessLookupError:\n return\n\n try:\n os.kill(pid, signal.SIGKILL)\n except OSError:\n # We shouldn't ever end up here, but it's possible that the\n # process ended right after the check above.\n return\n\n async def _pause_schedules(self):\n \"\"\"\n Pauses all deployment schedules.\n \"\"\"\n self._logger.info(\"Pausing all deployments...\")\n for deployment_id in self._deployment_ids:\n self._logger.debug(f\"Pausing deployment '{deployment_id}'\")\n await self._client.set_deployment_paused_state(deployment_id, True)\n self._logger.info(\"All deployments have been paused!\")\n\n async def _get_and_submit_flow_runs(self):\n if self.stopping:\n return\n runs_response = await self._get_scheduled_flow_runs()\n self.last_polled = pendulum.now(\"UTC\")\n return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n async def _check_for_cancelled_flow_runs(\n self, should_stop: Callable = lambda: False, on_stop: Callable = lambda: None\n ):\n \"\"\"\n Checks for flow runs with CANCELLING a cancelling state and attempts to\n cancel them.\n\n Args:\n should_stop: A callable that returns a boolean indicating whether or not\n the runner should stop checking for cancelled flow runs.\n on_stop: A callable that is called when the runner should stop checking\n for cancelled flow runs.\n \"\"\"\n if self.stopping:\n return\n if not self.started:\n raise RuntimeError(\n \"Runner is not set up. Please make sure you are running this runner \"\n \"as an async context manager.\"\n )\n\n if should_stop():\n self._logger.debug(\n \"Runner has no active flow runs or deployments. 
Sending message to loop\"\n \" service that no further cancellation checks are needed.\"\n )\n on_stop()\n\n self._logger.debug(\"Checking for cancelled flow runs...\")\n\n named_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(\n any_=list(\n self._flow_run_process_map.keys()\n - self._cancelling_flow_run_ids\n )\n ),\n ),\n )\n\n typed_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(\n any_=list(\n self._flow_run_process_map.keys()\n - self._cancelling_flow_run_ids\n )\n ),\n ),\n )\n\n cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n if cancelling_flow_runs:\n self._logger.info(\n f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n )\n\n for flow_run in cancelling_flow_runs:\n self._cancelling_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(self._cancel_run, flow_run)\n\n return cancelling_flow_runs\n\n async def _cancel_run(self, flow_run: \"FlowRun\", state_msg: Optional[str] = None):\n run_logger = self._get_flow_run_logger(flow_run)\n\n pid = self._flow_run_process_map.get(flow_run.id, {}).get(\"pid\")\n if not pid:\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"Could not find process ID for flow run\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n await self._kill_process(pid)\n except RuntimeError as exc:\n self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(flow_run)\n except Exception:\n run_logger.exception(\n \"Encountered exception while killing process for flow run \"\n f\"'{flow_run.id}'. 
Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self._cancelling_flow_run_ids.remove(flow_run.id)\n else:\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": state_msg or \"Flow run was cancelled successfully.\"\n },\n )\n run_logger.info(f\"Cancelled flow run '{flow_run.name}'!\")\n\n async def _get_scheduled_flow_runs(\n self,\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Retrieve scheduled flow runs for this runner.\n \"\"\"\n scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n self._logger.debug(\n f\"Querying for flow runs scheduled before {scheduled_before}\"\n )\n\n scheduled_flow_runs = (\n await self._client.get_scheduled_flow_runs_for_deployments(\n deployment_ids=list(self._deployment_ids),\n scheduled_before=scheduled_before,\n )\n )\n self._logger.debug(f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\")\n return scheduled_flow_runs\n\n def has_slots_available(self) -> bool:\n \"\"\"\n Determine if the flow run limit has been reached.\n\n Returns:\n - bool: True if the limit has not been reached, False otherwise.\n \"\"\"\n return self._limiter.available_tokens > 0\n\n def _acquire_limit_slot(self, flow_run_id: str) -> bool:\n \"\"\"\n Enforces flow run limit set on runner.\n\n Returns:\n - bool: True if a slot was acquired, False otherwise.\n \"\"\"\n try:\n if self._limiter:\n self._limiter.acquire_on_behalf_of_nowait(flow_run_id)\n self._logger.debug(\"Limit slot acquired for flow run '%s'\", flow_run_id)\n return True\n except RuntimeError as exc:\n if (\n \"this borrower is already holding one of this CapacityLimiter's tokens\"\n in str(exc)\n ):\n self._logger.warning(\n f\"Duplicate submission of flow run '{flow_run_id}' detected. Runner\"\n \" will not re-submit flow run.\"\n )\n return False\n else:\n raise\n except anyio.WouldBlock:\n self._logger.info(\n f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n \" in progress. 
You can control this limit by passing a `limit` value\"\n \" to `serve` or adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting.\"\n )\n return False\n\n def _release_limit_slot(self, flow_run_id: str) -> None:\n \"\"\"\n Frees up a slot taken by the given flow run id.\n \"\"\"\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run_id)\n self._logger.debug(\"Limit slot released for flow run '%s'\", flow_run_id)\n\n async def _submit_scheduled_flow_runs(\n self,\n flow_run_response: List[\"FlowRun\"],\n entrypoints: Optional[List[str]] = None,\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Takes a list of FlowRuns and submits the referenced flow runs\n for execution by the runner.\n \"\"\"\n submittable_flow_runs = flow_run_response\n submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n for i, flow_run in enumerate(submittable_flow_runs):\n if flow_run.id in self._submitting_flow_run_ids:\n continue\n\n if self._acquire_limit_slot(flow_run.id):\n run_logger = self._get_flow_run_logger(flow_run)\n run_logger.info(\n f\"Runner '{self.name}' submitting flow run '{flow_run.id}'\"\n )\n self._submitting_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(\n partial(\n self._submit_run,\n flow_run=flow_run,\n entrypoint=(\n entrypoints[i] if entrypoints else None\n ), # TODO: avoid relying on index\n )\n )\n else:\n break\n\n return list(\n filter(\n lambda run: run.id in self._submitting_flow_run_ids,\n submittable_flow_runs,\n )\n )\n\n async def _submit_run(self, flow_run: \"FlowRun\", entrypoint: Optional[str] = None):\n \"\"\"\n Submits a given flow run for execution by the runner.\n \"\"\"\n run_logger = self._get_flow_run_logger(flow_run)\n\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n readiness_result = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n self._flow_run_process_map[flow_run.id] = dict(\n pid=readiness_result, flow_run=flow_run\n )\n\n run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n else:\n # If the run is not ready to submit, release the concurrency slot\n self._release_limit_slot(flow_run.id)\n\n self._submitting_flow_run_ids.remove(flow_run.id)\n\n async def _submit_run_and_capture_errors(\n self,\n flow_run: \"FlowRun\",\n task_status: Optional[anyio.abc.TaskStatus] = None,\n entrypoint: Optional[str] = None,\n ) -> Union[Optional[int], Exception]:\n run_logger = self._get_flow_run_logger(flow_run)\n\n try:\n status_code = await self._run_process(\n flow_run=flow_run,\n task_status=task_status,\n entrypoint=entrypoint,\n )\n except Exception as exc:\n if not task_status._future.done():\n # This flow run was being submitted and did not start successfully\n run_logger.exception(\n f\"Failed to start process for flow run '{flow_run.id}'.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started(exc)\n await self._propose_crashed_state(\n flow_run, \"Flow run process could not be started\"\n )\n else:\n run_logger.exception(\n f\"An error occurred while monitoring flow run '{flow_run.id}'. 
\"\n \"The flow run will not be marked as failed, but an issue may have \"\n \"occurred.\"\n )\n return exc\n finally:\n self._release_limit_slot(flow_run.id)\n self._flow_run_process_map.pop(flow_run.id, None)\n\n if status_code != 0:\n await self._propose_crashed_state(\n flow_run,\n f\"Flow run process exited with non-zero status code {status_code}.\",\n )\n\n api_flow_run = await self._client.read_flow_run(flow_run_id=flow_run.id)\n terminal_state = api_flow_run.state\n if terminal_state.is_crashed():\n await self._run_on_crashed_hooks(flow_run=flow_run, state=terminal_state)\n\n return status_code\n\n async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n run_logger = self._get_flow_run_logger(flow_run)\n state = flow_run.state\n try:\n state = await propose_state(\n self._client, Pending(), flow_run_id=flow_run.id\n )\n except Abort as exc:\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}'. \"\n f\"Server sent an abort signal: {exc}\"\n ),\n )\n return False\n except Exception:\n run_logger.exception(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n )\n return False\n\n if not state.is_pending():\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}': \"\n f\"Server returned a non-pending state {state.type.value!r}\"\n ),\n )\n return False\n\n return True\n\n async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n run_logger = self._get_flow_run_logger(flow_run)\n try:\n await propose_state(\n self._client,\n await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # We've already failed, no need to note the abort but we don't want it to\n # raise in the agent process\n pass\n except Exception:\n run_logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n\n async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n run_logger = self._get_flow_run_logger(flow_run)\n try:\n state = await propose_state(\n self._client,\n Crashed(message=message),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # Flow run already marked as failed\n pass\n except Exception:\n run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n else:\n if state.is_crashed():\n run_logger.info(\n f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n )\n\n async def _mark_flow_run_as_cancelled(\n self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n ) -> None:\n state_updates = state_updates or {}\n state_updates.setdefault(\"name\", \"Cancelled\")\n state_updates.setdefault(\"type\", StateType.CANCELLED)\n state = flow_run.state.copy(update=state_updates)\n\n await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n # Do not remove the flow run from the cancelling set immediately because\n # the API caches responses for the `read_flow_runs` and we do not want to\n # duplicate cancellations.\n await self._schedule_task(\n 60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n )\n\n async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n \"\"\"\n Schedule a background task to start after some time.\n\n These tasks will be run immediately when the runner exits instead of waiting.\n\n The function may be async or sync. 
Async functions will be awaited.\n \"\"\"\n\n async def wrapper(task_status):\n # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n # time or shutdown\n if self.started:\n with anyio.CancelScope() as scope:\n self._scheduled_task_scopes.add(scope)\n task_status.started()\n await anyio.sleep(__in_seconds)\n\n self._scheduled_task_scopes.remove(scope)\n else:\n task_status.started()\n\n result = fn(*args, **kwargs)\n if inspect.iscoroutine(result):\n await result\n\n await self._runs_task_group.start(wrapper)\n\n async def _run_on_cancellation_hooks(\n self,\n flow_run: \"FlowRun\",\n state: State,\n ) -> None:\n \"\"\"\n Run the hooks for a flow.\n \"\"\"\n if state.is_cancelling():\n flow = await load_flow_from_flow_run(\n flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n )\n hooks = flow.on_cancellation or []\n\n await _run_hooks(hooks, flow_run, flow, state)\n\n async def _run_on_crashed_hooks(\n self,\n flow_run: \"FlowRun\",\n state: State,\n ) -> None:\n \"\"\"\n Run the hooks for a flow.\n \"\"\"\n if state.is_crashed():\n flow = await load_flow_from_flow_run(\n flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n )\n hooks = flow.on_crashed or []\n\n await _run_hooks(hooks, flow_run, flow, state)\n\n async def __aenter__(self):\n self._logger.debug(\"Starting runner...\")\n self._client = get_client()\n self._tmp_dir.mkdir(parents=True)\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.started = True\n return self\n\n async def __aexit__(self, *exc_info):\n self._logger.debug(\"Stopping runner...\")\n if self.pause_on_shutdown:\n await self._pause_schedules()\n self.started = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n shutil.rmtree(str(self._tmp_dir))\n\n def __repr__(self):\n return f\"Runner(name={self.name!r})\"\n
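A sketch of driving a Runner directly with a concurrency limit (the names and interval are illustrative):

from prefect import flow, Runner

@flow
def etl():
    ...

if __name__ == "__main__":
    runner = Runner(name="etl-runner", limit=2)

    # Creates and registers a deployment, then polls for scheduled runs
    runner.add_flow(etl, name="etl-every-5-minutes", interval=300)
    runner.start()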
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_deployment","title":"add_deployment
async
","text":"Registers the deployment with the Prefect API and will monitor for work once the runner is started.
Parameters:
Name Type Description Defaultdeployment
RunnerDeployment
A deployment for the runner to register.
required Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_deployment(\n self,\n deployment: RunnerDeployment,\n) -> UUID:\n \"\"\"\n Registers the deployment with the Prefect API and will monitor for work once\n the runner is started.\n\n Args:\n deployment: A deployment for the runner to register.\n \"\"\"\n deployment_id = await deployment.apply()\n storage = deployment.storage\n if storage is not None:\n storage = await self._add_storage(storage)\n self._deployment_storage_map[deployment_id] = storage\n self._deployment_ids.add(deployment_id)\n\n return deployment_id\n
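A sketch pairing flow.to_deployment with add_deployment (the name and schedule are illustrative):

from prefect import flow, Runner

@flow
def my_flow():
    ...

runner = Runner(name="my-runner")
deployment = my_flow.to_deployment(name="nightly", cron="0 2 * * *")

# Applies the deployment via the API and tracks it for scheduled work
deployment_id = runner.add_deployment(deployment)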
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_flow","title":"add_flow
async
","text":"Provides a flow to the runner to be run based on the provided configuration.
Will create a deployment for the provided flow and register the deployment with the runner.
Parameters:
Name Type Description Defaultflow
Flow
A flow for the runner to run.
requiredname
str
The name to give the created deployment. Will default to the name of the runner.
None
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
entrypoint_type
EntrypointType
Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
FILE_PATH
Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_flow(\n self,\n flow: Flow,\n name: str = None,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> UUID:\n \"\"\"\n Provides a flow to the runner to be run based on the provided configuration.\n\n Will create a deployment for the provided flow and register the deployment\n with the runner.\n\n Args:\n flow: A flow for the runner to run.\n name: The name to give the created deployment. Will default to the name\n of the runner.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick off a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n \"\"\"\n api = PREFECT_API_URL.value()\n if any([interval, cron, rrule]) and not api:\n self._logger.warning(\n \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n \" start` to start the scheduler.\"\n )\n name = self.name if name is None else name\n\n deployment = await flow.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n schedule=schedule,\n paused=paused,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n return await self.add_deployment(deployment)\n
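A short usage sketch, assuming a flow defined in the same file (the names and the ten-minute interval are illustrative):
from prefect import flow, Runner\n\n@flow\ndef etl():\n ...\n\nif __name__ == \"__main__\":\n runner = Runner(name=\"etl-runner\")\n # A numeric interval is interpreted as seconds\n runner.add_flow(etl, name=\"etl-every-10-minutes\", interval=600, tags=[\"etl\"])\n runner.start()\n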
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_flow_run","title":"execute_flow_run
async
","text":"Executes a single flow run with the given ID.
Execution will monitor for cancellation requests. Exits once the flow run process has exited.
Source code inprefect/runner/runner.py
async def execute_flow_run(\n self, flow_run_id: UUID, entrypoint: Optional[str] = None\n):\n \"\"\"\n Executes a single flow run with the given ID.\n\n Execution will monitor for cancellation requests. Exits once\n the flow run process has exited.\n \"\"\"\n self.pause_on_shutdown = False\n context = self if not self.started else asyncnullcontext()\n\n async with context:\n if not self._acquire_limit_slot(flow_run_id):\n return\n\n async with anyio.create_task_group() as tg:\n with anyio.CancelScope():\n self._submitting_flow_run_ids.add(flow_run_id)\n flow_run = await self._client.read_flow_run(flow_run_id)\n\n pid = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n self._flow_run_process_map[flow_run.id] = dict(\n pid=pid, flow_run=flow_run\n )\n\n # We want this loop to stop when the flow run process exits\n # so we'll check if the flow run process is still alive on\n # each iteration and cancel the task group if it is not.\n workload = partial(\n self._check_for_cancelled_flow_runs,\n should_stop=lambda: not self._flow_run_process_map,\n on_stop=tg.cancel_scope.cancel,\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=workload,\n interval=self.query_seconds,\n jitter_range=0.3,\n )\n )\n
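A hedged sketch of invoking this method directly; the UUID below is a placeholder for a real flow run ID, so a live API and flow run are assumed:
import asyncio\nfrom uuid import UUID\n\nfrom prefect.runner import Runner\n\nasync def main():\n runner = Runner()\n # Placeholder ID; substitute a real flow run's UUID\n await runner.execute_flow_run(flow_run_id=UUID(\"00000000-0000-0000-0000-000000000000\"))\n\nasyncio.run(main())\n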
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_in_background","title":"execute_in_background
","text":"Executes a function in the background.
Source code inprefect/runner/runner.py
def execute_in_background(self, func, *args, **kwargs):\n \"\"\"\n Executes a function in the background.\n \"\"\"\n\n return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n
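Based on the source above, this submits the coroutine to the runner's event loop from another thread and returns a concurrent.futures.Future. A hedged sketch; it assumes the runner's loop is already running (for example, when called from the runner's webserver thread):
async def say_hello(name: str):\n print(f\"hello {name}\")\n\n# Hypothetical usage from a non-loop thread once the runner has started:\n# future = runner.execute_in_background(say_hello, \"world\")\n# future.result() # blocks until the coroutine completes\n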
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.handle_sigterm","title":"handle_sigterm
","text":"Gracefully shuts down the runner when a SIGTERM is received.
Source code inprefect/runner/runner.py
def handle_sigterm(self, signum, frame):\n \"\"\"\n Gracefully shuts down the runner when a SIGTERM is received.\n \"\"\"\n self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n from_sync.call_in_loop_thread(create_call(self.stop))\n\n sys.exit(0)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.has_slots_available","title":"has_slots_available
","text":"Determine if the flow run limit has been reached.
Returns:
Type Descriptionbool
True if the limit has not been reached, False otherwise.
Source code inprefect/runner/runner.py
def has_slots_available(self) -> bool:\n \"\"\"\n Determine if the flow run limit has been reached.\n\n Returns:\n bool: True if the limit has not been reached, False otherwise.\n \"\"\"\n return self._limiter.available_tokens > 0\n
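A small sketch, assuming the runner was created with a concurrency limit:
from prefect.runner import Runner\n\nrunner = Runner(limit=2) # at most two concurrent flow runs\nif runner.has_slots_available():\n print(\"capacity available; safe to submit another run\")\n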
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.start","title":"start
async
","text":"Starts a runner.
The runner will begin monitoring for and executing any scheduled work for all added flows.
Parameters:
Name Type Description Defaultrun_once
bool
If True, the runner will run through one query loop and then exit.
False
webserver
Optional[bool]
a boolean for whether to start a webserver for this runner. If provided, overrides the default on the runner
None
Examples:
Initialize a Runner, add two flows, and serve them by starting the Runner:
from prefect import flow, Runner\n\n@flow\ndef hello_flow(name):\n print(f\"hello {name}\")\n\n@flow\ndef goodbye_flow(name):\n print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def start(\n self, run_once: bool = False, webserver: Optional[bool] = None\n) -> None:\n \"\"\"\n Starts a runner.\n\n The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n Args:\n run_once: If True, the runner will run through one query loop and then exit.\n webserver: a boolean for whether to start a webserver for this runner. If provided,\n overrides the default on the runner\n\n Examples:\n Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n webserver = webserver if webserver is not None else self.webserver\n\n if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"runner-server-thread\",\n target=partial(\n start_webserver,\n runner=self,\n ),\n daemon=True,\n )\n server_thread.start()\n\n async with self as runner:\n async with self._loops_task_group as tg:\n for storage in self._storage_objs:\n if storage.pull_interval:\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=storage.pull_code,\n interval=storage.pull_interval,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n else:\n tg.start_soon(storage.pull_code)\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._get_and_submit_flow_runs,\n interval=self.query_seconds,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._check_for_cancelled_flow_runs,\n interval=self.query_seconds * 2,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.stop","title":"stop
async
","text":"Stops the runner's polling cycle.
Source code inprefect/runner/runner.py
@sync_compatible\nasync def stop(self):\n \"\"\"Stops the runner's polling cycle.\"\"\"\n if not self.started:\n raise RuntimeError(\n \"Runner has not yet started. Please start the runner by calling\"\n \" .start()\"\n )\n\n self.started = False\n self.stopping = True\n await self.cancel_all()\n try:\n self._loops_task_group.cancel_scope.cancel()\n except Exception:\n self._logger.exception(\n \"Exception encountered while shutting down\", exc_info=True\n )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.serve","title":"serve
async
","text":"Serve the provided list of deployments.
Parameters:
Name Type Description Default*args
RunnerDeployment
A list of deployments to serve.
()
pause_on_shutdown
bool
A boolean for whether or not to automatically pause deployment schedules on shutdown.
True
print_starting_message
bool
Whether or not to print a message to the console on startup.
True
limit
Optional[int]
The maximum number of runs that can be executed concurrently.
None
**kwargs
Additional keyword arguments to pass to the runner.
{}
Examples:
Prepare two deployments and serve them:
import datetime\n\nfrom prefect import flow, serve\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n # Run once a day\n hello_deploy = my_flow.to_deployment(\n \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n )\n\n # Run every Sunday at 4:00 AM\n bye_deploy = my_other_flow.to_deployment(\n \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n )\n\n serve(hello_deploy, bye_deploy)\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def serve(\n *args: RunnerDeployment,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n **kwargs,\n):\n \"\"\"\n Serve the provided list of deployments.\n\n Args:\n *args: A list of deployments to serve.\n pause_on_shutdown: A boolean for whether or not to automatically pause\n deployment schedules on shutdown.\n print_starting_message: Whether or not to print message to the console\n on startup.\n limit: The maximum number of runs that can be executed concurrently.\n **kwargs: Additional keyword arguments to pass to the runner.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n import datetime\n\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n # Run once a day\n hello_deploy = my_flow.to_deployment(\n \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n )\n\n # Run every Sunday at 4:00 AM\n bye_deploy = my_other_flow.to_deployment(\n \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n )\n\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n runner = Runner(pause_on_shutdown=pause_on_shutdown, limit=limit, **kwargs)\n for deployment in args:\n await runner.add_deployment(deployment)\n\n if print_starting_message:\n help_message_top = (\n \"[green]Your deployments are being served and polling for\"\n \" scheduled runs!\\n[/]\"\n )\n\n table = Table(title=\"Deployments\", show_header=False)\n\n table.add_column(style=\"blue\", no_wrap=True)\n\n for deployment in args:\n table.add_row(f\"{deployment.flow_name}/{deployment.name}\")\n\n help_message_bottom = (\n \"\\nTo trigger any of these deployments, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n \" [DEPLOYMENT_NAME]\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message_bottom += (\n \"\\nYou can also trigger your deployments via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n )\n\n console = Console()\n console.print(\n Group(help_message_top, table, help_message_bottom), soft_wrap=True\n )\n\n await runner.start()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/server/","title":"server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server","title":"prefect.runner.server
","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.build_server","title":"build_server
async
","text":"Build a FastAPI server for a runner.
Parameters:
Name Type Description Defaultrunner
Runner
the runner this server interacts with and monitors
required Source code inprefect/runner/server.py
@sync_compatible\nasync def build_server(runner: \"Runner\") -> FastAPI:\n \"\"\"\n Build a FastAPI server for a runner.\n\n Args:\n runner (Runner): the runner this server interacts with and monitors\n \"\"\"\n webserver = FastAPI()\n router = APIRouter()\n\n router.add_api_route(\n \"/health\", perform_health_check(runner=runner), methods=[\"GET\"]\n )\n router.add_api_route(\"/run_count\", run_count(runner=runner), methods=[\"GET\"])\n router.add_api_route(\"/shutdown\", shutdown(runner=runner), methods=[\"POST\"])\n webserver.include_router(router)\n\n if PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS.value():\n deployments_router, deployment_schemas = await get_deployment_router(runner)\n webserver.include_router(deployments_router)\n\n subflow_schemas = await get_subflow_schemas(runner)\n webserver.add_api_route(\n \"/flow/run\",\n _build_generic_endpoint_for_flows(runner=runner, schemas=subflow_schemas),\n methods=[\"POST\"],\n name=\"Run flow in background\",\n description=\"Trigger any flow run as a background task on the runner.\",\n summary=\"Run flow\",\n )\n\n def customize_openapi():\n if webserver.openapi_schema:\n return webserver.openapi_schema\n\n openapi_schema = inject_schemas_into_openapi(webserver, deployment_schemas)\n webserver.openapi_schema = openapi_schema\n return webserver.openapi_schema\n\n webserver.openapi = customize_openapi\n\n return webserver\n
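A hedged sketch of building the app and serving it manually with uvicorn (the host and port here are arbitrary choices):
import uvicorn\n\nfrom prefect.runner import Runner\nfrom prefect.runner.server import build_server\n\nrunner = Runner(name=\"my-runner\")\napp = build_server(runner) # sync-compatible, callable outside an event loop\nuvicorn.run(app, host=\"127.0.0.1\", port=8080)\n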
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.get_subflow_schemas","title":"get_subflow_schemas
async
","text":"Load available subflow schemas by filtering for only those subflows in the deployment entrypoint's import space.
Source code inprefect/runner/server.py
async def get_subflow_schemas(runner: \"Runner\") -> Dict[str, Dict]:\n \"\"\"\n Load available subflow schemas by filtering for only those subflows in the\n deployment entrypoint's import space.\n \"\"\"\n schemas = {}\n async with get_client() as client:\n for deployment_id in runner._deployment_ids:\n deployment = await client.read_deployment(deployment_id)\n if deployment.entrypoint is None:\n continue\n\n script = deployment.entrypoint.split(\":\")[0]\n subflows = load_flows_from_script(script)\n for flow in subflows:\n schemas[flow.name] = flow.parameters.dict()\n\n return schemas\n
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.start_webserver","title":"start_webserver
","text":"Run a FastAPI server for a runner.
Parameters:
Name Type Description Defaultrunner
Runner
the runner this server interacts with and monitors
requiredlog_level
str
the log level to use for the server
None
Source code in prefect/runner/server.py
def start_webserver(runner: \"Runner\", log_level: Optional[str] = None) -> None:\n \"\"\"\n Run a FastAPI server for a runner.\n\n Args:\n runner (Runner): the runner this server interacts with and monitors\n log_level (str): the log level to use for the server\n \"\"\"\n host = PREFECT_RUNNER_SERVER_HOST.value()\n port = PREFECT_RUNNER_SERVER_PORT.value()\n log_level = log_level or PREFECT_RUNNER_SERVER_LOG_LEVEL.value()\n webserver = build_server(runner)\n uvicorn.run(webserver, host=host, port=port, log_level=log_level)\n
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/storage/","title":"storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage","title":"prefect.runner.storage
","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.BlockStorageAdapter","title":"BlockStorageAdapter
","text":"A storage adapter for a storage block object to allow it to be used as a runner storage object.
Source code inprefect/runner/storage.py
class BlockStorageAdapter:\n \"\"\"\n A storage adapter for a storage block object to allow it to be used as a\n runner storage object.\n \"\"\"\n\n def __init__(\n self,\n block: Union[ReadableDeploymentStorage, WritableDeploymentStorage],\n pull_interval: Optional[int] = 60,\n ):\n self._block = block\n self._pull_interval = pull_interval\n self._storage_base_path = Path.cwd()\n if not isinstance(block, Block):\n raise TypeError(\n f\"Expected a block object. Received a {type(block).__name__!r} object.\"\n )\n if not hasattr(block, \"get_directory\"):\n raise ValueError(\"Provided block must have a `get_directory` method.\")\n\n self._name = (\n f\"{block.get_block_type_slug()}-{block._block_document_name}\"\n if block._block_document_name\n else str(uuid4())\n )\n\n def set_base_path(self, path: Path):\n self._storage_base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n return self._pull_interval\n\n @property\n def destination(self) -> Path:\n return self._storage_base_path / self._name\n\n async def pull_code(self):\n if not self.destination.exists():\n self.destination.mkdir(parents=True, exist_ok=True)\n await self._block.get_directory(local_path=str(self.destination))\n\n def to_pull_step(self) -> dict:\n # Give blocks the chance to implement their own pull step\n if hasattr(self._block, \"get_pull_step\"):\n return self._block.get_pull_step()\n else:\n if not self._block._block_document_name:\n raise BlockNotSavedError(\n \"Block must be saved with `.save()` before it can be converted to a\"\n \" pull step.\"\n )\n return {\n \"prefect.deployments.steps.pull_with_block\": {\n \"block_type_slug\": self._block.get_block_type_slug(),\n \"block_document_name\": self._block._block_document_name,\n }\n }\n\n def __eq__(self, __value) -> bool:\n if isinstance(__value, BlockStorageAdapter):\n return self._block == __value._block\n return False\n
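For example, a sketch that wraps a filesystem block so a runner can pull code from it (the bucket path is hypothetical):
from prefect.filesystems import RemoteFileSystem\nfrom prefect.runner.storage import BlockStorageAdapter\n\n# Any block with a `get_directory` method can be adapted\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/flows\")\nstorage = BlockStorageAdapter(block, pull_interval=120)\n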
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository","title":"GitRepository
","text":"Pulls the contents of a git repository to the local filesystem.
Parameters:
Name Type Description Defaulturl
str
The URL of the git repository to pull from
requiredcredentials
Union[GitCredentials, Block, Dict[str, Any], None]
A dictionary of credentials to use when pulling from the repository. If a username is provided, an access token or password must also be provided.
None
name
Optional[str]
The name of the repository. If not provided, the name will be inferred from the repository URL.
None
branch
Optional[str]
The branch to pull from. Defaults to \"main\".
None
pull_interval
Optional[int]
The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.
60
Examples:
Pull the contents of a private git repository to the local filesystem:
from prefect.runner.storage import GitRepository\n\nstorage = GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class GitRepository:\n \"\"\"\n Pulls the contents of a git repository to the local filesystem.\n\n Parameters:\n url: The URL of the git repository to pull from\n credentials: A dictionary of credentials to use when pulling from the\n repository. If a username is provided, an access token must also be\n provided.\n name: The name of the repository. If not provided, the name will be\n inferred from the repository URL.\n branch: The branch to pull from. Defaults to \"main\".\n pull_interval: The interval in seconds at which to pull contents from\n remote storage to local storage. If None, remote storage will perform\n a one-time sync.\n\n Examples:\n Pull the contents of a private git repository to the local filesystem:\n\n ```python\n from prefect.runner.storage import GitRepository\n\n storage = GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n )\n\n await storage.pull_code()\n ```\n \"\"\"\n\n def __init__(\n self,\n url: str,\n credentials: Union[GitCredentials, Block, Dict[str, Any], None] = None,\n name: Optional[str] = None,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n pull_interval: Optional[int] = 60,\n ):\n if credentials is None:\n credentials = {}\n\n if (\n isinstance(credentials, dict)\n and credentials.get(\"username\")\n and not (credentials.get(\"access_token\") or credentials.get(\"password\"))\n ):\n raise ValueError(\n \"If a username is provided, an access token or password must also be\"\n \" provided.\"\n )\n self._url = url\n self._branch = branch\n self._credentials = credentials\n self._include_submodules = include_submodules\n repo_name = urlparse(url).path.split(\"/\")[-1].replace(\".git\", \"\")\n default_name = f\"{repo_name}-{branch}\" if branch else repo_name\n self._name = name or default_name\n self._logger = get_logger(f\"runner.storage.git-repository.{self._name}\")\n self._storage_base_path = Path.cwd()\n self._pull_interval = pull_interval\n\n @property\n def destination(self) -> Path:\n return self._storage_base_path / self._name\n\n def set_base_path(self, path: Path):\n self._storage_base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n return self._pull_interval\n\n @property\n def _repository_url_with_credentials(self) -> str:\n if not self._credentials:\n return self._url\n\n url_components = urlparse(self._url)\n\n credentials = (\n self._credentials.dict()\n if isinstance(self._credentials, Block)\n else deepcopy(self._credentials)\n )\n\n for k, v in credentials.items():\n if isinstance(v, Secret):\n credentials[k] = v.get()\n elif isinstance(v, SecretStr):\n credentials[k] = v.get_secret_value()\n\n formatted_credentials = _format_token_from_credentials(\n urlparse(self._url).netloc, credentials\n )\n if url_components.scheme == \"https\" and formatted_credentials is not None:\n updated_components = url_components._replace(\n netloc=f\"{formatted_credentials}@{url_components.netloc}\"\n )\n repository_url = urlunparse(updated_components)\n else:\n repository_url = self._url\n\n return repository_url\n\n async def pull_code(self):\n \"\"\"\n Pulls the contents of the configured repository to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from repository '%s' to '%s'...\",\n self._name,\n self.destination,\n )\n\n git_dir = self.destination / \".git\"\n\n if git_dir.exists():\n # Check if the existing repository matches the configured repository\n result = await run_process(\n 
[\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n cwd=str(self.destination),\n )\n existing_repo_url = None\n if result.stdout is not None:\n existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n if existing_repo_url != self._url:\n raise ValueError(\n f\"The existing repository at {str(self.destination)} \"\n f\"does not match the configured repository {self._url}\"\n )\n\n self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n # Update the existing repository\n cmd = [\"git\", \"pull\", \"origin\"]\n if self._branch:\n cmd += [self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n cmd += [\"--depth\", \"1\"]\n try:\n await run_process(cmd, cwd=self.destination)\n self._logger.debug(\"Successfully pulled latest changes\")\n except subprocess.CalledProcessError as exc:\n self._logger.error(\n f\"Failed to pull latest changes with exit code {exc}\"\n )\n shutil.rmtree(self.destination)\n await self._clone_repo()\n\n else:\n await self._clone_repo()\n\n async def _clone_repo(self):\n \"\"\"\n Clones the repository into the local destination.\n \"\"\"\n self._logger.debug(\"Cloning repository %s\", self._url)\n\n repository_url = self._repository_url_with_credentials\n\n cmd = [\n \"git\",\n \"clone\",\n repository_url,\n ]\n if self._branch:\n cmd += [\"--branch\", self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n\n # Limit git history and set path to clone to\n cmd += [\"--depth\", \"1\", str(self.destination)]\n\n try:\n await run_process(cmd)\n except subprocess.CalledProcessError as exc:\n # Hide the command used to avoid leaking the access token\n exc_chain = None if self._credentials else exc\n raise RuntimeError(\n f\"Failed to clone repository {self._url!r} with exit code\"\n f\" {exc.returncode}.\"\n ) from exc_chain\n\n def __eq__(self, __value) -> bool:\n if isinstance(__value, GitRepository):\n return (\n self._url == __value._url\n and self._branch == __value._branch\n and self._name == __value._name\n )\n return False\n\n def __repr__(self) -> str:\n return (\n f\"GitRepository(name={self._name!r} repository={self._url!r},\"\n f\" branch={self._branch!r})\"\n )\n\n def to_pull_step(self) -> Dict:\n pull_step = {\n \"prefect.deployments.steps.git_clone\": {\n \"repository\": self._url,\n \"branch\": self._branch,\n }\n }\n if isinstance(self._credentials, Block):\n pull_step[\"prefect.deployments.steps.git_clone\"][\n \"credentials\"\n ] = f\"{{{{ {self._credentials.get_block_placeholder()} }}}}\"\n elif isinstance(self._credentials, dict):\n if isinstance(self._credentials.get(\"access_token\"), Secret):\n pull_step[\"prefect.deployments.steps.git_clone\"][\"credentials\"] = {\n **self._credentials,\n \"access_token\": (\n \"{{\"\n f\" {self._credentials['access_token'].get_block_placeholder()} }}}}\"\n ),\n }\n elif self._credentials.get(\"access_token\") is not None:\n raise ValueError(\n \"Please save your access token as a Secret block before converting\"\n \" this storage object to a pull step.\"\n )\n\n return pull_step\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository.pull_code","title":"pull_code
async
","text":"Pulls the contents of the configured repository to the local filesystem.
Source code inprefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls the contents of the configured repository to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from repository '%s' to '%s'...\",\n self._name,\n self.destination,\n )\n\n git_dir = self.destination / \".git\"\n\n if git_dir.exists():\n # Check if the existing repository matches the configured repository\n result = await run_process(\n [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n cwd=str(self.destination),\n )\n existing_repo_url = None\n if result.stdout is not None:\n existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n if existing_repo_url != self._url:\n raise ValueError(\n f\"The existing repository at {str(self.destination)} \"\n f\"does not match the configured repository {self._url}\"\n )\n\n self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n # Update the existing repository\n cmd = [\"git\", \"pull\", \"origin\"]\n if self._branch:\n cmd += [self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n cmd += [\"--depth\", \"1\"]\n try:\n await run_process(cmd, cwd=self.destination)\n self._logger.debug(\"Successfully pulled latest changes\")\n except subprocess.CalledProcessError as exc:\n self._logger.error(\n f\"Failed to pull latest changes with exit code {exc.returncode}\"\n )\n shutil.rmtree(self.destination)\n await self._clone_repo()\n\n else:\n await self._clone_repo()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage","title":"RemoteStorage
","text":"Pulls the contents of a remote storage location to the local filesystem.
Parameters:
Name Type Description Defaulturl
str
The URL of the remote storage location to pull from. Supports fsspec
URLs. Some protocols may require an additional fsspec
dependency to be installed. Refer to the fsspec
docs for more details.
pull_interval
Optional[int]
The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.
60
**settings
Any
Any additional settings to pass to the fsspec
filesystem class.
{}
Examples:
Pull the contents of a remote storage location to the local filesystem:
from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\nawait storage.pull_code()\n
Pull the contents of a remote storage location to the local filesystem with additional settings:
from prefect.runner.storage import RemoteStorage\nfrom prefect.blocks.system import Secret\n\nstorage = RemoteStorage(\n url=\"s3://my-bucket/my-folder\",\n # Use Secret blocks to keep credentials out of your code\n key=Secret.load(\"my-aws-access-key\"),\n secret=Secret.load(\"my-aws-secret-key\"),\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class RemoteStorage:\n \"\"\"\n Pulls the contents of a remote storage location to the local filesystem.\n\n Parameters:\n url: The URL of the remote storage location to pull from. Supports\n `fsspec` URLs. Some protocols may require an additional `fsspec`\n dependency to be installed. Refer to the\n [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n for more details.\n pull_interval: The interval in seconds at which to pull contents from\n remote storage to local storage. If None, remote storage will perform\n a one-time sync.\n **settings: Any additional settings to pass the `fsspec` filesystem class.\n\n Examples:\n Pull the contents of a remote storage location to the local filesystem:\n\n ```python\n from prefect.runner.storage import RemoteStorage\n\n storage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\n await storage.pull_code()\n ```\n\n Pull the contents of a remote storage location to the local filesystem\n with additional settings:\n\n ```python\n from prefect.runner.storage import RemoteStorage\n from prefect.blocks.system import Secret\n\n storage = RemoteStorage(\n url=\"s3://my-bucket/my-folder\",\n # Use Secret blocks to keep credentials out of your code\n key=Secret.load(\"my-aws-access-key\"),\n secret=Secret.load(\"my-aws-secret-key\"),\n )\n\n await storage.pull_code()\n ```\n \"\"\"\n\n def __init__(\n self,\n url: str,\n pull_interval: Optional[int] = 60,\n **settings: Any,\n ):\n self._url = url\n self._settings = settings\n self._logger = get_logger(\"runner.storage.remote-storage\")\n self._storage_base_path = Path.cwd()\n self._pull_interval = pull_interval\n\n @staticmethod\n def _get_required_package_for_scheme(scheme: str) -> Optional[str]:\n # attempt to discover the package name for the given scheme\n # from fsspec's registry\n known_implementation = fsspec.registry.get(scheme)\n if known_implementation:\n return known_implementation.__module__.split(\".\")[0]\n # if we don't know the implementation, try to guess it for some\n # common schemes\n elif scheme == \"s3\":\n return \"s3fs\"\n elif scheme == \"gs\" or scheme == \"gcs\":\n return \"gcsfs\"\n elif scheme == \"abfs\" or scheme == \"az\":\n return \"adlfs\"\n else:\n return None\n\n @property\n def _filesystem(self) -> fsspec.AbstractFileSystem:\n scheme, _, _, _, _ = urlsplit(self._url)\n\n def replace_blocks_with_values(obj: Any) -> Any:\n if isinstance(obj, Block):\n if hasattr(obj, \"get\"):\n return obj.get()\n if hasattr(obj, \"value\"):\n return obj.value\n else:\n return obj.dict()\n return obj\n\n settings_with_block_values = visit_collection(\n self._settings, replace_blocks_with_values, return_data=True\n )\n\n return fsspec.filesystem(scheme, **settings_with_block_values)\n\n def set_base_path(self, path: Path):\n self._storage_base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n \"\"\"\n The interval at which contents from remote storage should be pulled to\n local storage. 
If None, remote storage will perform a one-time sync.\n \"\"\"\n return self._pull_interval\n\n @property\n def destination(self) -> Path:\n \"\"\"\n The local file path to pull contents from remote storage to.\n \"\"\"\n return self._storage_base_path / self._remote_path\n\n @property\n def _remote_path(self) -> Path:\n \"\"\"\n The remote file path to pull contents from remote storage to.\n \"\"\"\n _, netloc, urlpath, _, _ = urlsplit(self._url)\n return Path(netloc) / Path(urlpath.lstrip(\"/\"))\n\n async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from remote storage '%s' to '%s'...\",\n self._url,\n self.destination,\n )\n\n if not self.destination.exists():\n self.destination.mkdir(parents=True, exist_ok=True)\n\n remote_path = str(self._remote_path) + \"/\"\n\n try:\n await from_async.wait_for_call_in_new_thread(\n create_call(\n self._filesystem.get,\n remote_path,\n str(self.destination),\n recursive=True,\n )\n )\n except Exception as exc:\n raise RuntimeError(\n f\"Failed to pull contents from remote storage {self._url!r} to\"\n f\" {self.destination!r}\"\n ) from exc\n\n def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n\n def replace_block_with_placeholder(obj: Any) -> Any:\n if isinstance(obj, Block):\n return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n return obj\n\n settings_with_placeholders = visit_collection(\n self._settings, replace_block_with_placeholder, return_data=True\n )\n required_package = self._get_required_package_for_scheme(\n urlparse(self._url).scheme\n )\n step = {\n \"prefect.deployments.steps.pull_from_remote_storage\": {\n \"url\": self._url,\n **settings_with_placeholders,\n }\n }\n if required_package:\n step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n \"requires\"\n ] = required_package\n return step\n\n def __eq__(self, __value) -> bool:\n \"\"\"\n Equality check for runner storage objects.\n \"\"\"\n if isinstance(__value, RemoteStorage):\n return self._url == __value._url and self._settings == __value._settings\n return False\n\n def __repr__(self) -> str:\n return f\"RemoteStorage(url={self._url!r})\"\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.destination","title":"destination: Path
property
","text":"The local file path to pull contents from remote storage to.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_interval","title":"pull_interval: Optional[int]
property
","text":"The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_code","title":"pull_code
async
","text":"Pulls contents from remote storage to the local filesystem.
Source code inprefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from remote storage '%s' to '%s'...\",\n self._url,\n self.destination,\n )\n\n if not self.destination.exists():\n self.destination.mkdir(parents=True, exist_ok=True)\n\n remote_path = str(self._remote_path) + \"/\"\n\n try:\n await from_async.wait_for_call_in_new_thread(\n create_call(\n self._filesystem.get,\n remote_path,\n str(self.destination),\n recursive=True,\n )\n )\n except Exception as exc:\n raise RuntimeError(\n f\"Failed to pull contents from remote storage {self._url!r} to\"\n f\" {self.destination!r}\"\n ) from exc\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.to_pull_step","title":"to_pull_step
","text":"Returns a dictionary representation of the storage object that can be used as a deployment pull step.
Source code inprefect/runner/storage.py
def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n\n def replace_block_with_placeholder(obj: Any) -> Any:\n if isinstance(obj, Block):\n return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n return obj\n\n settings_with_placeholders = visit_collection(\n self._settings, replace_block_with_placeholder, return_data=True\n )\n required_package = self._get_required_package_for_scheme(\n urlparse(self._url).scheme\n )\n step = {\n \"prefect.deployments.steps.pull_from_remote_storage\": {\n \"url\": self._url,\n **settings_with_placeholders,\n }\n }\n if required_package:\n step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n \"requires\"\n ] = required_package\n return step\n
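Following the source above, the step emitted for an s3:// URL would look roughly like this (sketch; the bucket path is illustrative):
from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\nstorage.to_pull_step()\n# {\n# \"prefect.deployments.steps.pull_from_remote_storage\": {\n# \"url\": \"s3://my-bucket/my-folder\",\n# \"requires\": \"s3fs\",\n# }\n# }\n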
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage","title":"RunnerStorage
","text":" Bases: Protocol
A storage interface for a runner to use to retrieve remotely stored flow code.
Source code inprefect/runner/storage.py
@runtime_checkable\nclass RunnerStorage(Protocol):\n \"\"\"\n A storage interface for a runner to use to retrieve\n remotely stored flow code.\n \"\"\"\n\n def set_base_path(self, path: Path):\n \"\"\"\n Sets the base path to use when pulling contents from remote storage to\n local storage.\n \"\"\"\n ...\n\n @property\n def pull_interval(self) -> Optional[int]:\n \"\"\"\n The interval at which contents from remote storage should be pulled to\n local storage. If None, remote storage will perform a one-time sync.\n \"\"\"\n ...\n\n @property\n def destination(self) -> Path:\n \"\"\"\n The local file path to pull contents from remote storage to.\n \"\"\"\n ...\n\n async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n ...\n\n def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n ...\n\n def __eq__(self, __value) -> bool:\n \"\"\"\n Equality check for runner storage objects.\n \"\"\"\n ...\n
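As a sketch of what satisfying this protocol looks like, here is a hypothetical storage object that pulls code from a local directory; it is illustrative only and not part of Prefect:
import shutil\nfrom pathlib import Path\nfrom typing import Optional\n\nclass LocalDirectoryStorage:\n def __init__(self, source: str):\n self._source = Path(source)\n self._base_path = Path.cwd()\n\n def set_base_path(self, path: Path):\n self._base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n return None # one-time sync\n\n @property\n def destination(self) -> Path:\n return self._base_path / self._source.name\n\n async def pull_code(self):\n # Copy the source directory into the destination\n shutil.copytree(self._source, self.destination, dirs_exist_ok=True)\n\n def to_pull_step(self) -> dict:\n return {\n \"prefect.deployments.steps.set_working_directory\": {\n \"directory\": str(self.destination)\n }\n }\n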
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.destination","title":"destination: Path
property
","text":"The local file path to pull contents from remote storage to.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_interval","title":"pull_interval: Optional[int]
property
","text":"The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_code","title":"pull_code
async
","text":"Pulls contents from remote storage to the local filesystem.
Source code inprefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.set_base_path","title":"set_base_path
","text":"Sets the base path to use when pulling contents from remote storage to local storage.
Source code inprefect/runner/storage.py
def set_base_path(self, path: Path):\n \"\"\"\n Sets the base path to use when pulling contents from remote storage to\n local storage.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.to_pull_step","title":"to_pull_step
","text":"Returns a dictionary representation of the storage object that can be used as a deployment pull step.
Source code inprefect/runner/storage.py
def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.create_storage_from_url","title":"create_storage_from_url
","text":"Creates a storage object from a URL.
Parameters:
Name Type Description Defaulturl
str
The URL to create a storage object from. Supports git and fsspec
URLs.
pull_interval
Optional[int]
The interval at which to pull contents from remote storage to local storage
60
Returns:
Name Type DescriptionRunnerStorage
RunnerStorage
A runner storage compatible object
Source code inprefect/runner/storage.py
def create_storage_from_url(\n url: str, pull_interval: Optional[int] = 60\n) -> RunnerStorage:\n \"\"\"\n Creates a storage object from a URL.\n\n Args:\n url: The URL to create a storage object from. Supports git and `fsspec`\n URLs.\n pull_interval: The interval at which to pull contents from remote storage to\n local storage\n\n Returns:\n RunnerStorage: A runner storage compatible object\n \"\"\"\n parsed_url = urlparse(url)\n if parsed_url.scheme == \"git\" or parsed_url.path.endswith(\".git\"):\n return GitRepository(url=url, pull_interval=pull_interval)\n else:\n return RemoteStorage(url=url, pull_interval=pull_interval)\n
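For example (the expected reprs follow from the GitRepository and RemoteStorage classes above):
from prefect.runner.storage import create_storage_from_url\n\ngit_storage = create_storage_from_url(\"https://github.com/org/repo.git\")\n# GitRepository(name='repo' repository='https://github.com/org/repo.git', branch=None)\n\ns3_storage = create_storage_from_url(\"s3://my-bucket/my-folder\", pull_interval=120)\n# RemoteStorage(url='s3://my-bucket/my-folder')\n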
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/utils/","title":"utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils","title":"prefect.runner.utils
","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.inject_schemas_into_openapi","title":"inject_schemas_into_openapi
","text":"Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.
Parameters:
Name Type Description Defaultwebserver
FastAPI
The FastAPI instance representing the webserver.
requiredschemas_to_inject
Dict[str, Any]
A dictionary of OpenAPI schemas to integrate.
requiredReturns:
Type DescriptionDict[str, Any]
The augmented OpenAPI schema dictionary.
Source code inprefect/runner/utils.py
def inject_schemas_into_openapi(\n webserver: FastAPI, schemas_to_inject: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.\n\n Args:\n webserver: The FastAPI instance representing the webserver.\n schemas_to_inject: A dictionary of OpenAPI schemas to integrate.\n\n Returns:\n The augmented OpenAPI schema dictionary.\n \"\"\"\n openapi_schema = get_openapi(\n title=\"FastAPI Prefect Runner\", version=PREFECT_VERSION, routes=webserver.routes\n )\n\n augmented_schema = merge_definitions(schemas_to_inject, openapi_schema)\n return update_refs_to_components(augmented_schema)\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.merge_definitions","title":"merge_definitions
","text":"Integrates definitions from injected schemas into the OpenAPI components.
Parameters:
Name Type Description Defaultinjected_schemas
Dict[str, Any]
A dictionary of deployment-specific schemas.
requiredopenapi_schema
Dict[str, Any]
The base OpenAPI schema to update.
required Source code inprefect/runner/utils.py
def merge_definitions(\n injected_schemas: Dict[str, Any], openapi_schema: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Integrates definitions from injected schemas into the OpenAPI components.\n\n Args:\n injected_schemas: A dictionary of deployment-specific schemas.\n openapi_schema: The base OpenAPI schema to update.\n \"\"\"\n openapi_schema_copy = deepcopy(openapi_schema)\n components = openapi_schema_copy.setdefault(\"components\", {}).setdefault(\n \"schemas\", {}\n )\n for definitions in injected_schemas.values():\n if \"definitions\" in definitions:\n for def_name, def_schema in definitions[\"definitions\"].items():\n def_schema_copy = deepcopy(def_schema)\n update_refs_in_schema(def_schema_copy, \"#/components/schemas/\")\n components[def_name] = def_schema_copy\n return openapi_schema_copy\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_in_schema","title":"update_refs_in_schema
","text":"Recursively replaces $ref
with a new reference base in a schema item.
Parameters:
Name Type Description Defaultschema_item
Any
A schema or part of a schema to update references in.
requirednew_ref
str
The new base string to replace in $ref
values.
prefect/runner/utils.py
def update_refs_in_schema(schema_item: Any, new_ref: str) -> None:\n \"\"\"\n Recursively replaces `$ref` with a new reference base in a schema item.\n\n Args:\n schema_item: A schema or part of a schema to update references in.\n new_ref: The new base string to replace in `$ref` values.\n \"\"\"\n if isinstance(schema_item, dict):\n if \"$ref\" in schema_item:\n schema_item[\"$ref\"] = schema_item[\"$ref\"].replace(\"#/definitions/\", new_ref)\n for value in schema_item.values():\n update_refs_in_schema(value, new_ref)\n elif isinstance(schema_item, list):\n for item in schema_item:\n update_refs_in_schema(item, new_ref)\n
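A small worked example of the replacement this performs:
from prefect.runner.utils import update_refs_in_schema\n\nschema = {\"properties\": {\"df\": {\"$ref\": \"#/definitions/DataFrame\"}}}\nupdate_refs_in_schema(schema, \"#/components/schemas/\")\nassert schema[\"properties\"][\"df\"][\"$ref\"] == \"#/components/schemas/DataFrame\"\n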
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_to_components","title":"update_refs_to_components
","text":"Updates all $ref
fields in the OpenAPI schema to reference the components section.
Parameters:
Name Type Description Defaultopenapi_schema
Dict[str, Any]
The OpenAPI schema to modify $ref
fields in.
prefect/runner/utils.py
def update_refs_to_components(openapi_schema: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Updates all `$ref` fields in the OpenAPI schema to reference the components section.\n\n Args:\n openapi_schema: The OpenAPI schema to modify `$ref` fields in.\n \"\"\"\n for path_item in openapi_schema.get(\"paths\", {}).values():\n for operation in path_item.values():\n schema = (\n operation.get(\"requestBody\", {})\n .get(\"content\", {})\n .get(\"application/json\", {})\n .get(\"schema\", {})\n )\n update_refs_in_schema(schema, \"#/components/schemas/\")\n\n for definition in openapi_schema.get(\"definitions\", {}).values():\n update_refs_in_schema(definition, \"#/components/schemas/\")\n\n return openapi_schema\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runtime/deployment/","title":"deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment
","text":"Access attributes of the current deployment run dynamically.
Note that if a deployment is not currently being run, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT
.
from prefect.runtime import deployment\n\ndef get_task_runner():\n task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n return DummyTaskRunner(task_runner_specs=task_runner_config)\n
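For testing, a hedged sketch of mocking one attribute via the environment; the exact variable name assumes a PREFECT__RUNTIME__DEPLOYMENT__&lt;ATTRIBUTE&gt; convention following the prefix described above:
import os\n\n# Assumed naming convention for the mock variable\nos.environ[\"PREFECT__RUNTIME__DEPLOYMENT__NAME\"] = \"my-mocked-deployment\"\n\nfrom prefect.runtime import deployment\n\nprint(deployment.name) # \"my-mocked-deployment\"\n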
Available attributes id
: the deployment's unique IDname
: the deployment's nameversion
: the deployment's versionflow_run_id
: the current flow run ID for this deploymentparameters
: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this runprefect.runtime.flow_run
","text":"Access attributes of the current flow run dynamically.
Note that if a flow run cannot be discovered, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN
.
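A small usage sketch inside a flow; the attribute names come from the list below:
from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef report():\n print(f\"run name: {flow_run.name}\")\n print(f\"parameters: {flow_run.parameters}\")\n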
id
: the flow run's unique IDtags
: the flow run's set of tagsscheduled_start_time
: the flow run's expected scheduled start time; defaults to now if not presentname
: the name of the flow runflow_name
: the name of the flowparameters
: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the runparent_flow_run_id
: the ID of the flow run that triggered this run, if anyparent_deployment_id
: the ID of the deployment that triggered this run, if anyrun_count
: the number of times this flow run has been runprefect.runtime.task_run
","text":"Access attributes of the current task run dynamically.
Note that if a task run cannot be discovered, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN
.
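Similarly, a sketch of reading these attributes inside a task (attribute names are from the list below):
from prefect import flow, task\nfrom prefect.runtime import task_run\n\n@task\ndef announce():\n print(f\"task {task_run.task_name!r}, attempt {task_run.run_count}\")\n\n@flow\ndef my_flow():\n announce()\n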
id
: the task run's unique IDname
: the name of the task runtags
: the task run's set of tagsparameters
: the parameters the task was called withrun_count
: the number of times this task run has been runtask_name
: the name of the taskprefect.utilities.annotations
","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation
","text":" Bases: namedtuple('BaseAnnotation', field_names='value')
, ABC
, Generic[T]
Base class for Prefect annotation types.
Inherits from namedtuple
for unpacking support in other tools.
prefect/utilities/annotations.py
class BaseAnnotation(\n namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n \"\"\"\n Base class for Prefect annotation types.\n\n Inherits from `namedtuple` for unpacking support in other tools.\n \"\"\"\n\n def unwrap(self) -> T:\n if sys.version_info < (3, 8):\n # cannot simply return self.value due to recursion error in Python 3.7\n # also _asdict does not follow convention; it's not an internal method\n # https://stackoverflow.com/a/26180604\n return self._asdict()[\"value\"]\n else:\n return self.value\n\n def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n return type(self)(value)\n\n def __eq__(self, other: object) -> bool:\n if not type(self) == type(other):\n return False\n return self.unwrap() == other.unwrap()\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}({self.value!r})\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet
","text":"Singleton to distinguish None
from a value that is not provided by the user.
prefect/utilities/annotations.py
class NotSet:\n \"\"\"\n Singleton to distinguish `None` from a value that is not provided by the user.\n \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure
","text":" Bases: BaseAnnotation[T]
Wrapper for states or futures.
Indicates that the upstream run for this input can be failed.
Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.
If the input is from a failed run, the attached exception will be passed to your function.
Source code inprefect/utilities/annotations.py
class allow_failure(BaseAnnotation[T]):\n \"\"\"\n Wrapper for states or futures.\n\n Indicates that the upstream run for this input can be failed.\n\n Generally, Prefect will not allow a downstream run to start if any of its inputs\n are failed. This annotation allows you to opt into receiving a failed input\n downstream.\n\n If the input is from a failed run, the attached exception will be passed to your\n function.\n \"\"\"\n
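A sketch of opting a downstream task into a failed upstream input (task names are illustrative):
from prefect import flow, task\nfrom prefect.utilities.annotations import allow_failure\n\n@task\ndef fragile():\n raise ValueError(\"boom\")\n\n@task\ndef cleanup(upstream_result):\n # Receives the upstream exception instead of never running\n print(f\"upstream gave us: {upstream_result!r}\")\n\n@flow\ndef my_flow():\n future = fragile.submit()\n cleanup.submit(allow_failure(future))\n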
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote
","text":" Bases: BaseAnnotation[T]
Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.
quote will also instruct prefect to ignore introspection of the wrapped object when passed as flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, and each element needs to be visited. This will disable task dependency tracking for the wrapped object, but likely will increase performance.
@task\ndef my_task(df):\n ...\n\n@flow\ndef my_flow():\n my_task(quote(df))\n
Source code in prefect/utilities/annotations.py
class quote(BaseAnnotation[T]):\n \"\"\"\n Simple wrapper to mark an expression as a different type so it will not be coerced\n by Prefect. For example, if you want to return a state from a flow without having\n the flow assume that state.\n\n quote will also instruct prefect to ignore introspection of the wrapped object\n when passed as flow or task parameter. Parameter introspection can be a\n significant performance hit when the object is a large collection,\n e.g. a large dictionary or DataFrame, and each element needs to be visited. This\n will disable task dependency tracking for the wrapped object, but likely will\n increase performance.\n\n ```\n @task\n def my_task(df):\n ...\n\n @flow\n def my_flow():\n my_task(quote(df))\n ```\n \"\"\"\n\n def unquote(self) -> T:\n return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped
","text":" Bases: BaseAnnotation[T]
Wrapper for iterables.
Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.
Source code in prefect/utilities/annotations.py
class unmapped(BaseAnnotation[T]):\n \"\"\"\n Wrapper for iterables.\n\n Indicates that this input should be sent as-is to all runs created during a mapping\n operation instead of being split.\n \"\"\"\n\n def __getitem__(self, _) -> T:\n # Internally, this acts as an infinite array where all items are the same value\n return self.unwrap()\n
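For illustration, a sketch of unmapped with Task.map (the task and flow names are hypothetical):
```python
from prefect import flow, task, unmapped

@task
def scale(x, factor):
    return x * factor

@flow
def my_flow():
    # factor is sent whole to every mapped run instead of being split up
    return scale.map([1, 2, 3], factor=unmapped(10))
```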
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils
","text":"Utilities for interoperability with async functions and workers from various contexts.
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete
","text":" Bases: RuntimeError
Raised to indicate that gather results were retrieved before completion
Source code in prefect/utilities/asyncutils.py
class GatherIncomplete(RuntimeError):\n \"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup
","text":" Bases: TaskGroup
A task group that gathers results.
AnyIO does not include support for gather. This class extends the TaskGroup interface to allow simple gathering.
See https://github.com/agronholm/anyio/issues/100
This class should be instantiated with create_gather_task_group
.
Source code in prefect/utilities/asyncutils.py
class GatherTaskGroup(anyio.abc.TaskGroup):\n \"\"\"\n A task group that gathers results.\n\n AnyIO does not include support `gather`. This class extends the `TaskGroup`\n interface to allow simple gathering.\n\n See https://github.com/agronholm/anyio/issues/100\n\n This class should be instantiated with `create_gather_task_group`.\n \"\"\"\n\n def __init__(self, task_group: anyio.abc.TaskGroup):\n self._results: Dict[UUID, Any] = {}\n # The concrete task group implementation to use\n self._task_group: anyio.abc.TaskGroup = task_group\n\n async def _run_and_store(self, key, fn, args):\n self._results[key] = await fn(*args)\n\n def start_soon(self, fn, *args) -> UUID:\n key = uuid4()\n # Put a placeholder in-case the result is retrieved earlier\n self._results[key] = GatherIncomplete\n self._task_group.start_soon(self._run_and_store, key, fn, args)\n return key\n\n async def start(self, fn, *args):\n \"\"\"\n Since `start` returns the result of `task_status.started()` but here we must\n return the key instead, we just won't support this method for now.\n \"\"\"\n raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n def get_result(self, key: UUID) -> Any:\n result = self._results[key]\n if result is GatherIncomplete:\n raise GatherIncomplete(\n \"Task is not complete. \"\n \"Results should not be retrieved until the task group exits.\"\n )\n return result\n\n async def __aenter__(self):\n await self._task_group.__aenter__()\n return self\n\n async def __aexit__(self, *tb):\n try:\n retval = await self._task_group.__aexit__(*tb)\n return retval\n finally:\n del self._task_group\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start
async
","text":"Since start
returns the result of task_status.started()
but here we must return the key instead, we just won't support this method for now.
Source code in prefect/utilities/asyncutils.py
async def start(self, fn, *args):\n \"\"\"\n Since `start` returns the result of `task_status.started()` but here we must\n return the key instead, we just won't support this method for now.\n \"\"\"\n raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback
async
","text":"Adds a callback to the given callable on event loop closure. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down.
Requires use of asyncio.run(), which waits for async generator shutdown by default, or an explicit call of loop.shutdown_asyncgens(). If the application is entered with loop.run_until_complete() and the user calls loop.close() without the generator shutdown call, this will not trigger callbacks.
asyncio does not provide any other way to clean up a resource when the event loop is about to close.
Source code in prefect/utilities/asyncutils.py
async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n \"\"\"\n Adds a callback to the given callable on event loop closure. The callable must be\n a coroutine function. It will be awaited when the current event loop is shutting\n down.\n\n Requires use of `asyncio.run()` which waits for async generator shutdown by\n default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n is entered with `asyncio.run_until_complete()` and the user calls\n `asyncio.close()` without the generator shutdown call, this will not trigger\n callbacks.\n\n asyncio does not provided _any_ other way to clean up a resource when the event\n loop is about to close.\n \"\"\"\n\n async def on_shutdown(key):\n # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n # We hold a reference to it so as to preserve it, at least for the lifetime of\n # this coroutine. See the issue below for the initial report/discussion:\n # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n _ = EVENT_LOOP_GC_REFS\n try:\n yield\n except GeneratorExit:\n await coroutine_fn()\n # Remove self from the garbage collection set\n EVENT_LOOP_GC_REFS.pop(key)\n\n # Create the iterator and store it in a global variable so it is not garbage\n # collected. If the iterator is garbage collected before the event loop closes, the\n # callback will not run. Since this function does not know the scope of the event\n # loop that is calling it, a reference with global scope is necessary to ensure\n # garbage collection does not occur until after event loop closure.\n key = id(on_shutdown)\n EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n # Begin iterating so it will be cleaned up as an incomplete generator\n try:\n await EVENT_LOOP_GC_REFS[key].__anext__()\n # There is a poorly understood edge case we've seen in CI where the key is\n # removed from the dict before we begin generator iteration.\n except KeyError:\n logger.warn(\"The event loop shutdown callback was not properly registered. \")\n pass\n
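A quick sketch of registering a shutdown callback (the function names are illustrative):
```python
import asyncio
from prefect.utilities.asyncutils import add_event_loop_shutdown_callback

async def on_shutdown():
    print("event loop is closing")

async def main():
    await add_event_loop_shutdown_callback(on_shutdown)
    print("doing work")

# asyncio.run() performs async generator shutdown on exit,
# which is what triggers the registered callback
asyncio.run(main)
```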
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group
","text":"Create a new task group that gathers results
Source code in prefect/utilities/asyncutils.py
def create_gather_task_group() -> GatherTaskGroup:\n \"\"\"Create a new task group that gathers results\"\"\"\n # This function matches the AnyIO API which uses callables since the concrete\n # task group class depends on the async library being used and cannot be\n # determined until runtime\n return GatherTaskGroup(anyio.create_task_group())\n
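A minimal usage sketch: start tasks inside the gather task group, then read results by key once the group has exited:
```python
import anyio
from prefect.utilities.asyncutils import create_gather_task_group

async def double(x):
    return x * 2

async def main():
    async with create_gather_task_group() as tg:
        keys = [tg.start_soon(double, i) for i in range(3)]
    # Results may only be read after the task group has exited
    print([tg.get_result(key) for key in keys])  # [0, 2, 4]

anyio.run(main)
```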
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather
async
","text":"Run calls concurrently and gather their results.
Unlike asyncio.gather
this expects to receive callables not coroutines. This matches anyio
semantics.
Source code in prefect/utilities/asyncutils.py
async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n \"\"\"\n Run calls concurrently and gather their results.\n\n Unlike `asyncio.gather` this expects to receive _callables_ not _coroutines_.\n This matches `anyio` semantics.\n \"\"\"\n keys = []\n async with create_gather_task_group() as tg:\n for call in calls:\n keys.append(tg.start_soon(call))\n return [tg.get_result(key) for key in keys]\n
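Because gather expects callables rather than coroutines, arguments are bound with functools.partial, as in this sketch:
```python
from functools import partial

import anyio
from prefect.utilities.asyncutils import gather

async def double(x):
    return x * 2

async def main():
    # gather expects zero-argument callables, so bind arguments first
    results = await gather(partial(double, 1), partial(double, 2))
    print(results)  # [2, 4]

anyio.run(main)
```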
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn
","text":"Returns True
if a function returns a coroutine.
See https://github.com/microsoft/pyright/issues/2142 for an example use
Source code in prefect/utilities/asyncutils.py
def is_async_fn(\n func: Union[Callable[P, R], Callable[P, Awaitable[R]]],\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n \"\"\"\n Returns `True` if a function returns a coroutine.\n\n See https://github.com/microsoft/pyright/issues/2142 for an example use\n \"\"\"\n while hasattr(func, \"__wrapped__\"):\n func = func.__wrapped__\n\n return inspect.iscoroutinefunction(func)\n
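A short sketch showing that __wrapped__ chains (e.g. from functools.wraps) are followed before the check:
```python
import functools
from prefect.utilities.asyncutils import is_async_fn

async def fetch(): ...
def compute(): ...

assert is_async_fn(fetch)
assert not is_async_fn(compute)

@functools.wraps(fetch)
def wrapper(*args, **kwargs):
    return fetch(*args, **kwargs)

# The wrapper is unwrapped via __wrapped__ before inspection
assert is_async_fn(wrapper)
```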
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn
","text":"Returns True
if a function is an async generator.
Source code in prefect/utilities/asyncutils.py
def is_async_gen_fn(func):\n \"\"\"\n Returns `True` if a function is an async generator.\n \"\"\"\n while hasattr(func, \"__wrapped__\"):\n func = func.__wrapped__\n\n return inspect.isasyncgenfunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread
","text":"Raise an exception in a thread asynchronously.
This will not interrupt long-running system calls like sleep
or wait
.
Source code in prefect/utilities/asyncutils.py
def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n \"\"\"\n Raise an exception in a thread asynchronously.\n\n This will not interrupt long-running system calls like `sleep` or `wait`.\n \"\"\"\n ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n )\n if ret == 0:\n raise ValueError(\"Thread not found.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread
","text":"Runs an async function in the main thread's event loop, blocking the worker thread until completion
Source code in prefect/utilities/asyncutils.py
def run_async_from_worker_thread(\n __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs an async function in the main thread's event loop, blocking the worker\n thread until completion\n \"\"\"\n call = partial(__fn, *args, **kwargs)\n return anyio.from_thread.run(call)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread
async
","text":"Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked
Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep
.
Source code in prefect/utilities/asyncutils.py
async def run_sync_in_interruptible_worker_thread(\n __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs a sync function in a new interruptible worker thread so that the main\n thread's event loop is not blocked\n\n Unlike the anyio function, this performs best-effort cancellation of the\n thread using the C API. Cancellation will not interrupt system calls like\n `sleep`.\n \"\"\"\n\n class NotSet:\n pass\n\n thread: Thread = None\n result = NotSet\n event = asyncio.Event()\n loop = asyncio.get_running_loop()\n\n def capture_worker_thread_and_result():\n # Captures the worker thread that AnyIO is using to execute the function so\n # the main thread can perform actions on it\n nonlocal thread, result\n try:\n thread = threading.current_thread()\n result = __fn(*args, **kwargs)\n except BaseException as exc:\n result = exc\n raise\n finally:\n loop.call_soon_threadsafe(event.set)\n\n async def send_interrupt_to_thread():\n # This task waits until the result is returned from the thread, if cancellation\n # occurs during that time, we will raise the exception in the thread as well\n try:\n await event.wait()\n except anyio.get_cancelled_exc_class():\n # NOTE: We could send a SIGINT here which allow us to interrupt system\n # calls but the interrupt bubbles from the child thread into the main thread\n # and there is not a clear way to prevent it.\n raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n raise\n\n async with anyio.create_task_group() as tg:\n tg.start_soon(send_interrupt_to_thread)\n tg.start_soon(\n partial(\n anyio.to_thread.run_sync,\n capture_worker_thread_and_result,\n cancellable=True,\n limiter=get_thread_limiter(),\n )\n )\n\n assert result is not NotSet\n return result\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread
async
","text":"Runs a sync function in a new worker thread so that the main thread's event loop is not blocked
Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments through to the anyio call, so users can pass kwargs to their function.
Note that cancellation of threads will not result in interrupted computation, the thread may continue running \u2014 the outcome will just be ignored.
Source code in prefect/utilities/asyncutils.py
async def run_sync_in_worker_thread(\n __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs a sync function in a new worker thread so that the main thread's event loop\n is not blocked\n\n Unlike the anyio function, this defaults to a cancellable thread and does not allow\n passing arguments to the anyio function so users can pass kwargs to their function.\n\n Note that cancellation of threads will not result in interrupted computation, the\n thread may continue running \u2014 the outcome will just be ignored.\n \"\"\"\n call = partial(__fn, *args, **kwargs)\n return await anyio.to_thread.run_sync(\n call, cancellable=True, limiter=get_thread_limiter()\n )\n
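A sketch of offloading a blocking call so the event loop stays responsive (the blocking function is illustrative):
```python
import time

import anyio
from prefect.utilities.asyncutils import run_sync_in_worker_thread

def blocking_call(n):
    time.sleep(0.1)  # stands in for blocking IO
    return n * 2

async def main():
    # The event loop is free while the sync call runs in a worker thread
    result = await run_sync_in_worker_thread(blocking_call, 21)
    print(result)  # 42

anyio.run(main)
```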
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync
","text":"Call an async function from a synchronous context. Block until completion.
If in an asynchronous context, we will run the code in a separate loop instead of failing but a warning will be displayed since this is not recommended.
Source code in prefect/utilities/asyncutils.py
def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n \"\"\"\n Call an async function from a synchronous context. Block until completion.\n\n If in an asynchronous context, we will run the code in a separate loop instead of\n failing but a warning will be displayed since this is not recommended.\n \"\"\"\n if in_async_main_thread():\n warnings.warn(\n \"`sync` called from an asynchronous context; \"\n \"you should `await` the async function directly instead.\"\n )\n with anyio.start_blocking_portal() as portal:\n return portal.call(partial(__async_fn, *args, **kwargs))\n elif in_async_worker_thread():\n # In a sync context but we can access the event loop thread; send the async\n # call to the parent\n return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n else:\n # In a sync context and there is no event loop; just create an event loop\n # to run the async code then tear it down\n return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
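A minimal sketch of calling an async function from plain synchronous code:
```python
from prefect.utilities.asyncutils import sync

async def add(x, y):
    return x + y

# With no running event loop, sync() creates a loop,
# runs the coroutine to completion, and tears the loop down
assert sync(add, 1, 2) == 3
```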
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible
","text":"Converts an async function into a dual async and sync function.
When the returned function is called, we will attempt to determine the best way to enter the async function.
Source code in prefect/utilities/asyncutils.py
def sync_compatible(async_fn: T) -> T:\n \"\"\"\n Converts an async function into a dual async and sync function.\n\n When the returned function is called, we will attempt to determine the best way\n to enter the async function.\n\n - If in a thread with a running event loop, we will return the coroutine for the\n caller to await. This is normal async behavior.\n - If in a blocking worker thread with access to an event loop in another thread, we\n will submit the async method to the event loop.\n - If we cannot find an event loop, we will create a new one and run the async method\n then tear down the loop.\n \"\"\"\n\n @wraps(async_fn)\n def coroutine_wrapper(*args, **kwargs):\n from prefect._internal.concurrency.api import create_call, from_sync\n from prefect._internal.concurrency.calls import get_current_call, logger\n from prefect._internal.concurrency.event_loop import get_running_loop\n from prefect._internal.concurrency.threads import get_global_loop\n\n global_thread_portal = get_global_loop()\n current_thread = threading.current_thread()\n current_call = get_current_call()\n current_loop = get_running_loop()\n\n if current_thread.ident == global_thread_portal.thread.ident:\n logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n # In the prefect async context; return the coro for us to await\n return async_fn(*args, **kwargs)\n elif in_async_main_thread() and (\n not current_call or is_async_fn(current_call.fn)\n ):\n # In the main async context; return the coro for them to await\n logger.debug(f\"{async_fn} --> return coroutine for user await\")\n return async_fn(*args, **kwargs)\n elif in_async_worker_thread():\n # In a sync context but we can access the event loop thread; send the async\n # call to the parent\n return run_async_from_worker_thread(async_fn, *args, **kwargs)\n elif current_loop is not None:\n logger.debug(f\"{async_fn} --> run async in global loop portal\")\n # An event loop is already present but we are in a sync context, run the\n # call in Prefect's event loop thread\n return from_sync.call_soon_in_loop_thread(\n create_call(async_fn, *args, **kwargs)\n ).result()\n else:\n logger.debug(f\"{async_fn} --> run async in new loop\")\n # Run in a new event loop, but use a `Call` for nested context detection\n call = create_call(async_fn, *args, **kwargs)\n return call()\n\n # TODO: This is breaking type hints on the callable... mypy is behind the curve\n # on argument annotations. We can still fix this for editors though.\n if is_async_fn(async_fn):\n wrapper = coroutine_wrapper\n elif is_async_gen_fn(async_fn):\n raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n else:\n raise TypeError(\"The decorated function must be async.\")\n\n wrapper.aio = async_fn\n return wrapper\n
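A sketch of the dual behavior:
```python
from prefect.utilities.asyncutils import sync_compatible

@sync_compatible
async def add(x, y):
    return x + y

# In a synchronous context the call blocks and returns the value
assert add(1, 2) == 3

# In an asynchronous context the same call returns an awaitable:
#     result = await add(1, 2)
```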
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables
","text":"Utilities for working with Python callables.
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema
","text":" Bases: BaseModel
Simple data model corresponding to an OpenAPI Schema
.
Source code in prefect/utilities/callables.py
class ParameterSchema(pydantic.BaseModel):\n \"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n title: Literal[\"Parameters\"] = \"Parameters\"\n type: Literal[\"object\"] = \"object\"\n properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n required: List[str] = None\n definitions: Dict[str, Any] = None\n\n def dict(self, *args, **kwargs):\n \"\"\"Exclude `None` fields by default to comply with\n the OpenAPI spec.\n \"\"\"\n kwargs.setdefault(\"exclude_none\", True)\n return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict
","text":"Exclude None
fields by default to comply with the OpenAPI spec.
Source code in prefect/utilities/callables.py
def dict(self, *args, **kwargs):\n \"\"\"Exclude `None` fields by default to comply with\n the OpenAPI spec.\n \"\"\"\n kwargs.setdefault(\"exclude_none\", True)\n return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters
","text":"Call a function with parameters extracted with get_call_parameters
The function must have an identical signature to the original function or this will fail. If you need to send to a function with a different signature, extract the args/kwargs using parameters_to_args_kwargs directly.
Source code in prefect/utilities/callables.py
def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n \"\"\"\n Call a function with parameters extracted with `get_call_parameters`\n\n The function _must_ have an identical signature to the original function or this\n will fail. If you need to send to a function with a different signature, extract\n the args/kwargs using `parameters_to_positional_and_keyword` directly\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(fn, parameters)\n return fn(*args, **kwargs)\n
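A sketch pairing this with get_call_parameters (the function is illustrative):
```python
from prefect.utilities.callables import (
    call_with_parameters,
    get_call_parameters,
)

def greet(name, punctuation="!"):
    return f"Hello, {name}{punctuation}"

# Bind arguments into a parameter/value mapping, then call with it
params = get_call_parameters(greet, ("Marvin",), {})
assert params == {"name": "Marvin", "punctuation": "!"}
assert call_with_parameters(greet, params) == "Hello, Marvin!"
```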
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call
","text":"Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value
This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process
and multiprocessing
) but may require a wider range of pickling support.
Source code in prefect/utilities/callables.py
def cloudpickle_wrapped_call(\n __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n \"\"\"\n Serializes a function call using cloudpickle then returns a callable which will\n execute that call and return a cloudpickle serialized return value\n\n This is particularly useful for sending calls to libraries that only use the Python\n built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n a wider range of pickling support.\n \"\"\"\n payload = cloudpickle.dumps((__fn, args, kwargs))\n return partial(_run_serialized_call, payload)\n
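A sketch of the round trip, assuming the wrapper's return value is deserialized with cloudpickle as the docstring describes:
```python
import cloudpickle
from prefect.utilities.callables import cloudpickle_wrapped_call

def add(x, y):
    return x + y

# The call is captured with cloudpickle; invoking the wrapper
# executes it and returns a cloudpickle-serialized return value
wrapped = cloudpickle_wrapped_call(add, 1, y=2)
assert cloudpickle.loads(wrapped()) == 3
```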
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters
","text":"Given a parameter dictionary, move any parameters stored not present in the signature into the variadic keyword argument.
Example:
```python\ndef foo(a, b, **kwargs):\n pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n```\n
Source code in prefect/utilities/callables.py
def collapse_variadic_parameters(\n fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Given a parameter dictionary, move any parameters stored not present in the\n signature into the variadic keyword argument.\n\n Example:\n\n ```python\n def foo(a, b, **kwargs):\n pass\n\n parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n collapse_variadic_parameters(foo, parameters)\n # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n ```\n \"\"\"\n signature_parameters = inspect.signature(fn).parameters\n variadic_key = None\n for key, parameter in signature_parameters.items():\n if parameter.kind == parameter.VAR_KEYWORD:\n variadic_key = key\n break\n\n missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n if not variadic_key and missing_parameters:\n raise ValueError(\n f\"Signature for {fn} does not include any variadic keyword argument \"\n \"but parameters were given that are not present in the signature.\"\n )\n\n if variadic_key and not missing_parameters:\n # variadic key is present but no missing parameters, return parameters unchanged\n return parameters\n\n new_parameters = parameters.copy()\n if variadic_key:\n new_parameters[variadic_key] = {}\n\n for key in missing_parameters:\n new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter
","text":"Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.
Example:
```python\ndef foo(a, b, **kwargs):\n pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n```\n
Source code in prefect/utilities/callables.py
def explode_variadic_parameter(\n fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Given a parameter dictionary, move any parameters stored in a variadic keyword\n argument parameter (i.e. **kwargs) into the top level.\n\n Example:\n\n ```python\n def foo(a, b, **kwargs):\n pass\n\n parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n explode_variadic_parameter(foo, parameters)\n # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n ```\n \"\"\"\n variadic_key = None\n for key, parameter in inspect.signature(fn).parameters.items():\n if parameter.kind == parameter.VAR_KEYWORD:\n variadic_key = key\n break\n\n if not variadic_key:\n return parameters\n\n new_parameters = parameters.copy()\n for key, value in new_parameters.pop(variadic_key, {}).items():\n new_parameters[key] = value\n\n return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters
","text":"Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden.
Raises a ParameterBindError if the arguments/kwargs are not valid for the function
Source code in prefect/utilities/callables.py
def get_call_parameters(\n fn: Callable,\n call_args: Tuple[Any, ...],\n call_kwargs: Dict[str, Any],\n apply_defaults: bool = True,\n) -> Dict[str, Any]:\n \"\"\"\n Bind a call to a function to get parameter/value mapping. Default values on the\n signature will be included if not overridden.\n\n Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n \"\"\"\n try:\n bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n except TypeError as exc:\n raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n if apply_defaults:\n bound_signature.apply_defaults()\n\n # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n # ordered dictionary to values during execution; this is the default behavior in\n # Python 3.9 anyway.\n return dict(bound_signature.arguments)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults
","text":"Get default parameter values for a callable.
Source code in prefect/utilities/callables.py
def get_parameter_defaults(\n fn: Callable,\n) -> Dict[str, Any]:\n \"\"\"\n Get default parameter values for a callable.\n \"\"\"\n signature = inspect.signature(fn)\n\n parameter_defaults = {}\n\n for name, param in signature.parameters.items():\n if param.default is not signature.empty:\n parameter_defaults[name] = param.default\n\n return parameter_defaults\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings
","text":"Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstring.
Parameters:
docstring (Optional[str]): The function's docstring. [required]
Returns:
Dict[str, str]: Mapping from parameter names to docstrings.
Source code in prefect/utilities/callables.py
def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n \"\"\"\n Given a docstring in Google docstring format, parse the parameter section\n and return a dictionary that maps parameter names to docstring.\n\n Args:\n docstring: The function's docstring.\n\n Returns:\n Mapping from parameter names to docstrings.\n \"\"\"\n param_docstrings = {}\n\n if not docstring:\n return param_docstrings\n\n with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n \"griffe.agents.nodes\"\n ):\n parsed = parse(Docstring(docstring), Parser.google)\n for section in parsed:\n if section.kind != DocstringSectionKind.parameters:\n continue\n param_docstrings = {\n parameter.name: parameter.description for parameter in section.value\n }\n\n return param_docstrings\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema
","text":"Given a function, generates an OpenAPI-compatible description of the function's arguments, including: - name - typing information - whether it is required - a default value - additional constraints (like possible enum values)
Parameters:
fn (Callable): The function whose arguments will be serialized [required]
Returns:
ParameterSchema: the argument schema
Source code in prefect/utilities/callables.py
def parameter_schema(fn: Callable) -> ParameterSchema:\n \"\"\"Given a function, generates an OpenAPI-compatible description\n of the function's arguments, including:\n - name\n - typing information\n - whether it is required\n - a default value\n - additional constraints (like possible enum values)\n\n Args:\n fn (Callable): The function whose arguments will be serialized\n\n Returns:\n ParameterSchema: the argument schema\n \"\"\"\n try:\n signature = inspect.signature(fn, eval_str=True)\n except (NameError, TypeError):\n signature = inspect.signature(fn)\n\n model_fields = {}\n aliases = {}\n docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n class ModelConfig:\n arbitrary_types_allowed = True\n\n if HAS_PYDANTIC_V2 and has_v2_type_as_param(signature):\n create_schema = create_v2_schema\n process_params = process_v2_params\n else:\n create_schema = create_v1_schema\n process_params = process_v1_params\n\n for position, param in enumerate(signature.parameters.values()):\n name, type_, field = process_params(\n param, position=position, docstrings=docstrings, aliases=aliases\n )\n # Generate a Pydantic model at each step so we can check if this parameter\n # type supports schema generation\n try:\n create_schema(\n \"CheckParameter\", model_cfg=ModelConfig, **{name: (type_, field)}\n )\n except (ValueError, TypeError):\n # This field's type is not valid for schema creation, update it to `Any`\n type_ = Any\n model_fields[name] = (type_, field)\n\n # Generate the final model and schema\n schema = create_schema(\"Parameters\", model_cfg=ModelConfig, **model_fields)\n return ParameterSchema(**schema)\n
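For illustration (the function name is hypothetical), a sketch of generating a schema:
```python
from prefect.utilities.callables import parameter_schema

def my_fn(x: int, name: str = "Marvin"):
    ...

schema = parameter_schema(my_fn)
# An OpenAPI-style object schema; x is required, name has a default
assert schema.title == "Parameters"
assert "x" in schema.required
```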
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs
","text":"Convert a parameters
dictionary to positional and keyword arguments
The function must have an identical signature to the original function or this will return an empty tuple and dict.
Source code in prefect/utilities/callables.py
def parameters_to_args_kwargs(\n fn: Callable,\n parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n \"\"\"\n Convert a `parameters` dictionary to positional and keyword arguments\n\n The function _must_ have an identical signature to the original function or this\n will return an empty tuple and dict.\n \"\"\"\n function_params = dict(inspect.signature(fn).parameters).keys()\n # Check for parameters that are not present in the function signature\n unknown_params = parameters.keys() - function_params\n if unknown_params:\n raise SignatureMismatchError.from_bad_params(\n list(function_params), list(parameters.keys())\n )\n bound_signature = inspect.signature(fn).bind_partial()\n bound_signature.arguments = parameters\n\n return bound_signature.args, bound_signature.kwargs\n
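A short sketch of the conversion, including a keyword-only parameter:
```python
from prefect.utilities.callables import parameters_to_args_kwargs

def fn(a, b, *, c=3):
    return a + b + c

args, kwargs = parameters_to_args_kwargs(fn, {"a": 1, "b": 2, "c": 10})
assert args == (1, 2)       # positional parameters
assert kwargs == {"c": 10}  # keyword-only parameters
assert fn(*args, **kwargs) == 13
```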
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments
","text":"Raise a ReservedArgumentError if fn
has any parameters that conflict with the names contained in reserved_arguments
.
Source code in prefect/utilities/callables.py
def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n \"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n with the names contained in `reserved_arguments`.\"\"\"\n function_paremeters = inspect.signature(fn).parameters\n\n for argument in reserved_arguments:\n if argument in function_paremeters:\n raise ReservedArgumentError(\n f\"{argument!r} is a reserved argument name and cannot be used.\"\n )\n
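A sketch of the guard in use, assuming ReservedArgumentError is importable from prefect.exceptions (the guarded function is illustrative):
```python
from prefect.exceptions import ReservedArgumentError
from prefect.utilities.callables import raise_for_reserved_arguments

def my_fn(state, value):
    ...

try:
    raise_for_reserved_arguments(my_fn, ["state"])
except ReservedArgumentError as exc:
    print(exc)  # 'state' is a reserved argument name and cannot be used.
```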
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections
","text":"Utilities for extensions of and operations on Python collections.
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum
","text":" Bases: str
, Enum
An enum class that automatically generates values from variable names.
This guards against common errors where variable names are updated but values are not.
In addition, because AutoEnums inherit from str
, they are automatically JSON-serializable.
See https://docs.python.org/3/library/enum.html#using-automatic-values
Exampleclass MyEnum(AutoEnum):\n RED = AutoEnum.auto() # equivalent to RED = 'RED'\n BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
Source code in prefect/utilities/collections.py
class AutoEnum(str, Enum):\n \"\"\"\n An enum class that automatically generates value from variable names.\n\n This guards against common errors where variable names are updated but values are\n not.\n\n In addition, because AutoEnums inherit from `str`, they are automatically\n JSON-serializable.\n\n See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n Example:\n ```python\n class MyEnum(AutoEnum):\n RED = AutoEnum.auto() # equivalent to RED = 'RED'\n BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n ```\n \"\"\"\n\n def _generate_next_value_(name, start, count, last_values):\n return name\n\n @staticmethod\n def auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}.{self.value}\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting
","text":" Bases: BaseException
A special exception used to stop recursive visits in visit_collection
.
When raised, the expression is returned without modification and recursive visits in that path will end.
Source code inprefect/utilities/collections.py
class StopVisiting(BaseException):\n \"\"\"\n A special exception used to stop recursive visits in `visit_collection`.\n\n When raised, the expression is returned without modification and recursive visits\n in that path will end.\n \"\"\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable
","text":"Yield batches of a certain size from an iterable
Parameters:
iterable (Iterable): An iterable [required]
size (int): The batch size to return [required]
Yields:
tuple: A batch of the iterable
Source code in prefect/utilities/collections.py
def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n \"\"\"\n Yield batches of a certain size from an iterable\n\n Args:\n iterable (Iterable): An iterable\n size (int): The batch size to return\n\n Yields:\n tuple: A batch of the iterable\n \"\"\"\n it = iter(iterable)\n while True:\n batch = tuple(itertools.islice(it, size))\n if not batch:\n break\n yield batch\n
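A quick usage sketch:
```python
from prefect.utilities.collections import batched_iterable

for batch in batched_iterable(range(7), size=3):
    print(batch)
# (0, 1, 2)
# (3, 4, 5)
# (6,)
```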
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict
","text":"Converts a (nested) dictionary to a flattened representation.
Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.
Parameters:
dct (dict): The dictionary to flatten [required]
_parent (Tuple, optional): The current parent for recursion [default: None]
Returns:
Dict[Tuple[KT, ...], Any]: A flattened dict of the same type as dct
Source code in prefect/utilities/collections.py
def dict_to_flatdict(\n dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n \"\"\"Converts a (nested) dictionary to a flattened representation.\n\n Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n for the corresponding value.\n\n Args:\n dct (dict): The dictionary to flatten\n _parent (Tuple, optional): The current parent for recursion\n\n Returns:\n A flattened dict of the same type as dct\n \"\"\"\n typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n items: List[Tuple[Tuple[KT, ...], Any]] = []\n parent = _parent or tuple()\n\n for k, v in dct.items():\n k_parent = tuple(parent + (k,))\n # if v is a non-empty dict, recurse\n if isinstance(v, dict) and v:\n items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n else:\n items.append((k_parent, v))\n return typ(items)\n
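A short sketch of the flattening:
```python
from prefect.utilities.collections import dict_to_flatdict

flat = dict_to_flatdict({"a": {"b": 1}, "c": 2})
# Each key is the "chain of keys" leading to the value
assert flat == {("a", "b"): 1, ("c",): 2}
```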
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances
","text":"Extract objects from a file and returns a dict of type -> instances
Parameters:
objects (Iterable): An iterable of objects [required]
types (Union[Type[T], Tuple[Type[T], ...]]): A type or tuple of types to extract, defaults to all objects [default: object]
Returns:
If a single type is given: a list of instances of that type
If a tuple of types is given: a mapping of type to a list of instances
Source code in prefect/utilities/collections.py
def extract_instances(\n objects: Iterable,\n types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n \"\"\"\n Extract objects from a file and returns a dict of type -> instances\n\n Args:\n objects: An iterable of objects\n types: A type or tuple of types to extract, defaults to all objects\n\n Returns:\n If a single type is given: a list of instances of that type\n If a tuple of types is given: a mapping of type to a list of instances\n \"\"\"\n types = ensure_iterable(types)\n\n # Create a mapping of type -> instance from the exec values\n ret = defaultdict(list)\n\n for o in objects:\n # We iterate here so that the key is the passed type rather than type(o)\n for type_ in types:\n if isinstance(o, type_):\n ret[type_].append(o)\n\n if len(types) == 1:\n return ret[types[0]]\n\n return ret\n
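A sketch of both return shapes:
```python
from prefect.utilities.collections import extract_instances

objects = [1, "a", 2.5, "b"]

# A single type yields a flat list of matching instances
assert extract_instances(objects, str) == ["a", "b"]

# A tuple of types yields a mapping of type -> instances
by_type = extract_instances(objects, (int, str))
assert by_type[int] == [1]
assert by_type[str] == ["a", "b"]
```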
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict
","text":"Converts a flattened dictionary back to a nested dictionary.
Parameters:
dct (dict): The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict [required]
Returns:
A nested dict of the same type as dct
Source code in prefect/utilities/collections.py
def flatdict_to_dict(\n dct: Dict[Tuple[KT, ...], VT],\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n \"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n Args:\n dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n as generated by `dict_to_flatdict`\n\n Returns\n A nested dict of the same type as dct\n \"\"\"\n typ = type(dct)\n result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n for key_tuple, value in dct.items():\n current_dict = result\n for prefix_key in key_tuple[:-1]:\n # Build nested dictionaries up for the current key tuple\n # Use `setdefault` in case the nested dict has already been created\n current_dict = current_dict.setdefault(prefix_key, typ()) # type: ignore\n # Set the value\n current_dict[key_tuple[-1]] = value\n\n return result\n
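A sketch showing that this inverts dict_to_flatdict:
```python
from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict

nested = {"a": {"b": 1}, "c": 2}
# Round trip: flatten, then rebuild the nested structure
assert flatdict_to_dict(dict_to_flatdict(nested)) == nested
```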
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict
","text":"Fetch a value from a nested dictionary or list using a sequence of keys.
This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.
Parameters:
dct (Dict): The nested dictionary or list from which to fetch the value. [required]
keys (Union[str, List[str]]): The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets. [required]
default (Any): The default value to return if the requested key path does not exist. Defaults to None.
Returns:
Any: The fetched value if the key exists, or the default value if it does not.
Examples:
>>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')
'default'
Source code in prefect/utilities/collections.py
def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n \"\"\"\n Fetch a value from a nested dictionary or list using a sequence of keys.\n\n This function allows to fetch a value from a deeply nested structure\n of dictionaries and lists using either a dot-separated string or a list\n of keys. If a requested key does not exist, the function returns the\n provided default value.\n\n Args:\n dct: The nested dictionary or list from which to fetch the value.\n keys: The sequence of keys to use for access. Can be a\n dot-separated string or a list of keys. List indices can be included\n in the sequence as either integer keys or as string indices in square\n brackets.\n default: The default value to return if the requested key path does not\n exist. Defaults to None.\n\n Returns:\n The fetched value if the key exists, or the default value if it does not.\n\n Examples:\n >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n 2\n >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n 2\n >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n 'default'\n \"\"\"\n if isinstance(keys, str):\n keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n try:\n for key in keys:\n try:\n # Try to cast to int to handle list indices\n key = int(key)\n except ValueError:\n # If it's not an int, use the key as-is\n # for dict lookup\n pass\n dct = dct[key]\n return dct\n except (TypeError, KeyError, IndexError):\n return default\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable
","text":"Return a boolean indicating if an object is iterable.
Excludes types that are iterable but typically used as singletons:
- str
- bytes
- IO objects
Source code in prefect/utilities/collections.py
def isiterable(obj: Any) -> bool:\n \"\"\"\n Return a boolean indicating if an object is iterable.\n\n Excludes types that are iterable but typically used as singletons:\n - str\n - bytes\n - IO objects\n \"\"\"\n try:\n iter(obj)\n except TypeError:\n return False\n else:\n return not isinstance(obj, (str, bytes, io.IOBase))\n
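A quick sketch of the singleton exclusions:
```python
from prefect.utilities.collections import isiterable

assert isiterable([1, 2, 3])
assert isiterable({"a": 1})
assert not isiterable("strings are excluded")
assert not isiterable(5)
```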
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys
","text":"Recurses a dictionary returns a copy without all keys that match an entry in key_to_remove
. Return obj
unchanged if not a dictionary.
Parameters:
keys_to_remove (List[Hashable]): A list of keys to remove from obj [required]
obj: The object to remove keys from. [required]
Returns:
obj without keys matching an entry in keys_to_remove if obj is a dictionary. obj if obj is not a dictionary.
Source code in prefect/utilities/collections.py
def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n \"\"\"\n Recurses a dictionary returns a copy without all keys that match an entry in\n `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n Args:\n keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n from.\n\n Returns:\n `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n dictionary. `obj` if `obj` is not a dictionary.\n \"\"\"\n if not isinstance(obj, dict):\n return obj\n return {\n key: remove_nested_keys(keys_to_remove, value)\n for key, value in obj.items()\n if key not in keys_to_remove\n }\n
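A sketch of stripping a key at every level of nesting:
```python
from prefect.utilities.collections import remove_nested_keys

config = {"password": "hunter2", "nested": {"password": "x", "keep": 1}}
# Matching keys are removed recursively
assert remove_nested_keys(["password"], config) == {"nested": {"keep": 1}}

# Non-dictionary input is returned unchanged
assert remove_nested_keys(["password"], "not a dict") == "not a dict"
```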
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection
","text":"This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn
will be called with the element. The return value of visit_fn
can be used to alter the element if return_data
is set.
Note that when using return_data
a copy of each collection is created to avoid mutating the original object. This may have significant performance penalties and should only be used if you intend to transform the collection.
Supported types:
- List
- Tuple
- Set
- Dict (note: keys are also visited recursively)
- Dataclass
- Pydantic model
- Prefect annotations
Parameters:
expr (Any): a Python object or expression [required]
visit_fn (Callable[[Any], Awaitable[Any]]): an async function that will be applied to every non-collection element of expr. [required]
return_data (bool): if True, a copy of expr containing data modified by visit_fn will be returned. This is slower than return_data=False (the default). [default: False]
max_depth (int): Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited. [default: -1]
context (Optional[dict]): An optional dictionary. If passed, the context will be sent to each call to the visit_fn. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available "up" a level from a given expression. The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation type. This requires the caller to pass context={} and will not be activated by default. [default: None]
remove_annotations (bool): If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited. [default: False]
Source code in prefect/utilities/collections.py
def visit_collection(\n expr,\n visit_fn: Callable[[Any], Any],\n return_data: bool = False,\n max_depth: int = -1,\n context: Optional[dict] = None,\n remove_annotations: bool = False,\n):\n \"\"\"\n This function visits every element of an arbitrary Python collection. If an element\n is a Python collection, it will be visited recursively. If an element is not a\n collection, `visit_fn` will be called with the element. The return value of\n `visit_fn` can be used to alter the element if `return_data` is set.\n\n Note that when using `return_data` a copy of each collection is created to avoid\n mutating the original object. This may have significant performance penalties and\n should only be used if you intend to transform the collection.\n\n Supported types:\n - List\n - Tuple\n - Set\n - Dict (note: keys are also visited recursively)\n - Dataclass\n - Pydantic model\n - Prefect annotations\n\n Args:\n expr (Any): a Python object or expression\n visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n will be applied to every non-collection element of expr.\n return_data (bool): if `True`, a copy of `expr` containing data modified\n by `visit_fn` will be returned. This is slower than `return_data=False`\n (the default).\n max_depth: Controls the depth of recursive visitation. If set to zero, no\n recursion will occur. If set to a positive integer N, visitation will only\n descend to N layers deep. If set to any negative integer, no limit will be\n enforced and recursion will continue until terminal items are reached. By\n default, recursion is unlimited.\n context: An optional dictionary. If passed, the context will be sent to each\n call to the `visit_fn`. The context can be mutated by each visitor and will\n be available for later visits to expressions at the given depth. Values\n will not be available \"up\" a level from a given expression.\n\n The context will be automatically populated with an 'annotation' key when\n visiting collections within a `BaseAnnotation` type. This requires the\n caller to pass `context={}` and will not be activated by default.\n remove_annotations: If set, annotations will be replaced by their contents. 
By\n default, annotations are preserved but their contents are visited.\n \"\"\"\n\n def visit_nested(expr):\n # Utility for a recursive call, preserving options and updating the depth.\n return visit_collection(\n expr,\n visit_fn=visit_fn,\n return_data=return_data,\n remove_annotations=remove_annotations,\n max_depth=max_depth - 1,\n # Copy the context on nested calls so it does not \"propagate up\"\n context=context.copy() if context is not None else None,\n )\n\n def visit_expression(expr):\n if context is not None:\n return visit_fn(expr, context)\n else:\n return visit_fn(expr)\n\n # Visit every expression\n try:\n result = visit_expression(expr)\n except StopVisiting:\n max_depth = 0\n result = expr\n\n if return_data:\n # Only mutate the expression while returning data, otherwise it could be null\n expr = result\n\n # Then, visit every child of the expression recursively\n\n # If we have reached the maximum depth, do not perform any recursion\n if max_depth == 0:\n return result if return_data else None\n\n # Get the expression type; treat iterators like lists\n typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n typ = cast(type, typ) # mypy treats this as 'object' otherwise and complains\n\n # Then visit every item in the expression if it is a collection\n if isinstance(expr, Mock):\n # Do not attempt to recurse into mock objects\n result = expr\n\n elif isinstance(expr, BaseAnnotation):\n if context is not None:\n context[\"annotation\"] = expr\n value = visit_nested(expr.unwrap())\n\n if remove_annotations:\n result = value if return_data else None\n else:\n result = expr.rewrap(value) if return_data else None\n\n elif typ in (list, tuple, set):\n items = [visit_nested(o) for o in expr]\n result = typ(items) if return_data else None\n\n elif typ in (dict, OrderedDict):\n assert isinstance(expr, (dict, OrderedDict)) # typecheck assertion\n items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n result = typ(items) if return_data else None\n\n elif is_dataclass(expr) and not isinstance(expr, type):\n values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n items = {field.name: value for field, value in zip(fields(expr), values)}\n result = typ(**items) if return_data else None\n\n elif isinstance(expr, pydantic.BaseModel):\n # NOTE: This implementation *does not* traverse private attributes\n # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n # as well to get all of the relevant attributes\n # Check for presence of attrs even if they're in the field set due to pydantic#4916\n model_fields = {\n f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n }\n items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n if return_data:\n # Collect fields with aliases so reconstruction can use the correct field name\n aliases = {\n key: value.alias\n for key, value in expr.__fields__.items()\n if value.has_alias\n }\n\n model_instance = typ(\n **{\n aliases.get(key) or key: value\n for key, value in zip(model_fields, items)\n }\n )\n\n # Private attributes are not included in `__fields_set__` but we do not want\n # to drop them from the model so we restore them after constructing a new\n # model\n for attr in expr.__private_attributes__:\n # Use `object.__setattr__` to avoid errors on immutable models\n object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n # Preserve data about which fields were explicitly set on the original model\n 
object.__setattr__(model_instance, \"__fields_set__\", expr.__fields_set__)\n result = model_instance\n else:\n result = None\n\n else:\n result = result if return_data else None\n\n return result\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/compat/","title":"compat","text":"","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/compat/#prefect.utilities.compat","title":"prefect.utilities.compat
","text":"Utilities for Python version compatibility
","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/context/","title":"context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context","title":"prefect.utilities.context
","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/dispatch/","title":"dispatch","text":"","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch","title":"prefect.utilities.dispatch
","text":"Provides methods for performing dynamic dispatch for actions on base type to one of its subtypes.
Example:
@register_base_type\nclass Base:\n @classmethod\n def __dispatch_key__(cls):\n return cls.__name__.lower()\n\n\nclass Foo(Base):\n ...\n\nkey = get_dispatch_key(Foo) # 'foo'\nlookup_type(Base, key) # Foo\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_dispatch_key","title":"get_dispatch_key
","text":"Retrieve the unique dispatch key for a class type or instance.
This key is defined by the __dispatch_key__ attribute. If it is a callable, it will be resolved.
If allow_missing
is False
, an exception will be raised if the attribute is not defined or the key is null. If True
, None
will be returned in these cases.
prefect/utilities/dispatch.py
def get_dispatch_key(\n cls_or_instance: Any, allow_missing: bool = False\n) -> Optional[str]:\n \"\"\"\n Retrieve the unique dispatch key for a class type or instance.\n\n This key is defined at the `__dispatch_key__` attribute. If it is a callable, it\n will be resolved.\n\n If `allow_missing` is `False`, an exception will be raised if the attribute is not\n defined or the key is null. If `True`, `None` will be returned in these cases.\n \"\"\"\n dispatch_key = getattr(cls_or_instance, \"__dispatch_key__\", None)\n\n type_name = (\n cls_or_instance.__name__\n if isinstance(cls_or_instance, type)\n else type(cls_or_instance).__name__\n )\n\n if dispatch_key is None:\n if allow_missing:\n return None\n raise ValueError(\n f\"Type {type_name!r} does not define a value for \"\n \"'__dispatch_key__' which is required for registry lookup.\"\n )\n\n if callable(dispatch_key):\n dispatch_key = dispatch_key()\n\n if allow_missing and dispatch_key is None:\n return None\n\n if not isinstance(dispatch_key, str):\n raise TypeError(\n f\"Type {type_name!r} has a '__dispatch_key__' of type \"\n f\"{type(dispatch_key).__name__} but a type of 'str' is required.\"\n )\n\n return dispatch_key\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_registry_for_type","title":"get_registry_for_type
","text":"Get the first matching registry for a class or any of its base classes.
If not found, None
is returned.
prefect/utilities/dispatch.py
def get_registry_for_type(cls: T) -> Optional[Dict[str, T]]:\n \"\"\"\n Get the first matching registry for a class or any of its base classes.\n\n If not found, `None` is returned.\n \"\"\"\n return next(\n filter(\n lambda registry: registry is not None,\n (_TYPE_REGISTRIES.get(cls) for cls in cls.mro()),\n ),\n None,\n )\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.lookup_type","title":"lookup_type
","text":"Look up a dispatch key in the type registry for the given class.
Source code inprefect/utilities/dispatch.py
def lookup_type(cls: T, dispatch_key: str) -> T:\n \"\"\"\n Look up a dispatch key in the type registry for the given class.\n \"\"\"\n # Get the first matching registry for the class or one of its bases\n registry = get_registry_for_type(cls)\n\n # Look up this type in the registry\n subcls = registry.get(dispatch_key)\n\n if subcls is None:\n raise KeyError(\n f\"No class found for dispatch key {dispatch_key!r} in registry for type \"\n f\"{cls.__name__!r}.\"\n )\n\n return subcls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_base_type","title":"register_base_type
","text":"Register a base type allowing child types to be registered for dispatch with register_type
.
The base class may or may not define a __dispatch_key__
to allow lookups of the base type.
prefect/utilities/dispatch.py
def register_base_type(cls: T) -> T:\n \"\"\"\n Register a base type allowing child types to be registered for dispatch with\n `register_type`.\n\n The base class may or may not define a `__dispatch_key__` to allow lookups of the\n base type.\n \"\"\"\n registry = _TYPE_REGISTRIES.setdefault(cls, {})\n base_key = get_dispatch_key(cls, allow_missing=True)\n if base_key is not None:\n registry[base_key] = cls\n\n # Add automatic subtype registration\n cls.__init_subclass_original__ = getattr(cls, \"__init_subclass__\")\n cls.__init_subclass__ = _register_subclass_of_base_type\n\n return cls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_type","title":"register_type
","text":"Register a type for lookup with dispatch.
The type or one of its parents must define a unique __dispatch_key__
.
One of the class's base types must be registered using register_base_type
.
prefect/utilities/dispatch.py
def register_type(cls: T) -> T:\n \"\"\"\n Register a type for lookup with dispatch.\n\n The type or one of its parents must define a unique `__dispatch_key__`.\n\n One of the classes base types must be registered using `register_base_type`.\n \"\"\"\n # Lookup the registry for this type\n registry = get_registry_for_type(cls)\n\n # Check if a base type is registered\n if registry is None:\n # Include a description of registered base types\n known = \", \".join(repr(base.__name__) for base in _TYPE_REGISTRIES)\n known_message = (\n f\" Did you mean to inherit from one of the following known types: {known}.\"\n if known\n else \"\"\n )\n\n # And a list of all base types for the type they tried to register\n bases = \", \".join(\n repr(base.__name__) for base in cls.mro() if base not in (object, cls)\n )\n\n raise ValueError(\n f\"No registry found for type {cls.__name__!r} with bases {bases}.\"\n + known_message\n )\n\n key = get_dispatch_key(cls)\n existing_value = registry.get(key)\n if existing_value is not None and id(existing_value) != id(cls):\n # Get line numbers for debugging\n file = inspect.getsourcefile(cls)\n line_number = inspect.getsourcelines(cls)[1]\n existing_file = inspect.getsourcefile(existing_value)\n existing_line_number = inspect.getsourcelines(existing_value)[1]\n warnings.warn(\n f\"Type {cls.__name__!r} at {file}:{line_number} has key {key!r} that \"\n f\"matches existing registered type {existing_value.__name__!r} from \"\n f\"{existing_file}:{existing_line_number}. The existing type will be \"\n \"overridden.\"\n )\n\n # Add to the registry\n registry[key] = cls\n\n return cls\n
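For illustration, a minimal end-to-end sketch (the Serializer classes are hypothetical, not part of Prefect):
from prefect.utilities.dispatch import lookup_type, register_base_type, register_type\n\n\n@register_base_type\nclass Serializer:\n    @classmethod\n    def __dispatch_key__(cls):\n        # Each subtype announces a unique key\n        return cls.__name__.lower()\n\n\n@register_type\nclass JSONSerializer(Serializer):\n    ...\n\n\nassert lookup_type(Serializer, \"jsonserializer\") is JSONSerializer\n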
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dockerutils/","title":"dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils","title":"prefect.utilities.dockerutils
","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.BuildError","title":"BuildError
","text":" Bases: Exception
Raised when a Docker build fails
Source code inprefect/utilities/dockerutils.py
class BuildError(Exception):\n \"\"\"Raised when a Docker build fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder","title":"ImageBuilder
","text":"An interface for preparing Docker build contexts and building images
Source code inprefect/utilities/dockerutils.py
class ImageBuilder:\n \"\"\"An interface for preparing Docker build contexts and building images\"\"\"\n\n base_directory: Path\n context: Optional[Path]\n platform: Optional[str]\n dockerfile_lines: List[str]\n\n def __init__(\n self,\n base_image: str,\n base_directory: Path = None,\n platform: str = None,\n context: Path = None,\n ):\n \"\"\"Create an ImageBuilder\n\n Args:\n base_image: the base image to use\n base_directory: the starting point on your host for relative file locations,\n defaulting to the current directory\n context: use this path as the build context (if not provided, will create a\n temporary directory for the context)\n\n Returns:\n The image ID\n \"\"\"\n self.base_directory = base_directory or context or Path().absolute()\n self.temporary_directory = None\n self.context = context\n self.platform = platform\n self.dockerfile_lines = []\n\n if self.context:\n dockerfile_path: Path = self.context / \"Dockerfile\"\n if dockerfile_path.exists():\n raise ValueError(f\"There is already a Dockerfile at {context}\")\n\n self.add_line(f\"FROM {base_image}\")\n\n def __enter__(self) -> Self:\n if self.context and not self.temporary_directory:\n return self\n\n self.temporary_directory = TemporaryDirectory()\n self.context = Path(self.temporary_directory.__enter__())\n return self\n\n def __exit__(\n self, exc: Type[BaseException], value: BaseException, traceback: TracebackType\n ) -> None:\n if not self.temporary_directory:\n return\n\n self.temporary_directory.__exit__(exc, value, traceback)\n self.temporary_directory = None\n self.context = None\n\n def add_line(self, line: str) -> None:\n \"\"\"Add a line to this image's Dockerfile\"\"\"\n self.add_lines([line])\n\n def add_lines(self, lines: Iterable[str]) -> None:\n \"\"\"Add lines to this image's Dockerfile\"\"\"\n self.dockerfile_lines.extend(lines)\n\n def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n \"\"\"Copy a file to this image\"\"\"\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n if not isinstance(source, Path):\n source = Path(source)\n\n if source.is_absolute():\n source = source.resolve().relative_to(self.base_directory)\n\n if self.temporary_directory:\n os.makedirs(self.context / source.parent, exist_ok=True)\n\n if source.is_dir():\n shutil.copytree(self.base_directory / source, self.context / source)\n else:\n shutil.copy2(self.base_directory / source, self.context / source)\n\n self.add_line(f\"COPY {source} {destination}\")\n\n def write_text(self, text: str, destination: Union[str, PurePosixPath]):\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n source_hash = hashlib.sha256(text.encode()).hexdigest()\n (self.context / f\".{source_hash}\").write_text(text)\n self.add_line(f\"COPY .{source_hash} {destination}\")\n\n def build(\n self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n ) -> str:\n \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n Args:\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n that will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n dockerfile_path: Path = self.context / \"Dockerfile\"\n\n with dockerfile_path.open(\"w\") as dockerfile:\n 
            dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n        try:\n            return build_image(\n                self.context,\n                platform=self.platform,\n                pull=pull,\n                stream_progress_to=stream_progress_to,\n            )\n        finally:\n            os.unlink(dockerfile_path)\n\n    def assert_has_line(self, line: str) -> None:\n        \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n        all_lines = \"\\n\".join(\n            [f\"  {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n        )\n        message = (\n            f\"Expected {line!r} not found in Dockerfile. Dockerfile:\\n{all_lines}\"\n        )\n        assert line in self.dockerfile_lines, message\n\n    def assert_line_absent(self, line: str) -> None:\n        \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n        if line not in self.dockerfile_lines:\n            return\n\n        i = self.dockerfile_lines.index(line)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n            ]\n        )\n        message = (\n            f\"Unexpected {line!r} found in Dockerfile at line {i+1}. \"\n            f\"Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert line not in self.dockerfile_lines, message\n\n    def assert_line_before(self, first: str, second: str) -> None:\n        \"\"\"Asserts that the first line appears before the second line\"\"\"\n        self.assert_has_line(first)\n        self.assert_has_line(second)\n\n        first_index = self.dockerfile_lines.index(first)\n        second_index = self.dockerfile_lines.index(second)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(\n                    self.dockerfile_lines[second_index - 2 : first_index + 2]\n                )\n            ]\n        )\n\n        message = (\n            f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n            f\"{first!r} was at line {first_index+1} and {second!r} was at line \"\n            f\"{second_index+1}. Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert first_index < second_index, message\n\n    def assert_line_after(self, second: str, first: str) -> None:\n        \"\"\"Asserts that the second line appears after the first line\"\"\"\n        self.assert_line_before(first, second)\n\n    def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n        \"\"\"Asserts that the given file or directory will be copied into the container\n        at the given path\"\"\"\n        if source.is_absolute():\n            source = source.relative_to(self.base_directory)\n\n        self.assert_has_line(f\"COPY {source} {container_path}\")\n
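A minimal usage sketch (assumes a running Docker daemon and a local flow.py; the file and image names are illustrative):
import sys\n\nfrom prefect.utilities.dockerutils import ImageBuilder\n\nwith ImageBuilder(\"python:3.10-slim\") as builder:\n    builder.add_line(\"RUN pip install prefect\")\n    builder.copy(\"flow.py\", \"/opt/flow.py\")  # source is relative to base_directory\n    image_id = builder.build(stream_progress_to=sys.stdout)\nprint(image_id)\n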
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_line","title":"add_line
","text":"Add a line to this image's Dockerfile
Source code inprefect/utilities/dockerutils.py
def add_line(self, line: str) -> None:\n \"\"\"Add a line to this image's Dockerfile\"\"\"\n self.add_lines([line])\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_lines","title":"add_lines
","text":"Add lines to this image's Dockerfile
Source code inprefect/utilities/dockerutils.py
def add_lines(self, lines: Iterable[str]) -> None:\n \"\"\"Add lines to this image's Dockerfile\"\"\"\n self.dockerfile_lines.extend(lines)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_file","title":"assert_has_file
","text":"Asserts that the given file or directory will be copied into the container at the given path
Source code inprefect/utilities/dockerutils.py
def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n \"\"\"Asserts that the given file or directory will be copied into the container\n at the given path\"\"\"\n if source.is_absolute():\n source = source.relative_to(self.base_directory)\n\n self.assert_has_line(f\"COPY {source} {container_path}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_line","title":"assert_has_line
","text":"Asserts that the given line is in the Dockerfile
Source code inprefect/utilities/dockerutils.py
def assert_has_line(self, line: str) -> None:\n \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n all_lines = \"\\n\".join(\n [f\" {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n )\n message = (\n f\"Expected {line!r} not found in Dockerfile. Dockerfile:\\n{all_lines}\"\n )\n assert line in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_absent","title":"assert_line_absent
","text":"Asserts that the given line is absent from the Dockerfile
Source code inprefect/utilities/dockerutils.py
def assert_line_absent(self, line: str) -> None:\n \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n if line not in self.dockerfile_lines:\n return\n\n i = self.dockerfile_lines.index(line)\n\n surrounding_lines = \"\\n\".join(\n [\n f\" {i+1:>3}: {line}\"\n for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n ]\n )\n message = (\n f\"Unexpected {line!r} found in Dockerfile at line {i+1}. \"\n f\"Surrounding lines:\\n{surrounding_lines}\"\n )\n\n assert line not in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_after","title":"assert_line_after
","text":"Asserts that the second line appears after the first line
Source code inprefect/utilities/dockerutils.py
def assert_line_after(self, second: str, first: str) -> None:\n \"\"\"Asserts that the second line appears after the first line\"\"\"\n self.assert_line_before(first, second)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_before","title":"assert_line_before
","text":"Asserts that the first line appears before the second line
Source code inprefect/utilities/dockerutils.py
def assert_line_before(self, first: str, second: str) -> None:\n    \"\"\"Asserts that the first line appears before the second line\"\"\"\n    self.assert_has_line(first)\n    self.assert_has_line(second)\n\n    first_index = self.dockerfile_lines.index(first)\n    second_index = self.dockerfile_lines.index(second)\n\n    surrounding_lines = \"\\n\".join(\n        [\n            f\"  {i+1:>3}: {line}\"\n            for i, line in enumerate(\n                self.dockerfile_lines[second_index - 2 : first_index + 2]\n            )\n        ]\n    )\n\n    message = (\n        f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n        f\"{first!r} was at line {first_index+1} and {second!r} was at line \"\n        f\"{second_index+1}. Surrounding lines:\\n{surrounding_lines}\"\n    )\n\n    assert first_index < second_index, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.build","title":"build
","text":"Build the Docker image from the current state of the ImageBuilder
Parameters:
Name Type Description Defaultpull
bool
True to pull the base image during the build
False
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Descriptionstr
The image ID
Source code inprefect/utilities/dockerutils.py
def build(\n self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n) -> str:\n \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n Args:\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n that will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n dockerfile_path: Path = self.context / \"Dockerfile\"\n\n with dockerfile_path.open(\"w\") as dockerfile:\n dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n try:\n return build_image(\n self.context,\n platform=self.platform,\n pull=pull,\n stream_progress_to=stream_progress_to,\n )\n finally:\n os.unlink(dockerfile_path)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.copy","title":"copy
","text":"Copy a file to this image
Source code inprefect/utilities/dockerutils.py
def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n \"\"\"Copy a file to this image\"\"\"\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n if not isinstance(source, Path):\n source = Path(source)\n\n if source.is_absolute():\n source = source.resolve().relative_to(self.base_directory)\n\n if self.temporary_directory:\n os.makedirs(self.context / source.parent, exist_ok=True)\n\n if source.is_dir():\n shutil.copytree(self.base_directory / source, self.context / source)\n else:\n shutil.copy2(self.base_directory / source, self.context / source)\n\n self.add_line(f\"COPY {source} {destination}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.PushError","title":"PushError
","text":" Bases: Exception
Raised when a Docker image push fails
Source code inprefect/utilities/dockerutils.py
class PushError(Exception):\n \"\"\"Raised when a Docker image push fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.build_image","title":"build_image
","text":"Builds a Docker image, returning the image ID
Parameters:
Name Type Description Defaultcontext
Path
the root directory for the Docker build context
requireddockerfile
str
the path to the Dockerfile, relative to the context
'Dockerfile'
tag
Optional[str]
the tag to give this image
None
pull
bool
True to pull the base image during the build
False
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Descriptionstr
The image ID
Source code inprefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef build_image(\n context: Path,\n dockerfile: str = \"Dockerfile\",\n tag: Optional[str] = None,\n pull: bool = False,\n platform: str = None,\n stream_progress_to: Optional[TextIO] = None,\n **kwargs,\n) -> str:\n \"\"\"Builds a Docker image, returning the image ID\n\n Args:\n context: the root directory for the Docker build context\n dockerfile: the path to the Dockerfile, relative to the context\n tag: the tag to give this image\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n\n if not context:\n raise ValueError(\"context required to build an image\")\n\n if not Path(context).exists():\n raise ValueError(f\"Context path {context} does not exist\")\n\n kwargs = {key: kwargs[key] for key in kwargs if key not in [\"decode\", \"labels\"]}\n\n image_id = None\n with docker_client() as client:\n events = client.api.build(\n path=str(context),\n tag=tag,\n dockerfile=dockerfile,\n pull=pull,\n decode=True,\n labels=IMAGE_LABELS,\n platform=platform,\n **kwargs,\n )\n\n try:\n for event in events:\n if \"stream\" in event:\n if not stream_progress_to:\n continue\n stream_progress_to.write(event[\"stream\"])\n stream_progress_to.flush()\n elif \"aux\" in event:\n image_id = event[\"aux\"][\"ID\"]\n elif \"error\" in event:\n raise BuildError(event[\"error\"])\n elif \"message\" in event:\n raise BuildError(event[\"message\"])\n except docker.errors.APIError as e:\n raise BuildError(e.explanation) from e\n\n assert image_id, \"The Docker daemon did not return an image ID\"\n return image_id\n
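A minimal usage sketch (assumes ./build-context/ contains a Dockerfile and that the Docker daemon is running; the tag is illustrative):
import sys\nfrom pathlib import Path\n\nfrom prefect.utilities.dockerutils import build_image\n\nimage_id = build_image(\n    Path(\"build-context\"),\n    tag=\"my-org/my-image:dev\",\n    stream_progress_to=sys.stdout,\n)\n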
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.docker_client","title":"docker_client
","text":"Get the environmentally-configured Docker client
Source code inprefect/utilities/dockerutils.py
@contextmanager\ndef docker_client() -> Generator[\"DockerClient\", None, None]:\n \"\"\"Get the environmentally-configured Docker client\"\"\"\n client = None\n try:\n with silence_docker_warnings():\n client = docker.DockerClient.from_env()\n\n yield client\n except docker.errors.DockerException as exc:\n raise RuntimeError(\n \"This error is often thrown because Docker is not running. Please ensure Docker is running.\"\n ) from exc\n finally:\n client is not None and client.close()\n
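For example (assumes Docker is running):
from prefect.utilities.dockerutils import docker_client\n\n# Raises a RuntimeError with a helpful message if Docker is not running\nwith docker_client() as client:\n    print(client.version()[\"Version\"])\n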
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.format_outlier_version_name","title":"format_outlier_version_name
","text":"Formats outlier docker version names to pass packaging.version.parse
validation. Current cases are simple, but this creates a stub for more complicated formatting if it is eventually needed. Example outlier versions that throw a parsing exception: \"20.10.0-ce\" (a community edition label variant) and \"20.10.0-ee\" (an enterprise edition label variant).
Parameters:
Name Type Description Defaultversion
str
raw docker version value
requiredReturns:
Name Type Descriptionstr
value that can pass packaging.version.parse
validation
prefect/utilities/dockerutils.py
def format_outlier_version_name(version: str):\n \"\"\"\n Formats outlier docker version names to pass `packaging.version.parse` validation\n - Current cases are simple, but creates stub for more complicated formatting if eventually needed.\n - Example outlier versions that throw a parsing exception:\n - \"20.10.0-ce\" (variant of community edition label)\n - \"20.10.0-ee\" (variant of enterprise edition label)\n\n Args:\n version (str): raw docker version value\n\n Returns:\n str: value that can pass `packaging.version.parse` validation\n \"\"\"\n return version.replace(\"-ce\", \"\").replace(\"-ee\", \"\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.generate_default_dockerfile","title":"generate_default_dockerfile
","text":"Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits.
Parameters:
Name Type Description Default-
context
The context to use for the Dockerfile. Defaults to the current working directory.
required Source code inprefect/utilities/dockerutils.py
@contextmanager\ndef generate_default_dockerfile(context: Optional[Path] = None):\n \"\"\"\n Generates a default Dockerfile used for deploying flows. The Dockerfile is written\n to a temporary file and yielded. The temporary file is removed after the context\n manager exits.\n\n Args:\n - context: The context to use for the Dockerfile. Defaults to\n the current working directory.\n \"\"\"\n if not context:\n context = Path.cwd()\n lines = []\n base_image = get_prefect_image_name()\n lines.append(f\"FROM {base_image}\")\n dir_name = context.name\n\n if (context / \"requirements.txt\").exists():\n lines.append(f\"COPY requirements.txt /opt/prefect/{dir_name}/requirements.txt\")\n lines.append(\n f\"RUN python -m pip install -r /opt/prefect/{dir_name}/requirements.txt\"\n )\n\n lines.append(f\"COPY . /opt/prefect/{dir_name}/\")\n lines.append(f\"WORKDIR /opt/prefect/{dir_name}/\")\n\n temp_dockerfile = context / \"Dockerfile\"\n if Path(temp_dockerfile).exists():\n raise RuntimeError(\n \"Failed to generate Dockerfile. Dockerfile already exists in the\"\n \" current directory.\"\n )\n\n with Path(temp_dockerfile).open(\"w\") as f:\n f.writelines(line + \"\\n\" for line in lines)\n\n try:\n yield temp_dockerfile\n finally:\n temp_dockerfile.unlink()\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.get_prefect_image_name","title":"get_prefect_image_name
","text":"Get the Prefect image name matching the current Prefect and Python versions.
Parameters:
Name Type Description Defaultprefect_version
str
An optional override for the Prefect version.
None
python_version
str
An optional override for the Python version; must be at the minor level e.g. '3.9'.
None
flavor
str
An optional alternative image flavor to build, like 'conda'
None
Source code in prefect/utilities/dockerutils.py
def get_prefect_image_name(\n prefect_version: str = None, python_version: str = None, flavor: str = None\n) -> str:\n \"\"\"\n Get the Prefect image name matching the current Prefect and Python versions.\n\n Args:\n prefect_version: An optional override for the Prefect version.\n python_version: An optional override for the Python version; must be at the\n minor level e.g. '3.9'.\n flavor: An optional alternative image flavor to build, like 'conda'\n \"\"\"\n parsed_version = (prefect_version or prefect.__version__).split(\"+\")\n is_prod_build = len(parsed_version) == 1\n prefect_version = (\n parsed_version[0]\n if is_prod_build\n else \"sha-\" + prefect.__version_info__[\"full-revisionid\"][:7]\n )\n\n python_version = python_version or python_version_minor()\n\n tag = slugify(\n f\"{prefect_version}-python{python_version}\" + (f\"-{flavor}\" if flavor else \"\"),\n lowercase=False,\n max_length=128,\n # Docker allows these characters for tag names\n regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n )\n\n image = \"prefect\" if is_prod_build else \"prefect-dev\"\n return f\"prefecthq/{image}:{tag}\"\n
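For example (the exact tags depend on the installed Prefect and Python versions; the values shown are illustrative):
from prefect.utilities.dockerutils import get_prefect_image_name\n\nget_prefect_image_name()                       # e.g. 'prefecthq/prefect:2.14.3-python3.10'\nget_prefect_image_name(python_version=\"3.11\")  # pin the Python minor version\nget_prefect_image_name(flavor=\"conda\")         # e.g. 'prefecthq/prefect:2.14.3-python3.10-conda'\n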
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.parse_image_tag","title":"parse_image_tag
","text":"Parse Docker Image String
Parameters:
Name Type Description Defaultname
str
Name of Docker Image
required Return Source code inprefect/utilities/dockerutils.py
def parse_image_tag(name: str) -> Tuple[str, Optional[str]]:\n    \"\"\"\n    Parse Docker Image String\n\n    - If a tag exists, this function parses and returns the image registry and tag,\n      separately as a tuple.\n      - Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')\n      - Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')\n    - Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0\n      - Image building tools typically enforce this standard\n\n    Args:\n        name (str): Name of Docker Image\n\n    Return:\n        tuple: image registry, image tag\n    \"\"\"\n    tag = None\n    name_parts = name.split(\"/\")\n    # First handles the simplest image names (DockerHub-based, index-free, potentially with a tag)\n    # - Example: simplename:latest\n    if len(name_parts) == 1:\n        if \":\" in name_parts[0]:\n            image_name, tag = name_parts[0].split(\":\")\n        else:\n            image_name = name_parts[0]\n    else:\n        # 1. Separates index (hostname.io or prefecthq) from path:tag (folder/subfolder:latest or prefect:latest)\n        # 2. Separates path and tag (if tag exists)\n        # 3. Reunites index and path (without tag) as image name\n        index_name = name_parts[0]\n        image_path = \"/\".join(name_parts[1:])\n        if \":\" in image_path:\n            image_path, tag = image_path.split(\":\")\n        image_name = f\"{index_name}/{image_path}\"\n    return image_name, tag\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.push_image","title":"push_image
","text":"Pushes a local image to a Docker registry, returning the registry-qualified tag for that image
This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate.
Parameters:
Name Type Description Defaultimage_id
str
a Docker image ID
requiredregistry_url
str
the URL of a Docker registry
requiredname
str
the name of this image
requiredtag
str
the tag to give this image (defaults to a short representation of the image's ID)
None
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Descriptionstr
A registry-qualified tag, like my-registry.example.com/my-image:abcdefg
Source code inprefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef push_image(\n image_id: str,\n registry_url: str,\n name: str,\n tag: Optional[str] = None,\n stream_progress_to: Optional[TextIO] = None,\n) -> str:\n \"\"\"Pushes a local image to a Docker registry, returning the registry-qualified tag\n for that image\n\n This assumes that the environment's Docker daemon is already authenticated to the\n given registry, and currently makes no attempt to authenticate.\n\n Args:\n image_id (str): a Docker image ID\n registry_url (str): the URL of a Docker registry\n name (str): the name of this image\n tag (str): the tag to give this image (defaults to a short representation of\n the image's ID)\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n will collect the build output as it is reported by Docker\n\n Returns:\n A registry-qualified tag, like my-registry.example.com/my-image:abcdefg\n \"\"\"\n\n if not tag:\n tag = slugify(pendulum.now(\"utc\").isoformat())\n\n _, registry, _, _, _ = urlsplit(registry_url)\n repository = f\"{registry}/{name}\"\n\n with docker_client() as client:\n image: \"docker.Image\" = client.images.get(image_id)\n image.tag(repository, tag=tag)\n events = client.api.push(repository, tag=tag, stream=True, decode=True)\n try:\n for event in events:\n if \"status\" in event:\n if not stream_progress_to:\n continue\n stream_progress_to.write(event[\"status\"])\n if \"progress\" in event:\n stream_progress_to.write(\" \" + event[\"progress\"])\n stream_progress_to.write(\"\\n\")\n stream_progress_to.flush()\n elif \"error\" in event:\n raise PushError(event[\"error\"])\n finally:\n client.api.remove_image(f\"{repository}:{tag}\", noprune=True)\n\n return f\"{repository}:{tag}\"\n
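A minimal usage sketch (the registry URL and image name are illustrative; the daemon must already be logged in to the registry):
import sys\nfrom pathlib import Path\n\nfrom prefect.utilities.dockerutils import build_image, push_image\n\nimage_id = build_image(Path(\"build-context\"))\ntag = push_image(\n    image_id,\n    \"https://registry.example.com\",\n    \"my-team/my-image\",\n    stream_progress_to=sys.stdout,\n)\n# tag -> 'registry.example.com/my-team/my-image:<generated-tag>'\n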
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.split_repository_path","title":"split_repository_path
","text":"Splits a Docker repository path into its namespace and repository components.
Parameters:
Name Type Description Defaultrepository_path
str
The Docker repository path to split.
requiredReturns:
Type DescriptionTuple[Optional[str], str]
Tuple[Optional[str], str]: A tuple containing the namespace and repository components. - namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present. - repository (str): The repository name.
Source code inprefect/utilities/dockerutils.py
def split_repository_path(repository_path: str) -> Tuple[Optional[str], str]:\n    \"\"\"\n    Splits a Docker repository path into its namespace and repository components.\n\n    Args:\n        repository_path: The Docker repository path to split.\n\n    Returns:\n        Tuple[Optional[str], str]: A tuple containing the namespace and repository components.\n            - namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present.\n            - repository (str): The repository name.\n    \"\"\"\n    parts = repository_path.split(\"/\", 2)\n\n    # Check if the path includes a registry and organization or just organization/repository\n    if len(parts) == 3 or (len(parts) == 2 and (\".\" in parts[0] or \":\" in parts[0])):\n        # Namespace includes registry and organization\n        namespace = \"/\".join(parts[:-1])\n        repository = parts[-1]\n    elif len(parts) == 2:\n        # Only organization/repository provided, so namespace is just the first part\n        namespace = parts[0]\n        repository = parts[1]\n    else:\n        # No namespace provided\n        namespace = None\n        repository = parts[0]\n\n    return namespace, repository\n
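For example:
from prefect.utilities.dockerutils import split_repository_path\n\nsplit_repository_path(\"registry.example.com/org/repo\")  # ('registry.example.com/org', 'repo')\nsplit_repository_path(\"org/repo\")                        # ('org', 'repo')\nsplit_repository_path(\"repo\")                            # (None, 'repo')\n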
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.to_run_command","title":"to_run_command
","text":"Convert a process-style list of command arguments to a single Dockerfile RUN instruction.
Source code inprefect/utilities/dockerutils.py
def to_run_command(command: List[str]) -> str:\n \"\"\"\n Convert a process-style list of command arguments to a single Dockerfile RUN\n instruction.\n \"\"\"\n if not command:\n return \"\"\n\n run_command = f\"RUN {command[0]}\"\n if len(command) > 1:\n run_command += \" \" + \" \".join([repr(arg) for arg in command[1:]])\n\n # TODO: Consider performing text-wrapping to improve readability of the generated\n # Dockerfile\n # return textwrap.wrap(\n # run_command,\n # subsequent_indent=\" \" * 4,\n # break_on_hyphens=False,\n # break_long_words=False\n # )\n\n return run_command\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem
","text":"Utilities for working with file systems
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file
","text":"Creates default ignore file in the provided path if one does not already exist; returns boolean specifying whether a file was created.
Source code inprefect/utilities/filesystem.py
def create_default_ignore_file(path: str) -> bool:\n \"\"\"\n Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n whether a file was created.\n \"\"\"\n path = pathlib.Path(path)\n ignore_file = path / \".prefectignore\"\n if ignore_file.exists():\n return False\n default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n with ignore_file.open(mode=\"w\") as f:\n f.write(default_file.read_text())\n return True\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename
","text":"Extract the file name from a path with remote file system support
Source code inprefect/utilities/filesystem.py
def filename(path: str) -> str:\n \"\"\"Extract the file name from a path with remote file system support\"\"\"\n try:\n of: OpenFile = fsspec.open(path)\n sep = of.fs.sep\n except (ImportError, AttributeError):\n sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n return path.split(sep)[-1]\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files
","text":"This function accepts a root directory path and a list of file patterns to ignore, and returns a list of files that excludes those that should be ignored.
The specification matches that of .gitignore files.
Source code inprefect/utilities/filesystem.py
def filter_files(\n    root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n    \"\"\"\n    This function accepts a root directory path and a list of file patterns to ignore, and returns\n    a set of files that excludes those that should be ignored.\n\n    The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n    \"\"\"\n    if ignore_patterns is None:\n        ignore_patterns = []\n    spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n    ignored_files = {p.path for p in spec.match_tree_entries(root)}\n    if include_dirs:\n        all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n    else:\n        all_files = set(pathspec.util.iter_tree_files(root))\n    included_files = all_files - ignored_files\n    return included_files\n
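A small usage sketch (the ignore patterns are illustrative):
from prefect.utilities.filesystem import filter_files\n\n# Every path under the current directory except caches and compiled artifacts\nincluded = filter_files(\".\", ignore_patterns=[\"__pycache__/\", \"*.pyc\"])\n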
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit
","text":"Get the maximum number of open files allowed for the current process
Source code inprefect/utilities/filesystem.py
def get_open_file_limit() -> int:\n \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n try:\n if os.name == \"nt\":\n import ctypes\n\n return ctypes.cdll.ucrtbase._getmaxstdio()\n else:\n import resource\n\n soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n return soft_limit\n except Exception:\n # Catch all exceptions, as ctypes can raise several errors\n # depending on what went wrong. Return a safe default if we\n # can't get the limit from the OS.\n return 200\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path
","text":"Check if the given path points to a local or remote file system
Source code inprefect/utilities/filesystem.py
def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n \"\"\"Check if the given path points to a local or remote file system\"\"\"\n if isinstance(path, str):\n try:\n of = fsspec.open(path)\n except ImportError:\n # The path is a remote file system that uses a lib that is not installed\n return False\n elif isinstance(path, pathlib.Path):\n return True\n elif isinstance(path, OpenFile):\n of = path\n else:\n raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n return type(of.fs) == LocalFileSystem\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform
","text":"Converts a relative path generated on any platform to a relative path for the current platform.
Source code inprefect/utilities/filesystem.py
def relative_path_to_current_platform(path_str: str) -> Path:\n \"\"\"\n Converts a relative path generated on any platform to a relative path for the\n current platform.\n \"\"\"\n\n return Path(PureWindowsPath(path_str).as_posix())\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir
","text":"Change current-working directories for the duration of the context
Source code inprefect/utilities/filesystem.py
@contextmanager\ndef tmpchdir(path: str):\n \"\"\"\n Change current-working directories for the duration of the context\n \"\"\"\n path = os.path.abspath(path)\n if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n path = os.path.dirname(path)\n\n owd = os.getcwd()\n\n with chdir_lock:\n try:\n os.chdir(path)\n yield path\n finally:\n os.chdir(owd)\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path
","text":"Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.
Source code inprefect/utilities/filesystem.py
def to_display_path(\n path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n \"\"\"\n Convert a path to a displayable path. The absolute path or relative path to the\n current (or given) directory will be returned, whichever is shorter.\n \"\"\"\n path, relative_to = (\n pathlib.Path(path).resolve(),\n pathlib.Path(relative_to or \".\").resolve(),\n )\n relative_path = str(path.relative_to(relative_to))\n absolute_path = str(path)\n return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing
","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash
","text":"Given a path to a file, produces a stable hash of the file contents.
Parameters:
Name Type Description Defaultpath
str
the path to a file
requiredhash_algo
Hash algorithm from hashlib to use.
_md5
Returns:
Name Type Descriptionstr
str
a hash of the file contents
Source code inprefect/utilities/hashing.py
def file_hash(path: str, hash_algo=_md5) -> str:\n \"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n Args:\n path (str): the path to a file\n hash_algo: Hash algorithm from hashlib to use.\n\n Returns:\n str: a hash of the file contents\n \"\"\"\n contents = Path(path).read_bytes()\n return stable_hash(contents, hash_algo=hash_algo)\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects
","text":"Attempt to hash objects by dumping to JSON or serializing with cloudpickle. On failure of both, None
will be returned
prefect/utilities/hashing.py
def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n \"\"\"\n Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n On failure of both, `None` will be returned\n \"\"\"\n try:\n serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n except Exception:\n pass\n\n try:\n return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n except Exception:\n pass\n\n return None\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash
","text":"Given some arguments, produces a stable 64-bit hash of their contents.
Supports bytes and strings. Strings will be UTF-8 encoded.
Parameters:
Name Type Description Default*args
Union[str, bytes]
Items to include in the hash.
()
hash_algo
Hash algorithm from hashlib to use.
_md5
Returns:
Type Descriptionstr
A hex hash.
Source code inprefect/utilities/hashing.py
def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n    \"\"\"Given some arguments, produces a stable hash of their contents.\n\n    Supports bytes and strings. Strings will be UTF-8 encoded.\n\n    Args:\n        *args: Items to include in the hash.\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        A hex hash.\n    \"\"\"\n    h = hash_algo()\n    for a in args:\n        if isinstance(a, str):\n            a = a.encode()\n        h.update(a)\n    return h.hexdigest()\n
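For example:
from prefect.utilities.hashing import hash_objects, stable_hash\n\nstable_hash(\"a\", \"b\")              # deterministic hex digest, stable across runs\nhash_objects({\"x\": 1}, retries=3)  # hashes args and kwargs; None if unhashable\n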
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/importtools/","title":"importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools","title":"prefect.utilities.importtools
","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleDefinition","title":"AliasedModuleDefinition
","text":" Bases: NamedTuple
A definition for the AliasedModuleFinder
.
Parameters:
Name Type Description Defaultalias
The import name to create
requiredreal
The import name of the module to reference for the alias
requiredcallback
A function to call when the alias module is loaded
required Source code inprefect/utilities/importtools.py
class AliasedModuleDefinition(NamedTuple):\n \"\"\"\n A definition for the `AliasedModuleFinder`.\n\n Args:\n alias: The import name to create\n real: The import name of the module to reference for the alias\n callback: A function to call when the alias module is loaded\n \"\"\"\n\n alias: str\n real: str\n callback: Optional[Callable[[str], None]]\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder","title":"AliasedModuleFinder
","text":" Bases: MetaPathFinder
prefect/utilities/importtools.py
class AliasedModuleFinder(MetaPathFinder):\n def __init__(self, aliases: Iterable[AliasedModuleDefinition]):\n \"\"\"\n See `AliasedModuleDefinition` for alias specification.\n\n Aliases apply to all modules nested within an alias.\n \"\"\"\n self.aliases = aliases\n\n def find_spec(\n self,\n fullname: str,\n path=None,\n target=None,\n ) -> Optional[ModuleSpec]:\n \"\"\"\n The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n create a new spec for \"phi.bar\" that points to \"foo.bar\".\n \"\"\"\n for alias, real, callback in self.aliases:\n if fullname.startswith(alias):\n # Retrieve the spec of the real module\n real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n # Create a new spec for the alias\n return ModuleSpec(\n fullname,\n AliasedModuleLoader(fullname, callback, real_spec),\n origin=real_spec.origin,\n is_package=real_spec.submodule_search_locations is not None,\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder.find_spec","title":"find_spec
","text":"The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\" for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and create a new spec for \"phi.bar\" that points to \"foo.bar\".
Source code inprefect/utilities/importtools.py
def find_spec(\n self,\n fullname: str,\n path=None,\n target=None,\n) -> Optional[ModuleSpec]:\n \"\"\"\n The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n create a new spec for \"phi.bar\" that points to \"foo.bar\".\n \"\"\"\n for alias, real, callback in self.aliases:\n if fullname.startswith(alias):\n # Retrieve the spec of the real module\n real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n # Create a new spec for the alias\n return ModuleSpec(\n fullname,\n AliasedModuleLoader(fullname, callback, real_spec),\n origin=real_spec.origin,\n is_package=real_spec.submodule_search_locations is not None,\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.DelayedImportErrorModule","title":"DelayedImportErrorModule
","text":" Bases: ModuleType
A fake module returned by lazy_import
when the module cannot be found. When any of the module's attributes are accessed, we will throw a ModuleNotFoundError
.
Adapted from lazy_loader
Source code inprefect/utilities/importtools.py
class DelayedImportErrorModule(ModuleType):\n \"\"\"\n A fake module returned by `lazy_import` when the module cannot be found. When any\n of the module's attributes are accessed, we will throw a `ModuleNotFoundError`.\n\n Adapted from [lazy_loader][1]\n\n [1]: https://github.com/scientific-python/lazy_loader\n \"\"\"\n\n def __init__(self, frame_data, help_message, *args, **kwargs):\n self.__frame_data = frame_data\n self.__help_message = (\n help_message or \"Import errors for this module are only reported when used.\"\n )\n super().__init__(*args, **kwargs)\n\n def __getattr__(self, attr):\n if attr in (\"__class__\", \"__file__\", \"__frame_data\", \"__help_message\"):\n super().__getattr__(attr)\n else:\n fd = self.__frame_data\n raise ModuleNotFoundError(\n f\"No module named '{fd['spec']}'\\n\\nThis module was originally imported\"\n f\" at:\\n File \\\"{fd['filename']}\\\", line {fd['lineno']}, in\"\n f\" {fd['function']}\\n\\n {''.join(fd['code_context']).strip()}\\n\"\n + self.__help_message\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.from_qualified_name","title":"from_qualified_name
","text":"Import an object given a fully-qualified name.
Parameters:
Name Type Description Defaultname
str
The fully-qualified name of the object to import.
requiredReturns:
Type DescriptionAny
the imported object
Examples:
>>> obj = from_qualified_name(\"random.randint\")\n>>> import random\n>>> obj == random.randint\nTrue\n
Source code in prefect/utilities/importtools.py
def from_qualified_name(name: str) -> Any:\n \"\"\"\n Import an object given a fully-qualified name.\n\n Args:\n name: The fully-qualified name of the object to import.\n\n Returns:\n the imported object\n\n Examples:\n >>> obj = from_qualified_name(\"random.randint\")\n >>> import random\n >>> obj == random.randint\n True\n \"\"\"\n # Try importing it first so we support \"module\" or \"module.sub_module\"\n try:\n module = importlib.import_module(name)\n return module\n except ImportError:\n # If no subitem was included raise the import error\n if \".\" not in name:\n raise\n\n # Otherwise, we'll try to load it as an attribute of a module\n mod_name, attr_name = name.rsplit(\".\", 1)\n module = importlib.import_module(mod_name)\n return getattr(module, attr_name)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.import_object","title":"import_object
","text":"Load an object from an import path.
Import paths can be formatted as one of: - module.object - module:object - /path/to/script.py:object
This function is not thread safe as it modifies the 'sys' module during execution.
Source code inprefect/utilities/importtools.py
def import_object(import_path: str):\n \"\"\"\n Load an object from an import path.\n\n Import paths can be formatted as one of:\n - module.object\n - module:object\n - /path/to/script.py:object\n\n This function is not thread safe as it modifies the 'sys' module during execution.\n \"\"\"\n if \".py:\" in import_path:\n script_path, object_name = import_path.rsplit(\":\", 1)\n module = load_script_as_module(script_path)\n else:\n if \":\" in import_path:\n module_name, object_name = import_path.rsplit(\":\", 1)\n elif \".\" in import_path:\n module_name, object_name = import_path.rsplit(\".\", 1)\n else:\n raise ValueError(\n f\"Invalid format for object import. Received {import_path!r}.\"\n )\n\n module = load_module(module_name)\n\n return getattr(module, object_name)\n
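For example, these are equivalent ways to load the same attribute (the script path is illustrative):
from prefect.utilities.importtools import import_object\n\nsqrt = import_object(\"math.sqrt\")  # module.object\nsqrt = import_object(\"math:sqrt\")  # module:object\n# my_flow = import_object(\"/path/to/script.py:my_flow\")  # script path and object\n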
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.lazy_import","title":"lazy_import
","text":"Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed.
Adapted from the Python documentation and lazy_loader
Source code inprefect/utilities/importtools.py
def lazy_import(\n name: str, error_on_import: bool = False, help_message: str = \"\"\n) -> ModuleType:\n \"\"\"\n Create a lazily-imported module to use in place of the module of the given name.\n Use this to retain module-level imports for libraries that we don't want to\n actually import until they are needed.\n\n Adapted from the [Python documentation][1] and [lazy_loader][2]\n\n [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports\n [2]: https://github.com/scientific-python/lazy_loader\n \"\"\"\n\n try:\n return sys.modules[name]\n except KeyError:\n pass\n\n spec = importlib.util.find_spec(name)\n if spec is None:\n if error_on_import:\n raise ModuleNotFoundError(f\"No module named '{name}'.\\n{help_message}\")\n else:\n try:\n parent = inspect.stack()[1]\n frame_data = {\n \"spec\": name,\n \"filename\": parent.filename,\n \"lineno\": parent.lineno,\n \"function\": parent.function,\n \"code_context\": parent.code_context,\n }\n return DelayedImportErrorModule(\n frame_data, help_message, \"DelayedImportErrorModule\"\n )\n finally:\n del parent\n\n module = importlib.util.module_from_spec(spec)\n sys.modules[name] = module\n\n loader = importlib.util.LazyLoader(spec.loader)\n loader.exec_module(module)\n\n return module\n
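A usage sketch (numpy stands in for any optional dependency):
from prefect.utilities.importtools import lazy_import\n\nnp = lazy_import(\"numpy\", help_message=\"Install numpy to use this feature.\")\n# numpy is only actually imported on first attribute access; if it is missing,\n# a ModuleNotFoundError including the help message is raised at that point\nnp.asarray([1, 2, 3])\n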
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_module","title":"load_module
","text":"Import a module with support for relative imports within the module.
Source code inprefect/utilities/importtools.py
def load_module(module_name: str) -> ModuleType:\n \"\"\"\n Import a module with support for relative imports within the module.\n \"\"\"\n # Ensure relative imports within the imported module work if the user is in the\n # correct working directory\n working_directory = os.getcwd()\n sys.path.insert(0, working_directory)\n\n try:\n return importlib.import_module(module_name)\n finally:\n sys.path.remove(working_directory)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_script_as_module","title":"load_script_as_module
","text":"Execute a script at the given path.
Sets the module name to __prefect_loader__
.
If an exception occurs during execution of the script, a prefect.exceptions.ScriptError
is created to wrap the exception and is raised in its place.
For the duration of this function call, sys
is modified to support loading. These changes are reverted after completion, but this function is not thread safe and use of it in threaded contexts may result in undesirable behavior.
See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
Source code inprefect/utilities/importtools.py
def load_script_as_module(path: str) -> ModuleType:\n \"\"\"\n Execute a script at the given path.\n\n Sets the module name to `__prefect_loader__`.\n\n If an exception occurs during execution of the script, a\n `prefect.exceptions.ScriptError` is created to wrap the exception and raised.\n\n During the duration of this function call, `sys` is modified to support loading.\n These changes are reverted after completion, but this function is not thread safe\n and use of it in threaded contexts may result in undesirable behavior.\n\n See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly\n \"\"\"\n # We will add the parent directory to search locations to support relative imports\n # during execution of the script\n if not path.endswith(\".py\"):\n raise ValueError(f\"The provided path does not point to a python file: {path!r}\")\n\n parent_path = str(Path(path).resolve().parent)\n working_directory = os.getcwd()\n\n spec = importlib.util.spec_from_file_location(\n \"__prefect_loader__\",\n path,\n # Support explicit relative imports i.e. `from .foo import bar`\n submodule_search_locations=[parent_path, working_directory],\n )\n module = importlib.util.module_from_spec(spec)\n sys.modules[\"__prefect_loader__\"] = module\n\n # Support implicit relative imports i.e. `from foo import bar`\n sys.path.insert(0, working_directory)\n sys.path.insert(0, parent_path)\n try:\n spec.loader.exec_module(module)\n except Exception as exc:\n raise ScriptError(user_exc=exc, path=path) from exc\n finally:\n sys.modules.pop(\"__prefect_loader__\")\n sys.path.remove(parent_path)\n sys.path.remove(working_directory)\n\n return module\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.objects_from_script","title":"objects_from_script
","text":"Run a python script and return all the global variables
Supports remote paths by copying to a local temporary file.
WARNING: The Python documentation does not recommend using runpy for this pattern.
Furthermore, any functions and classes defined by the executed code are not guaranteed to work correctly after a runpy function has returned. If that limitation is not acceptable for a given use case, importlib is likely to be a more suitable choice than this module.
The function load_script_as_module
uses importlib instead and should be used instead for loading objects from scripts.
Parameters:
Name Type Description Defaultpath
str
The path to the script to run
requiredtext
Union[str, bytes]
Optionally, the text of the script. Skips loading the contents if given.
None
Returns:
Type DescriptionDict[str, Any]
A dictionary mapping variable name to value
Raises:
Type DescriptionScriptError
if the script raises an exception during execution
Source code in prefect/utilities/importtools.py
def objects_from_script(path: str, text: Union[str, bytes] = None) -> Dict[str, Any]:\n \"\"\"\n Run a python script and return all the global variables\n\n Supports remote paths by copying to a local temporary file.\n\n WARNING: The Python documentation does not recommend using runpy for this pattern.\n\n > Furthermore, any functions and classes defined by the executed code are not\n > guaranteed to work correctly after a runpy function has returned. If that\n > limitation is not acceptable for a given use case, importlib is likely to be a\n > more suitable choice than this module.\n\n The function `load_script_as_module` uses importlib instead and should be used\n instead for loading objects from scripts.\n\n Args:\n path: The path to the script to run\n text: Optionally, the text of the script. Skips loading the contents if given.\n\n Returns:\n A dictionary mapping variable name to value\n\n Raises:\n ScriptError: if the script raises an exception during execution\n \"\"\"\n\n def run_script(run_path: str):\n # Cast to an absolute path before changing directories to ensure relative paths\n # are not broken\n abs_run_path = os.path.abspath(run_path)\n with tmpchdir(run_path):\n try:\n return runpy.run_path(abs_run_path)\n except Exception as exc:\n raise ScriptError(user_exc=exc, path=path) from exc\n\n if text:\n with NamedTemporaryFile(\n mode=\"wt\" if isinstance(text, str) else \"wb\",\n prefix=f\"run-{filename(path)}\",\n suffix=\".py\",\n ) as tmpfile:\n tmpfile.write(text)\n tmpfile.flush()\n return run_script(tmpfile.name)\n\n else:\n if not is_local_path(path):\n # Remote paths need to be local to run\n with fsspec.open(path) as f:\n contents = f.read()\n return objects_from_script(path, contents)\n else:\n return run_script(path)\n
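A minimal usage sketch (the path is illustrative; passing text skips reading the file):
from prefect.utilities.importtools import objects_from_script\n\nscript_globals = objects_from_script(\"example.py\", text=\"x = 1\")\nprint(script_globals[\"x\"])  # 1\n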
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.to_qualified_name","title":"to_qualified_name
","text":"Given an object, returns its fully-qualified name: a string that represents its Python import path.
Parameters:
Name Type Description Defaultobj
Any
an importable Python object
requiredReturns:
Name Type Descriptionstr
str
the qualified name
Source code in prefect/utilities/importtools.py
def to_qualified_name(obj: Any) -> str:\n \"\"\"\n Given an object, returns its fully-qualified name: a string that represents its\n Python import path.\n\n Args:\n obj (Any): an importable Python object\n\n Returns:\n str: the qualified name\n \"\"\"\n if sys.version_info < (3, 10):\n # These attributes are only available in Python 3.10+\n if isinstance(obj, (classmethod, staticmethod)):\n obj = obj.__func__\n return obj.__module__ + \".\" + obj.__qualname__\n
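For example:
from prefect.utilities.importtools import to_qualified_name\n\nprint(to_qualified_name(len))  # builtins.len\nprint(to_qualified_name(to_qualified_name))  # prefect.utilities.importtools.to_qualified_name\n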
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/math/","title":"math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math","title":"prefect.utilities.math
","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.bounded_poisson_interval","title":"bounded_poisson_interval
","text":"Bounds Poisson \"inter-arrival times\" to a range.
Unlike clamped_poisson_interval
this does not take a target average interval. Instead, the interval is predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced.
prefect/utilities/math.py
def bounded_poisson_interval(lower_bound, upper_bound):\n \"\"\"\n Bounds Poisson \"inter-arrival times\" to a range.\n\n Unlike `clamped_poisson_interval` this does not take a target average interval.\n Instead, the interval is predetermined and the average is calculated as their\n midpoint. This allows Poisson intervals to be used in cases where a lower bound\n must be enforced.\n \"\"\"\n average = (float(lower_bound) + float(upper_bound)) / 2.0\n upper_rv = exponential_cdf(upper_bound, average)\n lower_rv = exponential_cdf(lower_bound, average)\n return poisson_interval(average, lower_rv, upper_rv)\n
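A quick sketch of the bounding behavior:
from prefect.utilities.math import bounded_poisson_interval\n\n# Every draw should land within the requested bounds (up to float rounding)\nsamples = [bounded_poisson_interval(5, 15) for _ in range(1000)]\nprint(min(samples), max(samples))  # both within [5, 15]\n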
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.clamped_poisson_interval","title":"clamped_poisson_interval
","text":"Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.
The upper bound for this random variate is: average_interval * (1 + clamping_factor). A lower bound is picked so that the average interval remains approximately fixed.
Source code in prefect/utilities/math.py
def clamped_poisson_interval(average_interval, clamping_factor=0.3):\n \"\"\"\n Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.\n\n The upper bound for this random variate is: average_interval * (1 + clamping_factor).\n A lower bound is picked so that the average interval remains approximately fixed.\n \"\"\"\n if clamping_factor <= 0:\n raise ValueError(\"`clamping_factor` must be >= 0.\")\n\n upper_clamp_multiple = 1 + clamping_factor\n upper_bound = average_interval * upper_clamp_multiple\n lower_bound = max(0, average_interval * lower_clamp_multiple(upper_clamp_multiple))\n\n upper_rv = exponential_cdf(upper_bound, average_interval)\n lower_rv = exponential_cdf(lower_bound, average_interval)\n return poisson_interval(average_interval, lower_rv, upper_rv)\n
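A quick sketch of the clamping behavior:
from prefect.utilities.math import clamped_poisson_interval\n\n# With a 10s average and clamping_factor=0.3, draws are capped near\n# 10 * (1 + 0.3) = 13s while the long-run average stays close to 10s\nsamples = [clamped_poisson_interval(10, clamping_factor=0.3) for _ in range(1000)]\nprint(max(samples), sum(samples) / len(samples))\n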
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.lower_clamp_multiple","title":"lower_clamp_multiple
","text":"Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution.
Given an upper clamp multiple k
(and corresponding upper bound k * average_interval), this function computes a lower clamp multiple c
(corresponding to a lower bound c * average_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound.
prefect/utilities/math.py
def lower_clamp_multiple(k):\n \"\"\"\n Computes a lower clamp multiple that can be used to bound a random variate drawn\n from an exponential distribution.\n\n Given an upper clamp multiple `k` (and corresponding upper bound k * average_interval),\n this function computes a lower clamp multiple `c` (corresponding to a lower bound\n c * average_interval) where the probability mass between the lower bound and the\n median is equal to the probability mass between the median and the upper bound.\n \"\"\"\n if k >= 50:\n # return 0 for large values of `k` to prevent numerical overflow\n return 0.0\n\n return math.log(max(2**k / (2**k - 1), 1e-10), 2)\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.poisson_interval","title":"poisson_interval
","text":"Generates an \"inter-arrival time\" for a Poisson process.
Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values.
Source code in prefect/utilities/math.py
def poisson_interval(average_interval, lower=0, upper=1):\n \"\"\"\n Generates an \"inter-arrival time\" for a Poisson process.\n\n Draws a random variable from an exponential distribution using the inverse-CDF\n method. Can optionally be passed a lower and upper bound between (0, 1] to clamp\n the potential output values.\n \"\"\"\n\n # note that we ensure the argument to the logarithm is stabilized to prevent\n # calling log(0), which results in a DomainError\n return -math.log(max(1 - random.uniform(lower, upper), 1e-10)) * average_interval\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/names/","title":"names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names","title":"prefect.utilities.names
","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.generate_slug","title":"generate_slug
","text":"Generates a random slug.
Parameters:
Name Type Description Default
n_words (int)
the number of words in the slug
required Source code in prefect/utilities/names.py
def generate_slug(n_words: int) -> str:\n \"\"\"\n Generates a random slug.\n\n Args:\n - n_words (int): the number of words in the slug\n \"\"\"\n words = coolname.generate(n_words)\n\n # regenerate words if they include ignored words\n while IGNORE_LIST.intersection(words):\n words = coolname.generate(n_words)\n\n return \"-\".join(words)\n
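For example (output is random; the slugs shown are illustrative):
from prefect.utilities.names import generate_slug\n\nprint(generate_slug(2))  # e.g. 'brainy-gecko'\nprint(generate_slug(4))  # e.g. 'swift-nimble-upbeat-mole'\n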
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate","title":"obfuscate
","text":"Obfuscates any data type's string representation. See obfuscate_string
.
prefect/utilities/names.py
def obfuscate(s: Any, show_tail=False) -> str:\n \"\"\"\n Obfuscates any data type's string representation. See `obfuscate_string`.\n \"\"\"\n if s is None:\n return OBFUSCATED_PREFIX + \"*\" * 4\n\n return obfuscate_string(str(s), show_tail=show_tail)\n
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate_string","title":"obfuscate_string
","text":"Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are \"*\".
\"abc\" -> \"*\" \"abcdefgh\" -> \"*\" \"abcdefghijk\" -> \"*k\" \"abcdefghijklmnopqrs\" -> \"****pqrs\"
Source code in prefect/utilities/names.py
def obfuscate_string(s: str, show_tail=False) -> str:\n \"\"\"\n Obfuscates a string by returning a new string of 8 characters. If the input\n string is longer than 10 characters and show_tail is True, then up to 4 of\n its final characters will become final characters of the obfuscated string;\n all other characters are \"*\".\n\n \"abc\" -> \"********\"\n \"abcdefgh\" -> \"********\"\n \"abcdefghijk\" -> \"*******k\"\n \"abcdefghijklmnopqrs\" -> \"****pqrs\"\n \"\"\"\n result = OBFUSCATED_PREFIX + \"*\" * 4\n # take up to 4 characters, but only after the 10th character\n suffix = s[10:][-4:]\n if suffix and show_tail:\n result = f\"{result[:-len(suffix)]}{suffix}\"\n return result\n
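For example, mirroring the docstring cases:
from prefect.utilities.names import obfuscate, obfuscate_string\n\nprint(obfuscate_string(\"abc\"))  # ********\nprint(obfuscate_string(\"abcdefghijklmnopqrs\", show_tail=True))  # ****pqrs\nprint(obfuscate(None))  # ********\n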
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/processutils/","title":"processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils
","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler
","text":"Forward subsequent signum events (e.g. interrupts) to respective signums.
Source code in prefect/utilities/processutils.py
def forward_signal_handler(\n pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n \"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n current_signal, future_signals = signums[0], signums[1:]\n\n # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n if avoid_infinite_recursion:\n # store the vanilla handler so it can be temporarily restored below\n original_handler = signal.getsignal(current_signal)\n\n def handler(*args):\n print_fn(\n f\"Received {getattr(signum, 'name', signum)}. \"\n f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n f\" {process_name} (PID {pid})...\"\n )\n if avoid_infinite_recursion:\n signal.signal(current_signal, original_handler)\n os.kill(pid, current_signal)\n if future_signals:\n forward_signal_handler(\n pid,\n signum,\n *future_signals,\n process_name=process_name,\n print_fn=print_fn,\n )\n\n # register current and future signal handlers\n _register_signal(signum, handler)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process
async
","text":"Like anyio.open_process
but with:
- Support for Windows command joining
- Termination of the process on exception during yield
- Forced cleanup of process resources during cancellation
prefect/utilities/processutils.py
@asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n \"\"\"\n Like `anyio.open_process` but with:\n - Support for Windows command joining\n - Termination of the process on exception during yield\n - Forced cleanup of process resources during cancellation\n \"\"\"\n # Passing a string to open_process is equivalent to shell=True which is\n # generally necessary for Unix-like commands on Windows but otherwise should\n # be avoided\n if not isinstance(command, list):\n raise TypeError(\n \"The command passed to open process must be a list. You passed the command\"\n f\"'{command}', which is type '{type(command)}'.\"\n )\n\n if sys.platform == \"win32\":\n command = \" \".join(command)\n process = await _open_anyio_process(command, **kwargs)\n else:\n process = await anyio.open_process(command, **kwargs)\n\n # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n # use SetConsoleCtrlHandler to handle CTRL-C\n win32_process_group = False\n if (\n sys.platform == \"win32\"\n and \"creationflags\" in kwargs\n and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n ):\n win32_process_group = True\n _windows_process_group_pids.add(process.pid)\n # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n # will not add a duplicate handler if _win32_ctrl_handler is\n # already registered.\n windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n try:\n async with process:\n yield process\n finally:\n try:\n process.terminate()\n if win32_process_group:\n _windows_process_group_pids.remove(process.pid)\n\n except OSError:\n # Occurs if the process is already terminated\n pass\n\n # Ensure the process resource is closed. If not shielded from cancellation,\n # this resource can be left open and the subprocess output can appear after\n # the parent process has exited.\n with anyio.CancelScope(shield=True):\n await process.aclose()\n
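A minimal usage sketch (the command is illustrative):
import anyio\nfrom prefect.utilities.processutils import open_process\n\nasync def main():\n    # The command must be a list of arguments, not a single string\n    async with open_process([\"echo\", \"hello\"]) as process:\n        await process.wait()\n\nanyio.run(main)\n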
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process
async
","text":"Like anyio.run_process
but with:
- Use of our open_process utility to ensure resources are cleaned up
- Simple stream_output support to connect the subprocess to the parent stdout/err
- Support for submission with TaskGroup.start
marking as 'started' after the process has been created. When used, the PID is returned to the task status.prefect/utilities/processutils.py
async def run_process(\n command: List[str],\n stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n **kwargs,\n):\n \"\"\"\n Like `anyio.run_process` but with:\n\n - Use of our `open_process` utility to ensure resources are cleaned up\n - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n - Support for submission with `TaskGroup.start` marking as 'started' after the\n process has been created. When used, the PID is returned to the task status.\n\n \"\"\"\n if stream_output is True:\n stream_output = (sys.stdout, sys.stderr)\n\n async with open_process(\n command,\n stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n **kwargs,\n ) as process:\n if task_status is not None:\n if not task_status_handler:\n\n def task_status_handler(process):\n return process.pid\n\n task_status.started(task_status_handler(process))\n\n if stream_output:\n await consume_process_output(\n process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n )\n\n await process.wait()\n\n return process\n
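A minimal usage sketch (the command is illustrative):
import anyio\nfrom prefect.utilities.processutils import run_process\n\nasync def main():\n    # stream_output=True mirrors subprocess output to this process's\n    # stdout/stderr; by default the output is discarded\n    process = await run_process([\"echo\", \"hello\"], stream_output=True)\n    print(process.returncode)  # 0 on success\n\nanyio.run(main)\n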
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent
","text":"Handle interrupts of the agent gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of the agent gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server
","text":"Handle interrupts of the server gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of the server gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # first interrupt: SIGTERM, second interrupt: SIGKILL\n setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker
","text":"Handle interrupts of workers gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of workers gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/pydantic/","title":"pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic","title":"prefect.utilities.pydantic
","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.PartialModel","title":"PartialModel
","text":" Bases: Generic[M]
A utility for creating a Pydantic model in several steps.
Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned.
Pydantic validation does not occur until finalization.
Each field can only be set once and a ValueError
will be raised on assignment if a field already has a value.
class MyModel(pydantic.BaseModel):\n    x: int\n    y: str\n    z: float\n
partial_model = PartialModel(MyModel, x=1)\npartial_model.y = \"two\"\nmodel = partial_model.finalize(z=3.0)\n
Source code in prefect/utilities/pydantic.py
class PartialModel(Generic[M]):\n \"\"\"\n A utility for creating a Pydantic model in several steps.\n\n Fields may be set at initialization, via attribute assignment, or at finalization\n when the concrete model is returned.\n\n Pydantic validation does not occur until finalization.\n\n Each field can only be set once and a `ValueError` will be raised on assignment if\n a field already has a value.\n\n Example:\n >>> class MyModel(pydantic.BaseModel):\n >>> x: int\n >>> y: str\n >>> z: float\n >>>\n >>> partial_model = PartialModel(MyModel, x=1)\n >>> partial_model.y = \"two\"\n >>> model = partial_model.finalize(z=3.0)\n \"\"\"\n\n def __init__(self, __model_cls: Type[M], **kwargs: Any) -> None:\n self.fields = kwargs\n # Set fields first to avoid issues if `fields` is also set on the `model_cls`\n # in our custom `setattr` implementation.\n self.model_cls = __model_cls\n\n for name in kwargs.keys():\n self.raise_if_not_in_model(name)\n\n def finalize(self, **kwargs: Any) -> M:\n for name in kwargs.keys():\n self.raise_if_already_set(name)\n self.raise_if_not_in_model(name)\n return self.model_cls(**self.fields, **kwargs)\n\n def raise_if_already_set(self, name):\n if name in self.fields:\n raise ValueError(f\"Field {name!r} has already been set.\")\n\n def raise_if_not_in_model(self, name):\n if name not in self.model_cls.__fields__:\n raise ValueError(f\"Field {name!r} is not present in the model.\")\n\n def __setattr__(self, __name: str, __value: Any) -> None:\n if __name in {\"fields\", \"model_cls\"}:\n return super().__setattr__(__name, __value)\n\n self.raise_if_already_set(__name)\n self.raise_if_not_in_model(__name)\n self.fields[__name] = __value\n\n def __repr__(self) -> str:\n dsp_fields = \", \".join(\n f\"{key}={repr(value)}\" for key, value in self.fields.items()\n )\n return f\"PartialModel(cls={self.model_cls.__name__}, {dsp_fields})\"\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_cloudpickle_reduction","title":"add_cloudpickle_reduction
","text":"Adds a __reducer__
to the given class that ensures it is cloudpickle compatible.
Workaround for issues with cloudpickle when using cythonized pydantic which throws exceptions when attempting to pickle the class which has \"compiled\" validator methods dynamically attached to it.
We cannot define this utility in the model class itself because the class is the type that contains unserializable methods.
Any model using some features of Pydantic (e.g. Path
validation) with a Cython compiled Pydantic installation may encounter pickling issues.
See related issue at https://github.com/cloudpipe/cloudpickle/issues/408
Source code inprefect/utilities/pydantic.py
def add_cloudpickle_reduction(__model_cls: Type[M] = None, **kwargs: Any):\n \"\"\"\n Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible.\n\n Workaround for issues with cloudpickle when using cythonized pydantic which\n throws exceptions when attempting to pickle the class which has \"compiled\"\n validator methods dynamically attached to it.\n\n We cannot define this utility in the model class itself because the class is the\n type that contains unserializable methods.\n\n Any model using some features of Pydantic (e.g. `Path` validation) with a Cython\n compiled Pydantic installation may encounter pickling issues.\n\n See related issue at https://github.com/cloudpipe/cloudpickle/issues/408\n \"\"\"\n if __model_cls:\n __model_cls.__reduce__ = _reduce_model\n __model_cls.__reduce_kwargs__ = kwargs\n return __model_cls\n else:\n return cast(\n Callable[[Type[M]], Type[M]],\n partial(\n add_cloudpickle_reduction,\n **kwargs,\n ),\n )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_type_dispatch","title":"add_type_dispatch
","text":"Extend a Pydantic model to add a 'type' field that is used a discriminator field to dynamically determine the subtype that when deserializing models.
This allows automatic resolution to subtypes of the decorated model.
If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key.
If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the __dispatch_key__
. The base class should define a __dispatch_key__
class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the __dispatch_key__
as a string literal.
The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in prefect.utilities.dispatch
should be used directly.
prefect/utilities/pydantic.py
def add_type_dispatch(model_cls: Type[M]) -> Type[M]:\n \"\"\"\n Extend a Pydantic model to add a 'type' field that is used a discriminator field\n to dynamically determine the subtype that when deserializing models.\n\n This allows automatic resolution to subtypes of the decorated model.\n\n If a type field already exists, it should be a string literal field that has a\n constant value for each subclass. The default value of this field will be used as\n the dispatch key.\n\n If a type field does not exist, one will be added. In this case, the value of the\n field will be set to the value of the `__dispatch_key__`. The base class should\n define a `__dispatch_key__` class method that is used to determine the unique key\n for each subclass. Alternatively, each subclass can define the `__dispatch_key__`\n as a string literal.\n\n The base class must not define a 'type' field. If it is not desirable to add a field\n to the model and the dispatch key can be tracked separately, the lower level\n utilities in `prefect.utilities.dispatch` should be used directly.\n \"\"\"\n defines_dispatch_key = hasattr(\n model_cls, \"__dispatch_key__\"\n ) or \"__dispatch_key__\" in getattr(model_cls, \"__annotations__\", {})\n\n defines_type_field = \"type\" in model_cls.__fields__\n\n if not defines_dispatch_key and not defines_type_field:\n raise ValueError(\n f\"Model class {model_cls.__name__!r} does not define a `__dispatch_key__` \"\n \"or a type field. One of these is required for dispatch.\"\n )\n\n elif defines_dispatch_key and not defines_type_field:\n # Add a type field to store the value of the dispatch key\n model_cls.__fields__[\"type\"] = pydantic.fields.ModelField(\n name=\"type\",\n type_=str,\n required=True,\n class_validators=None,\n model_config=model_cls.__config__,\n )\n\n elif not defines_dispatch_key and defines_type_field:\n field_type_annotation = model_cls.__fields__[\"type\"].type_\n if field_type_annotation != str:\n raise TypeError(\n f\"Model class {model_cls.__name__!r} defines a 'type' field with \"\n f\"type {field_type_annotation.__name__!r} but it must be 'str'.\"\n )\n\n # Set the dispatch key to retrieve the value from the type field\n @classmethod\n def dispatch_key_from_type_field(cls):\n return cls.__fields__[\"type\"].default\n\n model_cls.__dispatch_key__ = dispatch_key_from_type_field\n\n else:\n raise ValueError(\n f\"Model class {model_cls.__name__!r} defines a `__dispatch_key__` \"\n \"and a type field. Only one of these may be defined for dispatch.\"\n )\n\n cls_init = model_cls.__init__\n cls_new = model_cls.__new__\n\n def __init__(__pydantic_self__, **data: Any) -> None:\n type_string = (\n get_dispatch_key(__pydantic_self__)\n if type(__pydantic_self__) != model_cls\n else \"__base__\"\n )\n data.setdefault(\"type\", type_string)\n cls_init(__pydantic_self__, **data)\n\n def __new__(cls: Type[Self], **kwargs) -> Self:\n if \"type\" in kwargs:\n try:\n subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n except KeyError as exc:\n raise pydantic.ValidationError(errors=[exc], model=cls)\n return cls_new(subcls)\n else:\n return cls_new(cls)\n\n model_cls.__init__ = __init__\n model_cls.__new__ = __new__\n\n register_base_type(model_cls)\n\n return model_cls\n
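A minimal sketch of dispatch in action, assuming Pydantic v1-style models (the class names are illustrative):
import pydantic\nfrom prefect.utilities.pydantic import add_type_dispatch\n\n@add_type_dispatch\nclass Shape(pydantic.BaseModel):\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__name__.lower()\n\nclass Circle(Shape):\n    radius: float\n\ncircle = Circle(radius=1.0)\nprint(circle.type)  # circle\n\n# Deserializing through the base class resolves the subtype via 'type'\nrestored = Shape.parse_obj(circle.dict())\nprint(type(restored).__name__)  # Circle\n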
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.get_class_fields_only","title":"get_class_fields_only
","text":"Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included.
Source code inprefect/utilities/pydantic.py
def get_class_fields_only(model: Type[pydantic.BaseModel]) -> set:\n \"\"\"\n Gets all the field names defined on the model class but not any parent classes.\n Any fields that are on the parent but redefined on the subclass are included.\n \"\"\"\n subclass_class_fields = set(model.__annotations__.keys())\n parent_class_fields = set()\n\n for base in model.__class__.__bases__:\n if issubclass(base, pydantic.BaseModel):\n parent_class_fields.update(base.__annotations__.keys())\n\n return (subclass_class_fields - parent_class_fields) | (\n subclass_class_fields & parent_class_fields\n )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/render_swagger/","title":"render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger","title":"prefect.utilities.render_swagger
","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger.swagger_lib","title":"swagger_lib
","text":"Provides the actual swagger library used
Source code inprefect/utilities/render_swagger.py
def swagger_lib(config) -> dict:\n \"\"\"\n Provides the actual swagger library used\n \"\"\"\n lib_swagger = {\n \"css\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui.css\",\n \"js\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js\",\n }\n\n extra_javascript = config.get(\"extra_javascript\", [])\n extra_css = config.get(\"extra_css\", [])\n for lib in extra_javascript:\n if os.path.basename(urllib.parse.urlparse(lib).path) == \"swagger-ui-bundle.js\":\n lib_swagger[\"js\"] = lib\n break\n\n for css in extra_css:\n if os.path.basename(urllib.parse.urlparse(css).path) == \"swagger-ui.css\":\n lib_swagger[\"css\"] = css\n break\n return lib_swagger\n
","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/services/","title":"services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services","title":"prefect.utilities.services
","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services.critical_service_loop","title":"critical_service_loop
async
","text":"Runs the given workload
function on the specified interval
, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of consecutive
errors occur, print a summary of up to memory
recent exceptions to printer
, then begin backoff.
The loop will exit after reaching the consecutive error limit backoff
times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset.
Parameters:
Name Type Description Defaultworkload
Callable[..., Coroutine]
the function to call
requiredinterval
float
how frequently to call it
requiredmemory
int
how many recent errors to remember
10
consecutive
int
how many consecutive errors must we see before we begin backoff
3
backoff
int
how many times we should allow consecutive errors before exiting
1
printer
Callable[..., None]
a print
-like function where errors will be reported
print
run_once
bool
if set, the loop will only run once then return
False
jitter_range
float
if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bound between interval * (1 - range) < rv < interval * (1 + range)
None
Source code in prefect/utilities/services.py
async def critical_service_loop(\n    workload: Callable[..., Coroutine],\n    interval: float,\n    memory: int = 10,\n    consecutive: int = 3,\n    backoff: int = 1,\n    printer: Callable[..., None] = print,\n    run_once: bool = False,\n    jitter_range: float = None,\n):\n    \"\"\"\n    Runs the given `workload` function on the specified `interval`, while being\n    forgiving of intermittent issues like temporary HTTP errors. If more than a certain\n    number of `consecutive` errors occur, print a summary of up to `memory` recent\n    exceptions to `printer`, then begin backoff.\n\n    The loop will exit after reaching the consecutive error limit `backoff` times.\n    On each backoff, the interval will be doubled. On a successful loop, the backoff\n    will be reset.\n\n    Args:\n        workload: the function to call\n        interval: how frequently to call it\n        memory: how many recent errors to remember\n        consecutive: how many consecutive errors must we see before we begin backoff\n        backoff: how many times we should allow consecutive errors before exiting\n        printer: a `print`-like function where errors will be reported\n        run_once: if set, the loop will only run once then return\n        jitter_range: if set, the interval will be a random variable (rv) drawn from\n            a clamped Poisson distribution where lambda = interval and the rv is bound\n            between `interval * (1 - range) < rv < interval * (1 + range)`\n    \"\"\"\n\n    track_record: Deque[bool] = deque([True] * consecutive, maxlen=consecutive)\n    failures: Deque[Tuple[Exception, TracebackType]] = deque(maxlen=memory)\n    backoff_count = 0\n\n    while True:\n        try:\n            workload_display_name = (\n                workload.__name__ if hasattr(workload, \"__name__\") else workload\n            )\n            logger.debug(f\"Starting run of {workload_display_name!r}\")\n            await workload()\n\n            # Reset the backoff count on success; we may want to consider resetting\n            # this only if the track record is _all_ successful to avoid ending backoff\n            # prematurely\n            if backoff_count > 0:\n                printer(\"Resetting backoff due to successful run.\")\n                backoff_count = 0\n\n            track_record.append(True)\n        except httpx.TransportError as exc:\n            # httpx.TransportError is the base class for any kind of communications\n            # error, like timeouts, connection failures, etc. This does _not_ cover\n            # routine HTTP error codes (even 5xx errors like 502/503) so this\n            # handler should not be attempting to cover cases where the Prefect server\n            # or Prefect Cloud is having an outage (which will be covered by the\n            # exception clause below)\n            track_record.append(False)\n            failures.append((exc, sys.exc_info()[-1]))\n            logger.debug(\n                f\"Run of {workload!r} failed with TransportError\", exc_info=exc\n            )\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code >= 500:\n                # 5XX codes indicate a potential outage of the Prefect API which is\n                # likely to be temporary and transient. Don't quit over these unless\n                # it is prolonged.\n                track_record.append(False)\n                failures.append((exc, sys.exc_info()[-1]))\n                logger.debug(\n                    f\"Run of {workload!r} failed with HTTPStatusError\", exc_info=exc\n                )\n            else:\n                raise\n\n        # Decide whether to exit now based on recent history.\n        #\n        # Given some typical background error rate of, say, 1%, we may still observe\n        # quite a few errors in our recent samples, but this is not necessarily a cause\n        # for concern.\n        #\n        # Imagine two distributions that could reflect our situation at any time: the\n        # everything-is-fine distribution of errors, and the everything-is-on-fire\n        # distribution of errors. We are trying to determine which of those two worlds\n        # we are currently experiencing. We compare the likelihood that we'd draw N\n        # consecutive errors from each. In the everything-is-fine distribution, that\n        # would be a very low-probability occurrence, but in the everything-is-on-fire\n        # distribution, that is a high-probability occurrence.\n        #\n        # Remarkably, we only need to look back for a small number of consecutive\n        # errors to have reasonable confidence that this is indeed an anomaly.\n        # @anticorrelator and @chrisguidry estimated that we should only need to look\n        # back for 3 consecutive errors.\n        if not any(track_record):\n            # We've failed enough times to be sure something is wrong, the writing is\n            # on the wall. Let's explain what we've seen and exit.\n            printer(\n                f\"\\nFailed the last {consecutive} attempts. \"\n                \"Please check your environment and configuration.\"\n            )\n\n            printer(\"Examples of recent errors:\\n\")\n\n            failures_by_type = distinct(\n                reversed(failures),\n                key=lambda pair: type(pair[0]),  # Group by the type of exception\n            )\n            for exception, traceback in failures_by_type:\n                printer(\"\".join(format_exception(None, exception, traceback)))\n                printer()\n\n            backoff_count += 1\n\n            if backoff_count >= backoff:\n                raise RuntimeError(\"Service exceeded error threshold.\")\n\n            # Reset the track record\n            track_record.extend([True] * consecutive)\n            failures.clear()\n            printer(\n                \"Backing off due to consecutive errors, using increased interval of \"\n                f\" {interval * 2**backoff_count}s.\"\n            )\n\n        if run_once:\n            return\n\n        if jitter_range is not None:\n            sleep = clamped_poisson_interval(interval, clamping_factor=jitter_range)\n        else:\n            sleep = interval * 2**backoff_count\n\n        await anyio.sleep(sleep)\n
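A minimal usage sketch (the workload and URL are hypothetical):
import anyio\nimport httpx\nfrom prefect.utilities.services import critical_service_loop\n\nasync def check_health():\n    # Any coroutine that may raise transient HTTP errors\n    async with httpx.AsyncClient() as client:\n        response = await client.get(\"https://example.com/health\")\n        response.raise_for_status()\n\n# Runs check_health roughly every 10 seconds until the error threshold is exceeded\nanyio.run(critical_service_loop, check_health, 10.0)\n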
","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/slugify/","title":"slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/slugify/#prefect.utilities.slugify","title":"prefect.utilities.slugify
","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/templating/","title":"templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating","title":"prefect.utilities.templating
","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.apply_values","title":"apply_values
","text":"Replaces placeholders in a template with values from a supplied dictionary.
Will recursively replace placeholders in dictionaries and lists.
If a value has no placeholders, it will be returned unchanged.
If a template contains only a single placeholder, the placeholder will be fully replaced with the value.
If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values.
If a template contains a placeholder that is not in values
, NotSet will be returned to signify that no placeholder replacement occurred. If template
is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless remove_notset
is set to False.
Parameters:
Name Type Description Defaulttemplate
T
template to discover and replace values in
requiredvalues
Dict[str, Any]
The values to apply to placeholders in the template
requiredremove_notset
bool
If True, remove keys with an unset value
True
Returns:
Type DescriptionUnion[T, Type[NotSet]]
The template with the values applied
Source code inprefect/utilities/templating.py
def apply_values(\n template: T, values: Dict[str, Any], remove_notset: bool = True\n) -> Union[T, Type[NotSet]]:\n \"\"\"\n Replaces placeholders in a template with values from a supplied dictionary.\n\n Will recursively replace placeholders in dictionaries and lists.\n\n If a value has no placeholders, it will be returned unchanged.\n\n If a template contains only a single placeholder, the placeholder will be\n fully replaced with the value.\n\n If a template contains text before or after a placeholder or there are\n multiple placeholders, the placeholders will be replaced with the\n corresponding variable values.\n\n If a template contains a placeholder that is not in `values`, NotSet will\n be returned to signify that no placeholder replacement occurred. If\n `template` is a dictionary that contains a key with a value of NotSet,\n the key will be removed in the return value unless `remove_notset` is set to False.\n\n Args:\n template: template to discover and replace values in\n values: The values to apply to placeholders in the template\n remove_notset: If True, remove keys with an unset value\n\n Returns:\n The template with the values applied\n \"\"\"\n if isinstance(template, (int, float, bool, type(NotSet), type(None))):\n return template\n if isinstance(template, str):\n placeholders = find_placeholders(template)\n if not placeholders:\n # If there are no values, we can just use the template\n return template\n elif (\n len(placeholders) == 1\n and list(placeholders)[0].full_match == template\n and list(placeholders)[0].type is PlaceholderType.STANDARD\n ):\n # If there is only one variable with no surrounding text,\n # we can replace it. If there is no variable value, we\n # return NotSet to indicate that the value should not be included.\n return get_from_dict(values, list(placeholders)[0].name, NotSet)\n else:\n for full_match, name, placeholder_type in placeholders:\n if placeholder_type is PlaceholderType.STANDARD:\n value = get_from_dict(values, name, NotSet)\n elif placeholder_type is PlaceholderType.ENV_VAR:\n name = name.lstrip(ENV_VAR_PLACEHOLDER_PREFIX)\n value = os.environ.get(name, NotSet)\n else:\n continue\n\n if value is NotSet and not remove_notset:\n continue\n elif value is NotSet:\n template = template.replace(full_match, \"\")\n else:\n template = template.replace(full_match, str(value))\n\n return template\n elif isinstance(template, dict):\n updated_template = {}\n for key, value in template.items():\n updated_value = apply_values(value, values, remove_notset=remove_notset)\n if updated_value is not NotSet:\n updated_template[key] = updated_value\n elif not remove_notset:\n updated_template[key] = value\n\n return updated_template\n elif isinstance(template, list):\n updated_list = []\n for value in template:\n updated_value = apply_values(value, values, remove_notset=remove_notset)\n if updated_value is not NotSet:\n updated_list.append(updated_value)\n return updated_list\n else:\n raise ValueError(f\"Unexpected template type {type(template).__name__!r}\")\n
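For example:
from prefect.utilities.templating import apply_values\n\ntemplate = {\"name\": \"{{ first }} {{ last }}\", \"id\": \"{{ user_id }}\"}\nvalues = {\"first\": \"Marvin\", \"last\": \"the Robot\"}\n\n# 'user_id' is not in values, so the 'id' key is dropped by default\nprint(apply_values(template, values))  # {'name': 'Marvin the Robot'}\n\n# ...unless remove_notset=False, which keeps the unresolved template text\nprint(apply_values(template, values, remove_notset=False))\n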
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.determine_placeholder_type","title":"determine_placeholder_type
","text":"Determines the type of a placeholder based on its name.
Parameters:
Name Type Description Defaultname
str
The name of the placeholder
requiredReturns:
Type DescriptionPlaceholderType
The type of the placeholder
Source code inprefect/utilities/templating.py
def determine_placeholder_type(name: str) -> PlaceholderType:\n \"\"\"\n Determines the type of a placeholder based on its name.\n\n Args:\n name: The name of the placeholder\n\n Returns:\n The type of the placeholder\n \"\"\"\n if name.startswith(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX):\n return PlaceholderType.BLOCK_DOCUMENT\n elif name.startswith(VARIABLE_PLACEHOLDER_PREFIX):\n return PlaceholderType.VARIABLE\n elif name.startswith(ENV_VAR_PLACEHOLDER_PREFIX):\n return PlaceholderType.ENV_VAR\n else:\n return PlaceholderType.STANDARD\n
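For example (the placeholder prefixes shown are assumptions based on this module's constants):
from prefect.utilities.templating import determine_placeholder_type\n\nprint(determine_placeholder_type(\"prefect.blocks.secret.db-password\"))  # PlaceholderType.BLOCK_DOCUMENT\nprint(determine_placeholder_type(\"prefect.variables.answer\"))  # PlaceholderType.VARIABLE\nprint(determine_placeholder_type(\"$PATH\"))  # PlaceholderType.ENV_VAR\nprint(determine_placeholder_type(\"flow_run.name\"))  # PlaceholderType.STANDARD\n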
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.find_placeholders","title":"find_placeholders
","text":"Finds all placeholders in a template.
Parameters:
Name Type Description Defaulttemplate
T
template to discover placeholders in
requiredReturns:
Type DescriptionSet[Placeholder]
A set of all placeholders in the template
Source code inprefect/utilities/templating.py
def find_placeholders(template: T) -> Set[Placeholder]:\n \"\"\"\n Finds all placeholders in a template.\n\n Args:\n template: template to discover placeholders in\n\n Returns:\n A set of all placeholders in the template\n \"\"\"\n if isinstance(template, (int, float, bool)):\n return set()\n if isinstance(template, str):\n result = PLACEHOLDER_CAPTURE_REGEX.findall(template)\n return {\n Placeholder(full_match, name, determine_placeholder_type(name))\n for full_match, name in result\n }\n elif isinstance(template, dict):\n return set().union(\n *[find_placeholders(value) for key, value in template.items()]\n )\n elif isinstance(template, list):\n return set().union(*[find_placeholders(item) for item in template])\n else:\n raise ValueError(f\"Unexpected type: {type(template)}\")\n
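For example:
from prefect.utilities.templating import find_placeholders\n\nplaceholders = find_placeholders(\"Hello, {{ name }}! Today is {{ day }}.\")\nprint(sorted(p.name for p in placeholders))  # ['day', 'name']\n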
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references","title":"resolve_block_document_references
async
","text":"Resolve block document references in a template by replacing each reference with the data of the block document.
Recursively searches for block document references in dictionaries and lists.
Identifies block document references by the as dictionary with the following structure:
{\n \"$ref\": {\n \"block_document_id\": <block_document_id>\n }\n}\n
where <block_document_id>
is the ID of the block document to resolve. Once the block document is retrieved from the API, the data of the block document is used to replace the reference.
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--accessing-values","title":"Accessing Values:","text":"To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.
For a block document with the structure:
{\n \"value\": {\n \"key\": {\n \"nested-key\": \"nested-value\"\n },\n \"list\": [\n {\"list-key\": \"list-value\"},\n 1,\n 2\n ]\n }\n}\n
examples of value resolution are as follows:
1. Accessing a nested dictionary: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key Example: Returns {"nested-key": "nested-value"}
2. Accessing a specific nested value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key Example: Returns "nested-value"
3. Accessing a list element's key-value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key Example: Returns "list-value"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--default-resolution-for-system-blocks","title":"Default Resolution for System Blocks:","text":"
For system blocks, which only contain a value
attribute, this attribute is resolved by default.
Parameters:
Name Type Description Defaulttemplate
T
The template to resolve block documents in
requiredReturns:
Type DescriptionUnion[T, Dict[str, Any]]
The template with block documents resolved
Source code inprefect/utilities/templating.py
@inject_client\nasync def resolve_block_document_references(\n    template: T, client: \"PrefectClient\" = None\n) -> Union[T, Dict[str, Any]]:\n    \"\"\"\n    Resolve block document references in a template by replacing each reference with\n    the data of the block document.\n\n    Recursively searches for block document references in dictionaries and lists.\n\n    Identifies block document references as dictionaries with the following\n    structure:\n    ```\n    {\n        \"$ref\": {\n            \"block_document_id\": <block_document_id>\n        }\n    }\n    ```\n    where `<block_document_id>` is the ID of the block document to resolve.\n\n    Once the block document is retrieved from the API, the data of the block document\n    is used to replace the reference.\n\n    Accessing Values:\n    -----------------\n    To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.\n\n    For a block document with the structure:\n    ```json\n    {\n        \"value\": {\n            \"key\": {\n                \"nested-key\": \"nested-value\"\n            },\n            \"list\": [\n                {\"list-key\": \"list-value\"},\n                1,\n                2\n            ]\n        }\n    }\n    ```\n    examples of value resolution are as follows:\n\n    1. Accessing a nested dictionary:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key\n       Example: Returns {\"nested-key\": \"nested-value\"}\n\n    2. Accessing a specific nested value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key\n       Example: Returns \"nested-value\"\n\n    3. Accessing a list element's key-value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key\n       Example: Returns \"list-value\"\n\n    Default Resolution for System Blocks:\n    -------------------------------------\n    For system blocks, which only contain a `value` attribute, this attribute is resolved by default.\n\n    Args:\n        template: The template to resolve block documents in\n\n    Returns:\n        The template with block documents resolved\n    \"\"\"\n    if isinstance(template, dict):\n        block_document_id = template.get(\"$ref\", {}).get(\"block_document_id\")\n        if block_document_id:\n            block_document = await client.read_block_document(block_document_id)\n            return block_document.data\n        updated_template = {}\n        for key, value in template.items():\n            updated_value = await resolve_block_document_references(\n                value, client=client\n            )\n            updated_template[key] = updated_value\n        return updated_template\n    elif isinstance(template, list):\n        return [\n            await resolve_block_document_references(item, client=client)\n            for item in template\n        ]\n    elif isinstance(template, str):\n        placeholders = find_placeholders(template)\n        has_block_document_placeholder = any(\n            placeholder.type is PlaceholderType.BLOCK_DOCUMENT\n            for placeholder in placeholders\n        )\n        if len(placeholders) == 0 or not has_block_document_placeholder:\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.BLOCK_DOCUMENT\n        ):\n            # value_keypath will be a list containing a dot path if additional\n            # attributes are accessed and an empty list otherwise.\n            block_type_slug, block_document_name, *value_keypath = (\n                list(placeholders)[0]\n                .name.replace(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX, \"\")\n                .split(\".\", 2)\n            )\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n            value = block_document.data\n\n            # resolving system blocks to their data for backwards compatibility\n            if len(value) == 1 and \"value\" in value:\n                # only resolve the value if the keypath is not already pointing to \"value\"\n                if len(value_keypath) == 0 or value_keypath[0][:5] != \"value\":\n                    value = value[\"value\"]\n\n            # resolving keypath/block attributes\n            if len(value_keypath) > 0:\n                value_keypath: str = value_keypath[0]\n                value = get_from_dict(value, value_keypath, default=NotSet)\n                if value is NotSet:\n                    raise ValueError(\n                        f\"Invalid template: {template!r}. Could not resolve the\"\n                        \" keypath in the block document data.\"\n                    )\n\n            return value\n        else:\n            raise ValueError(\n                f\"Invalid template: {template!r}. Only a single block placeholder is\"\n                \" allowed in a string and no surrounding text is allowed.\"\n            )\n\n    return template\n
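A usage sketch; this requires a reachable Prefect API and an existing Secret block (the block name is hypothetical):
import anyio\nfrom prefect.utilities.templating import resolve_block_document_references\n\nasync def main():\n    template = {\"password\": \"{{ prefect.blocks.secret.db-password }}\"}\n    resolved = await resolve_block_document_references(template)\n    print(resolved)  # {'password': <the secret's value>}\n\nanyio.run(main)\n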
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_variables","title":"resolve_variables
async
","text":"Resolve variables in a template by replacing each variable placeholder with the value of the variable.
Recursively searches for variable placeholders in dictionaries and lists.
Strips variable placeholders if the variable is not found.
Parameters:
Name Type Description Defaulttemplate
T
The template to resolve variables in
requiredReturns:
Type DescriptionThe template with variables resolved
Source code inprefect/utilities/templating.py
@inject_client\nasync def resolve_variables(template: T, client: \"PrefectClient\" = None):\n \"\"\"\n Resolve variables in a template by replacing each variable placeholder with the\n value of the variable.\n\n Recursively searches for variable placeholders in dictionaries and lists.\n\n Strips variable placeholders if the variable is not found.\n\n Args:\n template: The template to resolve variables in\n\n Returns:\n The template with variables resolved\n \"\"\"\n if isinstance(template, str):\n placeholders = find_placeholders(template)\n has_variable_placeholder = any(\n placeholder.type is PlaceholderType.VARIABLE for placeholder in placeholders\n )\n if not placeholders or not has_variable_placeholder:\n # If there are no values, we can just use the template\n return template\n elif (\n len(placeholders) == 1\n and list(placeholders)[0].full_match == template\n and list(placeholders)[0].type is PlaceholderType.VARIABLE\n ):\n variable_name = list(placeholders)[0].name.replace(\n VARIABLE_PLACEHOLDER_PREFIX, \"\"\n )\n variable = await client.read_variable_by_name(name=variable_name)\n if variable is None:\n return \"\"\n else:\n return variable.value\n else:\n for full_match, name, placeholder_type in placeholders:\n if placeholder_type is PlaceholderType.VARIABLE:\n variable_name = name.replace(VARIABLE_PLACEHOLDER_PREFIX, \"\")\n variable = await client.read_variable_by_name(name=variable_name)\n if variable is None:\n template = template.replace(full_match, \"\")\n else:\n template = template.replace(full_match, variable.value)\n return template\n elif isinstance(template, dict):\n return {\n key: await resolve_variables(value, client=client)\n for key, value in template.items()\n }\n elif isinstance(template, list):\n return [await resolve_variables(item, client=client) for item in template]\n else:\n return template\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/text/","title":"text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/text/#prefect.utilities.text","title":"prefect.utilities.text
","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/validation/","title":"validation","text":"","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation","title":"prefect.utilities.validation
","text":"","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation.validate_schema","title":"validate_schema
","text":"Validate that the provided schema is a valid json schema.
Parameters:
Name Type Description Defaultschema
dict
The schema to validate.
requiredRaises:
Type DescriptionValueError
If the provided schema is not a valid json schema.
Source code inprefect/utilities/validation.py
def validate_schema(schema: dict):\n \"\"\"\n Validate that the provided schema is a valid json schema.\n\n Args:\n schema: The schema to validate.\n\n Raises:\n ValueError: If the provided schema is not a valid json schema.\n\n \"\"\"\n try:\n if schema is not None:\n # Most closely matches the schemas generated by pydantic\n jsonschema.Draft4Validator.check_schema(schema)\n except jsonschema.SchemaError as exc:\n raise ValueError(\n \"The provided schema is not a valid json schema. Schema error:\"\n f\" {exc.message}\"\n ) from exc\n
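For example:
from prefect.utilities.validation import validate_schema\n\nvalidate_schema({\"type\": \"object\", \"properties\": {\"x\": {\"type\": \"integer\"}}})  # passes\n\ntry:\n    validate_schema({\"type\": \"not-a-real-type\"})\nexcept ValueError as exc:\n    print(exc)  # The provided schema is not a valid json schema. Schema error: ...\n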
","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation.validate_values_conform_to_schema","title":"validate_values_conform_to_schema
","text":"Validate that the provided values conform to the provided json schema.
Parameters:
Name Type Description Defaultvalues
dict
The values to validate.
requiredschema
dict
The schema to validate against.
requiredignore_required
bool
Whether to ignore the required fields in the schema. Should be used when a partial set of values is acceptable.
False
Raises:
Type DescriptionValueError
If the parameters do not conform to the schema.
Source code inprefect/utilities/validation.py
def validate_values_conform_to_schema(\n values: dict, schema: dict, ignore_required: bool = False\n):\n \"\"\"\n Validate that the provided values conform to the provided json schema.\n\n Args:\n values: The values to validate.\n schema: The schema to validate against.\n ignore_required: Whether to ignore the required fields in the schema. Should be\n used when a partial set of values is acceptable.\n\n Raises:\n ValueError: If the parameters do not conform to the schema.\n\n \"\"\"\n if ignore_required:\n schema = remove_nested_keys([\"required\"], schema)\n\n try:\n if schema is not None and values is not None:\n jsonschema.validate(values, schema)\n except jsonschema.ValidationError as exc:\n if exc.json_path == \"$\":\n error_message = \"Validation failed.\"\n else:\n error_message = (\n f\"Validation failed for field {exc.json_path.replace('$.', '')!r}.\"\n )\n error_message += f\" Failure reason: {exc.message}\"\n raise ValueError(error_message) from exc\n except jsonschema.SchemaError as exc:\n raise ValueError(\n \"The provided schema is not a valid json schema. Schema error:\"\n f\" {exc.message}\"\n ) from exc\n
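For example:
from prefect.utilities.validation import validate_values_conform_to_schema\n\nschema = {\n    \"type\": \"object\",\n    \"properties\": {\"count\": {\"type\": \"integer\"}},\n    \"required\": [\"count\"],\n}\n\nvalidate_values_conform_to_schema({\"count\": 3}, schema)  # passes\nvalidate_values_conform_to_schema({}, schema, ignore_required=True)  # passes\n\ntry:\n    validate_values_conform_to_schema({\"count\": \"three\"}, schema)\nexcept ValueError as exc:\n    print(exc)  # Validation failed for field 'count'. Failure reason: ...\n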
","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/visualization/","title":"visualization","text":"","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization","title":"prefect.utilities.visualization
","text":"Utilities for working with Flow.visualize()
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker","title":"TaskVizTracker
","text":"Source code in prefect/utilities/visualization.py
class TaskVizTracker:\n def __init__(self):\n self.tasks = []\n self.dynamic_task_counter = {}\n self.object_id_to_task = {}\n\n def add_task(self, task: VizTask):\n if task.name not in self.dynamic_task_counter:\n self.dynamic_task_counter[task.name] = 0\n else:\n self.dynamic_task_counter[task.name] += 1\n\n task.name = f\"{task.name}-{self.dynamic_task_counter[task.name]}\"\n self.tasks.append(task)\n\n def __enter__(self):\n TaskVizTrackerState.current = self\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n TaskVizTrackerState.current = None\n\n def link_viz_return_value_to_viz_task(\n self, viz_return_value: Any, viz_task: VizTask\n ) -> None:\n \"\"\"\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n \"\"\"\n from prefect.engine import UNTRACKABLE_TYPES\n\n if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n ):\n return\n self.object_id_to_task[id(viz_return_value)] = viz_task\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker.link_viz_return_value_to_viz_task","title":"link_viz_return_value_to_viz_task
","text":"We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons.
Source code in prefect/utilities/visualization.py
def link_viz_return_value_to_viz_task(\n self, viz_return_value: Any, viz_task: VizTask\n) -> None:\n \"\"\"\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n \"\"\"\n from prefect.engine import UNTRACKABLE_TYPES\n\n if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n ):\n return\n self.object_id_to_task[id(viz_return_value)] = viz_task\n
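A quick illustration of why identity-based tracking fails for these values:

# CPython caches small integers and singletons, so two independently computed
# "results" can be the very same object and collide in an id()-keyed mapping.
a = 2 + 3
b = 10 - 5
print(a is b)          # True: both names refer to the cached int 5
print(id(a) == id(b))  # True, so id() cannot tell these task results apart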
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.build_task_dependencies","title":"build_task_dependencies
","text":"Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker.
Raises: - GraphvizImportError: If there's an ImportError related to graphviz. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.
Source code in prefect/utilities/visualization.py
def build_task_dependencies(task_run_tracker: TaskVizTracker):\n \"\"\"\n Constructs a Graphviz directed graph object that represents the dependencies\n between tasks in the given TaskVizTracker.\n\n Parameters:\n - task_run_tracker (TaskVizTracker): An object containing tasks and their\n dependencies.\n\n Returns:\n - graphviz.Digraph: A directed graph object depicting the relationships and\n dependencies between tasks.\n\n Raises:\n - GraphvizImportError: If there's an ImportError related to graphviz.\n - FlowVisualizationError: If there's any other error during the visualization\n process or if return values of tasks are directly accessed without\n specifying a `viz_return_value`.\n \"\"\"\n try:\n g = graphviz.Digraph()\n for task in task_run_tracker.tasks:\n g.node(task.name)\n for upstream in task.upstream_tasks:\n g.edge(upstream.name, task.name)\n return g\n except ImportError as exc:\n raise GraphvizImportError from exc\n except Exception:\n raise FlowVisualizationError(\n \"Something went wrong building the flow's visualization.\"\n \" If you're interacting with the return value of a task\"\n \" directly inside of your flow, you must set a `viz_return_value`\"\n \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n )\n
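In user code this machinery is typically driven through Flow.visualize(); a minimal sketch (requires both the graphviz Python package and the system Graphviz executable):

from prefect import flow, task

@task(viz_return_value=[1, 2, 3])
def extract():
    return [1, 2, 3]

@task
def transform(data):
    return [i * 2 for i in data]

@flow
def pipeline():
    transform(extract())

# Builds the dependency graph and renders it as a PNG
pipeline.visualize()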
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.track_viz_task","title":"track_viz_task
","text":"Return a result if sync otherwise return a coroutine that returns the result
Source code inprefect/utilities/visualization.py
def track_viz_task(\n is_async: bool,\n task_name: str,\n parameters: dict,\n viz_return_value: Optional[Any] = None,\n):\n \"\"\"Return a result if sync otherwise return a coroutine that returns the result\"\"\"\n if is_async:\n return from_async.wait_for_call_in_loop_thread(\n partial(_track_viz_task, task_name, parameters, viz_return_value)\n )\n else:\n return _track_viz_task(task_name, parameters, viz_return_value)\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.visualize_task_dependencies","title":"visualize_task_dependencies
","text":"Renders and displays a Graphviz directed graph representing task dependencies.
The graph is rendered in PNG format and saved with the name specified by flow_run_name. After rendering, the visualization is opened and displayed.
Parameters: - graph (graphviz.Digraph): The directed graph object to visualize. - flow_run_name (str): The name to use when saving the rendered graph image.
Raises: - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.
Source code in prefect/utilities/visualization.py
def visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str):\n \"\"\"\n Renders and displays a Graphviz directed graph representing task dependencies.\n\n The graph is rendered in PNG format and saved with the name specified by\n flow_run_name. After rendering, the visualization is opened and displayed.\n\n Parameters:\n - graph (graphviz.Digraph): The directed graph object to visualize.\n - flow_run_name (str): The name to use when saving the rendered graph image.\n\n Raises:\n - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system.\n - FlowVisualizationError: If there's any other error during the visualization\n process or if return values of tasks are directly accessed without\n specifying a `viz_return_value`.\n \"\"\"\n try:\n graph.render(filename=flow_run_name, view=True, format=\"png\", cleanup=True)\n except graphviz.backend.ExecutableNotFound as exc:\n msg = (\n \"It appears you do not have Graphviz installed, or it is not on your \"\n \"PATH. Please install Graphviz from http://www.graphviz.org/download/. \"\n \"Note: Just installing the `graphviz` python package is not \"\n \"sufficient.\"\n )\n raise GraphvizExecutableNotFoundError(msg) from exc\n except Exception:\n raise FlowVisualizationError(\n \"Something went wrong building the flow's visualization.\"\n \" If you're interacting with the return value of a task\"\n \" directly inside of your flow, you must set a `viz_return_value`\"\n \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n )\n
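The function can also be driven directly with a hand-built graph (a sketch; it writes and opens my-flow-run.png):

import graphviz

from prefect.utilities.visualization import visualize_task_dependencies

g = graphviz.Digraph()
g.node("extract-0")
g.node("transform-0")
g.edge("extract-0", "transform-0")

visualize_task_dependencies(g, "my-flow-run")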
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/workers/base/","title":"base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base
","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration
","text":" Bases: BaseModel
prefect/workers/base.py
class BaseJobConfiguration(BaseModel):\n command: Optional[str] = Field(\n default=None,\n description=(\n \"The command to use when starting a flow run. \"\n \"In most cases, this should be left blank and the command \"\n \"will be automatically generated by the worker.\"\n ),\n )\n env: Dict[str, Optional[str]] = Field(\n default_factory=dict,\n title=\"Environment Variables\",\n description=\"Environment variables to set when starting a flow run.\",\n )\n labels: Dict[str, str] = Field(\n default_factory=dict,\n description=(\n \"Labels applied to infrastructure created by the worker using \"\n \"this job configuration.\"\n ),\n )\n name: Optional[str] = Field(\n default=None,\n description=(\n \"Name given to infrastructure created by the worker using this \"\n \"job configuration.\"\n ),\n )\n\n _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n @property\n def is_using_a_runner(self):\n return self.command is not None and \"prefect flow-run execute\" in self.command\n\n @validator(\"command\")\n def _coerce_command(cls, v):\n \"\"\"Make sure that empty strings are treated as None\"\"\"\n if not v:\n return None\n return v\n\n @staticmethod\n def _get_base_config_defaults(variables: dict) -> dict:\n \"\"\"Get default values from base config for all variables that have them.\"\"\"\n defaults = dict()\n for variable_name, attrs in variables.items():\n if \"default\" in attrs:\n defaults[variable_name] = attrs[\"default\"]\n\n return defaults\n\n @classmethod\n @inject_client\n async def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n ):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n populated_configuration = await resolve_block_document_references(\n template=populated_configuration, client=client\n )\n populated_configuration = await resolve_variables(\n template=populated_configuration, client=client\n )\n return cls(**populated_configuration)\n\n @classmethod\n def json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n \"\"\"\n Prepare the job configuration for a flow run.\n\n This method is called by the worker before starting a flow run. 
It\n should be used to set any configuration values that are dependent on\n the flow run.\n\n Args:\n flow_run: The flow run to be executed.\n deployment: The deployment that the flow run is associated with.\n flow: The flow that the flow run is associated with.\n \"\"\"\n\n self._related_objects = {\n \"deployment\": deployment,\n \"flow\": flow,\n \"flow-run\": flow_run,\n }\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n env = {\n **self._base_environment(),\n **self._base_flow_run_environment(flow_run),\n **self.env,\n }\n self.env = {key: value for key, value in env.items() if value is not None}\n self.labels = {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n }\n self.name = self.name or flow_run.name\n self.command = self.command or self._base_flow_run_command()\n\n @staticmethod\n def _base_flow_run_command() -> str:\n \"\"\"\n Generate a command for a flow run job.\n \"\"\"\n if experiment_enabled(\"enhanced_cancellation\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Enhanced flow run cancellation\",\n group=\"enhanced_cancellation\",\n help=\"\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n return \"prefect flow-run execute\"\n return \"python -m prefect.engine\"\n\n @staticmethod\n def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of labels for a flow run job.\n \"\"\"\n return {\n \"prefect.io/flow-run-id\": str(flow_run.id),\n \"prefect.io/flow-run-name\": flow_run.name,\n \"prefect.io/version\": prefect.__version__,\n }\n\n @classmethod\n def _base_environment(cls) -> Dict[str, str]:\n \"\"\"\n Environment variables that should be passed to all created infrastructure.\n\n These values should be overridable with the `env` field.\n \"\"\"\n return get_current_settings().to_environment_variables(exclude_unset=True)\n\n @staticmethod\n def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of environment variables for a flow run job.\n \"\"\"\n return {\"PREFECT__FLOW_RUN_ID\": str(flow_run.id)}\n\n @staticmethod\n def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n labels = {\n \"prefect.io/deployment-id\": str(deployment.id),\n \"prefect.io/deployment-name\": deployment.name,\n }\n if deployment.updated is not None:\n labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n \"utc\"\n ).to_iso8601_string()\n return labels\n\n @staticmethod\n def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n return {\n \"prefect.io/flow-id\": str(flow.id),\n \"prefect.io/flow-name\": flow.name,\n }\n\n def _related_resources(self) -> List[RelatedResource]:\n tags = set()\n related = []\n\n for kind, obj in self._related_objects.items():\n if obj is None:\n continue\n if hasattr(obj, \"tags\"):\n tags.update(obj.tags)\n related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n return related + tags_as_related_resources(tags)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values
async
classmethod
","text":"Creates a valid worker configuration object from the provided base configuration and overrides.
Important: this method expects that the base_job_template was already validated server-side.
Source code inprefect/workers/base.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n populated_configuration = await resolve_block_document_references(\n template=populated_configuration, client=client\n )\n populated_configuration = await resolve_variables(\n template=populated_configuration, client=client\n )\n return cls(**populated_configuration)\n
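A sketch of rendering a base job template into a configuration object (the template below is invented for illustration; block and variable resolution require a reachable Prefect API):

import asyncio

from prefect.workers.base import BaseJobConfiguration

base_job_template = {
    "job_configuration": {"command": "{{ command }}", "env": "{{ env }}"},
    "variables": {
        "properties": {
            "command": {"type": "string", "default": ""},
            "env": {"type": "object", "default": {}},
        }
    },
}

config = asyncio.run(
    BaseJobConfiguration.from_template_and_values(
        base_job_template, values={"env": {"LOG_LEVEL": "DEBUG"}}
    )
)
print(config.env)      # {'LOG_LEVEL': 'DEBUG'}
print(config.command)  # None: the empty default is coerced to None by the validator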
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template
classmethod
","text":"Returns a dict with job configuration as keys and the corresponding templates as values
Defaults to using the job configuration parameter name as the template variable name.
e.g. { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # template2 specifically provided as the template }
prefect/workers/base.py
@classmethod\ndef json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n
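For the base configuration fields above, this yields one template per field:

from prefect.workers.base import BaseJobConfiguration

print(BaseJobConfiguration.json_template())
# {'command': '{{ command }}', 'env': '{{ env }}', 'labels': '{{ labels }}', 'name': '{{ name }}'}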
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run
","text":"Prepare the job configuration for a flow run.
This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.
Parameters:
Name Type Description Default
flow_run FlowRun The flow run to be executed. required
deployment Optional[DeploymentResponse] The deployment that the flow run is associated with. None
flow Optional[Flow] The flow that the flow run is associated with. None
Source code in prefect/workers/base.py
def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n):\n \"\"\"\n Prepare the job configuration for a flow run.\n\n This method is called by the worker before starting a flow run. It\n should be used to set any configuration values that are dependent on\n the flow run.\n\n Args:\n flow_run: The flow run to be executed.\n deployment: The deployment that the flow run is associated with.\n flow: The flow that the flow run is associated with.\n \"\"\"\n\n self._related_objects = {\n \"deployment\": deployment,\n \"flow\": flow,\n \"flow-run\": flow_run,\n }\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n env = {\n **self._base_environment(),\n **self._base_flow_run_environment(flow_run),\n **self.env,\n }\n self.env = {key: value for key, value in env.items() if value is not None}\n self.labels = {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n }\n self.name = self.name or flow_run.name\n self.command = self.command or self._base_flow_run_command()\n
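A runnable sketch (the FlowRun is constructed inline purely for illustration; in a worker it arrives from the API):

from uuid import uuid4

from prefect.client.schemas.objects import FlowRun
from prefect.workers.base import BaseJobConfiguration

config = BaseJobConfiguration(env={"MY_VAR": "1"})
flow_run = FlowRun(flow_id=uuid4(), name="example-run")
config.prepare_for_flow_run(flow_run)

print(config.labels)   # includes prefect.io/flow-run-id, prefect.io/flow-run-name, prefect.io/version
print(config.command)  # falls back to the base flow run command
print(config.env["PREFECT__FLOW_RUN_ID"])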
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker
","text":" Bases: ABC
prefect/workers/base.py
@register_base_type\nclass BaseWorker(abc.ABC):\n type: str\n job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n _documentation_url = \"\"\n _logo_url = \"\"\n _description = \"\"\n\n def __init__(\n self,\n work_pool_name: str,\n work_queues: Optional[List[str]] = None,\n name: Optional[str] = None,\n prefetch_seconds: Optional[float] = None,\n create_pool_if_not_found: bool = True,\n limit: Optional[int] = None,\n heartbeat_interval_seconds: Optional[int] = None,\n *,\n base_job_template: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"\n Base class for all Prefect workers.\n\n Args:\n name: The name of the worker. If not provided, a random one\n will be generated. If provided, it cannot contain '/' or '%'.\n The name is used to identify the worker in the UI; if two\n processes have the same name, they will be treated as the same\n worker.\n work_pool_name: The name of the work pool to poll.\n work_queues: A list of work queues to poll. If not provided, all\n work queue in the work pool will be polled.\n prefetch_seconds: The number of seconds to prefetch flow runs for.\n create_pool_if_not_found: Whether to create the work pool\n if it is not found. Defaults to `True`, but can be set to `False` to\n ensure that work pools are not created accidentally.\n limit: The maximum number of flow runs this worker should be running at\n a given time.\n base_job_template: If creating the work pool, provide the base job\n template to use. Logs a warning if the pool already exists.\n \"\"\"\n if name and (\"/\" in name or \"%\" in name):\n raise ValueError(\"Worker name cannot contain '/' or '%'\")\n self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n self.is_setup = False\n self._create_pool_if_not_found = create_pool_if_not_found\n self._base_job_template = base_job_template\n self._work_pool_name = work_pool_name\n self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n self._prefetch_seconds: float = (\n prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n )\n self.heartbeat_interval_seconds = (\n heartbeat_interval_seconds or PREFECT_WORKER_HEARTBEAT_SECONDS.value()\n )\n\n self._work_pool: Optional[WorkPool] = None\n self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n self._client: Optional[PrefectClient] = None\n self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n self._limit = limit\n self._limiter: Optional[anyio.CapacityLimiter] = None\n self._submitting_flow_run_ids = set()\n self._cancelling_flow_run_ids = set()\n self._scheduled_task_scopes = set()\n\n @classmethod\n def get_documentation_url(cls) -> str:\n return cls._documentation_url\n\n @classmethod\n def get_logo_url(cls) -> str:\n return cls._logo_url\n\n @classmethod\n def get_description(cls) -> str:\n return cls._description\n\n @classmethod\n def get_default_base_job_template(cls) -> Dict:\n if cls.job_configuration_variables is None:\n schema = cls.job_configuration.schema()\n # remove \"template\" key from all dicts in schema['properties'] because it is not a\n # relevant field\n for key, value in schema[\"properties\"].items():\n if isinstance(value, dict):\n schema[\"properties\"][key].pop(\"template\", None)\n variables_schema = schema\n else:\n variables_schema = cls.job_configuration_variables.schema()\n variables_schema.pop(\"title\", None)\n return {\n \"job_configuration\": 
cls.job_configuration.json_template(),\n \"variables\": variables_schema,\n }\n\n @staticmethod\n def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n \"\"\"\n Returns the worker class for a given worker type. If the worker type\n is not recognized, returns None.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return worker_registry.get(type)\n\n @staticmethod\n def get_all_available_worker_types() -> List[str]:\n \"\"\"\n Returns all worker types available in the local registry.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return list(worker_registry.keys())\n return []\n\n def get_name_slug(self):\n return slugify(self.name)\n\n def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n return flow_run_logger(flow_run=flow_run).getChild(\n \"worker\",\n extra={\n \"worker_name\": self.name,\n \"work_pool_name\": (\n self._work_pool_name if self._work_pool else \"<unknown>\"\n ),\n \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n },\n )\n\n @abc.abstractmethod\n async def run(\n self,\n flow_run: \"FlowRun\",\n configuration: BaseJobConfiguration,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> BaseWorkerResult:\n \"\"\"\n Runs a given flow run on the current worker.\n \"\"\"\n raise NotImplementedError(\n \"Workers must implement a method for running submitted flow runs\"\n )\n\n async def kill_infrastructure(\n self,\n infrastructure_pid: str,\n configuration: BaseJobConfiguration,\n grace_seconds: int = 30,\n ):\n \"\"\"\n Method for killing infrastructure created by a worker. Should be implemented by\n individual workers if they support killing infrastructure.\n \"\"\"\n raise NotImplementedError(\n \"This worker does not support killing infrastructure.\"\n )\n\n @classmethod\n def __dispatch_key__(cls):\n if cls.__name__ == \"BaseWorker\":\n return None # The base class is abstract\n return cls.type\n\n async def setup(self):\n \"\"\"Prepares the worker to run.\"\"\"\n self._logger.debug(\"Setting up worker...\")\n self._runs_task_group = anyio.create_task_group()\n self._limiter = (\n anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n )\n self._client = get_client()\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.is_setup = True\n\n async def teardown(self, *exc_info):\n \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n self._logger.debug(\"Tearing down worker...\")\n self.is_setup = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n self._runs_task_group = None\n self._client = None\n\n def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n \"\"\"\n This method is invoked by a webserver healthcheck handler\n and returns a boolean indicating if the worker has recorded a\n scheduled flow run poll within a variable amount of time.\n\n The `query_interval_seconds` is the same value that is used by\n the loop services - we will evaluate if the _last_polled_time\n was within that interval x 30 (so 10s -> 5m)\n\n The instance property `self._last_polled_time`\n is currently set/updated in `get_and_submit_flow_runs()`\n \"\"\"\n threshold_seconds = query_interval_seconds * 30\n\n seconds_since_last_poll = 
(\n pendulum.now(\"utc\") - self._last_polled_time\n ).in_seconds()\n\n is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n if not is_still_polling:\n self._logger.error(\n f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n \"and should be restarted\"\n )\n\n return is_still_polling\n\n async def get_and_submit_flow_runs(self):\n runs_response = await self._get_scheduled_flow_runs()\n\n self._last_polled_time = pendulum.now(\"utc\")\n\n return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n async def check_for_cancelled_flow_runs(self):\n if not self.is_setup:\n raise RuntimeError(\n \"Worker is not set up. Please make sure you are running this worker \"\n \"as an async context manager.\"\n )\n\n self._logger.debug(\"Checking for cancelled flow runs...\")\n\n work_queue_filter = (\n WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n if self._work_queues\n else None\n )\n\n named_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n ),\n work_pool_filter=WorkPoolFilter(\n name=WorkPoolFilterName(any_=[self._work_pool_name])\n ),\n work_queue_filter=work_queue_filter,\n )\n\n typed_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n ),\n work_pool_filter=WorkPoolFilter(\n name=WorkPoolFilterName(any_=[self._work_pool_name])\n ),\n work_queue_filter=work_queue_filter,\n )\n\n cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n if cancelling_flow_runs:\n self._logger.info(\n f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n )\n\n for flow_run in cancelling_flow_runs:\n self._cancelling_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n return cancelling_flow_runs\n\n async def cancel_run(self, flow_run: \"FlowRun\"):\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n configuration = await self._get_configuration(flow_run)\n if configuration.is_using_a_runner:\n self._logger.info(\n f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n \" using enhanced cancellation. A dedicated runner will handle\"\n \" cancellation.\"\n )\n return\n except ObjectNotFound:\n self._logger.warning(\n f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n )\n\n if not flow_run.infrastructure_pid:\n run_logger.error(\n f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n \" attached. 
Cancellation cannot be guaranteed.\"\n )\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"This flow run is missing infrastructure tracking information\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n await self.kill_infrastructure(\n infrastructure_pid=flow_run.infrastructure_pid,\n configuration=configuration,\n )\n except NotImplementedError:\n self._logger.error(\n f\"Worker type {self.type!r} does not support killing created \"\n \"infrastructure. Cancellation cannot be guaranteed.\"\n )\n except InfrastructureNotFound as exc:\n self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._mark_flow_run_as_cancelled(flow_run)\n except InfrastructureNotAvailable as exc:\n self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n except Exception:\n run_logger.exception(\n \"Encountered exception while killing infrastructure for flow run \"\n f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self._cancelling_flow_run_ids.remove(flow_run.id)\n return\n else:\n self._emit_flow_run_cancelled_event(\n flow_run=flow_run, configuration=configuration\n )\n await self._mark_flow_run_as_cancelled(flow_run)\n run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n async def _update_local_work_pool_info(self):\n try:\n work_pool = await self._client.read_work_pool(\n work_pool_name=self._work_pool_name\n )\n except ObjectNotFound:\n if self._create_pool_if_not_found:\n wp = WorkPoolCreate(\n name=self._work_pool_name,\n type=self.type,\n )\n if self._base_job_template is not None:\n wp.base_job_template = self._base_job_template\n\n work_pool = await self._client.create_work_pool(work_pool=wp)\n self._logger.info(f\"Work pool {self._work_pool_name!r} created.\")\n else:\n self._logger.warning(f\"Work pool {self._work_pool_name!r} not found!\")\n if self._base_job_template is not None:\n self._logger.warning(\n \"Ignoring supplied base job template because the work pool\"\n \" already exists\"\n )\n return\n\n # if the remote config type changes (or if it's being loaded for the\n # first time), check if it matches the local type and warn if not\n if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n if work_pool.type != self.__class__.type:\n self._logger.warning(\n \"Worker type mismatch! This worker process expects type \"\n f\"{self.type!r} but received {work_pool.type!r}\"\n \" from the server. Unexpected behavior may occur.\"\n )\n\n # once the work pool is loaded, verify that it has a `base_job_template` and\n # set it if not\n if not work_pool.base_job_template:\n job_template = self.__class__.get_default_base_job_template()\n await self._set_work_pool_template(work_pool, job_template)\n work_pool.base_job_template = job_template\n\n self._work_pool = work_pool\n\n async def _send_worker_heartbeat(self):\n if self._work_pool:\n await self._client.send_worker_heartbeat(\n work_pool_name=self._work_pool_name,\n worker_name=self.name,\n heartbeat_interval_seconds=self.heartbeat_interval_seconds,\n )\n\n async def sync_with_backend(self):\n \"\"\"\n Updates the worker's local information about it's current work pool and\n queues. 
Sends a worker heartbeat to the API.\n \"\"\"\n await self._update_local_work_pool_info()\n\n await self._send_worker_heartbeat()\n\n self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n async def _get_scheduled_flow_runs(\n self,\n ) -> List[\"WorkerFlowRunResponse\"]:\n \"\"\"\n Retrieve scheduled flow runs from the work pool's queues.\n \"\"\"\n scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n self._logger.debug(\n f\"Querying for flow runs scheduled before {scheduled_before}\"\n )\n try:\n scheduled_flow_runs = (\n await self._client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=self._work_pool_name,\n scheduled_before=scheduled_before,\n work_queue_names=list(self._work_queues),\n )\n )\n self._logger.debug(\n f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n )\n return scheduled_flow_runs\n except ObjectNotFound:\n # the pool doesn't exist; it will be created on the next\n # heartbeat (or an appropriate warning will be logged)\n return []\n\n async def _submit_scheduled_flow_runs(\n self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n for execution by the worker.\n \"\"\"\n submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n for flow_run in submittable_flow_runs:\n if flow_run.id in self._submitting_flow_run_ids:\n continue\n\n try:\n if self._limiter:\n self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n except anyio.WouldBlock:\n self._logger.info(\n f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n \" in progress.\"\n )\n break\n else:\n run_logger = self.get_flow_run_logger(flow_run)\n run_logger.info(\n f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n )\n self._submitting_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(\n self._submit_run,\n flow_run,\n )\n\n return list(\n filter(\n lambda run: run.id in self._submitting_flow_run_ids,\n submittable_flow_runs,\n )\n )\n\n async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n \"\"\"\n Performs a check on a submitted flow run to warn the user if the flow run\n was created from a deployment with a storage block.\n \"\"\"\n if flow_run.deployment_id:\n deployment = await self._client.read_deployment(flow_run.deployment_id)\n if deployment.storage_document_id:\n raise ValueError(\n f\"Flow run {flow_run.id!r} was created from deployment\"\n f\" {deployment.name!r} which is configured with a storage block.\"\n \" Please use an\"\n \" agent to execute this flow run.\"\n )\n\n async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n \"\"\"\n Submits a given flow run for execution by the worker.\n \"\"\"\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n await self._check_flow_run(flow_run)\n except (ValueError, ObjectNotFound):\n self._logger.exception(\n (\n \"Flow run %s did not pass checks and will not be submitted for\"\n \" execution\"\n ),\n flow_run.id,\n )\n self._submitting_flow_run_ids.remove(flow_run.id)\n return\n\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n readiness_result = await self._runs_task_group.start(\n self._submit_run_and_capture_errors, flow_run\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n try:\n await self._client.update_flow_run(\n 
flow_run_id=flow_run.id,\n infrastructure_pid=str(readiness_result),\n )\n except Exception:\n run_logger.exception(\n \"An error occurred while setting the `infrastructure_pid` on \"\n f\"flow run {flow_run.id!r}. The flow run will \"\n \"not be cancellable.\"\n )\n\n run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n else:\n # If the run is not ready to submit, release the concurrency slot\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run.id)\n\n self._submitting_flow_run_ids.remove(flow_run.id)\n\n async def _submit_run_and_capture_errors(\n self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n ) -> Union[BaseWorkerResult, Exception]:\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n configuration = await self._get_configuration(flow_run)\n submitted_event = self._emit_flow_run_submitted_event(configuration)\n result = await self.run(\n flow_run=flow_run,\n task_status=task_status,\n configuration=configuration,\n )\n except Exception as exc:\n if not task_status._future.done():\n # This flow run was being submitted and did not start successfully\n run_logger.exception(\n f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started(exc)\n await self._propose_crashed_state(\n flow_run, \"Flow run could not be submitted to infrastructure\"\n )\n else:\n run_logger.exception(\n f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n \"The flow run will not be marked as failed, but an issue may have \"\n \"occurred.\"\n )\n return exc\n finally:\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run.id)\n\n if not task_status._future.done():\n run_logger.error(\n f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n \"as started or raising an error. This behavior is not expected and \"\n \"generally indicates improper implementation of infrastructure. 
The \"\n \"flow run will not be marked as failed, but an issue may have occurred.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started()\n\n if result.status_code != 0:\n await self._propose_crashed_state(\n flow_run,\n (\n \"Flow run infrastructure exited with non-zero status code\"\n f\" {result.status_code}.\"\n ),\n )\n\n self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n return result\n\n def get_status(self):\n \"\"\"\n Retrieves the status of the current worker including its name, current worker\n pool, the work pool queues it is polling, and its local settings.\n \"\"\"\n return {\n \"name\": self.name,\n \"work_pool\": (\n self._work_pool.dict(json_compatible=True)\n if self._work_pool is not None\n else None\n ),\n \"settings\": {\n \"prefetch_seconds\": self._prefetch_seconds,\n },\n }\n\n async def _get_configuration(\n self,\n flow_run: \"FlowRun\",\n ) -> BaseJobConfiguration:\n deployment = await self._client.read_deployment(flow_run.deployment_id)\n flow = await self._client.read_flow(flow_run.flow_id)\n\n deployment_vars = deployment.infra_overrides or {}\n flow_run_vars = flow_run.job_variables or {}\n job_variables = {**deployment_vars, **flow_run_vars}\n\n configuration = await self.job_configuration.from_template_and_values(\n base_job_template=self._work_pool.base_job_template,\n values=job_variables,\n client=self._client,\n )\n configuration.prepare_for_flow_run(\n flow_run=flow_run, deployment=deployment, flow=flow\n )\n return configuration\n\n async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n run_logger = self.get_flow_run_logger(flow_run)\n state = flow_run.state\n try:\n state = await propose_state(\n self._client, Pending(), flow_run_id=flow_run.id\n )\n except Abort as exc:\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n f\"Server sent an abort signal: {exc}\"\n ),\n )\n return False\n except Exception:\n run_logger.exception(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n )\n return False\n\n if not state.is_pending():\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}': \"\n f\"Server returned a non-pending state {state.type.value!r}\"\n ),\n )\n return False\n\n return True\n\n async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n run_logger = self.get_flow_run_logger(flow_run)\n try:\n await propose_state(\n self._client,\n await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # We've already failed, no need to note the abort but we don't want it to\n # raise in the agent process\n pass\n except Exception:\n run_logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n\n async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n run_logger = self.get_flow_run_logger(flow_run)\n try:\n state = await propose_state(\n self._client,\n Crashed(message=message),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # Flow run already marked as failed\n pass\n except Exception:\n run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n else:\n if state.is_crashed():\n run_logger.info(\n f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n )\n\n async def _mark_flow_run_as_cancelled(\n self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n ) -> None:\n state_updates = state_updates or {}\n state_updates.setdefault(\"name\", \"Cancelled\")\n state_updates.setdefault(\"type\", StateType.CANCELLED)\n state = flow_run.state.copy(update=state_updates)\n\n await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n # Do not remove the flow run from the cancelling set immediately because\n # the API caches responses for the `read_flow_runs` and we do not want to\n # duplicate cancellations.\n await self._schedule_task(\n 60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n )\n\n async def _set_work_pool_template(self, work_pool, job_template):\n \"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n await self._client.update_work_pool(\n work_pool_name=work_pool.name,\n work_pool=WorkPoolUpdate(\n base_job_template=job_template,\n ),\n )\n\n async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n \"\"\"\n Schedule a background task to start after some time.\n\n These tasks will be run immediately when the worker exits instead of waiting.\n\n The function may be async or sync. 
Async functions will be awaited.\n \"\"\"\n\n async def wrapper(task_status):\n # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n # time or shutdown\n if self.is_setup:\n with anyio.CancelScope() as scope:\n self._scheduled_task_scopes.add(scope)\n task_status.started()\n await anyio.sleep(__in_seconds)\n\n self._scheduled_task_scopes.remove(scope)\n else:\n task_status.started()\n\n result = fn(*args, **kwargs)\n if inspect.iscoroutine(result):\n await result\n\n await self._runs_task_group.start(wrapper)\n\n async def __aenter__(self):\n self._logger.debug(\"Entering worker context...\")\n await self.setup()\n return self\n\n async def __aexit__(self, *exc_info):\n self._logger.debug(\"Exiting worker context...\")\n await self.teardown(*exc_info)\n\n def __repr__(self):\n return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n def _event_resource(self):\n return {\n \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n \"prefect.resource.name\": self.name,\n \"prefect.version\": prefect.__version__,\n \"prefect.worker-type\": self.type,\n }\n\n def _event_related_resources(\n self,\n configuration: Optional[BaseJobConfiguration] = None,\n include_self: bool = False,\n ) -> List[RelatedResource]:\n related = []\n if configuration:\n related += configuration._related_resources()\n\n if self._work_pool:\n related.append(\n object_as_related_resource(\n kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n )\n )\n\n if include_self:\n worker_resource = self._event_resource()\n worker_resource[\"prefect.resource.role\"] = \"worker\"\n related.append(RelatedResource(__root__=worker_resource))\n\n return related\n\n def _emit_flow_run_submitted_event(\n self, configuration: BaseJobConfiguration\n ) -> Event:\n return emit_event(\n event=\"prefect.worker.submitted-flow-run\",\n resource=self._event_resource(),\n related=self._event_related_resources(configuration=configuration),\n )\n\n def _emit_flow_run_executed_event(\n self,\n result: BaseWorkerResult,\n configuration: BaseJobConfiguration,\n submitted_event: Event,\n ):\n related = self._event_related_resources(configuration=configuration)\n\n for resource in related:\n if resource.role == \"flow-run\":\n resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n emit_event(\n event=\"prefect.worker.executed-flow-run\",\n resource=self._event_resource(),\n related=related,\n follows=submitted_event,\n )\n\n async def _emit_worker_started_event(self) -> Event:\n return emit_event(\n \"prefect.worker.started\",\n resource=self._event_resource(),\n related=self._event_related_resources(),\n )\n\n async def _emit_worker_stopped_event(self, started_event: Event):\n emit_event(\n \"prefect.worker.stopped\",\n resource=self._event_resource(),\n related=self._event_related_resources(),\n follows=started_event,\n )\n\n def _emit_flow_run_cancelled_event(\n self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n ):\n related = self._event_related_resources(configuration=configuration)\n\n for resource in related:\n if resource.role == \"flow-run\":\n resource[\"prefect.infrastructure.identifier\"] = str(\n flow_run.infrastructure_pid\n )\n\n emit_event(\n event=\"prefect.worker.cancelled-flow-run\",\n resource=self._event_resource(),\n related=related,\n )\n
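A minimal sketch of a custom worker type built on this class (the EchoWorker name and behavior are invented; a real worker creates infrastructure in run() and usually defines a job_configuration_variables schema):

from typing import Optional

import anyio.abc

from prefect.workers.base import BaseJobConfiguration, BaseWorker, BaseWorkerResult

class EchoWorkerResult(BaseWorkerResult):
    """Result returned by EchoWorker.run."""

class EchoWorker(BaseWorker):
    type = "echo"
    job_configuration = BaseJobConfiguration
    _description = "A toy worker that only logs the command it would run."

    async def run(
        self,
        flow_run,
        configuration: BaseJobConfiguration,
        task_status: Optional[anyio.abc.TaskStatus] = None,
    ) -> BaseWorkerResult:
        if task_status:
            # Report an infrastructure identifier so the run can be cancelled later
            task_status.started(str(flow_run.id))
        print(f"Would run flow run {flow_run.id} with command {configuration.command!r}")
        return EchoWorkerResult(identifier=str(flow_run.id), status_code=0)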
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types
staticmethod
","text":"Returns all worker types available in the local registry.
Source code inprefect/workers/base.py
@staticmethod\ndef get_all_available_worker_types() -> List[str]:\n \"\"\"\n Returns all worker types available in the local registry.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return list(worker_registry.keys())\n return []\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status
","text":"Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings.
Source code inprefect/workers/base.py
def get_status(self):\n \"\"\"\n Retrieves the status of the current worker including its name, current worker\n pool, the work pool queues it is polling, and its local settings.\n \"\"\"\n return {\n \"name\": self.name,\n \"work_pool\": (\n self._work_pool.dict(json_compatible=True)\n if self._work_pool is not None\n else None\n ),\n \"settings\": {\n \"prefetch_seconds\": self._prefetch_seconds,\n },\n }\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type
staticmethod
","text":"Returns the worker class for a given worker type. If the worker type is not recognized, returns None.
Source code inprefect/workers/base.py
@staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n \"\"\"\n Returns the worker class for a given worker type. If the worker type\n is not recognized, returns None.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return worker_registry.get(type)\n
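Both registry helpers are simple lookups; for example (the exact list depends on which collections are installed):

from prefect.workers.base import BaseWorker

print(BaseWorker.get_all_available_worker_types())  # e.g. ['process', ...]

worker_cls = BaseWorker.get_worker_class_from_type("process")
print(worker_cls)  # the process worker class, or None for an unknown type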
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling
","text":"This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.
The query_interval_seconds
is the same value that is used by the loop services - we will evaluate if the _last_polled_time was within that interval x 30 (so 10s -> 5m)
The instance property self._last_polled_time
is currently set/updated in get_and_submit_flow_runs()
prefect/workers/base.py
def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n \"\"\"\n This method is invoked by a webserver healthcheck handler\n and returns a boolean indicating if the worker has recorded a\n scheduled flow run poll within a variable amount of time.\n\n The `query_interval_seconds` is the same value that is used by\n the loop services - we will evaluate if the _last_polled_time\n was within that interval x 30 (so 10s -> 5m)\n\n The instance property `self._last_polled_time`\n is currently set/updated in `get_and_submit_flow_runs()`\n \"\"\"\n threshold_seconds = query_interval_seconds * 30\n\n seconds_since_last_poll = (\n pendulum.now(\"utc\") - self._last_polled_time\n ).in_seconds()\n\n is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n if not is_still_polling:\n self._logger.error(\n f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n \"and should be restarted\"\n )\n\n return is_still_polling\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure
async
","text":"Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.
Source code inprefect/workers/base.py
async def kill_infrastructure(\n self,\n infrastructure_pid: str,\n configuration: BaseJobConfiguration,\n grace_seconds: int = 30,\n):\n \"\"\"\n Method for killing infrastructure created by a worker. Should be implemented by\n individual workers if they support killing infrastructure.\n \"\"\"\n raise NotImplementedError(\n \"This worker does not support killing infrastructure.\"\n )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run
abstractmethod
async
","text":"Runs a given flow run on the current worker.
Source code inprefect/workers/base.py
@abc.abstractmethod\nasync def run(\n self,\n flow_run: \"FlowRun\",\n configuration: BaseJobConfiguration,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n \"\"\"\n Runs a given flow run on the current worker.\n \"\"\"\n raise NotImplementedError(\n \"Workers must implement a method for running submitted flow runs\"\n )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup
async
","text":"Prepares the worker to run.
Source code inprefect/workers/base.py
async def setup(self):\n \"\"\"Prepares the worker to run.\"\"\"\n self._logger.debug(\"Setting up worker...\")\n self._runs_task_group = anyio.create_task_group()\n self._limiter = (\n anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n )\n self._client = get_client()\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.is_setup = True\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend
async
","text":"Updates the worker's local information about it's current work pool and queues. Sends a worker heartbeat to the API.
Source code inprefect/workers/base.py
async def sync_with_backend(self):\n \"\"\"\n Updates the worker's local information about its current work pool and\n queues. Sends a worker heartbeat to the API.\n \"\"\"\n await self._update_local_work_pool_info()\n\n await self._send_worker_heartbeat()\n\n self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown
async
","text":"Cleans up resources after the worker is stopped.
Source code inprefect/workers/base.py
async def teardown(self, *exc_info):\n \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n self._logger.debug(\"Tearing down worker...\")\n self.is_setup = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n self._runs_task_group = None\n self._client = None\n
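setup() and teardown() are wired to the worker's async context manager, so programmatic use typically looks like this sketch (assumes a reachable Prefect API and an available "process" worker type):

import asyncio

from prefect.workers.base import BaseWorker

async def main():
    worker_cls = BaseWorker.get_worker_class_from_type("process")
    async with worker_cls(work_pool_name="my-process-pool") as worker:
        # Creates the work pool if missing and sends a heartbeat
        await worker.sync_with_backend()
        print(worker.get_status())

asyncio.run(main())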
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/block/","title":"block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block","title":"prefect.workers.block
","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration","title":"BlockWorkerJobConfiguration
","text":" Bases: BaseModel
prefect/workers/block.py
class BlockWorkerJobConfiguration(BaseModel):\n block: Block = Field(\n default=..., description=\"The infrastructure block to use for job creation.\"\n )\n\n @validator(\"block\")\n def _validate_block_is_infrastructure(cls, v):\n print(\"v: \", v)\n if not isinstance(v, Infrastructure):\n raise TypeError(\"Provided block is not a valid infrastructure block.\")\n\n return v\n\n _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n @property\n def is_using_a_runner(self):\n return (\n self.block.command is not None\n and \"prefect flow-run execute\" in shlex.join(self.block.command)\n )\n\n @staticmethod\n def _get_base_config_defaults(variables: dict) -> dict:\n \"\"\"Get default values from base config for all variables that have them.\"\"\"\n defaults = dict()\n for variable_name, attrs in variables.items():\n if \"default\" in attrs:\n defaults[variable_name] = attrs[\"default\"]\n\n return defaults\n\n @classmethod\n @inject_client\n async def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n ):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n\n block_document_id = get_from_dict(\n populated_configuration, \"block.$ref.block_document_id\"\n )\n if not block_document_id:\n raise ValueError(\n \"Base job template is invalid for this worker type because it does not\"\n \" contain a block_document_id after variable resolution.\"\n )\n\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n infrastructure_block = Block._from_block_document(block_document)\n\n populated_configuration[\"block\"] = infrastructure_block\n\n return cls(**populated_configuration)\n\n @classmethod\n def json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n\n def _related_resources(self) -> List[RelatedResource]:\n tags = set()\n related = []\n\n for kind, obj in self._related_objects.items():\n if obj is None:\n continue\n if hasattr(obj, \"tags\"):\n tags.update(obj.tags)\n related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n return related + tags_as_related_resources(tags)\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n self.block = self.block.prepare_for_flow_run(\n flow_run=flow_run, deployment=deployment, flow=flow\n )\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.from_template_and_values","title":"from_template_and_values
async
classmethod
","text":"Creates a valid worker configuration object from the provided base configuration and overrides.
Important: this method expects that the base_job_template was already validated server-side.
Source code in prefect/workers/block.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n\n block_document_id = get_from_dict(\n populated_configuration, \"block.$ref.block_document_id\"\n )\n if not block_document_id:\n raise ValueError(\n \"Base job template is invalid for this worker type because it does not\"\n \" contain a block_document_id after variable resolution.\"\n )\n\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n infrastructure_block = Block._from_block_document(block_document)\n\n populated_configuration[\"block\"] = infrastructure_block\n\n return cls(**populated_configuration)\n
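For orientation, here is a minimal sketch of the base job template shape this method consumes; the block_document_id value and the surrounding names are hypothetical placeholders, not part of the module:
base_job_template = {\n \"job_configuration\": {\n # resolved to an infrastructure block document after variable substitution\n \"block\": {\"$ref\": {\"block_document_id\": \"{{ block_document_id }}\"}}\n },\n \"variables\": {\n \"properties\": {\n # hypothetical placeholder; real templates carry a block document UUID\n \"block_document_id\": {\"default\": \"11111111-1111-1111-1111-111111111111\"}\n }\n },\n}\n\n# config = await BlockWorkerJobConfiguration.from_template_and_values(\n#     base_job_template=base_job_template, values={}\n# )\n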
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.json_template","title":"json_template
classmethod
","text":"Returns a dict with job configuration as keys and the corresponding templates as values
Defaults to using the job configuration parameter name as the template variable name.
e.g.
{
 key1: '{{ key1 }}', # default variable template
 key2: '{{ template2 }}', # template2 specifically provided as template
}
Source code in prefect/workers/block.py
@classmethod\ndef json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provided as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n
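As a quick illustration, for BlockWorkerJobConfiguration (whose only declared field is block) this yields the default variable template; a minimal sketch:
from prefect.workers.block import BlockWorkerJobConfiguration\n\n# Each schema property maps to a placeholder named after the field.\nprint(BlockWorkerJobConfiguration.json_template())  # {'block': '{{ block }}'}\n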
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerResult","title":"BlockWorkerResult
","text":" Bases: BaseWorkerResult
Result of a block worker job
Source code in prefect/workers/block.py
class BlockWorkerResult(BaseWorkerResult):\n \"\"\"Result of a block worker job\"\"\"\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/process/","title":"process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process
","text":"Module containing the Process worker used for executing flow runs as subprocesses.
To start a Process worker, run the following command:
prefect worker start --pool 'my-work-pool' --type process\n
Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.
For more information about work pools and workers, check out the Prefect docs.
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration
","text":" Bases: BaseJobConfiguration
Source code in prefect/workers/process.py
class ProcessJobConfiguration(BaseJobConfiguration):\n stream_output: bool = Field(default=True)\n working_dir: Optional[Path] = Field(default=None)\n\n @validator(\"working_dir\")\n def validate_command(cls, v):\n \"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n if v:\n return relative_path_to_current_platform(v)\n return v\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n super().prepare_for_flow_run(flow_run, deployment, flow)\n\n self.env = {**os.environ, **self.env}\n self.command = (\n f\"{get_sys_executable()} -m prefect.engine\"\n if self.command == self._base_flow_run_command()\n else self.command\n )\n\n def _base_flow_run_command(self) -> str:\n \"\"\"\n Override the base flow run command because enhanced cancellation doesn't\n work with the process worker.\n \"\"\"\n return \"python -m prefect.engine\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration.validate_command","title":"validate_command
","text":"Make sure that the working directory is formatted for the current platform.
Source code in prefect/workers/process.py
@validator(\"working_dir\")\ndef validate_command(cls, v):\n \"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n if v:\n return relative_path_to_current_platform(v)\n return v\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult
","text":" Bases: BaseWorkerResult
Contains information about the final state of a completed process
Source code in prefect/workers/process.py
class ProcessWorkerResult(BaseWorkerResult):\n \"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/server/","title":"server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server","title":"prefect.workers.server
","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server.start_healthcheck_server","title":"start_healthcheck_server
","text":"Run a healthcheck FastAPI server for a worker.
Parameters:
- worker (BaseWorker | ProcessWorker): the worker whose health we will check (required)
- query_interval_seconds (int): the interval used to decide whether the worker is still polling for work (required)
- log_level (str): the log level to use for the server (default: 'error')
Source code in prefect/workers/server.py
def start_healthcheck_server(\n worker: Union[BaseWorker, ProcessWorker],\n query_interval_seconds: int,\n log_level: str = \"error\",\n) -> None:\n \"\"\"\n Run a healthcheck FastAPI server for a worker.\n\n Args:\n worker (BaseWorker | ProcessWorker): the worker whose health we will check\n query_interval_seconds (int): the interval used to decide whether the worker is still polling for work\n log_level (str): the log level to use for the server\n \"\"\"\n webserver = FastAPI()\n router = APIRouter()\n\n def perform_health_check():\n did_recently_poll = worker.is_worker_still_polling(\n query_interval_seconds=query_interval_seconds\n )\n\n if not did_recently_poll:\n return JSONResponse(\n status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n content={\"message\": \"Worker may be unresponsive at this time\"},\n )\n return JSONResponse(status_code=status.HTTP_200_OK, content={\"message\": \"OK\"})\n\n router.add_api_route(\"/health\", perform_health_check, methods=[\"GET\"])\n\n webserver.include_router(router)\n\n uvicorn.run(\n webserver,\n host=PREFECT_WORKER_WEBSERVER_HOST.value(),\n port=PREFECT_WORKER_WEBSERVER_PORT.value(),\n log_level=log_level,\n )\n
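Once the healthcheck server is running, the endpoint can be probed directly; a sketch assuming a hypothetical localhost:8080 address, with the actual host and port determined by your PREFECT_WORKER_WEBSERVER_HOST and PREFECT_WORKER_WEBSERVER_PORT settings:
import requests\n\n# Returns 200 with {\"message\": \"OK\"} while the worker polls on schedule,\n# and 503 once it has not polled within the query interval.\nresponse = requests.get(\"http://127.0.0.1:8080/health\")\nprint(response.status_code, response.json())\n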
","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/utilities/","title":"utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/prefect/workers/utilities/#prefect.workers.utilities","title":"prefect.workers.utilities
","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/python/","title":"Python SDK","text":"The Prefect Python SDK is used to build, test, and execute workflows against the Prefect API.
Explore the modules in the navigation bar to the left to learn more.
","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.
Prefect Cloud and a locally hosted Prefect server each provide a REST API.
When self-hosting, you can view interactive REST API documentation at http://localhost:4200/docs or the /docs endpoint of the PREFECT_API_URL you have configured to access the server. You must have the server running with prefect server start to access the interactive documentation.
You have many options to interact with the Prefect REST API:
PrefectClient
This example uses PrefectClient
with a locally hosted Prefect server:
import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n # open the client as an async context manager so its HTTP session is closed cleanly\n async with get_client() as client:\n return await client.read_flows(limit=5)\n\nif __name__ == \"__main__\":\n flows = asyncio.run(get_flows())\n for flow in flows:\n print(flow.name, flow.id)\n
Output:
cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"This example uses the Requests library with Prefect Cloud to return the five newest artifacts.
import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n \"sort\": \"CREATED_DESC\",\n \"limit\": 5,\n \"artifacts\": {\n \"key\": {\n \"exists_\": True\n }\n }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n print(artifact)\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"This example uses curl with Prefect Cloud to create a flow run:
ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n --header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n --data-raw \"{}\"\n
Note that in this example --data-raw \"{}\"
is required and is where you can specify other aspects of the flow run such as the state. Windows users substitute ^
for \\
for multi-line commands.
When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile
. This command will also display your Prefect API key, as shown below:
PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n
Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here
.
The REST APIs adhere to the following guidelines:
- Collection names are pluralized (for example, /flows or /runs).
- We indicate variable placeholders with colons: GET /flows/:id.
- We avoid deeply nested resources: GET /task_runs. For example, we query /task_runs with a flow run filter instead of accessing /flow_runs/:id/task_runs.
- The API is hosted with an /api/:version prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples.
- Filtering, sorting, and pagination parameters are provided in the request body of POST requests where applicable. Pagination parameters are limit and offset. Sorting is specified with a single sort parameter.
- GET, PUT and DELETE requests are always idempotent. POST and PATCH are not guaranteed to be idempotent.
- GET requests cannot receive information from the request body.
- POST requests can receive information from the request body.
- POST /collection creates a new member of the collection.
- GET /collection lists all members of the collection.
- GET /collection/:id gets a specific member of the collection by ID.
- DELETE /collection/:id deletes a specific member of the collection.
- PUT /collection/:id creates or replaces a specific member of the collection.
- PATCH /collection/:id partially updates a specific member of the collection.
- POST /collection/action is how we implement non-CRUD actions. For example, to set a flow run's state, we use POST /flow_runs/:id/set_state.
- POST /collection/action may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via GET). Examples include POST /flow_runs/filter, POST /flow_runs/count, and POST /flow_runs/history.
Objects can be filtered by providing filter criteria in the body of a POST request. When multiple criteria are specified, logical AND will be applied to the criteria.
Filter criteria are structured as follows:
{\n \"objects\": {\n \"object_field\": {\n \"field_operator_\": <field_value>\n }\n }\n}\n
In this example, objects is the name of the collection to filter over (for example, flows). The collection can be either the object being queried for (flows for POST /flows/filter) or a related object (flow_runs for POST /flows/filter).
object_field is the name of the field over which to filter (name for flows). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}.
field_operator_ is the operator to apply to a field when filtering. Common examples include:
- any_: return objects where this field matches any of the following values.
- is_null_: return objects where this field is or is not null.
- eq_: return objects where this field is equal to the following value.
- all_: return objects where this field matches all of the following values.
- before_: return objects where this datetime field is less than or equal to the following value.
- after_: return objects where this datetime field is greater than or equal to the following value.
For example, to query for flows with the tag "database" and failed flow runs, POST /flows/filter with the following request body:
{\n \"flows\": {\n \"tags\": {\n \"all_\": [\"database\"]\n }\n },\n \"flow_runs\": {\n \"state\": {\n \"type\": {\n \"any_\": [\"FAILED\"]\n }\n }\n }\n}\n
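Tying this together with the Requests pattern shown earlier, a sketch that submits this filter to Prefect Cloud; the URL and key are the same placeholder values used in the Requests example above:
import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\n\n# The same criteria as above: flows tagged \"database\" that have failed runs.\nfilter_criteria = {\n \"flows\": {\"tags\": {\"all_\": [\"database\"]}},\n \"flow_runs\": {\"state\": {\"type\": {\"any_\": [\"FAILED\"]}}},\n}\n\nresponse = requests.post(\n f\"{PREFECT_API_URL}/flows/filter\",\n headers={\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"},\n json=filter_criteria,\n)\nfor flow in response.json():\n print(flow[\"name\"])\n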
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.
To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:
from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n
This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
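For example, a minimal sketch that persists the document so it can be handed to a client generator or inspection tool; the output filename is an arbitrary choice:
import json\n\nfrom prefect.server.api.server import create_app\n\napp = create_app()\n\n# Write the OpenAPI document to disk for use by external tooling.\nwith open(\"prefect-openapi.json\", \"w\") as f:\n json.dump(app.openapi(), f)\n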
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.
Select links in the left navigation menu to explore.
","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin
","text":"Routes for admin-level interactions with the Prefect REST API.
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database
async
","text":"Clear all database tables without dropping them.
Source code in prefect/server/api/admin.py
@router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Clear all database tables without dropping them.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n async with db.session_context(begin_transaction=True) as session:\n # work pool has a circular dependency on pool queue; delete it first\n await session.execute(db.WorkPool.__table__.delete())\n for table in reversed(db.Base.metadata.sorted_tables):\n await session.execute(table.delete())\n
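Each of the admin database routes refuses to act unless the confirmation flag is set; a sketch against a self-hosted server, assuming the default API address of http://localhost:4200/api:
import requests\n\n# Without {\"confirm\": True} in the body the endpoint returns 400 and does nothing.\nresponse = requests.post(\n \"http://localhost:4200/api/admin/database/clear\",\n json={\"confirm\": True},\n)\nprint(response.status_code)  # 204 on success\n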
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database
async
","text":"Create all database objects.
Source code in prefect/server/api/admin.py
@router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Create all database objects.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n\n await db.create_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database
async
","text":"Drop all database objects.
Source code in prefect/server/api/admin.py
@router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Drop all database objects.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n\n await db.drop_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings
async
","text":"Get the current Prefect REST API settings.
Secret setting values will be obfuscated.
Source code in prefect/server/api/admin.py
@router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n \"\"\"\n Get the current Prefect REST API settings.\n\n Secret setting values will be obfuscated.\n \"\"\"\n return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version
async
","text":"Returns the Prefect version number
Source code in prefect/server/api/admin.py
@router.get(\"/version\")\nasync def read_version() -> str:\n \"\"\"Returns the Prefect version number\"\"\"\n return prefect.__version__\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies
","text":"Utilities for injecting FastAPI dependencies.
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion
","text":"FastAPI Dependency used to check compatibility between the version of the api and a given request.
Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version.
Source code in prefect/server/api/dependencies.py
class EnforceMinimumAPIVersion:\n \"\"\"\n FastAPI Dependency used to check compatibility between the version of the api\n and a given request.\n\n Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n to the api's version. Rejects requests that are lower than the minimum version.\n \"\"\"\n\n def __init__(self, minimum_api_version: str, logger: logging.Logger):\n self.minimum_api_version = minimum_api_version\n versions = [int(v) for v in minimum_api_version.split(\".\")]\n self.api_major = versions[0]\n self.api_minor = versions[1]\n self.api_patch = versions[2]\n self.logger = logger\n\n async def __call__(\n self,\n x_prefect_api_version: str = Header(None),\n ):\n request_version = x_prefect_api_version\n\n # if no version header, assume latest and continue\n if not request_version:\n return\n\n # parse version\n try:\n major, minor, patch = [int(v) for v in request_version.split(\".\")]\n except ValueError:\n await self._notify_of_invalid_value(request_version)\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=(\n \"Invalid X-PREFECT-API-VERSION header format.\"\n f\"Expected header in format 'x.y.z' but received {request_version}\"\n ),\n )\n\n if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n await self._notify_of_outdated_version(request_version)\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=(\n f\"The request specified API version {request_version} but this \"\n f\"server requires version {self.minimum_api_version} or higher.\"\n ),\n )\n\n async def _notify_of_invalid_value(self, request_version: str):\n self.logger.error(\n f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n )\n\n async def _notify_of_outdated_version(self, request_version: str):\n self.logger.error(\n f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n f\"but minimum allowed version is '{self.minimum_api_version}'\"\n )\n
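The comparison is a lexicographic check on (major, minor, patch) tuples; a standalone sketch using the 0.8.4 version string from the curl example above:
# Versions lower than the minimum are rejected with a 400 response.\nminimum = (0, 8, 4)\nfor header in [\"0.8.3\", \"0.8.4\", \"0.9.0\"]:\n request_version = tuple(int(v) for v in header.split(\".\"))\n status = \"accepted\" if request_version >= minimum else \"rejected (400)\"\n print(header, status)\n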
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody
","text":"A fastapi.Depends
factory for pulling a limit: int
parameter from the request body while determining the default from the current settings.
Source code in prefect/server/api/dependencies.py
def LimitBody() -> Depends:\n \"\"\"\n A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n request body while determining the default from the current settings.\n \"\"\"\n\n def get_limit(\n limit: int = Body(\n None,\n description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n ),\n ):\n default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n limit = limit if limit is not None else default_limit\n if not limit >= 0:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"Invalid limit: must be greater than or equal to 0.\",\n )\n if limit > default_limit:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n )\n return limit\n\n return Depends(get_limit)\n
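Routes consume the factory directly in their signatures, as the deployment routes below do; a minimal sketch with a hypothetical /things/filter route:
from fastapi import FastAPI\n\nfrom prefect.server.api import dependencies\n\napp = FastAPI()\n\n@app.post(\"/things/filter\")  # hypothetical route for illustration\nasync def filter_things(limit: int = dependencies.LimitBody()):\n # `limit` arrives validated against PREFECT_API_DEFAULT_LIMIT\n return {\"limit\": limit}\n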
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments
","text":"Routes for interacting with Deployment objects.
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments
async
","text":"Count deployments.
Source code in prefect/server/api/deployments.py
@router.post(\"/count\")\nasync def count_deployments(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Count deployments.\n \"\"\"\n async with db.session_context() as session:\n return await models.deployments.count_deployments(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment
async
","text":"Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.
If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.
Source code in prefect/server/api/deployments.py
@router.post(\"/\")\nasync def create_deployment(\n deployment: schemas.actions.DeploymentCreate,\n response: Response,\n worker_lookups: WorkerLookups = Depends(WorkerLookups),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Gracefully creates a new deployment from the provided schema. If a deployment with\n the same name and flow_id already exists, the deployment is updated.\n\n If the deployment has an active schedule, flow runs will be scheduled.\n When upserting, any scheduled runs from the existing deployment will be deleted.\n \"\"\"\n\n data = deployment.dict(exclude_unset=True)\n\n async with db.session_context(begin_transaction=True) as session:\n if (\n deployment.work_pool_name\n and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n ):\n # Make sure that deployment is valid before beginning creation process\n work_pool = await models.workers.read_work_pool_by_name(\n session=session, work_pool_name=deployment.work_pool_name\n )\n if work_pool is None:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND,\n detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n )\n try:\n deployment.check_valid_configuration(work_pool.base_job_template)\n except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=f\"Error creating deployment: {exc!r}\",\n )\n\n # hydrate the input model into a full model\n deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n if deployment.work_pool_name and deployment.work_queue_name:\n # If a specific pool name/queue name combination was provided, get the\n # ID for that work pool queue.\n deployment_dict[\n \"work_queue_id\"\n ] = await worker_lookups._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n work_queue_name=deployment.work_queue_name,\n create_queue_if_not_found=True,\n )\n elif deployment.work_pool_name:\n # If just a pool name was provided, get the ID for its default\n # work pool queue.\n deployment_dict[\n \"work_queue_id\"\n ] = await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n )\n elif deployment.work_queue_name:\n # If just a queue name was provided, ensure that the queue exists and\n # get its ID.\n work_queue = await models.work_queues._ensure_work_queue_exists(\n session=session, name=deployment.work_queue_name\n )\n deployment_dict[\"work_queue_id\"] = work_queue.id\n\n deployment = schemas.core.Deployment(**deployment_dict)\n # check to see if relevant blocks exist, allowing us throw a useful error message\n # for debugging\n if deployment.infrastructure_document_id is not None:\n infrastructure_block = (\n await models.block_documents.read_block_document_by_id(\n session=session,\n block_document_id=deployment.infrastructure_document_id,\n )\n )\n if not infrastructure_block:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=(\n \"Error creating deployment. Could not find infrastructure\"\n f\" block with id: {deployment.infrastructure_document_id}. 
This\"\n \" usually occurs when applying a deployment specification that\"\n \" was built against a different Prefect database / workspace.\"\n ),\n )\n\n if deployment.storage_document_id is not None:\n infrastructure_block = (\n await models.block_documents.read_block_document_by_id(\n session=session,\n block_document_id=deployment.storage_document_id,\n )\n )\n if not infrastructure_block:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=(\n \"Error creating deployment. Could not find storage block with\"\n f\" id: {deployment.storage_document_id}. This usually occurs\"\n \" when applying a deployment specification that was built\"\n \" against a different Prefect database / workspace.\"\n ),\n )\n\n # Ensure that `paused` and `is_schedule_active` are consistent.\n if \"paused\" in data:\n deployment.is_schedule_active = not data[\"paused\"]\n elif \"is_schedule_active\" in data:\n deployment.paused = not data[\"is_schedule_active\"]\n\n now = pendulum.now(\"UTC\")\n model = await models.deployments.create_deployment(\n session=session, deployment=deployment\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return schemas.responses.DeploymentResponse.from_orm(model)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment
async
","text":"Create a flow run from a deployment.
Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.
If no state is provided, the flow run will be created in a SCHEDULED state.
Source code in prefect/server/api/deployments.py
@router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n flow_run: schemas.actions.DeploymentFlowRunCreate,\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n worker_lookups: WorkerLookups = Depends(WorkerLookups),\n response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Create a flow run from a deployment.\n\n Any parameters not provided will be inferred from the deployment's parameters.\n If tags are not provided, the deployment's tags will be used.\n\n If no state is provided, the flow run will be created in a SCHEDULED state.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n # get relevant info from the deployment\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n dehydrated_params = deployment.parameters\n dehydrated_params.update(flow_run.parameters or {})\n ctx = await HydrationContext.build(session=session, raise_on_error=True)\n parameters = hydrate(dehydrated_params, ctx)\n except HydrationError as exc:\n raise HTTPException(\n status.HTTP_400_BAD_REQUEST,\n detail=f\"Error hydrating flow run parameters: {exc}\",\n )\n else:\n parameters = deployment.parameters\n parameters.update(flow_run.parameters or {})\n\n if deployment.enforce_parameter_schema:\n if not isinstance(deployment.parameter_openapi_schema, dict):\n raise HTTPException(\n status.HTTP_409_CONFLICT,\n detail=(\n \"Error updating deployment: Cannot update parameters because\"\n \" parameter schema enforcement is enabled and the deployment\"\n \" does not have a valid parameter schema.\"\n ),\n )\n try:\n validate(\n parameters, deployment.parameter_openapi_schema, raise_on_error=True\n )\n except ValidationError as exc:\n raise HTTPException(\n status.HTTP_409_CONFLICT,\n detail=f\"Error creating flow run: {exc}\",\n )\n except CircularSchemaRefError:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"Invalid schema: Unable to validate schema with circular references.\",\n )\n\n if PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES:\n validate_job_variables_for_flow_run(flow_run, deployment)\n\n work_queue_name = deployment.work_queue_name\n work_queue_id = deployment.work_queue_id\n\n if flow_run.work_queue_name:\n # can't mutate the ORM model or else it will commit the changes back\n work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_queue.work_pool.name,\n work_queue_name=flow_run.work_queue_name,\n create_queue_if_not_found=True,\n )\n work_queue_name = flow_run.work_queue_name\n\n # hydrate the input model into a full flow run / state model\n flow_run = schemas.core.FlowRun(\n **flow_run.dict(\n exclude={\n \"parameters\",\n \"tags\",\n \"infrastructure_document_id\",\n \"work_queue_name\",\n }\n ),\n flow_id=deployment.flow_id,\n deployment_id=deployment.id,\n parameters=parameters,\n tags=set(deployment.tags).union(flow_run.tags),\n infrastructure_document_id=(\n flow_run.infrastructure_document_id\n or deployment.infrastructure_document_id\n ),\n work_queue_name=work_queue_name,\n work_queue_id=work_queue_id,\n )\n\n if not flow_run.state:\n flow_run.state = 
schemas.states.Scheduled()\n\n now = pendulum.now(\"UTC\")\n model = await models.flow_runs.create_flow_run(\n session=session, flow_run=flow_run\n )\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment
async
","text":"Delete a deployment by id.
Source code in prefect/server/api/deployments.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a deployment by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.deployments.delete_deployment(\n session=session, deployment_id=deployment_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.get_scheduled_flow_runs_for_deployments","title":"get_scheduled_flow_runs_for_deployments
async
","text":"Get scheduled runs for a set of deployments. Used by a runner to poll for work.
Source code in prefect/server/api/deployments.py
@router.post(\"/get_scheduled_flow_runs\")\nasync def get_scheduled_flow_runs_for_deployments(\n deployment_ids: List[UUID] = Body(\n default=..., description=\"The deployment IDs to get scheduled runs for\"\n ),\n scheduled_before: DateTimeTZ = Body(\n None, description=\"The maximum time to look for scheduled flow runs\"\n ),\n limit: int = dependencies.LimitBody(),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n \"\"\"\n Get scheduled runs for a set of deployments. Used by a runner to poll for work.\n \"\"\"\n async with db.session_context() as session:\n orm_flow_runs = await models.flow_runs.read_flow_runs(\n session=session,\n limit=limit,\n deployment_filter=schemas.filters.DeploymentFilter(\n id=schemas.filters.DeploymentFilterId(any_=deployment_ids),\n ),\n flow_run_filter=schemas.filters.FlowRunFilter(\n next_scheduled_start_time=schemas.filters.FlowRunFilterNextScheduledStartTime(\n before_=scheduled_before\n ),\n state=schemas.filters.FlowRunFilterState(\n type=schemas.filters.FlowRunFilterStateType(\n any_=[schemas.states.StateType.SCHEDULED]\n )\n ),\n ),\n sort=schemas.sorting.FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,\n )\n\n flow_run_responses = [\n schemas.responses.FlowRunResponse.from_orm(orm_flow_run=orm_flow_run)\n for orm_flow_run in orm_flow_runs\n ]\n\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n await models.deployments._update_deployment_last_polled(\n session=session, deployment_ids=deployment_ids\n )\n\n return flow_run_responses\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment
async
","text":"Get a deployment by id.
Source code in prefect/server/api/deployments.py
@router.get(\"/{id}\")\nasync def read_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Get a deployment by id.\n \"\"\"\n async with db.session_context() as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Get a deployment using the name of the flow and the deployment.
Source code in prefect/server/api/deployments.py
@router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n flow_name: str = Path(..., description=\"The name of the flow\"),\n deployment_name: str = Path(..., description=\"The name of the deployment\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Get a deployment using the name of the flow and the deployment.\n \"\"\"\n async with db.session_context() as session:\n deployment = await models.deployments.read_deployment_by_name(\n session=session, name=deployment_name, flow_name=flow_name\n )\n if not deployment:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments
async
","text":"Query for deployments.
Source code in prefect/server/api/deployments.py
@router.post(\"/filter\")\nasync def read_deployments(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n sort: schemas.sorting.DeploymentSort = Body(\n schemas.sorting.DeploymentSort.NAME_ASC\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n \"\"\"\n Query for deployments.\n \"\"\"\n async with db.session_context() as session:\n response = await models.deployments.read_deployments(\n session=session,\n offset=offset,\n sort=sort,\n limit=limit,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n return [\n schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n for deployment in response\n ]\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment
async
","text":"Schedule runs for a deployment. For backfills, provide start/end times in the past.
This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.
- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
Source code in prefect/server/api/deployments.py
@router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n min_time: datetime.timedelta = Body(\n None,\n description=(\n \"Runs will be scheduled until at least this long after the `start_time`\"\n ),\n ),\n min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n This function will generate the minimum number of runs that satisfy the min\n and max times, and the min and max counts. Specifically, the following order\n will be respected.\n\n - Runs will be generated starting on or after the `start_time`\n - No more than `max_runs` runs will be generated\n - No runs will be generated after `end_time` is reached\n - At least `min_runs` runs will be generated\n - Runs will be generated until at least `start_time + min_time` is reached\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n await models.deployments.schedule_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=start_time,\n min_time=min_time,\n end_time=end_time,\n min_runs=min_runs,\n max_runs=max_runs,\n )\n
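A sketch of a backfill request against this endpoint; the deployment ID is a placeholder, and the API address assumes a default self-hosted server:
import requests\n\nDEPLOYMENT_ID = \"my-deployment-id\"  # placeholder\n\n# Backfill: generate at most 10 runs across a window in the past.\nresponse = requests.post(\n f\"http://localhost:4200/api/deployments/{DEPLOYMENT_ID}/schedule\",\n json={\n \"start_time\": \"2023-01-01T00:00:00+00:00\",\n \"end_time\": \"2023-01-07T00:00:00+00:00\",\n \"max_runs\": 10,\n },\n)\nprint(response.status_code)\n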
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_active","title":"set_schedule_active
async
","text":"Set a deployment schedule to active. Runs will be scheduled immediately.
Source code in prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_active\")\nasync def set_schedule_active(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Set a deployment schedule to active. Runs will be scheduled immediately.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n deployment.is_schedule_active = True\n deployment.paused = False\n\n # Ensure that we're updating the replicated schedule's `active` field,\n # if there is only a single schedule. This is support for legacy\n # clients.\n\n number_of_schedules = len(deployment.schedules)\n\n if number_of_schedules == 1:\n deployment.schedules[0].active = True\n elif number_of_schedules > 1:\n raise _multiple_schedules_error(deployment_id)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_inactive","title":"set_schedule_inactive
async
","text":"Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.
Source code in prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_inactive\")\nasync def set_schedule_inactive(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n state will be deleted.\n \"\"\"\n async with db.session_context(begin_transaction=False) as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n deployment.is_schedule_active = False\n deployment.paused = False\n\n # Ensure that we're updating the replicated schedule's `active` field,\n # if there is only a single schedule. This is support for legacy\n # clients.\n\n number_of_schedules = len(deployment.schedules)\n\n if number_of_schedules == 1:\n deployment.schedules[0].active = False\n elif number_of_schedules > 1:\n raise _multiple_schedules_error(deployment_id)\n\n # commit here to make the inactive schedule \"visible\" to the scheduler service\n await session.commit()\n\n # delete any auto scheduled runs\n await models.deployments._delete_scheduled_runs(\n session=session,\n deployment_id=deployment_id,\n db=db,\n auto_scheduled_only=True,\n )\n\n await session.commit()\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment
async
","text":"Get list of work-queues that are able to pick up the specified deployment.
This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.
Source code in prefect/server/api/deployments.py
@router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n \"\"\"\n Get list of work-queues that are able to pick up the specified deployment.\n\n This endpoint is intended to be used by the UI to provide users warnings\n about deployments that are unable to be executed because there are no work\n queues that will pick up their runs, based on existing filter criteria. It\n may be deprecated in the future because there is not a strict relationship\n between work queues and deployments.\n \"\"\"\n try:\n async with db.session_context() as session:\n work_queues = await models.deployments.check_work_queues_for_deployment(\n session=session, deployment_id=deployment_id\n )\n except ObjectNotFoundError:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return work_queues\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states
","text":"Routes for interacting with flow run state objects.
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state
async
","text":"Get a flow run state by id.
Source code in prefect/server/api/flow_run_states.py
@router.get(\"/{id}\")\nasync def read_flow_run_state(\n flow_run_state_id: UUID = Path(\n ..., description=\"The flow run state id\", alias=\"id\"\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n \"\"\"\n Get a flow run state by id.\n \"\"\"\n async with db.session_context() as session:\n flow_run_state = await models.flow_run_states.read_flow_run_state(\n session=session, flow_run_state_id=flow_run_state_id\n )\n if not flow_run_state:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n )\n return flow_run_state\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states
async
","text":"Get states associated with a flow run.
Source code in prefect/server/api/flow_run_states.py
@router.get(\"/\")\nasync def read_flow_run_states(\n flow_run_id: UUID,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n \"\"\"\n Get states associated with a flow run.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_run_states.read_flow_run_states(\n session=session, flow_run_id=flow_run_id\n )\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs
","text":"Routes for interacting with flow run objects.
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness
async
","text":"Query for average flow-run lateness in seconds.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n flows: Optional[schemas.filters.FlowFilter] = None,\n flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n deployments: Optional[schemas.filters.DeploymentFilter] = None,\n work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n \"\"\"\n Query for average flow-run lateness in seconds.\n \"\"\"\n async with db.session_context() as session:\n if db.dialect.name == \"sqlite\":\n # Since we want an _average_ of the lateness we're unable to use\n # the existing FlowRun.expected_start_time_delta property as it\n # returns a timedelta and SQLite is unable to properly deal with it\n # and always returns 1970.0 as the average. This copies the same\n # logic but ensures that it returns the number of seconds instead\n # so it's compatible with SQLite.\n base_query = sa.case(\n (\n db.FlowRun.start_time > db.FlowRun.expected_start_time,\n sa.func.strftime(\"%s\", db.FlowRun.start_time)\n - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n ),\n (\n db.FlowRun.start_time.is_(None)\n & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n ),\n else_=0,\n )\n else:\n base_query = db.FlowRun.estimated_start_time_delta\n\n query = await models.flow_runs._apply_flow_run_filters(\n sa.select(sa.func.avg(base_query)),\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n result = await session.execute(query)\n\n avg_lateness = result.scalar()\n\n if avg_lateness is None:\n return None\n elif isinstance(avg_lateness, datetime.timedelta):\n return avg_lateness.total_seconds()\n else:\n return avg_lateness\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs
async
","text":"Query for flow runs.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/count\")\nasync def count_flow_runs(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Query for flow runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_runs.count_flow_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run
async
","text":"Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.
If no state is provided, the flow run will be created in a PENDING state.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/\")\nasync def create_flow_run(\n flow_run: schemas.actions.FlowRunCreate,\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Create a flow run. If a flow run with the same flow_id and\n idempotency key already exists, the existing flow run will be returned.\n\n If no state is provided, the flow run will be created in a PENDING state.\n \"\"\"\n # hydrate the input model into a full flow run / state model\n flow_run = schemas.core.FlowRun(**flow_run.dict())\n\n # pass the request version to the orchestration engine to support compatibility code\n orchestration_parameters.update({\"api-version\": api_version})\n\n if not flow_run.state:\n flow_run.state = schemas.states.Pending()\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.flow_runs.create_flow_run(\n session=session,\n flow_run=flow_run,\n orchestration_parameters=orchestration_parameters,\n )\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run_input","title":"create_flow_run_input
async
","text":"Create a key/value input for a flow run.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/{id}/input\", status_code=status.HTTP_201_CREATED)\nasync def create_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Body(..., description=\"The input key\"),\n value: bytes = Body(..., description=\"The value of the input\"),\n sender: Optional[str] = Body(None, description=\"The sender of the input\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Create a key/value input for a flow run.\n \"\"\"\n async with db.session_context() as session:\n try:\n await models.flow_run_input.create_flow_run_input(\n session=session,\n flow_run_input=schemas.core.FlowRunInput(\n flow_run_id=flow_run_id,\n key=key,\n sender=sender,\n value=value.decode(),\n ),\n )\n await session.commit()\n\n except IntegrityError as exc:\n if \"unique constraint\" in str(exc).lower():\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=\"A flow run input with this key already exists.\",\n )\n else:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by id.
Source code in prefect/server/api/flow_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow run by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flow_runs.delete_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Delete a flow run input
Source code in prefect/server/api/flow_runs.py
@router.delete(\"/{id}/input/{key}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Path(..., description=\"The input key\", alias=\"key\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow run input\n \"\"\"\n\n async with db.session_context() as session:\n deleted = await models.flow_run_input.delete_flow_run_input(\n session=session, flow_run_id=flow_run_id, key=key\n )\n await session.commit()\n\n if not deleted:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.filter_flow_run_input","title":"filter_flow_run_input
async
","text":"Filter flow run inputs by key prefix
Source code in prefect/server/api/flow_runs.py
@router.post(\"/{id}/input/filter\")\nasync def filter_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n prefix: str = Body(..., description=\"The input key prefix\", embed=True),\n limit: int = Body(\n 1, description=\"The maximum number of results to return\", embed=True\n ),\n exclude_keys: List[str] = Body(\n [], description=\"Exclude inputs with these keys\", embed=True\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.FlowRunInput]:\n \"\"\"\n Filter flow run inputs by key prefix\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_run_input.filter_flow_run_input(\n session=session,\n flow_run_id=flow_run_id,\n prefix=prefix,\n limit=limit,\n exclude_keys=exclude_keys,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history
async
","text":"Query for flow run history data across a given range and interval.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/history\")\nasync def flow_run_history(\n history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n history_interval: datetime.timedelta = Body(\n ...,\n description=(\n \"The size of each history interval, in seconds. Must be at least 1 second.\"\n ),\n alias=\"history_interval_seconds\",\n ),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n \"\"\"\n Query for flow run history data across a given range and interval.\n \"\"\"\n if history_interval < datetime.timedelta(seconds=1):\n raise HTTPException(\n status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"History interval must not be less than 1 second.\",\n )\n\n async with db.session_context() as session:\n return await run_history(\n session=session,\n run_type=\"flow_run\",\n history_start=history_start,\n history_end=history_end,\n history_interval=history_interval,\n flows=flows,\n flow_runs=flow_runs,\n task_runs=task_runs,\n deployments=deployments,\n work_pools=work_pools,\n work_queues=work_queues,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run
async
","text":"Get a flow run by id.
Source code in prefect/server/api/flow_runs.py
@router.get(\"/{id}\")\nasync def read_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Get a flow run by id.\n \"\"\"\n async with db.session_context() as session:\n flow_run = await models.flow_runs.read_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not flow_run:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v1","title":"read_flow_run_graph_v1
async
","text":"Get a task run dependency map for a given flow run.
Source code in prefect/server/api/flow_runs.py
@router.get(\"/{id}/graph\")\nasync def read_flow_run_graph_v1(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n \"\"\"\n Get a task run dependency map for a given flow run.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_runs.read_task_run_dependencies(\n session=session, flow_run_id=flow_run_id\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v2","title":"read_flow_run_graph_v2
async
","text":"Get a graph of the tasks and subflow runs for the given flow run
Source code in prefect/server/api/flow_runs.py
@router.get(\"/{id:uuid}/graph-v2\")\nasync def read_flow_run_graph_v2(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n since: datetime.datetime = Query(\n datetime.datetime.min,\n description=\"Only include runs that start or end after this time.\",\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Graph:\n \"\"\"\n Get a graph of the tasks and subflow runs for the given flow run\n \"\"\"\n async with db.session_context() as session:\n try:\n return await read_flow_run_graph(\n session=session,\n flow_run_id=flow_run_id,\n since=since,\n )\n except FlowRunGraphTooLarge as e:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=str(e),\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_input","title":"read_flow_run_input
async
","text":"Create a value from a flow run input
Source code in prefect/server/api/flow_runs.py
@router.get(\"/{id}/input/{key}\")\nasync def read_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Path(..., description=\"The input key\", alias=\"key\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> PlainTextResponse:\n \"\"\"\n Create a value from a flow run input\n \"\"\"\n\n async with db.session_context() as session:\n flow_run_input = await models.flow_run_input.read_flow_run_input(\n session=session, flow_run_id=flow_run_id, key=key\n )\n\n if flow_run_input:\n return PlainTextResponse(flow_run_input.value)\n else:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs
async
","text":"Query for flow runs.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n \"\"\"\n Query for flow runs.\n \"\"\"\n async with db.session_context() as session:\n db_flow_runs = await models.flow_runs.read_flow_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n offset=offset,\n limit=limit,\n sort=sort,\n )\n\n # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n # response to JSON, we do so more efficiently ourselves.\n # In particular, the FastAPI encoder is very slow for large, nested objects.\n # See: https://github.com/tiangolo/fastapi/issues/1224\n encoded = [\n schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n for fr in db_flow_runs\n ]\n return ORJSONResponse(content=encoded)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run
async
","text":"Resume a paused flow run.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n run_input: Optional[Dict] = Body(default=None, embed=True),\n response: Response = None,\n flow_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_flow_policy\n ),\n task_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_task_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n \"\"\"\n Resume a paused flow run.\n \"\"\"\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n state = flow_run.state\n\n if state is None or state.type != schemas.states.StateType.PAUSED:\n result = OrchestrationResult(\n state=None,\n status=schemas.responses.SetStateStatus.ABORT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Cannot resume a flow run that is not paused.\"\n ),\n )\n return result\n\n orchestration_parameters.update({\"api-version\": api_version})\n\n keyset = state.state_details.run_input_keyset\n\n if keyset:\n run_input = run_input or {}\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n hydration_context = await schema_tools.HydrationContext.build(\n session=session, raise_on_error=True\n )\n run_input = schema_tools.hydrate(run_input, hydration_context) or {}\n except schema_tools.HydrationError as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Error hydrating run input: {exc}\",\n ),\n )\n\n schema_json = await models.flow_run_input.read_flow_run_input(\n session=session, flow_run_id=flow_run.id, key=keyset[\"schema\"]\n )\n\n if schema_json is None:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Run input schema not found.\"\n ),\n )\n\n try:\n schema = orjson.loads(schema_json.value)\n except orjson.JSONDecodeError:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Run input schema is not valid JSON.\"\n ),\n )\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n schema_tools.validate(run_input, schema, raise_on_error=True)\n except schema_tools.ValidationError as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Reason: {exc}\"\n ),\n )\n except schema_tools.CircularSchemaRefError:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Invalid schema: Unable to validate schema with circular references.\",\n ),\n )\n else:\n try:\n jsonschema.validate(run_input, schema)\n except (jsonschema.ValidationError, jsonschema.SchemaError) as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Run input validation failed: {exc.message}\"\n ),\n )\n\n if 
state.state_details.pause_reschedule:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n state=schemas.states.Scheduled(\n name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n ),\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n else:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n state=schemas.states.Running(),\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n if (\n keyset\n and run_input\n and orchestration_result.status == schemas.responses.SetStateStatus.ACCEPT\n ):\n # The state change is accepted, go ahead and store the validated\n # run input.\n await models.flow_run_input.create_flow_run_input(\n session=session,\n flow_run_input=schemas.core.FlowRunInput(\n flow_run_id=flow_run_id,\n key=keyset[\"response\"],\n value=orjson.dumps(run_input).decode(\"utf-8\"),\n ),\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
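For illustration: a sketch of resuming a paused run. If the run paused while waiting for input, the optional run_input is validated against the stored schema before the transition is attempted. The id and URL are placeholders.

import httpx

API_URL = "http://127.0.0.1:4200/api"  # assumed local server
flow_run_id = "00000000-0000-0000-0000-000000000000"  # a PAUSED run (hypothetical)

resp = httpx.post(
    f"{API_URL}/flow_runs/{flow_run_id}/resume",
    json={"run_input": {"approved": True}},  # only used if the run expects input
)
result = resp.json()
print(result["status"])  # ACCEPT on success; ABORT if the run is not paused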
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state
async
","text":"Set a flow run state, invoking any orchestration rules.
Source code in prefect/server/api/flow_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n force: bool = Body(\n False,\n description=(\n \"If false, orchestration rules will be applied that may alter or prevent\"\n \" the state transition. If True, orchestration rules are not applied.\"\n ),\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n flow_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_flow_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n \"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n # pass the request version to the orchestration engine to support compatibility code\n orchestration_parameters.update({\"api-version\": api_version})\n\n now = pendulum.now(\"UTC\")\n\n # create the state\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n # convert to a full State object\n state=schemas.states.State.parse_obj(state),\n force=force,\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run
async
","text":"Updates a flow run.
Source code in prefect/server/api/flow_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n flow_run: schemas.actions.FlowRunUpdate,\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a flow run.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n if PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES:\n if flow_run.job_variables is not None:\n this_run = await models.flow_runs.read_flow_run(\n session, flow_run_id=flow_run_id\n )\n if this_run is None:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n if this_run.state.type != schemas.states.StateType.SCHEDULED:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=f\"Job variables for a flow run in state {this_run.state.type.name} cannot be updated\",\n )\n if this_run.deployment_id is None:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=\"A deployment for the flow run could not be found\",\n )\n\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=this_run.deployment_id\n )\n if deployment is None:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=\"A deployment for the flow run could not be found\",\n )\n\n validate_job_variables_for_flow_run(flow_run, deployment)\n\n result = await models.flow_runs.update_flow_run(\n session=session, flow_run=flow_run, flow_run_id=flow_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows
","text":"Routes for interacting with flow objects.
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows
async
","text":"Count flows.
Source code in prefect/server/api/flows.py
@router.post(\"/count\")\nasync def count_flows(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Count flows.\n \"\"\"\n async with db.session_context() as session:\n return await models.flows.count_flows(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow
async
","text":"Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.
Source code in prefect/server/api/flows.py
@router.post(\"/\")\nasync def create_flow(\n flow: schemas.actions.FlowCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n same name already exists, the existing flow is returned.\n \"\"\"\n # hydrate the input model into a full flow model\n flow = schemas.core.Flow(**flow.dict())\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.flows.create_flow(session=session, flow=flow)\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n return model\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow
async
","text":"Delete a flow by id.
Source code in prefect/server/api/flows.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow
async
","text":"Get a flow by id.
Source code in prefect/server/api/flows.py
@router.get(\"/{id}\")\nasync def read_flow(\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"\n Get a flow by id.\n \"\"\"\n async with db.session_context() as session:\n flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n if not flow:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name
async
","text":"Get a flow by name.
Source code in prefect/server/api/flows.py
@router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n name: str = Path(..., description=\"The name of the flow\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"\n Get a flow by name.\n \"\"\"\n async with db.session_context() as session:\n flow = await models.flows.read_flow_by_name(session=session, name=name)\n if not flow:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows
async
","text":"Query for flows.
Source code in prefect/server/api/flows.py
@router.post(\"/filter\")\nasync def read_flows(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n \"\"\"\n Query for flows.\n \"\"\"\n async with db.session_context() as session:\n return await models.flows.read_flows(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n sort=sort,\n offset=offset,\n limit=limit,\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow
async
","text":"Updates a flow.
Source code in prefect/server/api/flows.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n flow: schemas.actions.FlowUpdate,\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a flow.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flows.update_flow(\n session=session, flow=flow, flow_id=flow_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history
","text":"Utilities for querying flow and task run history.
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history
async
","text":"Produce a history of runs aggregated by interval and state
Source code in prefect/server/api/run_history.py
@inject_db\nasync def run_history(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n run_type: Literal[\"flow_run\", \"task_run\"],\n history_start: DateTimeTZ,\n history_end: DateTimeTZ,\n history_interval: datetime.timedelta,\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n \"\"\"\n Produce a history of runs aggregated by interval and state\n \"\"\"\n\n # SQLite has issues with very small intervals\n # (by 0.001 seconds it stops incrementing the interval)\n if history_interval < datetime.timedelta(seconds=1):\n raise ValueError(\"History interval must not be less than 1 second.\")\n\n # prepare run-specific models\n if run_type == \"flow_run\":\n run_model = db.FlowRun\n run_filter_function = models.flow_runs._apply_flow_run_filters\n elif run_type == \"task_run\":\n run_model = db.TaskRun\n run_filter_function = models.task_runs._apply_task_run_filters\n else:\n raise ValueError(\n f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n )\n\n # create a CTE for timestamp intervals\n intervals = db.make_timestamp_intervals(\n history_start,\n history_end,\n history_interval,\n ).cte(\"intervals\")\n\n # apply filters to the flow runs (and related states)\n runs = (\n await run_filter_function(\n sa.select(\n run_model.id,\n run_model.expected_start_time,\n run_model.estimated_run_time,\n run_model.estimated_start_time_delta,\n run_model.state_type,\n run_model.state_name,\n ).select_from(run_model),\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_queues,\n )\n ).alias(\"runs\")\n # outer join intervals to the filtered runs to create a dataset composed of\n # every interval and the aggregate of all its runs. 
The runs aggregate is represented\n # by a descriptive JSON object\n counts = (\n sa.select(\n intervals.c.interval_start,\n intervals.c.interval_end,\n # build a JSON object, ignoring the case where the count of runs is 0\n sa.case(\n (sa.func.count(runs.c.id) == 0, None),\n else_=db.build_json_object(\n \"state_type\",\n runs.c.state_type,\n \"state_name\",\n runs.c.state_name,\n \"count_runs\",\n sa.func.count(runs.c.id),\n # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n \"sum_estimated_run_time\",\n sa.func.sum(\n db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n ),\n # estimated lateness is the sum of any positive start time deltas\n \"sum_estimated_lateness\",\n sa.func.sum(\n db.greatest(\n 0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n )\n ),\n ),\n ).label(\"state_agg\"),\n )\n .select_from(intervals)\n .join(\n runs,\n sa.and_(\n runs.c.expected_start_time >= intervals.c.interval_start,\n runs.c.expected_start_time < intervals.c.interval_end,\n ),\n isouter=True,\n )\n .group_by(\n intervals.c.interval_start,\n intervals.c.interval_end,\n runs.c.state_type,\n runs.c.state_name,\n )\n ).alias(\"counts\")\n\n # aggregate all state-aggregate objects into a single array for each interval,\n # ensuring that intervals with no runs have an empty array\n query = (\n sa.select(\n counts.c.interval_start,\n counts.c.interval_end,\n sa.func.coalesce(\n db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n counts.c.state_agg.is_not(None)\n ),\n sa.text(\"'[]'\"),\n ).label(\"states\"),\n )\n .group_by(counts.c.interval_start, counts.c.interval_end)\n .order_by(counts.c.interval_start)\n # return no more than 500 bars\n .limit(500)\n )\n\n # issue the query\n result = await session.execute(query)\n records = result.mappings()\n\n # load and parse the record if the database returns JSON as strings\n if db.uses_json_strings:\n records = [dict(r) for r in records]\n for r in records:\n r[\"states\"] = json.loads(r[\"states\"])\n\n return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches
","text":"Routes for interacting with saved search objects.
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search
async
","text":"Gracefully creates a new saved search from the provided schema.
If a saved search with the same name already exists, the saved search's fields are replaced.
Source code in prefect/server/api/saved_searches.py
@router.put(\"/\")\nasync def create_saved_search(\n saved_search: schemas.actions.SavedSearchCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n \"\"\"Gracefully creates a new saved search from the provided schema.\n\n If a saved search with the same name already exists, the saved search's fields are\n replaced.\n \"\"\"\n\n # hydrate the input model into a full model\n saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.saved_searches.create_saved_search(\n session=session, saved_search=saved_search\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return model\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search
async
","text":"Delete a saved search by id.
Source code in prefect/server/api/saved_searches.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a saved search by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.saved_searches.delete_saved_search(\n session=session, saved_search_id=saved_search_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search
async
","text":"Get a saved search by id.
Source code in prefect/server/api/saved_searches.py
@router.get(\"/{id}\")\nasync def read_saved_search(\n saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n \"\"\"\n Get a saved search by id.\n \"\"\"\n async with db.session_context() as session:\n saved_search = await models.saved_searches.read_saved_search(\n session=session, saved_search_id=saved_search_id\n )\n if not saved_search:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n )\n return saved_search\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches
async
","text":"Query for saved searches.
Source code in prefect/server/api/saved_searches.py
@router.post(\"/filter\")\nasync def read_saved_searches(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n \"\"\"\n Query for saved searches.\n \"\"\"\n async with db.session_context() as session:\n return await models.saved_searches.read_saved_searches(\n session=session,\n offset=offset,\n limit=limit,\n )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server
","text":"Defines the Prefect REST API FastAPI app.
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware
","text":"A middleware that limits the number of concurrent requests handled by the API.
This is a blunt tool for limiting concurrent SQLite writes, which will cause failures at high volume. Ideally, we would apply the limit only to routes that perform writes.
Source code in prefect/server/api/server.py
class RequestLimitMiddleware:\n \"\"\"\n A middleware that limits the number of concurrent requests handled by the API.\n\n This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n at high volume. Ideally, we would only apply the limit to routes that perform\n writes.\n \"\"\"\n\n def __init__(self, app, limit: float):\n self.app = app\n self._limiter = anyio.CapacityLimiter(limit)\n\n async def __call__(self, scope, receive, send) -> None:\n async with self._limiter:\n await self.app(scope, receive, send)\n
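For illustration: the class is plain ASGI middleware, so it can wrap any ASGI application. A minimal sketch with a hypothetical limit of 10 concurrent requests:

from fastapi import FastAPI
from prefect.server.api.server import RequestLimitMiddleware

app = FastAPI()

@app.get("/ping")
async def ping() -> str:
    return "pong"

# Requests beyond the limit wait on the capacity limiter rather than fail;
# serve `limited_app` (not `app`) with an ASGI server to apply the limit.
limited_app = RequestLimitMiddleware(app, limit=10)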
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles
","text":" Bases: StaticFiles
Implementation of StaticFiles for serving single-page applications.
Adds get_response handling to ensure that when a resource isn't found, the application still returns the index.
Source code in prefect/server/api/server.py
class SPAStaticFiles(StaticFiles):\n \"\"\"\n Implementation of `StaticFiles` for serving single page applications.\n\n Adds `get_response` handling to ensure that when a resource isn't found the\n application still returns the index.\n \"\"\"\n\n async def get_response(self, path: str, scope):\n try:\n return await super().get_response(path, scope)\n except HTTPException:\n return await super().get_response(\"./index.html\", scope)\n
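For illustration: mounted like any StaticFiles instance, unknown paths (for example a client-side route such as /flows/123) fall back to ./index.html instead of a 404. The directory name below is a placeholder.

from fastapi import FastAPI
from prefect.server.api.server import SPAStaticFiles

app = FastAPI()

# "ui_build" is a hypothetical directory containing the built SPA assets.
app.mount("/", SPAStaticFiles(directory="ui_build", html=True), name="ui")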
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app
","text":"Create a FastAPI app that includes the Prefect REST API
Parameters:
Name Type Description Defaultrouter_prefix
Optional[str]
a prefix to apply to all included routers
''
dependencies
Optional[List[Depends]]
a list of global dependencies to add to each Prefect REST API router
None
health_check_path
str
the health check route path
'/health'
fast_api_app_kwargs
dict
kwargs to pass to the FastAPI constructor
None
router_overrides
Mapping[str, Optional[APIRouter]]
a mapping of route prefixes (i.e. \"/admin\") to new routers allowing the caller to override the default routers. If None
is provided as a value, the default router will be dropped from the application.
None
Returns:
Type DescriptionFastAPI
a FastAPI app that serves the Prefect REST API
Source code in prefect/server/api/server.py
def create_api_app(\n router_prefix: Optional[str] = \"\",\n dependencies: Optional[List[Depends]] = None,\n health_check_path: str = \"/health\",\n version_check_path: str = \"/version\",\n fast_api_app_kwargs: dict = None,\n router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n \"\"\"\n Create a FastAPI app that includes the Prefect REST API\n\n Args:\n router_prefix: a prefix to apply to all included routers\n dependencies: a list of global dependencies to add to each Prefect REST API router\n health_check_path: the health check route path\n fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n allowing the caller to override the default routers. If `None` is provided\n as a value, the default router will be dropped from the application.\n\n Returns:\n a FastAPI app that serves the Prefect REST API\n \"\"\"\n fast_api_app_kwargs = fast_api_app_kwargs or {}\n api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n api_app.add_middleware(GZipMiddleware)\n\n @api_app.get(health_check_path, tags=[\"Root\"])\n async def health_check():\n return True\n\n @api_app.get(version_check_path, tags=[\"Root\"])\n async def orion_info():\n return SERVER_API_VERSION\n\n # always include version checking\n if dependencies is None:\n dependencies = [Depends(enforce_minimum_version)]\n else:\n dependencies.append(Depends(enforce_minimum_version))\n\n routers = {router.prefix: router for router in API_ROUTERS}\n\n if router_overrides:\n for prefix, router in router_overrides.items():\n # We may want to allow this behavior in the future to inject new routes, but\n # for now this will be treated an as an exception\n if prefix not in routers:\n raise KeyError(\n \"Router override provided for prefix that does not exist:\"\n f\" {prefix!r}\"\n )\n\n # Drop the existing router\n existing_router = routers.pop(prefix)\n\n # Replace it with a new router if provided\n if router is not None:\n if prefix != router.prefix:\n # We may want to allow this behavior in the future, but it will\n # break expectations without additional routing and is banned for\n # now\n raise ValueError(\n f\"Router override for {prefix!r} defines a different prefix \"\n f\"{router.prefix!r}.\"\n )\n\n existing_paths = method_paths_from_routes(existing_router.routes)\n new_paths = method_paths_from_routes(router.routes)\n if not existing_paths.issubset(new_paths):\n raise ValueError(\n f\"Router override for {prefix!r} is missing paths defined by \"\n f\"the original router: {existing_paths.difference(new_paths)}\"\n )\n\n routers[prefix] = router\n\n for router in routers.values():\n api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n return api_app\n
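For illustration: a minimal sketch using only documented parameters; the custom health path is an example value.

from prefect.server.api.server import create_api_app

# Build the bare REST API app (no UI) with a non-default health route.
api_app = create_api_app(health_check_path="/healthz")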
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app
","text":"Create an FastAPI app that includes the Prefect REST API and UI
Parameters:
Name Type Description Defaultsettings
Settings
The settings to use to create the app. If not set, settings are pulled from the context.
None
ignore_cache
bool
If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.
False
ephemeral
bool
If set, the application will be treated as ephemeral. The UI and services will be disabled.
False
Source code in prefect/server/api/server.py
def create_app(\n settings: prefect.settings.Settings = None,\n ephemeral: bool = False,\n ignore_cache: bool = False,\n) -> FastAPI:\n \"\"\"\n Create an FastAPI app that includes the Prefect REST API and UI\n\n Args:\n settings: The settings to use to create the app. If not set, settings are pulled\n from the context.\n ignore_cache: If set, a new application will be created even if the settings\n match. Otherwise, an application is returned from the cache.\n ephemeral: If set, the application will be treated as ephemeral. The UI\n and services will be disabled.\n \"\"\"\n settings = settings or prefect.settings.get_current_settings()\n cache_key = (settings.hash_key(), ephemeral)\n\n if cache_key in APP_CACHE and not ignore_cache:\n return APP_CACHE[cache_key]\n\n # TODO: Move these startup functions out of this closure into the top-level or\n # another dedicated location\n async def run_migrations():\n \"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n await db.create_db()\n\n @_memoize_block_auto_registration\n async def add_block_types():\n \"\"\"Add all registered blocks to the database\"\"\"\n if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n return\n\n from prefect.server.database.dependencies import provide_database_interface\n from prefect.server.models.block_registration import run_block_auto_registration\n\n db = provide_database_interface()\n session = await db.session()\n\n async with session:\n await run_block_auto_registration(session=session)\n\n async def start_services():\n \"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n if ephemeral:\n app.state.services = None\n return\n\n service_instances = []\n\n if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n service_instances.append(services.scheduler.Scheduler())\n service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n service_instances.append(services.late_runs.MarkLateRuns())\n\n if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n service_instances.append(\n services.cancellation_cleanup.CancellationCleanup()\n )\n\n if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n service_instances.append(services.telemetry.Telemetry())\n\n if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n service_instances.append(\n services.flow_run_notifications.FlowRunNotifications()\n )\n\n if prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n service_instances.append(services.task_scheduling.TaskSchedulingTimeouts())\n\n loop = asyncio.get_running_loop()\n\n app.state.services = {\n service: loop.create_task(service.start()) for service in service_instances\n }\n\n for service, task in app.state.services.items():\n logger.info(f\"{service.name} service scheduled to start in-app\")\n task.add_done_callback(partial(on_service_exit, service))\n\n async def stop_services():\n \"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n if hasattr(app.state, \"services\") and app.state.services:\n await 
asyncio.gather(*[service.stop() for service in app.state.services])\n try:\n await asyncio.gather(\n *[task.stop() for task in app.state.services.values()]\n )\n except Exception:\n # `on_service_exit` should handle logging exceptions on exit\n pass\n\n @asynccontextmanager\n async def lifespan(app):\n try:\n await run_migrations()\n await add_block_types()\n await start_services()\n yield\n finally:\n await stop_services()\n\n def on_service_exit(service, task):\n \"\"\"\n Added as a callback for completion of services to log exit\n \"\"\"\n try:\n # Retrieving the result will raise the exception\n task.result()\n except Exception:\n logger.error(f\"{service.name} service failed!\", exc_info=True)\n else:\n logger.info(f\"{service.name} service stopped!\")\n\n app = FastAPI(\n title=TITLE,\n version=API_VERSION,\n lifespan=lifespan,\n )\n api_app = create_api_app(\n fast_api_app_kwargs={\n \"exception_handlers\": {\n # NOTE: FastAPI special cases the generic `Exception` handler and\n # registers it as a separate middleware from the others\n Exception: custom_internal_exception_handler,\n RequestValidationError: validation_exception_handler,\n sa.exc.IntegrityError: integrity_exception_handler,\n ObjectNotFoundError: prefect_object_not_found_exception_handler,\n }\n },\n )\n ui_app = create_ui_app(ephemeral)\n\n # middleware\n app.add_middleware(\n CORSMiddleware,\n allow_origins=[\"*\"],\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n # Limit the number of concurrent requests when using a SQLite database to reduce\n # chance of errors where the database cannot be opened due to a high number of\n # concurrent writes\n if (\n get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n == \"sqlite\"\n ):\n app.add_middleware(RequestLimitMiddleware, limit=100)\n\n api_app.mount(\n \"/static\",\n StaticFiles(\n directory=os.path.join(\n os.path.dirname(os.path.realpath(__file__)), \"static\"\n )\n ),\n name=\"static\",\n )\n app.api_app = api_app\n app.mount(\"/api\", app=api_app, name=\"api\")\n app.mount(\"/\", app=ui_app, name=\"ui\")\n\n def openapi():\n \"\"\"\n Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n This method is attached to the global public app for easy access.\n \"\"\"\n partial_schema = get_openapi(\n title=API_TITLE,\n version=API_VERSION,\n routes=api_app.routes,\n )\n new_schema = partial_schema.copy()\n new_schema[\"paths\"] = {}\n for path, value in partial_schema[\"paths\"].items():\n new_schema[\"paths\"][f\"/api{path}\"] = value\n\n new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n return new_schema\n\n app.openapi = openapi\n\n APP_CACHE[cache_key] = app\n return app\n
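For illustration: a sketch of building and serving the combined app; the host and port are assumptions (4200 is the conventional local port).

import uvicorn
from prefect.server.api.server import create_app

app = create_app()  # settings are pulled from the current context
uvicorn.run(app, host="127.0.0.1", port=4200)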
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler
async
","text":"Log a detailed exception for internal server errors before returning.
Send 503 for errors clients can retry on.
Source code in prefect/server/api/server.py
async def custom_internal_exception_handler(request: Request, exc: Exception):\n \"\"\"\n Log a detailed exception for internal server errors before returning.\n\n Send 503 for errors clients can retry on.\n \"\"\"\n logger.error(\"Encountered exception in request:\", exc_info=True)\n\n if is_client_retryable_exception(exc):\n return JSONResponse(\n content={\"exception_message\": \"Service Unavailable\"},\n status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n )\n\n return JSONResponse(\n content={\"exception_message\": \"Internal Server Error\"},\n status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler
async
","text":"Capture database integrity errors.
Source code in prefect/server/api/server.py
async def integrity_exception_handler(request: Request, exc: Exception):\n \"\"\"Capture database integrity errors.\"\"\"\n logger.error(\"Encountered exception in request:\", exc_info=True)\n return JSONResponse(\n content={\n \"detail\": (\n \"Data integrity conflict. This usually means a \"\n \"unique or foreign key constraint was violated. \"\n \"See server logs for details.\"\n )\n },\n status_code=status.HTTP_409_CONFLICT,\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler
async
","text":"Return 404 status code on object not found exceptions.
Source code in prefect/server/api/server.py
async def prefect_object_not_found_exception_handler(\n request: Request, exc: ObjectNotFoundError\n):\n \"\"\"Return 404 status code on object not found exceptions.\"\"\"\n return JSONResponse(\n content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.replace_placeholder_string_in_files","title":"replace_placeholder_string_in_files
","text":"Recursively loops through all files in the given directory and replaces a placeholder string.
Source code in prefect/server/api/server.py
def replace_placeholder_string_in_files(\n directory, placeholder, replacement, allowed_extensions=None\n):\n \"\"\"\n Recursively loops through all files in the given directory and replaces\n a placeholder string.\n \"\"\"\n if allowed_extensions is None:\n allowed_extensions = [\".txt\", \".html\", \".css\", \".js\", \".json\", \".txt\"]\n\n for root, dirs, files in os.walk(directory):\n for file in files:\n if any(file.endswith(ext) for ext in allowed_extensions):\n file_path = os.path.join(root, file)\n\n with open(file_path, \"r\", encoding=\"utf-8\") as file:\n file_data = file.read()\n\n file_data = file_data.replace(placeholder, replacement)\n\n with open(file_path, \"w\", encoding=\"utf-8\") as file:\n file.write(file_data)\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler
async
","text":"Provide a detailed message for request validation errors.
Source code in prefect/server/api/server.py
async def validation_exception_handler(request: Request, exc: RequestValidationError):\n \"\"\"Provide a detailed message for request validation errors.\"\"\"\n return JSONResponse(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n content=jsonable_encoder(\n {\n \"exception_message\": \"Invalid request received.\",\n \"exception_detail\": exc.errors(),\n \"request_body\": exc.body,\n }\n ),\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states
","text":"Routes for interacting with task run state objects.
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state
async
","text":"Get a task run state by id.
Source code in prefect/server/api/task_run_states.py
@router.get(\"/{id}\")\nasync def read_task_run_state(\n task_run_state_id: UUID = Path(\n ..., description=\"The task run state id\", alias=\"id\"\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n \"\"\"\n Get a task run state by id.\n \"\"\"\n async with db.session_context() as session:\n task_run_state = await models.task_run_states.read_task_run_state(\n session=session, task_run_state_id=task_run_state_id\n )\n if not task_run_state:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n )\n return task_run_state\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states
async
","text":"Get states associated with a task run.
Source code in prefect/server/api/task_run_states.py
@router.get(\"/\")\nasync def read_task_run_states(\n task_run_id: UUID,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n \"\"\"\n Get states associated with a task run.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_run_states.read_task_run_states(\n session=session, task_run_id=task_run_id\n )\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs
","text":"Routes for interacting with task run objects.
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs
async
","text":"Count task runs.
Source code in prefect/server/api/task_runs.py
@router.post(\"/count\")\nasync def count_task_runs(\n db: PrefectDBInterface = Depends(provide_database_interface),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n \"\"\"\n Count task runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_runs.count_task_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run
async
","text":"Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.
If no state is provided, the task run will be created in a PENDING state.
Source code in prefect/server/api/task_runs.py
@router.post(\"/\")\nasync def create_task_run(\n task_run: schemas.actions.TaskRunCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_task_orchestration_parameters\n ),\n) -> schemas.core.TaskRun:\n \"\"\"\n Create a task run. If a task run with the same flow_run_id,\n task_key, and dynamic_key already exists, the existing task\n run will be returned.\n\n If no state is provided, the task run will be created in a PENDING state.\n \"\"\"\n # hydrate the input model into a full task run / state model\n task_run = schemas.core.TaskRun(**task_run.dict())\n\n if not task_run.state:\n task_run.state = schemas.states.Pending()\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.task_runs.create_task_run(\n session=session,\n task_run=task_run,\n orchestration_parameters=orchestration_parameters,\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n new_task_run: schemas.core.TaskRun = schemas.core.TaskRun.from_orm(model)\n\n return new_task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Source code in prefect/server/api/task_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a task run by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.task_runs.delete_task_run(\n session=session, task_run_id=task_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run
async
","text":"Get a task run by id.
Source code in prefect/server/api/task_runs.py
@router.get(\"/{id}\")\nasync def read_task_run(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n \"\"\"\n Get a task run by id.\n \"\"\"\n async with db.session_context() as session:\n task_run = await models.task_runs.read_task_run(\n session=session, task_run_id=task_run_id\n )\n if not task_run:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n return task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs
async
","text":"Query for task runs.
Source code inprefect/server/api/task_runs.py
@router.post(\"/filter\")\nasync def read_task_runs(\n sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n \"\"\"\n Query for task runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_runs.read_task_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n offset=offset,\n limit=limit,\n sort=sort,\n )\n
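A hedged httpx sketch of paging through this endpoint; the filter shape is an assumption based on the TaskRunFilter schema named in the signature:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server\n\n# Page through task runs, newest first; the name filter is illustrative.\nbody = {\n    \"sort\": \"ID_DESC\",\n    \"limit\": 50,\n    \"offset\": 0,\n    \"task_runs\": {\"name\": {\"any_\": [\"my-task-0\"]}},\n}\nfor run in httpx.post(f\"{API_URL}/task_runs/filter\", json=body).json():\n    print(run[\"id\"], run[\"name\"])\n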
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state
async
","text":"Set a task run state, invoking any orchestration rules.
Source code inprefect/server/api/task_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n force: bool = Body(\n False,\n description=(\n \"If false, orchestration rules will be applied that may alter or prevent\"\n \" the state transition. If True, orchestration rules are not applied.\"\n ),\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n task_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_task_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_task_orchestration_parameters\n ),\n) -> OrchestrationResult:\n \"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n now = pendulum.now(\"UTC\")\n\n # create the state\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n orchestration_result = await models.task_runs.set_task_run_state(\n session=session,\n task_run_id=task_run_id,\n state=schemas.states.State.parse_obj(\n state\n ), # convert to a full State object\n force=force,\n task_policy=task_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
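A hedged httpx sketch of proposing a state through this endpoint; the server address and task run id are placeholders, and the response fields follow the OrchestrationResult model named above:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server\ntask_run_id = \"11111111-1111-1111-1111-111111111111\"  # placeholder id\n\n# Propose a COMPLETED state; with force=False orchestration rules may\n# accept, reject, delay, or rewrite the transition.\nresult = httpx.post(\n    f\"{API_URL}/task_runs/{task_run_id}/set_state\",\n    json={\"state\": {\"type\": \"COMPLETED\"}, \"force\": False},\n).json()\nprint(result[\"status\"])  # e.g. ACCEPT, REJECT, WAIT, or ABORT\n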
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history
async
","text":"Query for task run history data across a given range and interval.
Source code inprefect/server/api/task_runs.py
@router.post(\"/history\")\nasync def task_run_history(\n history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n history_interval: datetime.timedelta = Body(\n ...,\n description=(\n \"The size of each history interval, in seconds. Must be at least 1 second.\"\n ),\n alias=\"history_interval_seconds\",\n ),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n \"\"\"\n Query for task run history data across a given range and interval.\n \"\"\"\n if history_interval < datetime.timedelta(seconds=1):\n raise HTTPException(\n status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"History interval must not be less than 1 second.\",\n )\n\n async with db.session_context() as session:\n return await run_history(\n session=session,\n run_type=\"task_run\",\n history_start=history_start,\n history_end=history_end,\n history_interval=history_interval,\n flows=flows,\n flow_runs=flow_runs,\n task_runs=task_runs,\n deployments=deployments,\n )\n
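A hedged httpx sketch of this endpoint; note the history_interval_seconds alias from the signature above. The response field names are assumptions based on schemas.responses.HistoryResponse:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server\n\n# The interval is posted under the history_interval_seconds alias and must\n# be at least one second, or the endpoint returns 422.\nbody = {\n    \"history_start\": \"2024-01-01T00:00:00+00:00\",\n    \"history_end\": \"2024-01-02T00:00:00+00:00\",\n    \"history_interval_seconds\": 3600,\n}\nfor bucket in httpx.post(f\"{API_URL}/task_runs/history\", json=body).json():\n    print(bucket[\"interval_start\"], bucket[\"states\"])  # field names assumed\n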
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run
async
","text":"Updates a task run.
Source code inprefect/server/api/task_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n task_run: schemas.actions.TaskRunUpdate,\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a task run.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.task_runs.update_task_run(\n session=session, task_run=task_run, task_run_id=task_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments
","text":"Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment
async
","text":"Get work queues that can pick up the specified deployment.
Work queues will pick up a deployment when all of the following are met:
The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs.
The work queue's specified flow runners match the deployment's flow runner, or the work queue does NOT have a specified flow runner.
Notes on the query:
Our database currently allows either \"null\" or empty lists as null values in filters, so we need to catch both cases with \"or\".
json_contains(A, B) should be interpreted as \"True if A contains B\".
Returns:
Type DescriptionList[WorkQueue]
List[db.WorkQueue]: WorkQueues
Source code inprefect/server/models/deployments.py
@inject_db\nasync def check_work_queues_for_deployment(\n db: PrefectDBInterface, session: sa.orm.Session, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n \"\"\"\n Get work queues that can pick up the specified deployment.\n\n Work queues will pick up a deployment when all of the following are met.\n\n - The deployment has ALL tags that the work queue has (i.e. the work\n queue's tags must be a subset of the deployment's tags).\n - The work queue's specified deployment IDs match the deployment's ID,\n or the work queue does NOT have specified deployment IDs.\n - The work queue's specified flow runners match the deployment's flow\n runner or the work queue does NOT have a specified flow runner.\n\n Notes on the query:\n\n - Our database currently allows either \"null\" and empty lists as\n null values in filters, so we need to catch both cases with \"or\".\n - `json_contains(A, B)` should be interpreted as \"True if A\n contains B\".\n\n Returns:\n List[db.WorkQueue]: WorkQueues\n \"\"\"\n deployment = await session.get(db.Deployment, deployment_id)\n if not deployment:\n raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n query = (\n select(db.WorkQueue)\n # work queue tags are a subset of deployment tags\n .filter(\n or_(\n json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n json_contains([], db.WorkQueue.filter[\"tags\"]),\n json_contains(None, db.WorkQueue.filter[\"tags\"]),\n )\n )\n # deployment_ids is null or contains the deployment's ID\n .filter(\n or_(\n json_contains(\n db.WorkQueue.filter[\"deployment_ids\"],\n str(deployment.id),\n ),\n json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n )\n )\n )\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
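A hedged sketch of invoking this model function from server-side code, reusing the session_context pattern and the provide_database_interface dependency shown in the API modules above; the deployment id is supplied by the caller:
from prefect.server import models\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def eligible_queues(deployment_id):\n    db = provide_database_interface()\n    async with db.session_context() as session:\n        # db is supplied to the model function by its @inject_db decorator.\n        return await models.deployments.check_work_queues_for_deployment(\n            session=session, deployment_id=deployment_id\n        )\n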
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments
async
","text":"Count deployments.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only count deployments whose flows match these criteria
None
flow_run_filter
FlowRunFilter
only count deployments whose flow runs match these criteria
None
task_run_filter
TaskRunFilter
only count deployments whose task runs match these criteria
None
deployment_filter
DeploymentFilter
only count deployments that match these filters
None
work_pool_filter
WorkPoolFilter
only count deployments that match these work pool filters
None
work_queue_filter
WorkQueueFilter
only count deployments that match these work pool queue filters
None
Returns:
Name Type Descriptionint
int
the number of deployments matching filters
Source code inprefect/server/models/deployments.py
@inject_db\nasync def count_deployments(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n \"\"\"\n Count deployments.\n\n Args:\n session: A database session\n flow_filter: only count deployments whose flows match these criteria\n flow_run_filter: only count deployments whose flow runs match these criteria\n task_run_filter: only count deployments whose task runs match these criteria\n deployment_filter: only count deployment that match these filters\n work_pool_filter: only count deployments that match these work pool filters\n work_queue_filter: only count deployments that match these work pool queue filters\n\n Returns:\n int: the number of deployments matching filters\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n query = await _apply_deployment_filters(\n query=query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment
async
","text":"Upserts a deployment.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requireddeployment
Deployment
a deployment model
requiredReturns:
Type Descriptiondb.Deployment: the newly-created or updated deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def create_deployment(\n session: AsyncSession,\n deployment: schemas.core.Deployment,\n db: PrefectDBInterface,\n):\n \"\"\"Upserts a deployment.\n\n Args:\n session: a database session\n deployment: a deployment model\n\n Returns:\n db.Deployment: the newly-created or updated deployment\n\n \"\"\"\n\n # set `updated` manually\n # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n deployment.updated = pendulum.now(\"UTC\")\n\n schedules = deployment.schedules\n insert_values = deployment.dict(\n shallow=True, exclude_unset=True, exclude={\"schedules\"}\n )\n\n insert_stmt = (\n (await db.insert(db.Deployment))\n .values(**insert_values)\n .on_conflict_do_update(\n index_elements=db.deployment_unique_upsert_columns,\n set_={\n **deployment.dict(\n shallow=True,\n exclude_unset=True,\n exclude={\"id\", \"created\", \"created_by\", \"schedules\"},\n ),\n },\n )\n )\n\n await session.execute(insert_stmt)\n\n # Get the id of the deployment we just created or updated\n result = await session.execute(\n sa.select(db.Deployment.id).where(\n sa.and_(\n db.Deployment.flow_id == deployment.flow_id,\n db.Deployment.name == deployment.name,\n )\n )\n )\n deployment_id = result.scalar_one_or_none()\n\n if not deployment_id:\n return None\n\n # Because this was possibly an upsert, we need to delete any existing\n # schedules and any runs from the old deployment.\n\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n )\n\n await delete_schedules_for_deployment(session=session, deployment_id=deployment_id)\n\n if schedules:\n await create_deployment_schedules(\n session=session,\n deployment_id=deployment_id,\n schedules=[\n schemas.actions.DeploymentScheduleCreate(\n schedule=schedule.schedule,\n active=schedule.active, # type: ignore[call-arg]\n )\n for schedule in schedules\n ],\n )\n\n query = (\n sa.select(db.Deployment)\n .where(\n sa.and_(\n db.Deployment.flow_id == deployment.flow_id,\n db.Deployment.name == deployment.name,\n )\n )\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n return result.scalar()\n
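A hedged sketch of the upsert behavior, assuming name and flow_id are sufficient to construct schemas.core.Deployment:
from prefect.server import models, schemas\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def upsert(flow_id):\n    db = provide_database_interface()\n    async with db.session_context(begin_transaction=True) as session:\n        # Re-running this with the same flow_id and name updates the existing\n        # row (and replaces its schedules) instead of inserting a duplicate.\n        return await models.deployments.create_deployment(\n            session=session,\n            deployment=schemas.core.Deployment(name=\"nightly\", flow_id=flow_id),\n        )\n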
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment_schedules","title":"create_deployment_schedules
async
","text":"Creates a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
requiredschedules
List[DeploymentScheduleCreate]
a list of deployment schedule create actions
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def create_deployment_schedules(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n schedules: List[schemas.actions.DeploymentScheduleCreate],\n) -> List[schemas.core.DeploymentSchedule]:\n \"\"\"\n Creates a deployment's schedules.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n schedules: a list of deployment schedule create actions\n \"\"\"\n\n schedules_with_deployment_id = []\n for schedule in schedules:\n data = schedule.dict()\n data[\"deployment_id\"] = deployment_id\n schedules_with_deployment_id.append(data)\n\n models = [\n db.DeploymentSchedule(**schedule) for schedule in schedules_with_deployment_id\n ]\n session.add_all(models)\n await session.flush()\n\n return [schemas.core.DeploymentSchedule.from_orm(m) for m in models]\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment
async
","text":"Delete a deployment by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the deployment was deleted
Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_deployment(\n session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a deployment by id.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n bool: whether or not the deployment was deleted\n \"\"\"\n\n # delete scheduled runs, both auto- and user- created.\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, auto_scheduled_only=False\n )\n\n result = await session.execute(\n delete(db.Deployment).where(db.Deployment.id == deployment_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment_schedule","title":"delete_deployment_schedule
async
","text":"Deletes a deployment schedule.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_schedule_id
UUID
a deployment schedule id
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_deployment_schedule(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_id: UUID,\n) -> bool:\n \"\"\"\n Deletes a deployment schedule.\n\n Args:\n session: A database session\n deployment_schedule_id: a deployment schedule id\n \"\"\"\n\n result = await session.execute(\n sa.delete(db.DeploymentSchedule).where(\n sa.and_(\n db.DeploymentSchedule.id == deployment_schedule_id,\n db.DeploymentSchedule.deployment_id == deployment_id,\n )\n )\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_schedules_for_deployment","title":"delete_schedules_for_deployment
async
","text":"Deletes a deployment schedule.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_schedules_for_deployment(\n db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n \"\"\"\n Deletes a deployment schedule.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n \"\"\"\n\n result = await session.execute(\n sa.delete(db.DeploymentSchedule).where(\n db.DeploymentSchedule.deployment_id == deployment_id\n )\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment
async
","text":"Reads a deployment by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Type Descriptiondb.Deployment: the deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment(\n session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n):\n \"\"\"Reads a deployment by id.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n db.Deployment: the deployment\n \"\"\"\n\n return await session.get(db.Deployment, deployment_id)\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Reads a deployment by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a deployment name
requiredflow_name
str
the name of the flow the deployment belongs to
requiredReturns:
Type Descriptiondb.Deployment: the deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment_by_name(\n session: sa.orm.Session, name: str, flow_name: str, db: PrefectDBInterface\n):\n \"\"\"Reads a deployment by name.\n\n Args:\n session: A database session\n name: a deployment name\n flow_name: the name of the flow the deployment belongs to\n\n Returns:\n db.Deployment: the deployment\n \"\"\"\n\n result = await session.execute(\n select(db.Deployment)\n .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n .where(\n sa.and_(\n db.Flow.name == flow_name,\n db.Deployment.name == name,\n )\n )\n .limit(1)\n )\n return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_schedules","title":"read_deployment_schedules
async
","text":"Reads a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Type DescriptionList[DeploymentSchedule]
list[schemas.core.DeploymentSchedule]: the deployment's schedules
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment_schedules(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_filter: Optional[\n schemas.filters.DeploymentScheduleFilter\n ] = None,\n) -> List[schemas.core.DeploymentSchedule]:\n \"\"\"\n Reads a deployment's schedules.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n list[schemas.core.DeploymentSchedule]: the deployment's schedules\n \"\"\"\n\n query = (\n sa.select(db.DeploymentSchedule)\n .where(db.DeploymentSchedule.deployment_id == deployment_id)\n .order_by(db.DeploymentSchedule.updated.desc())\n )\n\n if deployment_schedule_filter:\n query = query.where(deployment_schedule_filter.as_sql_filter(db))\n\n result = await session.execute(query)\n\n return [schemas.core.DeploymentSchedule.from_orm(s) for s in result.scalars().all()]\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments
async
","text":"Read deployments.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredoffset
int
Query offset
None
limit
int
Query limit
None
flow_filter
FlowFilter
only select deployments whose flows match these criteria
None
flow_run_filter
FlowRunFilter
only select deployments whose flow runs match these criteria
None
task_run_filter
TaskRunFilter
only select deployments whose task runs match these criteria
None
deployment_filter
DeploymentFilter
only select deployments that match these filters
None
work_pool_filter
WorkPoolFilter
only select deployments whose work pools match these criteria
None
work_queue_filter
WorkQueueFilter
only select deployments whose work pool queues match these criteria
None
sort
DeploymentSort
the sort criteria for selected deployments. Defaults to name
ASC.
NAME_ASC
Returns:
Type DescriptionList[db.Deployment]: deployments
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployments(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n offset: int = None,\n limit: int = None,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n):\n \"\"\"\n Read deployments.\n\n Args:\n session: A database session\n offset: Query offset\n limit: Query limit\n flow_filter: only select deployments whose flows match these criteria\n flow_run_filter: only select deployments whose flow runs match these criteria\n task_run_filter: only select deployments whose task runs match these criteria\n deployment_filter: only select deployment that match these filters\n work_pool_filter: only select deployments whose work pools match these criteria\n work_queue_filter: only select deployments whose work pool queues match these criteria\n sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n Returns:\n List[db.Deployment]: deployments\n \"\"\"\n\n query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n query = await _apply_deployment_filters(\n query=query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs
async
","text":"Schedule flow runs for a deployment
Parameters:
Name Type Description Defaultsession
Session
a database session
requireddeployment_id
UUID
the id of the deployment to schedule
requiredstart_time
datetime
the time from which to start scheduling runs
None
end_time
datetime
runs will be scheduled until at most this time
None
min_time
timedelta
runs will be scheduled until at least this far in the future
None
min_runs
int
a minimum number of runs to schedule
None
max_runs
int
a maximum number of runs to schedule
None
This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.
- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time` + `min_time` is reached\n
Returns:
Type DescriptionList[UUID]
a list of flow run ids scheduled for the deployment
Source code inprefect/server/models/deployments.py
async def schedule_runs(\n session: sa.orm.Session,\n deployment_id: UUID,\n start_time: datetime.datetime = None,\n end_time: datetime.datetime = None,\n min_time: datetime.timedelta = None,\n min_runs: int = None,\n max_runs: int = None,\n auto_scheduled: bool = True,\n) -> List[UUID]:\n \"\"\"\n Schedule flow runs for a deployment\n\n Args:\n session: a database session\n deployment_id: the id of the deployment to schedule\n start_time: the time from which to start scheduling runs\n end_time: runs will be scheduled until at most this time\n min_time: runs will be scheduled until at least this far in the future\n min_runs: a minimum amount of runs to schedule\n max_runs: a maximum amount of runs to schedule\n\n This function will generate the minimum number of runs that satisfy the min\n and max times, and the min and max counts. Specifically, the following order\n will be respected.\n\n - Runs will be generated starting on or after the `start_time`\n - No more than `max_runs` runs will be generated\n - No runs will be generated after `end_time` is reached\n - At least `min_runs` runs will be generated\n - Runs will be generated until at least `start_time` + `min_time` is reached\n\n Returns:\n a list of flow run ids scheduled for the deployment\n \"\"\"\n if min_runs is None:\n min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n if max_runs is None:\n max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n if start_time is None:\n start_time = pendulum.now(\"UTC\")\n if end_time is None:\n end_time = start_time + (\n PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n )\n if min_time is None:\n min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n start_time = pendulum.instance(start_time)\n end_time = pendulum.instance(end_time)\n\n runs = await _generate_scheduled_flow_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=start_time,\n end_time=end_time,\n min_time=min_time,\n min_runs=min_runs,\n max_runs=max_runs,\n auto_scheduled=auto_scheduled,\n )\n return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
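A hedged sketch of calling the scheduler directly; bounds left unspecified fall back to the scheduler settings referenced in the source above:
from prefect.server import models\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def schedule(deployment_id):\n    db = provide_database_interface()\n    async with db.session_context(begin_transaction=True) as session:\n        # Unspecified bounds fall back to the scheduler settings used in the\n        # source above (min/max runs, min/max scheduled time).\n        return await models.deployments.schedule_runs(\n            session=session, deployment_id=deployment_id, max_runs=10\n        )  # returns the UUIDs of the newly scheduled flow runs\n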
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment
async
","text":"Updates a deployment.
Parameters:
Name Type Description Defaultsession
Session
a database session
requireddeployment_id
UUID
the ID of the deployment to modify
requireddeployment
DeploymentUpdate
changes to a deployment model
requiredReturns:
Name Type Descriptionbool
bool
whether the deployment was updated
Source code inprefect/server/models/deployments.py
@inject_db\nasync def update_deployment(\n session: sa.orm.Session,\n deployment_id: UUID,\n deployment: schemas.actions.DeploymentUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"Updates a deployment.\n\n Args:\n session: a database session\n deployment_id: the ID of the deployment to modify\n deployment: changes to a deployment model\n\n Returns:\n bool: whether the deployment was updated\n\n \"\"\"\n\n schedules = deployment.schedules\n\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n update_data = deployment.dict(\n shallow=True,\n exclude_unset=True,\n exclude={\"work_pool_name\"},\n )\n\n should_update_schedules = update_data.pop(\"schedules\", None) is not None\n\n if deployment.work_pool_name and deployment.work_queue_name:\n # If a specific pool name/queue name combination was provided, get the\n # ID for that work pool queue.\n update_data[\n \"work_queue_id\"\n ] = await WorkerLookups()._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n work_queue_name=deployment.work_queue_name,\n create_queue_if_not_found=True,\n )\n elif deployment.work_pool_name:\n # If just a pool name was provided, get the ID for its default\n # work pool queue.\n update_data[\n \"work_queue_id\"\n ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n )\n elif deployment.work_queue_name:\n # If just a queue name was provided, ensure the queue exists and\n # get its ID.\n work_queue = await models.work_queues._ensure_work_queue_exists(\n session=session, name=update_data[\"work_queue_name\"], db=db\n )\n update_data[\"work_queue_id\"] = work_queue.id\n\n if \"is_schedule_active\" in update_data:\n update_data[\"paused\"] = not update_data[\"is_schedule_active\"]\n\n update_stmt = (\n sa.update(db.Deployment)\n .where(db.Deployment.id == deployment_id)\n .values(**update_data)\n )\n result = await session.execute(update_stmt)\n\n # delete any auto scheduled runs that would have reflected the old deployment config\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n )\n\n if should_update_schedules:\n # If schedules were provided, remove the existing schedules and\n # replace them with the new ones.\n await delete_schedules_for_deployment(\n session=session, deployment_id=deployment_id\n )\n await create_deployment_schedules(\n session=session,\n deployment_id=deployment_id,\n schedules=[\n schemas.actions.DeploymentScheduleCreate(\n schedule=schedule.schedule,\n active=schedule.active, # type: ignore[call-arg]\n )\n for schedule in schedules\n ],\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment_schedule","title":"update_deployment_schedule
async
","text":"Updates a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_schedule_id
UUID
a deployment schedule id
requiredschedule
DeploymentScheduleUpdate
a deployment schedule update action
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def update_deployment_schedule(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_id: UUID,\n schedule: schemas.actions.DeploymentScheduleUpdate,\n) -> bool:\n \"\"\"\n Updates a deployment's schedules.\n\n Args:\n session: A database session\n deployment_schedule_id: a deployment schedule id\n schedule: a deployment schedule update action\n \"\"\"\n\n result = await session.execute(\n sa.update(db.DeploymentSchedule)\n .where(\n sa.and_(\n db.DeploymentSchedule.id == deployment_schedule_id,\n db.DeploymentSchedule.deployment_id == deployment_id,\n )\n )\n .values(**schedule.dict(exclude_none=True))\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states
","text":"Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state
async
","text":"Delete a flow run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_state_id
UUID
a flow run state id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow run state was deleted
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def delete_flow_run_state(\n session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow run state by id.\n\n Args:\n session: A database session\n flow_run_state_id: a flow run state id\n\n Returns:\n bool: whether or not the flow run state was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state
async
","text":"Reads a flow run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_state_id
UUID
a flow run state id
requiredReturns:
Type Descriptiondb.FlowRunState: the flow state
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_state(\n session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a flow run state by id.\n\n Args:\n session: A database session\n flow_run_state_id: a flow run state id\n\n Returns:\n db.FlowRunState: the flow state\n \"\"\"\n\n return await session.get(db.FlowRunState, flow_run_state_id)\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states
async
","text":"Reads flow runs states for a flow run.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_id
UUID
the flow run id
requiredReturns:
Type DescriptionList[db.FlowRunState]: the flow run states
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_states(\n session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads flow runs states for a flow run.\n\n Args:\n session: A database session\n flow_run_id: the flow run id\n\n Returns:\n List[db.FlowRunState]: the flow run states\n \"\"\"\n\n query = (\n select(db.FlowRunState)\n .filter_by(flow_run_id=flow_run_id)\n .order_by(db.FlowRunState.timestamp)\n )\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs
","text":"Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs
async
","text":"Count flow runs.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_filter
FlowFilter
only count flow runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only count flow runs that match these filters
None
task_run_filter
TaskRunFilter
only count flow runs whose task runs match these filters
None
deployment_filter
DeploymentFilter
only count flow runs whose deployments match these filters
None
Returns:
Name Type Descriptionint
int
count of flow runs
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def count_flow_runs(\n session: AsyncSession,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n \"\"\"\n Count flow runs.\n\n Args:\n session: a database session\n flow_filter: only count flow runs whose flows match these filters\n flow_run_filter: only count flow runs that match these filters\n task_run_filter: only count flow runs whose task runs match these filters\n deployment_filter: only count flow runs whose deployments match these filters\n\n Returns:\n int: count of flow runs\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n query = await _apply_flow_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run
async
","text":"Creates a new flow run.
If the provided flow run has a state attached, it will also be created.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run
FlowRun
a flow run model
requiredReturns:
Type Descriptiondb.FlowRun: the newly-created flow run
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def create_flow_run(\n session: AsyncSession,\n flow_run: schemas.core.FlowRun,\n db: PrefectDBInterface,\n orchestration_parameters: Optional[dict] = None,\n):\n \"\"\"Creates a new flow run.\n\n If the provided flow run has a state attached, it will also be created.\n\n Args:\n session: a database session\n flow_run: a flow run model\n\n Returns:\n db.FlowRun: the newly-created flow run\n \"\"\"\n now = pendulum.now(\"UTC\")\n\n flow_run_dict = dict(\n **flow_run.dict(\n shallow=True,\n exclude={\n \"created\",\n \"state\",\n \"estimated_run_time\",\n \"estimated_start_time_delta\",\n },\n exclude_unset=True,\n ),\n created=now,\n )\n\n # if no idempotency key was provided, create the run directly\n if not flow_run.idempotency_key:\n model = db.FlowRun(**flow_run_dict)\n session.add(model)\n await session.flush()\n\n # otherwise let the database take care of enforcing idempotency\n else:\n insert_stmt = (\n (await db.insert(db.FlowRun))\n .values(**flow_run_dict)\n .on_conflict_do_nothing(\n index_elements=db.flow_run_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n # read the run to see if idempotency was applied or not\n query = (\n sa.select(db.FlowRun)\n .where(\n sa.and_(\n db.FlowRun.flow_id == flow_run.flow_id,\n db.FlowRun.idempotency_key == flow_run.idempotency_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n result = await session.execute(query)\n model = result.scalar()\n\n # if the flow run was created in this function call then we need to set the\n # state. If it was created idempotently, the created time won't match.\n if model.created == now and flow_run.state:\n await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=model.id,\n state=flow_run.state,\n force=True,\n orchestration_parameters=orchestration_parameters,\n )\n return model\n
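A hedged sketch of the idempotency behavior described above; the idempotency key shown is arbitrary:
from prefect.server import models, schemas\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def create_once(flow_id):\n    db = provide_database_interface()\n    async with db.session_context(begin_transaction=True) as session:\n        # A second call with the same flow_id and idempotency_key returns\n        # the originally created run rather than inserting a new row.\n        return await models.flow_runs.create_flow_run(\n            session=session,\n            flow_run=schemas.core.FlowRun(\n                flow_id=flow_id, idempotency_key=\"nightly-2024-01-01\"\n            ),\n        )\n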
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by flow_run_id.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requiredflow_run_id
UUID
a flow run id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow run was deleted
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def delete_flow_run(\n session: AsyncSession, flow_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow run by flow_run_id.\n\n Args:\n session: A database session\n flow_run_id: a flow run id\n\n Returns:\n bool: whether or not the flow run was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run
async
","text":"Reads a flow run by id.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requiredflow_run_id
UUID
a flow run id
requiredReturns:
Type Descriptiondb.FlowRun: the flow run
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run(\n session: AsyncSession,\n flow_run_id: UUID,\n db: PrefectDBInterface,\n for_update: bool = False,\n):\n \"\"\"\n Reads a flow run by id.\n\n Args:\n session: A database session\n flow_run_id: a flow run id\n\n Returns:\n db.FlowRun: the flow run\n \"\"\"\n select = (\n sa.select(db.FlowRun)\n .where(db.FlowRun.id == flow_run_id)\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n\n if for_update:\n select = select.with_for_update()\n\n result = await session.execute(select)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run_graph","title":"read_flow_run_graph
async
","text":"Given a flow run, return the graph of it's task and subflow runs. If a since
datetime is provided, only return items that may have changed since that time.
prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run_graph(\n db: PrefectDBInterface,\n session: AsyncSession,\n flow_run_id: UUID,\n since: datetime.datetime = datetime.datetime.min,\n) -> Graph:\n \"\"\"Given a flow run, return the graph of it's task and subflow runs. If a `since`\n datetime is provided, only return items that may have changed since that time.\"\"\"\n return await db.queries.flow_run_graph_v2(\n db=db,\n session=session,\n flow_run_id=flow_run_id,\n since=since,\n max_nodes=PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES.value(),\n max_artifacts=PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS.value(),\n )\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs
async
","text":"Read flow runs.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredcolumns
List
a list of the flow run ORM columns to load, for performance
None
flow_filter
FlowFilter
only select flow runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only select flow runs match these filters
None
task_run_filter
TaskRunFilter
only select flow runs whose task runs match these filters
None
deployment_filter
DeploymentFilter
only select flow runs whose deployments match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
sort
FlowRunSort
Query sort
ID_DESC
Returns:
Type DescriptionList[db.FlowRun]: flow runs
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_runs(\n session: AsyncSession,\n db: PrefectDBInterface,\n columns: List = None,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n offset: int = None,\n limit: int = None,\n sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n):\n \"\"\"\n Read flow runs.\n\n Args:\n session: a database session\n columns: a list of the flow run ORM columns to load, for performance\n flow_filter: only select flow runs whose flows match these filters\n flow_run_filter: only select flow runs match these filters\n task_run_filter: only select flow runs whose task runs match these filters\n deployment_filter: only select flow runs whose deployments match these filters\n offset: Query offset\n limit: Query limit\n sort: Query sort\n\n Returns:\n List[db.FlowRun]: flow runs\n \"\"\"\n query = (\n select(db.FlowRun)\n .order_by(sort.as_sql_sort(db))\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n\n if columns:\n query = query.options(load_only(*columns))\n\n query = await _apply_flow_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
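A hedged sketch of a filtered, sorted read; the exact filter and sort class names are assumptions based on the schemas referenced in the signature:
from prefect.server import models, schemas\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def recent_failures():\n    db = provide_database_interface()\n    async with db.session_context() as session:\n        return await models.flow_runs.read_flow_runs(\n            session=session,\n            flow_run_filter=schemas.filters.FlowRunFilter(\n                state=schemas.filters.FlowRunFilterState(\n                    type=schemas.filters.FlowRunFilterStateType(any_=[\"FAILED\"])\n                )\n            ),\n            limit=20,\n            sort=schemas.sorting.FlowRunSort.START_TIME_DESC,\n        )\n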
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies
async
","text":"Get a task run dependency map for a given flow run.
Source code inprefect/server/models/flow_runs.py
async def read_task_run_dependencies(\n session: AsyncSession,\n flow_run_id: UUID,\n) -> List[DependencyResult]:\n \"\"\"\n Get a task run dependency map for a given flow run.\n \"\"\"\n flow_run = await models.flow_runs.read_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not flow_run:\n raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n task_runs = await models.task_runs.read_task_runs(\n session=session,\n flow_run_filter=schemas.filters.FlowRunFilter(\n id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n ),\n )\n\n dependency_graph = []\n\n for task_run in task_runs:\n inputs = list(set(chain(*task_run.task_inputs.values())))\n untrackable_result_status = (\n False\n if task_run.state is None\n else task_run.state.state_details.untrackable_result\n )\n dependency_graph.append(\n {\n \"id\": task_run.id,\n \"upstream_dependencies\": inputs,\n \"state\": task_run.state,\n \"expected_start_time\": task_run.expected_start_time,\n \"name\": task_run.name,\n \"start_time\": task_run.start_time,\n \"end_time\": task_run.end_time,\n \"total_run_time\": task_run.total_run_time,\n \"estimated_run_time\": task_run.estimated_run_time,\n \"untrackable_result\": untrackable_result_status,\n }\n )\n\n return dependency_graph\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state
async
","text":"Creates a new orchestrated flow run state.
Setting a new state on a run is the one of the principal actions that is governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead trigger orchestration rules to govern the proposed state
input. If the state is considered valid, it will be written to the database. Otherwise, a it's possible a different state, or no state, will be created. A force
flag is supplied to bypass a subset of orchestration logic.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run_id
UUID
the flow run id
requiredstate
State
a flow run state model
requiredforce
bool
if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.
False
Returns:
Type DescriptionOrchestrationResult
OrchestrationResult object
Source code inprefect/server/models/flow_runs.py
async def set_flow_run_state(\n session: AsyncSession,\n flow_run_id: UUID,\n state: schemas.states.State,\n force: bool = False,\n flow_policy: BaseOrchestrationPolicy = None,\n orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n \"\"\"\n Creates a new orchestrated flow run state.\n\n Setting a new state on a run is the one of the principal actions that is governed by\n Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n but instead trigger orchestration rules to govern the proposed `state` input. If\n the state is considered valid, it will be written to the database. Otherwise, a\n it's possible a different state, or no state, will be created. A `force` flag is\n supplied to bypass a subset of orchestration logic.\n\n Args:\n session: a database session\n flow_run_id: the flow run id\n state: a flow run state model\n force: if False, orchestration rules will be applied that may alter or prevent\n the state transition. If True, orchestration rules are not applied.\n\n Returns:\n OrchestrationResult object\n \"\"\"\n\n # load the flow run\n run = await models.flow_runs.read_flow_run(\n session=session,\n flow_run_id=flow_run_id,\n # Lock the row to prevent orchestration race conditions\n for_update=True,\n )\n\n if not run:\n raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n initial_state = run.state.as_state() if run.state else None\n initial_state_type = initial_state.type if initial_state else None\n proposed_state_type = state.type if state else None\n intended_transition = (initial_state_type, proposed_state_type)\n\n if force or flow_policy is None:\n flow_policy = MinimalFlowPolicy\n\n orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n context = FlowOrchestrationContext(\n session=session,\n run=run,\n initial_state=initial_state,\n proposed_state=state,\n )\n\n if orchestration_parameters is not None:\n context.parameters = orchestration_parameters\n\n # apply orchestration rules and create the new flow run state\n async with contextlib.AsyncExitStack() as stack:\n for rule in orchestration_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n for rule in global_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n await context.validate_proposed_state()\n\n if context.orchestration_error is not None:\n raise context.orchestration_error\n\n result = OrchestrationResult(\n state=context.validated_state,\n status=context.response_status,\n details=context.response_details,\n )\n\n # if a new state is being set (either ACCEPTED from user or REJECTED\n # and set by the server), check for any notification policies\n if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n await models.flow_run_notification_policies.queue_flow_run_notifications(\n session=session, flow_run=run\n )\n\n return result\n
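A hedged sketch of proposing a state transition through this function, assuming the Cancelled state constructor from schemas.states:
from prefect.server import models, schemas\nfrom prefect.server.database.dependencies import provide_database_interface\n\nasync def cancel(flow_run_id):\n    db = provide_database_interface()\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run_id,\n            state=schemas.states.Cancelled(),\n            force=False,  # let orchestration rules vet the transition\n        )\n        return result.status  # a SetStateStatus value\n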
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run
async
","text":"Updates a flow run.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run_id
UUID
the flow run id to update
requiredflow_run
FlowRunUpdate
a flow run model
requiredReturns:
Name Type Descriptionbool
bool
whether or not matching rows were found to update
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def update_flow_run(\n session: AsyncSession,\n flow_run_id: UUID,\n flow_run: schemas.actions.FlowRunUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"\n Updates a flow run.\n\n Args:\n session: a database session\n flow_run_id: the flow run id to update\n flow_run: a flow run model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.FlowRun)\n .where(db.FlowRun.id == flow_run_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**flow_run.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows
","text":"Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows
async
","text":"Count flows.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only count flows that match these filters
None
flow_run_filter
FlowRunFilter
only count flows whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only count flows whose task runs match these filters
None
deployment_filter
DeploymentFilter
only count flows whose deployments match these filters
None
work_pool_filter
WorkPoolFilter
only count flows whose work pools match these filters
None
Returns:
Name Type Descriptionint
int
count of flows
Source code inprefect/server/models/flows.py
@inject_db\nasync def count_flows(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n) -> int:\n \"\"\"\n Count flows.\n\n Args:\n session: A database session\n flow_filter: only count flows that match these filters\n flow_run_filter: only count flows whose flow runs match these filters\n task_run_filter: only count flows whose task runs match these filters\n deployment_filter: only count flows whose deployments match these filters\n work_pool_filter: only count flows whose work pools match these filters\n\n Returns:\n int: count of flows\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n query = await _apply_flow_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow
async
","text":"Creates a new flow.
If a flow with the same name already exists, the existing flow is returned.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow
Flow
a flow model
requiredReturns:
Type Descriptiondb.Flow: the newly-created or existing flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def create_flow(\n session: sa.orm.Session, flow: schemas.core.Flow, db: PrefectDBInterface\n):\n \"\"\"\n Creates a new flow.\n\n If a flow with the same name already exists, the existing flow is returned.\n\n Args:\n session: a database session\n flow: a flow model\n\n Returns:\n db.Flow: the newly-created or existing flow\n \"\"\"\n\n insert_stmt = (\n (await db.insert(db.Flow))\n .values(**flow.dict(shallow=True, exclude_unset=True))\n .on_conflict_do_nothing(\n index_elements=db.flow_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.Flow)\n .where(\n db.Flow.name == flow.name,\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n return model\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow
async
","text":"Delete a flow by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_id
UUID
a flow id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow was deleted
Source code inprefect/server/models/flows.py
@inject_db\nasync def delete_flow(\n session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow by id.\n\n Args:\n session: A database session\n flow_id: a flow id\n\n Returns:\n bool: whether or not the flow was deleted\n \"\"\"\n\n result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow
async
","text":"Reads a flow by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_id
UUID
a flow id
requiredReturns:
Type Descriptiondb.Flow: the flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flow(session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface):\n \"\"\"\n Reads a flow by id.\n\n Args:\n session: A database session\n flow_id: a flow id\n\n Returns:\n db.Flow: the flow\n \"\"\"\n return await session.get(db.Flow, flow_id)\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name
async
","text":"Reads a flow by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a flow name
requiredReturns:
Type Descriptiondb.Flow: the flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flow_by_name(session: sa.orm.Session, name: str, db: PrefectDBInterface):\n \"\"\"\n Reads a flow by name.\n\n Args:\n session: A database session\n name: a flow name\n\n Returns:\n db.Flow: the flow\n \"\"\"\n\n result = await session.execute(select(db.Flow).filter_by(name=name))\n return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows
async
","text":"Read multiple flows.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only select flows that match these filters
None
flow_run_filter
FlowRunFilter
only select flows whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only select flows whose task runs match these filters
None
deployment_filter
DeploymentFilter
only select flows whose deployments match these filters
None
work_pool_filter
WorkPoolFilter
only select flows whose work pools match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
Returns:
Type DescriptionList[db.Flow]: flows
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flows(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n offset: int = None,\n limit: int = None,\n):\n \"\"\"\n Read multiple flows.\n\n Args:\n session: A database session\n flow_filter: only select flows that match these filters\n flow_run_filter: only select flows whose flow runs match these filters\n task_run_filter: only select flows whose task runs match these filters\n deployment_filter: only select flows whose deployments match these filters\n work_pool_filter: only select flows whose work pools match these filters\n offset: Query offset\n limit: Query limit\n\n Returns:\n List[db.Flow]: flows\n \"\"\"\n\n query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n query = await _apply_flow_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
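Example usage (a minimal paging sketch over the documented parameters; the session is assumed to come from the server's database interface):

from prefect.server import models, schemas

async def list_flows_page(session, page: int, page_size: int = 50):
    # offset/limit combine with the NAME_ASC sort for stable pagination
    return await models.flows.read_flows(
        session=session,
        sort=schemas.sorting.FlowSort.NAME_ASC,
        offset=page * page_size,
        limit=page_size,
    )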
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow
async
","text":"Updates a flow.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_id
UUID
the flow id to update
requiredflow
FlowUpdate
a flow update model
requiredReturns:
Name Type Descriptionbool
whether or not matching rows were found to update
Source code inprefect/server/models/flows.py
@inject_db\nasync def update_flow(\n session: sa.orm.Session,\n flow_id: UUID,\n flow: schemas.actions.FlowUpdate,\n db: PrefectDBInterface,\n):\n \"\"\"\n Updates a flow.\n\n Args:\n session: a database session\n flow_id: the flow id to update\n flow: a flow update model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.Flow)\n .where(db.Flow.id == flow_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**flow.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
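Example usage (a sketch; because the update statement is built with exclude_unset=True, only the fields explicitly set on the FlowUpdate are written):

from prefect.server import models, schemas

async def retag_flow(session, flow_id) -> bool:
    # only `tags` is set here, so other columns are left untouched
    return await models.flows.update_flow(
        session=session,
        flow_id=flow_id,
        flow=schemas.actions.FlowUpdate(tags=["production"]),
    )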
"},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches
","text":"Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search
async
","text":"Upserts a SavedSearch.
If a SavedSearch with the same name exists, all properties will be updated.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredsaved_search
SavedSearch
a SavedSearch model
requiredReturns:
Type Descriptiondb.SavedSearch: the newly-created or updated SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def create_saved_search(\n session: sa.orm.Session,\n saved_search: schemas.core.SavedSearch,\n db: PrefectDBInterface,\n):\n \"\"\"\n Upserts a SavedSearch.\n\n If a SavedSearch with the same name exists, all properties will be updated.\n\n Args:\n session (sa.orm.Session): a database session\n saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n Returns:\n db.SavedSearch: the newly-created or updated SavedSearch\n\n \"\"\"\n\n insert_stmt = (\n (await db.insert(db.SavedSearch))\n .values(**saved_search.dict(shallow=True, exclude_unset=True))\n .on_conflict_do_update(\n index_elements=db.saved_search_unique_upsert_columns,\n set_=saved_search.dict(shallow=True, include={\"filters\"}),\n )\n )\n\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.SavedSearch)\n .where(\n db.SavedSearch.name == saved_search.name,\n )\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n\n return model\n
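Example usage (a hedged sketch of the upsert; the filter payload is left empty here since its structure is defined by the UI):

from prefect.server import models, schemas

async def save_search(session):
    # re-running this with the same name updates the stored filters in place
    return await models.saved_searches.create_saved_search(
        session=session,
        saved_search=schemas.core.SavedSearch(name="failed-runs", filters=[]),
    )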
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search
async
","text":"Delete a SavedSearch by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredsaved_search_id
str
a SavedSearch id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the SavedSearch was deleted
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def delete_saved_search(\n session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a SavedSearch by id.\n\n Args:\n session (sa.orm.Session): A database session\n saved_search_id (str): a SavedSearch id\n\n Returns:\n bool: whether or not the SavedSearch was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search
async
","text":"Reads a SavedSearch by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredsaved_search_id
str
a SavedSearch id
requiredReturns:
Type Descriptiondb.SavedSearch: the SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search(\n session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a SavedSearch by id.\n\n Args:\n session (sa.orm.Session): A database session\n saved_search_id (str): a SavedSearch id\n\n Returns:\n db.SavedSearch: the SavedSearch\n \"\"\"\n\n return await session.get(db.SavedSearch, saved_search_id)\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name
async
","text":"Reads a SavedSearch by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a SavedSearch name
requiredReturns:
Type Descriptiondb.SavedSearch: the SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search_by_name(\n session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n \"\"\"\n Reads a SavedSearch by name.\n\n Args:\n session (sa.orm.Session): A database session\n name (str): a SavedSearch name\n\n Returns:\n db.SavedSearch: the SavedSearch\n \"\"\"\n result = await session.execute(\n select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n )\n return result.scalar()\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches
async
","text":"Read SavedSearches.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredoffset
int
Query offset
None
limit
int
Query limit
None
Returns:
Type DescriptionList[db.SavedSearch]: SavedSearches
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_searches(\n db: PrefectDBInterface,\n session: sa.orm.Session,\n offset: int = None,\n limit: int = None,\n):\n \"\"\"\n Read SavedSearches.\n\n Args:\n session (sa.orm.Session): A database session\n offset (int): Query offset\n limit(int): Query limit\n\n Returns:\n List[db.SavedSearch]: SavedSearches\n \"\"\"\n\n query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n if offset is not None:\n query = query.offset(offset)\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states
","text":"Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state
async
","text":"Delete a task run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_state_id
UUID
a task run state id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the task run state was deleted
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def delete_task_run_state(\n session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a task run state by id.\n\n Args:\n session: A database session\n task_run_state_id: a task run state id\n\n Returns:\n bool: whether or not the task run state was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state
async
","text":"Reads a task run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_state_id
UUID
a task run state id
requiredReturns:
Type Descriptiondb.TaskRunState: the task state
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_state(\n session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a task run state by id.\n\n Args:\n session: A database session\n task_run_state_id: a task run state id\n\n Returns:\n db.TaskRunState: the task state\n \"\"\"\n\n return await session.get(db.TaskRunState, task_run_state_id)\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states
async
","text":"Reads task runs states for a task run.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_id
UUID
the task run id
requiredReturns:
Type DescriptionList[db.TaskRunState]: the task run states
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_states(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads task runs states for a task run.\n\n Args:\n session: A database session\n task_run_id: the task run id\n\n Returns:\n List[db.TaskRunState]: the task run states\n \"\"\"\n\n query = (\n select(db.TaskRunState)\n .filter_by(task_run_id=task_run_id)\n .order_by(db.TaskRunState.timestamp)\n )\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs
","text":"Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs
async
","text":"Count task runs.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_filter
FlowFilter
only count task runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only count task runs whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only count task runs that match these filters
None
deployment_filter
DeploymentFilter
only count task runs whose deployments match these filters
None
Source code in prefect/server/models/task_runs.py
@inject_db\nasync def count_task_runs(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n \"\"\"\n Count task runs.\n\n Args:\n session: a database session\n flow_filter: only count task runs whose flows match these filters\n flow_run_filter: only count task runs whose flow runs match these filters\n task_run_filter: only count task runs that match these filters\n deployment_filter: only count task runs whose deployments match these filters\n Returns:\n int: count of task runs\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n query = await _apply_task_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run
async
","text":"Creates a new task run.
If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run
TaskRun
a task run model
requiredReturns:
Type Descriptiondb.TaskRun: the newly-created or existing task run
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def create_task_run(\n session: sa.orm.Session,\n task_run: schemas.core.TaskRun,\n db: PrefectDBInterface,\n orchestration_parameters: dict = None,\n):\n \"\"\"\n Creates a new task run.\n\n If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n the existing task run will be returned. If the provided task run has a state\n attached, it will also be created.\n\n Args:\n session: a database session\n task_run: a task run model\n\n Returns:\n db.TaskRun: the newly-created or existing task run\n \"\"\"\n\n now = pendulum.now(\"UTC\")\n\n # if a dynamic key exists, we need to guard against conflicts\n if task_run.flow_run_id:\n insert_stmt = (\n (await db.insert(db.TaskRun))\n .values(\n created=now,\n **task_run.dict(\n shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n ),\n )\n .on_conflict_do_nothing(\n index_elements=db.task_run_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.TaskRun)\n .where(\n sa.and_(\n db.TaskRun.flow_run_id == task_run.flow_run_id,\n db.TaskRun.task_key == task_run.task_key,\n db.TaskRun.dynamic_key == task_run.dynamic_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n else:\n # Upsert on (task_key, dynamic_key) application logic.\n query = (\n sa.select(db.TaskRun)\n .where(\n sa.and_(\n db.TaskRun.flow_run_id.is_(None),\n db.TaskRun.task_key == task_run.task_key,\n db.TaskRun.dynamic_key == task_run.dynamic_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n\n result = await session.execute(query)\n model = result.scalar()\n\n if model is None:\n model = db.TaskRun(\n created=now,\n **task_run.dict(\n shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n ),\n state=None,\n )\n session.add(model)\n await session.flush()\n\n if model.created == now and task_run.state:\n await models.task_runs.set_task_run_state(\n session=session,\n task_run_id=model.id,\n state=task_run.state,\n force=True,\n orchestration_parameters=orchestration_parameters,\n )\n return model\n
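Example usage (a sketch of the idempotent insert described above; the session and flow_run_id are assumed to exist):

from prefect.server import models, schemas

async def upsert_task_run(session, flow_run_id):
    run = schemas.core.TaskRun(
        flow_run_id=flow_run_id,
        task_key="my_task",
        dynamic_key="0",
    )
    first = await models.task_runs.create_task_run(session=session, task_run=run)
    second = await models.task_runs.create_task_run(session=session, task_run=run)
    assert first.id == second.id  # the existing run is returned, not a duplicate
    return first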
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id to delete
requiredReturns:
Name Type Descriptionbool
bool
whether or not the task run was deleted
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def delete_task_run(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a task run by id.\n\n Args:\n session: a database session\n task_run_id: the task run id to delete\n\n Returns:\n bool: whether or not the task run was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run
async
","text":"Read a task run by id.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id
requiredReturns:
Type Descriptiondb.TaskRun: the task run
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def read_task_run(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Read a task run by id.\n\n Args:\n session: a database session\n task_run_id: the task run id\n\n Returns:\n db.TaskRun: the task run\n \"\"\"\n\n model = await session.get(db.TaskRun, task_run_id)\n return model\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs
async
","text":"Read task runs.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_filter
FlowFilter
only select task runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only select task runs whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only select task runs that match these filters
None
deployment_filter
DeploymentFilter
only select task runs whose deployments match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
sort
TaskRunSort
Query sort
ID_DESC
Returns:
Type DescriptionList[db.TaskRun]: the task runs
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def read_task_runs(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n offset: int = None,\n limit: int = None,\n sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n \"\"\"\n Read task runs.\n\n Args:\n session: a database session\n flow_filter: only select task runs whose flows match these filters\n flow_run_filter: only select task runs whose flow runs match these filters\n task_run_filter: only select task runs that match these filters\n deployment_filter: only select task runs whose deployments match these filters\n offset: Query offset\n limit: Query limit\n sort: Query sort\n\n Returns:\n List[db.TaskRun]: the task runs\n \"\"\"\n\n query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n query = await _apply_task_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n logger.debug(f\"In read_task_runs, query generated is:\\n{query}\")\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state
async
","text":"Creates a new orchestrated task run state.
Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead triggers orchestration rules to govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A force flag is supplied to bypass a subset of orchestration logic.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id
requiredstate
State
a task run state model
requiredforce
bool
if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.
False
Returns:
Type DescriptionOrchestrationResult
OrchestrationResult object
Source code inprefect/server/models/task_runs.py
async def set_task_run_state(\n session: sa.orm.Session,\n task_run_id: UUID,\n state: schemas.states.State,\n force: bool = False,\n task_policy: BaseOrchestrationPolicy = None,\n orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n \"\"\"\n Creates a new orchestrated task run state.\n\n Setting a new state on a run is the one of the principal actions that is governed by\n Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n but instead trigger orchestration rules to govern the proposed `state` input. If\n the state is considered valid, it will be written to the database. Otherwise, a\n it's possible a different state, or no state, will be created. A `force` flag is\n supplied to bypass a subset of orchestration logic.\n\n Args:\n session: a database session\n task_run_id: the task run id\n state: a task run state model\n force: if False, orchestration rules will be applied that may alter or prevent\n the state transition. If True, orchestration rules are not applied.\n\n Returns:\n OrchestrationResult object\n \"\"\"\n\n # load the task run\n run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n if not run:\n raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n initial_state = run.state.as_state() if run.state else None\n initial_state_type = initial_state.type if initial_state else None\n proposed_state_type = state.type if state else None\n intended_transition = (initial_state_type, proposed_state_type)\n\n if run.flow_run_id is None:\n task_policy = AutonomousTaskPolicy # CoreTaskPolicy + prevent `Running` -> `Running` transition\n elif force or task_policy is None:\n task_policy = MinimalTaskPolicy\n\n orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n context = TaskOrchestrationContext(\n session=session,\n run=run,\n initial_state=initial_state,\n proposed_state=state,\n )\n\n if orchestration_parameters is not None:\n context.parameters = orchestration_parameters\n\n # apply orchestration rules and create the new task run state\n async with contextlib.AsyncExitStack() as stack:\n for rule in orchestration_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n for rule in global_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n await context.validate_proposed_state()\n\n if context.orchestration_error is not None:\n raise context.orchestration_error\n\n result = OrchestrationResult(\n state=context.validated_state,\n status=context.response_status,\n details=context.response_details,\n )\n\n return result\n
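Example usage (a minimal sketch; the task run id and session are assumed). The returned OrchestrationResult carries the validated state plus a status indicating whether the transition was accepted, rejected, or delayed:

from prefect.server import models
from prefect.server.schemas import states

async def start_task_run(session, task_run_id):
    result = await models.task_runs.set_task_run_state(
        session=session,
        task_run_id=task_run_id,
        state=states.Running(),
    )
    # e.g. a cached Completed state may come back instead of Running
    print(result.status, result.state.type if result.state else None)
    return result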
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run
async
","text":"Updates a task run.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredtask_run_id
UUID
the task run id to update
requiredtask_run
TaskRunUpdate
a task run model
requiredReturns:
Name Type Descriptionbool
bool
whether or not matching rows were found to update
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def update_task_run(\n session: AsyncSession,\n task_run_id: UUID,\n task_run: schemas.actions.TaskRunUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"\n Updates a task run.\n\n Args:\n session: a database session\n task_run_id: the task run id to update\n task_run: a task run model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.TaskRun)\n .where(db.TaskRun.id == task_run_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**task_run.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
"},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy
","text":"Orchestration logic that fires on state transitions.
CoreFlowPolicy
and CoreTaskPolicy
contain all default orchestration rules that Prefect enforces on a state transition.
CoreFlowPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against flow-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class CoreFlowPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against flow-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n PreventDuplicateTransitions,\n HandleFlowTerminalStateTransitions,\n EnforceCancellingToCancelledTransition,\n BypassCancellingScheduledFlowRuns,\n PreventPendingTransitions,\n EnsureOnlyScheduledFlowsMarkedLate,\n HandlePausingFlows,\n HandleResumingPausedFlows,\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedFlows,\n ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against task-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class CoreTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against task-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n CacheRetrieval,\n HandleTaskTerminalStateTransitions,\n PreventRunningTasksFromStoppedFlows,\n SecureTaskConcurrencySlots, # retrieve cached states even if slots are full\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedTasks,\n RenameReruns,\n UpdateFlowRunTrackerOnTasks,\n CacheInsertion,\n ReleaseTaskConcurrencySlots,\n ]\n
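The priority lists above are plain orderings of rule classes, so composing a custom policy is just another BaseOrchestrationPolicy subclass. A hedged sketch (the subset of rules chosen here is illustrative, not a recommended configuration):

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.core_policy import (
    CacheRetrieval,
    WaitForScheduledTime,
    RetryFailedTasks,
)

class MinimalRetryPolicy(BaseOrchestrationPolicy):
    """Hypothetical policy: caching, scheduling, and retries only."""

    def priority():
        # defined without `self`, matching the built-in policies above
        return [
            CacheRetrieval,
            WaitForScheduledTime,
            RetryFailedTasks,
        ]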
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AutonomousTaskPolicy","title":"AutonomousTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against task-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class AutonomousTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against task-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n PreventPendingTransitions,\n CacheRetrieval,\n HandleTaskTerminalStateTransitions,\n SecureTaskConcurrencySlots, # retrieve cached states even if slots are full\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedTasks,\n RenameReruns,\n UpdateFlowRunTrackerOnTasks,\n CacheInsertion,\n ReleaseTaskConcurrencySlots,\n EnqueueScheduledTasks,\n ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots
","text":" Bases: BaseOrchestrationRule
Checks that relevant concurrency slots are available before entering a Running state.
This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.
Source code inprefect/server/orchestration/core_policy.py
class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n \"\"\"\n Checks relevant concurrency slots are available before entering a Running state.\n\n This rule checks if concurrency limits have been set on the tags associated with a\n TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n before being allowed to transition into a running state. If a concurrency limit has\n been reached, the client will be instructed to delay the transition for the duration\n specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting\n before trying again. If the concurrency limit set on a tag is 0, the transition will\n be aborted to prevent deadlocks.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n self._applied_limits = []\n filtered_limits = (\n await concurrency_limits.filter_concurrency_limits_for_orchestration(\n context.session, tags=context.run.tags\n )\n )\n run_limits = {limit.tag: limit for limit in filtered_limits}\n for tag, cl in run_limits.items():\n limit = cl.concurrency_limit\n if limit == 0:\n # limits of 0 will deadlock, and the transition needs to abort\n for stale_tag in self._applied_limits:\n stale_limit = run_limits.get(stale_tag, None)\n active_slots = set(stale_limit.active_slots)\n active_slots.discard(str(context.run.id))\n stale_limit.active_slots = list(active_slots)\n\n await self.abort_transition(\n reason=(\n f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n \" if the task tries to run again.\"\n ),\n )\n elif len(cl.active_slots) >= limit:\n # if the limit has already been reached, delay the transition\n for stale_tag in self._applied_limits:\n stale_limit = run_limits.get(stale_tag, None)\n active_slots = set(stale_limit.active_slots)\n active_slots.discard(str(context.run.id))\n stale_limit.active_slots = list(active_slots)\n\n await self.delay_transition(\n PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS.value(),\n f\"Concurrency limit for the {tag} tag has been reached\",\n )\n else:\n # log the TaskRun ID to active_slots\n self._applied_limits.append(tag)\n active_slots = set(cl.active_slots)\n active_slots.add(str(context.run.id))\n cl.active_slots = list(active_slots)\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n for tag in self._applied_limits:\n cl = await concurrency_limits.read_concurrency_limit_by_tag(\n context.session, tag\n )\n active_slots = set(cl.active_slots)\n active_slots.discard(str(context.run.id))\n cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots
","text":" Bases: BaseUniversalTransform
Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.
Source code inprefect/server/orchestration/core_policy.py
class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n \"\"\"\n Releases any concurrency slots held by a run upon exiting a Running or\n Cancelling state.\n \"\"\"\n\n async def after_transition(\n self,\n context: OrchestrationContext,\n ):\n if self.nullified_transition():\n return\n\n if context.validated_state and context.validated_state.type not in [\n states.StateType.RUNNING,\n states.StateType.CANCELLING,\n ]:\n filtered_limits = (\n await concurrency_limits.filter_concurrency_limits_for_orchestration(\n context.session, tags=context.run.tags\n )\n )\n run_limits = {limit.tag: limit for limit in filtered_limits}\n for tag, cl in run_limits.items():\n active_slots = set(cl.active_slots)\n active_slots.discard(str(context.run.id))\n cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AddUnknownResult","title":"AddUnknownResult
","text":" Bases: BaseOrchestrationRule
Assign an \"unknown\" result to runs that are forced to complete from a failed or crashed state, if the previous state used a persisted result.
When we retry a flow run, we retry any task runs that were in a failed or crashed state, but we also retry completed task runs that didn't use a persisted result. This means that without a sentinel value for unknown results, a task run forced into Completed state will always get rerun if the flow run retries because the task run lacks a persisted result. The \"unknown\" sentinel ensures that when we see a completed task run with an unknown result, we know that it was forced to complete and we shouldn't rerun it.
Flow runs forced into a Completed state have a similar problem: without a sentinel value, attempting to refer to the flow run's result will raise an exception because the flow run has no result. The sentinel ensures that we can distinguish between a flow run that has no result and a flow run that has an unknown result.
Source code inprefect/server/orchestration/core_policy.py
class AddUnknownResult(BaseOrchestrationRule):\n \"\"\"\n Assign an \"unknown\" result to runs that are forced to complete from a\n failed or crashed state, if the previous state used a persisted result.\n\n When we retry a flow run, we retry any task runs that were in a failed or\n crashed state, but we also retry completed task runs that didn't use a\n persisted result. This means that without a sentinel value for unknown\n results, a task run forced into Completed state will always get rerun if the\n flow run retries because the task run lacks a persisted result. The\n \"unknown\" sentinel ensures that when we see a completed task run with an\n unknown result, we know that it was forced to complete and we shouldn't\n rerun it.\n\n Flow runs forced into a Completed state have a similar problem: without a\n sentinel value, attempting to refer to the flow run's result will raise an\n exception because the flow run has no result. The sentinel ensures that we\n can distinguish between a flow run that has no result and a flow run that\n has an unknown result.\n \"\"\"\n\n FROM_STATES = [StateType.FAILED, StateType.CRASHED]\n TO_STATES = [StateType.COMPLETED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if (\n initial_state\n and initial_state.data\n and initial_state.data.get(\"type\") == \"reference\"\n ):\n unknown_result = await UnknownResult.create()\n self.context.proposed_state.data = unknown_result.dict()\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion
","text":" Bases: BaseOrchestrationRule
Caches completed states with cache keys after they are validated.
Source code inprefect/server/orchestration/core_policy.py
class CacheInsertion(BaseOrchestrationRule):\n \"\"\"\n Caches completed states with cache keys after they are validated.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.COMPLETED]\n\n @inject_db\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n db: PrefectDBInterface,\n ) -> None:\n if not validated_state or not context.session:\n return\n\n cache_key = validated_state.state_details.cache_key\n if cache_key:\n new_cache_item = db.TaskRunStateCache(\n cache_key=cache_key,\n cache_expiration=validated_state.state_details.cache_expiration,\n task_run_state_id=validated_state.id,\n )\n context.session.add(new_cache_item)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval
","text":" Bases: BaseOrchestrationRule
Rejects running states if a completed state has been cached.
This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.
Source code inprefect/server/orchestration/core_policy.py
class CacheRetrieval(BaseOrchestrationRule):\n \"\"\"\n Rejects running states if a completed state has been cached.\n\n This rule rejects transitions into a running state with a cache key if the key\n has already been associated with a completed state in the cache table. The client\n will be instructed to transition into the cached completed state instead.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n @inject_db\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n db: PrefectDBInterface,\n ) -> None:\n cache_key = proposed_state.state_details.cache_key\n if cache_key and not proposed_state.state_details.refresh_cache:\n # Check for cached states matching the cache key\n cached_state_id = (\n select(db.TaskRunStateCache.task_run_state_id)\n .where(\n sa.and_(\n db.TaskRunStateCache.cache_key == cache_key,\n sa.or_(\n db.TaskRunStateCache.cache_expiration.is_(None),\n db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n ),\n ),\n )\n .order_by(db.TaskRunStateCache.created.desc())\n .limit(1)\n ).scalar_subquery()\n query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n cached_state = (await context.session.execute(query)).scalar()\n if cached_state:\n new_state = cached_state.as_state().copy(reset_fields=True)\n new_state.name = \"Cached\"\n await self.reject_transition(\n state=new_state, reason=\"Retrieved state from cache\"\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows
","text":" Bases: BaseOrchestrationRule
Rejects failed states and schedules a retry if the retry limit has not been reached.
This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.
prefect/server/orchestration/core_policy.py
class RetryFailedFlows(BaseOrchestrationRule):\n \"\"\"\n Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n This rule rejects transitions into a failed state if `retries` has been\n set and the run count has not reached the specified limit. The client will be\n instructed to transition into a scheduled state to retry flow execution.\n \"\"\"\n\n FROM_STATES = [StateType.RUNNING]\n TO_STATES = [StateType.FAILED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n run_settings = context.run_settings\n run_count = context.run.run_count\n\n if run_settings.retries is None or run_count > run_settings.retries:\n return # Retry count exceeded, allow transition to failed\n\n scheduled_start_time = pendulum.now(\"UTC\").add(\n seconds=run_settings.retry_delay or 0\n )\n\n # support old-style flow run retries for older clients\n # older flow retries require us to loop over failed tasks to update their state\n # this is not required after API version 0.8.3\n api_version = context.parameters.get(\"api-version\", None)\n if api_version and api_version < Version(\"0.8.3\"):\n failed_task_runs = await models.task_runs.read_task_runs(\n context.session,\n flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n task_run_filter=filters.TaskRunFilter(\n state={\"type\": {\"any_\": [\"FAILED\"]}}\n ),\n )\n for run in failed_task_runs:\n await models.task_runs.set_task_run_state(\n context.session,\n run.id,\n state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n force=True,\n )\n # Reset the run count so that the task run retries still work correctly\n run.run_count = 0\n\n # Reset pause metadata on retry\n # Pauses as a concept only exist after API version 0.8.4\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n updated_policy[\"pause_keys\"] = set()\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n # Generate a new state for the flow\n retry_state = states.AwaitingRetry(\n scheduled_time=scheduled_start_time,\n message=proposed_state.message,\n data=proposed_state.data,\n )\n await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks
","text":" Bases: BaseOrchestrationRule
Rejects failed states and schedules a retry if the retry limit has not been reached.
This rule rejects transitions into a failed state if retries has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution.
prefect/server/orchestration/core_policy.py
class RetryFailedTasks(BaseOrchestrationRule):\n \"\"\"\n Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n This rule rejects transitions into a failed state if `retries` has been\n set, the run count has not reached the specified limit, and the client\n asserts it is a retriable task run. The client will be instructed to\n transition into a scheduled state to retry task execution.\n \"\"\"\n\n FROM_STATES = [StateType.RUNNING]\n TO_STATES = [StateType.FAILED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n run_settings = context.run_settings\n run_count = context.run.run_count\n delay = run_settings.retry_delay\n\n if isinstance(delay, list):\n base_delay = delay[min(run_count - 1, len(delay) - 1)]\n else:\n base_delay = run_settings.retry_delay or 0\n\n # guard against negative relative jitter inputs\n if run_settings.retry_jitter_factor:\n delay = clamped_poisson_interval(\n base_delay, clamping_factor=run_settings.retry_jitter_factor\n )\n else:\n delay = base_delay\n\n # set by user to conditionally retry a task using @task(retry_condition_fn=...)\n if getattr(proposed_state.state_details, \"retriable\", True) is False:\n return\n\n if run_settings.retries is not None and run_count <= run_settings.retries:\n retry_state = states.AwaitingRetry(\n scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n message=proposed_state.message,\n data=proposed_state.data,\n )\n await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
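The delay selection at the top of before_transition can be illustrated in isolation: with a list-valued retry_delay, the run count indexes into the list and clamps to the final entry (a pure-Python restatement of the logic above):

def select_base_delay(retry_delay, run_count: int):
    # mirrors: delay[min(run_count - 1, len(delay) - 1)]
    if isinstance(retry_delay, list):
        return retry_delay[min(run_count - 1, len(retry_delay) - 1)]
    return retry_delay or 0

assert select_base_delay([1, 10, 60], run_count=1) == 1
assert select_base_delay([1, 10, 60], run_count=5) == 60  # clamped to the last entry
assert select_base_delay(30, run_count=2) == 30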
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnqueueScheduledTasks","title":"EnqueueScheduledTasks
","text":" Bases: BaseOrchestrationRule
Enqueues autonomous task runs when they are scheduled.
Source code inprefect/server/orchestration/core_policy.py
class EnqueueScheduledTasks(BaseOrchestrationRule):\n \"\"\"\n Enqueues autonomous task runs when they are scheduled\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.SCHEDULED]\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n # Only if task scheduling is enabled\n return\n\n if not validated_state:\n # Only if the transition was valid\n return\n\n if context.run.flow_run_id:\n # Only for autonomous tasks\n return\n\n task_run: TaskRun = TaskRun.from_orm(context.run)\n queue = TaskQueue.for_key(task_run.task_key)\n\n if validated_state.name == \"AwaitingRetry\":\n await queue.retry(task_run)\n else:\n await queue.enqueue(task_run)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns
","text":" Bases: BaseOrchestrationRule
Renames states when a run has executed more than once.
In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.
Source code inprefect/server/orchestration/core_policy.py
class RenameReruns(BaseOrchestrationRule):\n \"\"\"\n Name the states if they have run more than once.\n\n In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n the proposed state will be renamed to \"Retrying\" instead.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n run_count = context.run.run_count\n if run_count > 0:\n if initial_state.name == \"AwaitingRetry\":\n await self.rename_state(\"Retrying\")\n else:\n await self.rename_state(\"Rerunning\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime
","text":" Bases: BaseOrchestrationRule
Ensures scheduled time is copied from scheduled states to pending states.
If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.
Source code inprefect/server/orchestration/core_policy.py
class CopyScheduledTime(BaseOrchestrationRule):\n \"\"\"\n Ensures scheduled time is copied from scheduled states to pending states.\n\n If a new scheduled time has been proposed on the pending state, the scheduled time\n on the scheduled state will be ignored.\n \"\"\"\n\n FROM_STATES = [StateType.SCHEDULED]\n TO_STATES = [StateType.PENDING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if not proposed_state.state_details.scheduled_time:\n proposed_state.state_details.scheduled_time = (\n initial_state.state_details.scheduled_time\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime
","text":" Bases: BaseOrchestrationRule
Prevents transitions to running states from happening too early.
This rule enforces that all scheduled states will only start according to the machine clock used by the Prefect REST API instance. This rule will identify transitions from scheduled states that are too early and nullify them. Instead, no state will be written to the database and the client will be sent an instruction to wait for delay_seconds before attempting the transition again.
prefect/server/orchestration/core_policy.py
class WaitForScheduledTime(BaseOrchestrationRule):\n \"\"\"\n Prevents transitions to running states from happening to early.\n\n This rule enforces that all scheduled states will only start with the machine clock\n used by the Prefect REST API instance. This rule will identify transitions from scheduled\n states that are too early and nullify them. Instead, no state will be written to the\n database and the client will be sent an instruction to wait for `delay_seconds`\n before attempting the transition again.\n \"\"\"\n\n FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n scheduled_time = initial_state.state_details.scheduled_time\n if not scheduled_time:\n return\n\n # At this moment, we round delay to the nearest second as the API schema\n # specifies an integer return value.\n delay = scheduled_time - pendulum.now(\"UTC\")\n delay_seconds = delay.in_seconds()\n delay_seconds += round(delay.microseconds / 1e6)\n if delay_seconds > 0:\n await self.delay_transition(\n delay_seconds, reason=\"Scheduled time is in the future\"\n )\n
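The rounding at the end of before_transition can be restated on its own: the remaining delay is reported as whole seconds, with the microsecond remainder rounded, because the API schema specifies an integer return value (a small sketch using pendulum, as the source does):

import pendulum

scheduled_time = pendulum.now("UTC").add(seconds=90, microseconds=600_000)
delay = scheduled_time - pendulum.now("UTC")
delay_seconds = delay.in_seconds() + round(delay.microseconds / 1e6)
print(delay_seconds)  # approximately 91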
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows
","text":" Bases: BaseOrchestrationRule
Governs runs attempting to enter a Paused/Suspended state.
Source code inprefect/server/orchestration/core_policy.py
class HandlePausingFlows(BaseOrchestrationRule):\n \"\"\"\n Governs runs attempting to enter a Paused/Suspended state\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.PAUSED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n if initial_state is None:\n await self.abort_transition(f\"Cannot {verb} flows with no state.\")\n return\n\n if not initial_state.is_running():\n await self.reject_transition(\n state=None,\n reason=f\"Cannot {verb} flows that are not currently running.\",\n )\n return\n\n self.key = proposed_state.state_details.pause_key\n if self.key is None:\n # if no pause key is provided, default to a UUID\n self.key = str(uuid4())\n\n if self.key in context.run.empirical_policy.pause_keys:\n await self.reject_transition(\n state=None, reason=f\"This {verb} has already fired.\"\n )\n return\n\n if proposed_state.state_details.pause_reschedule:\n if context.run.parent_task_run_id:\n await self.abort_transition(\n reason=f\"Cannot {verb} subflows.\",\n )\n return\n\n if context.run.deployment_id is None:\n await self.abort_transition(\n reason=f\"Cannot {verb} flows without a deployment.\",\n )\n return\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"pause_keys\"].add(self.key)\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows
","text":" Bases: BaseOrchestrationRule
Governs runs attempting to leave a Paused state.
Source code inprefect/server/orchestration/core_policy.py
class HandleResumingPausedFlows(BaseOrchestrationRule):\n \"\"\"\n Governs runs attempting to leave a Paused state\n \"\"\"\n\n FROM_STATES = [StateType.PAUSED]\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if not (\n proposed_state.is_running()\n or proposed_state.is_scheduled()\n or proposed_state.is_final()\n ):\n await self.reject_transition(\n state=None,\n reason=(\n f\"This run cannot transition to the {proposed_state.type} state\"\n f\" from the {initial_state.type} state.\"\n ),\n )\n return\n\n verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n if initial_state.state_details.pause_reschedule:\n if not context.run.deployment_id:\n await self.reject_transition(\n state=None,\n reason=(\n f\"Cannot reschedule a {proposed_state.name.lower()} flow run\"\n \" without a deployment.\"\n ),\n )\n return\n pause_timeout = initial_state.state_details.pause_timeout\n if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n pause_timeout_failure = states.Failed(\n message=(\n f\"The flow was {proposed_state.name.lower()} and never resumed.\"\n ),\n )\n await self.reject_transition(\n state=pause_timeout_failure,\n reason=f\"The flow run {verb} has timed out and can no longer resume.\",\n )\n return\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = True\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks
","text":" Bases: BaseOrchestrationRule
Tracks the flow run attempt a task run state is associated with.
Source code inprefect/server/orchestration/core_policy.py
class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n \"\"\"\n Tracks the flow run attempt a task run state is associated with.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if context.run.flow_run_id is not None:\n self.flow_run = await context.flow_run()\n if self.flow_run:\n context.run.flow_run_run_count = self.flow_run.run_count\n else:\n raise ObjectNotFoundError(\n (\n \"Unable to read flow run associated with task run:\"\n f\" {context.run.id}, this flow run might have been deleted\"\n ),\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions
","text":" Bases: BaseOrchestrationRule
We do not allow tasks to leave terminal states if:
- The task is completed and has a persisted result
- The task is going to CANCELLING / PAUSED / CRASHED
We reset the run count when a task leaves a terminal state for a non-terminal state, which resets task run retries; this is particularly relevant for flow run retries.
Source code inprefect/server/orchestration/core_policy.py
class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n \"\"\"\n We do not allow tasks to leave terminal states if:\n - The task is completed and has a persisted result\n - The task is going to CANCELLING / PAUSED / CRASHED\n\n We reset the run count when a task leaves a terminal state for a non-terminal state\n which resets task run retries; this is particularly relevant for flow run retries.\n \"\"\"\n\n FROM_STATES = TERMINAL_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n self.original_run_count = context.run.run_count\n\n # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n if proposed_state.type in {\n StateType.CANCELLING,\n StateType.PAUSED,\n StateType.CRASHED,\n }:\n await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n return\n\n # Only allow departure from a happily completed state if the result is not persisted\n if (\n initial_state.is_completed()\n and initial_state.data\n and initial_state.data.get(\"type\") != \"unpersisted\"\n ):\n await self.reject_transition(None, \"This run is already completed.\")\n return\n\n if not proposed_state.is_final():\n # Reset run count to reset retries\n context.run.run_count = 0\n\n # Change the name of the state to retrying if its a flow run retry\n if proposed_state.is_running() and context.run.flow_run_id is not None:\n self.flow_run = await context.flow_run()\n flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n if flow_retrying:\n await self.rename_state(\"Retrying\")\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ):\n # reset run count\n context.run.run_count = self.original_run_count\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions
","text":" Bases: BaseOrchestrationRule
We do not allow flows to leave terminal states if:
- The flow is completed and has a persisted result
- The flow is going to CANCELLING / PAUSED / CRASHED
- The flow is going to SCHEDULED and has no deployment
We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.
Source code in prefect/server/orchestration/core_policy.py
class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n \"\"\"\n We do not allow flows to leave terminal states if:\n - The flow is completed and has a persisted result\n - The flow is going to CANCELLING / PAUSED / CRASHED\n - The flow is going to scheduled and has no deployment\n\n We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n state. This resets pause behavior during manual flow run retries.\n \"\"\"\n\n FROM_STATES = TERMINAL_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n self.original_flow_policy = context.run.empirical_policy.dict()\n\n # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n if proposed_state.type in {\n StateType.CANCELLING,\n StateType.PAUSED,\n StateType.CRASHED,\n }:\n await self.abort_transition(\n f\"Run is already in terminal state {initial_state.type.value}.\"\n )\n return\n\n # Only allow departure from a happily completed state if the result is not\n # persisted and the a rerun is being proposed\n if (\n initial_state.is_completed()\n and not proposed_state.is_final()\n and initial_state.data\n and initial_state.data.get(\"type\") != \"unpersisted\"\n ):\n await self.reject_transition(None, \"Run is already COMPLETED.\")\n return\n\n # Do not allows runs to be rescheduled without a deployment\n if proposed_state.is_scheduled() and not context.run.deployment_id:\n await self.abort_transition(\n \"Cannot reschedule a run without an associated deployment.\"\n )\n return\n\n if not proposed_state.is_final():\n # Reset pause metadata when leaving a terminal state\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n updated_policy[\"pause_keys\"] = set()\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ):\n context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions
","text":" Bases: BaseOrchestrationRule
Prevents transitions to PENDING.
This rule is only used for flow runs.
This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.
Similarly, if the execution environment starts quickly the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state a worker should not attempt to submit it again unless it has moved into a terminal state.
CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.
Source code in prefect/server/orchestration/core_policy.py
class PreventPendingTransitions(BaseOrchestrationRule):\n \"\"\"\n Prevents transitions to PENDING.\n\n This rule is only used for flow runs.\n\n This is intended to prevent race conditions during duplicate submissions of runs.\n Before a run is submitted to its execution environment, it should be placed in a\n PENDING state. If two workers attempt to submit the same run, one of them should\n encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n Similarly, if the execution environment starts quickly the run may be in a RUNNING\n state when the second worker attempts the PENDING transition. We deny these state\n changes as well to prevent duplicate submission. If a run has transitioned to a\n RUNNING state a worker should not attempt to submit it again unless it has moved\n into a terminal state.\n\n CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n For re-runs of deployed runs, they should transition to SCHEDULED first.\n For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n \"\"\"\n\n FROM_STATES = [\n StateType.PENDING,\n StateType.CANCELLING,\n StateType.RUNNING,\n StateType.CANCELLED,\n ]\n TO_STATES = [StateType.PENDING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n await self.abort_transition(\n reason=(\n f\"This run is in a {initial_state.type.name} state and cannot\"\n \" transition to a PENDING state.\"\n )\n )\n
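To make the race concrete, here is a minimal sketch of the submission handshake this rule protects, written against the Prefect 2 client. The exact client surface (get_client, set_flow_run_state returning an OrchestrationResult with a status field) should be treated as an assumption of the sketch, not a guarantee of this page.

from prefect.client.orchestration import get_client
from prefect.server.schemas.responses import SetStateStatus
from prefect.states import Pending

async def submit_once(flow_run_id) -> bool:
    """Propose PENDING before submitting; stand down if another worker won."""
    async with get_client() as client:
        result = await client.set_flow_run_state(flow_run_id, state=Pending())
    if result.status == SetStateStatus.ABORT:
        # PreventPendingTransitions fired: the run is already PENDING,
        # RUNNING, CANCELLING, or CANCELLED, so do not submit a duplicate.
        return False
    # ...safe to hand the run off to its execution environment here...
    return True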
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows
","text":" Bases: BaseOrchestrationRule
Prevents running tasks from stopped flows.
A running state implies execution, and the converse also holds. This rule ensures that a flow's tasks cannot be run unless the flow itself is running.
Source code in prefect/server/orchestration/core_policy.py
class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n \"\"\"\n Prevents running tasks from stopped flows.\n\n A running state implies execution, but also the converse. This rule ensures that a\n flow's tasks cannot be run unless the flow is also running.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n flow_run = await context.flow_run()\n if flow_run is not None:\n if flow_run.state is None:\n await self.abort_transition(\n reason=\"The enclosing flow must be running to begin task execution.\"\n )\n elif flow_run.state.type == StateType.PAUSED:\n # Use the flow run's Paused state details to preserve data like\n # timeouts.\n paused_state = states.Paused(\n name=\"NotReady\",\n pause_expiration_time=flow_run.state.state_details.pause_timeout,\n reschedule=flow_run.state.state_details.pause_reschedule,\n )\n await self.reject_transition(\n state=paused_state,\n reason=(\n \"The flow is paused, new tasks can execute after resuming flow\"\n f\" run: {flow_run.id}.\"\n ),\n )\n elif not flow_run.state.type == StateType.RUNNING:\n # task runners should abort task run execution\n await self.abort_transition(\n reason=(\n \"The enclosing flow must be running to begin task execution.\"\n ),\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition
","text":" Bases: BaseOrchestrationRule
Rejects transitions from Cancelling to any terminal state except for Cancelled.
Source code in prefect/server/orchestration/core_policy.py
class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n \"\"\"\n Rejects transitions from Cancelling to any terminal state except for Cancelled.\n \"\"\"\n\n FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n await self.reject_transition(\n state=None,\n reason=(\n \"Cannot transition flows that are cancelling to a state other \"\n \"than Cancelled.\"\n ),\n )\n return\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingScheduledFlowRuns","title":"BypassCancellingScheduledFlowRuns
","text":" Bases: BaseOrchestrationRule
Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID.
The Cancelling state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled. Runs that are AwaitingRetry are in a Scheduled state that may have associated infrastructure.
Source code in prefect/server/orchestration/core_policy.py
class BypassCancellingScheduledFlowRuns(BaseOrchestrationRule):\n \"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n if the flow run has no associated infrastructure process ID.\n\n The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n a `Scheduled` state that may have associated infrastructure.\n \"\"\"\n\n FROM_STATES = {StateType.SCHEDULED}\n TO_STATES = {StateType.CANCELLING}\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n if not context.run.infrastructure_pid:\n await self.reject_transition(\n state=states.Cancelled(),\n reason=\"Scheduled flow run has no infrastructure to terminate.\",\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventDuplicateTransitions","title":"PreventDuplicateTransitions
","text":" Bases: BaseOrchestrationRule
Prevent duplicate transitions from being made right after one another.
This rule allows clients to set an optional transition_id on a state. If the run's next transition has the same transition_id, the transition will be rejected and the existing state will be returned.
This allows clients to make state transition requests without worrying about the following case:
- A client makes a state transition request
- The server accepts and commits the transition
- The client is unable to receive the response and retries the request
Source code in prefect/server/orchestration/core_policy.py
class PreventDuplicateTransitions(BaseOrchestrationRule):\n \"\"\"\n Prevent duplicate transitions from being made right after one another.\n\n This rule allows for clients to set an optional transition_id on a state. If the\n run's next transition has the same transition_id, the transition will be\n rejected and the existing state will be returned.\n\n This allows for clients to make state transition requests without worrying about\n the following case:\n - A client making a state transition request\n - The server accepts transition and commits the transition\n - The client is unable to receive the response and retries the request\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if (\n initial_state is None\n or proposed_state is None\n or initial_state.state_details is None\n or proposed_state.state_details is None\n ):\n return\n\n initial_transition_id = getattr(\n initial_state.state_details, \"transition_id\", None\n )\n proposed_transition_id = getattr(\n proposed_state.state_details, \"transition_id\", None\n )\n if (\n initial_transition_id is not None\n and proposed_transition_id is not None\n and initial_transition_id == proposed_transition_id\n ):\n await self.reject_transition(\n # state=None will return the initial (current) state\n state=None,\n reason=\"This run has already made this state transition.\",\n )\n
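A sketch of the idempotent-retry pattern this enables follows. It assumes the client exposes set_flow_run_state and that the proposed state's state_details accepts a transition_id, as the rule's getattr probe above suggests; both are assumptions of the sketch.

from uuid import uuid4

from prefect.client.orchestration import get_client
from prefect.states import Completed

async def complete_idempotently(flow_run_id):
    state = Completed()
    # Stamp a stable id on the proposed state. If a dropped response forces
    # the client to retry, the replayed transition_id is detected by
    # PreventDuplicateTransitions and the already-committed state is returned
    # instead of a second transition being applied.
    state.state_details.transition_id = uuid4()
    async with get_client() as client:
        return await client.set_flow_run_state(flow_run_id, state=state)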
"},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy
","text":"Bookkeeping logic that fires on every state transition.
For clarity, GlobalFlowPolicy and GlobalTaskPolicy contain all transition logic implemented using BaseUniversalTransform. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.
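For orientation, a minimal bookkeeping transform in the style of this module might look like the following sketch; the recorded field is hypothetical and only illustrates the before_transition / nullified_transition pattern used by the real transforms below.

from prefect.server.orchestration.rules import (
    BaseUniversalTransform,
    OrchestrationContext,
)

class RecordProposedType(BaseUniversalTransform):
    """Illustrative transform: note the proposed state type on every transition."""

    async def before_transition(self, context: OrchestrationContext) -> None:
        if self.nullified_transition():
            # an orchestration rule nullified the transition; record nothing
            return
        # `last_proposed_type` is a hypothetical attribute used purely for
        # illustration; the real transforms below write fields such as
        # state_type, start_time, or run_count.
        context.run.last_proposed_type = context.proposed_state.type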
GlobalFlowPolicy
","text":" Bases: BaseOrchestrationPolicy
Global transforms that run against flow-run-state transitions in priority order.
These transforms are intended to run immediately before and after a state transition is validated.
Source code in prefect/server/orchestration/global_policy.py
class GlobalFlowPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Global transforms that run against flow-run-state transitions in priority order.\n\n These transforms are intended to run immediately before and after a state transition\n is validated.\n \"\"\"\n\n def priority():\n return COMMON_GLOBAL_TRANSFORMS() + [\n UpdateSubflowParentTask,\n UpdateSubflowStateDetails,\n IncrementFlowRunCount,\n RemoveResumingIndicator,\n ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Global transforms that run against task-run-state transitions in priority order.
These transforms are intended to run immediately before and after a state transition is validated.
Source code in prefect/server/orchestration/global_policy.py
class GlobalTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Global transforms that run against task-run-state transitions in priority order.\n\n These transforms are intended to run immediately before and after a state transition\n is validated.\n \"\"\"\n\n def priority():\n return COMMON_GLOBAL_TRANSFORMS() + [\n IncrementTaskRunCount,\n ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType
","text":" Bases: BaseUniversalTransform
Updates the state type of a run on a state transition.
Source code in prefect/server/orchestration/global_policy.py
class SetRunStateType(BaseUniversalTransform):\n \"\"\"\n Updates the state type of a run on a state transition.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's type\n context.run.state_type = context.proposed_state.type\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName
","text":" Bases: BaseUniversalTransform
Updates the state name of a run on a state transition.
Source code in prefect/server/orchestration/global_policy.py
class SetRunStateName(BaseUniversalTransform):\n \"\"\"\n Updates the state name of a run on a state transition.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's name\n context.run.state_name = context.proposed_state.name\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime
","text":" Bases: BaseUniversalTransform
Records the time a run enters a running state for the first time.
Source code in prefect/server/orchestration/global_policy.py
class SetStartTime(BaseUniversalTransform):\n \"\"\"\n Records the time a run enters a running state for the first time.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state and no start time is set...\n if context.proposed_state.is_running() and context.run.start_time is None:\n # set the start time\n context.run.start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp
","text":" Bases: BaseUniversalTransform
Records the time a run changes states.
Source code in prefect/server/orchestration/global_policy.py
class SetRunStateTimestamp(BaseUniversalTransform):\n \"\"\"\n Records the time a run changes states.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's timestamp\n context.run.state_timestamp = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime
","text":" Bases: BaseUniversalTransform
Records the time a run enters a terminal state.
With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When leaving a terminal state, the end time will be unset.
Source code in prefect/server/orchestration/global_policy.py
class SetEndTime(BaseUniversalTransform):\n \"\"\"\n Records the time a run enters a terminal state.\n\n With normal client usage, a run will not transition out of a terminal state.\n However, it's possible to force these transitions manually via the API. While\n leaving a terminal state, the end time will be unset.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if exiting a final state for a non-final state...\n if (\n context.initial_state\n and context.initial_state.is_final()\n and not context.proposed_state.is_final()\n ):\n # clear the end time\n context.run.end_time = None\n\n # if entering a final state...\n if context.proposed_state.is_final():\n # if the run has a start time and no end time, give it one\n if context.run.start_time and not context.run.end_time:\n context.run.end_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime
","text":" Bases: BaseUniversalTransform
Records the amount of time a run spends in the running state.
Source code in prefect/server/orchestration/global_policy.py
class IncrementRunTime(BaseUniversalTransform):\n \"\"\"\n Records the amount of time a run spends in the running state.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if exiting a running state...\n if context.initial_state and context.initial_state.is_running():\n # increment the run time by the time spent in the previous state\n context.run.total_run_time += (\n context.proposed_state.timestamp - context.initial_state.timestamp\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount
","text":" Bases: BaseUniversalTransform
Records the number of times a run enters a running state. For use with retries.
Source code in prefect/server/orchestration/global_policy.py
class IncrementFlowRunCount(BaseUniversalTransform):\n \"\"\"\n Records the number of times a run enters a running state. For use with retries.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state...\n if context.proposed_state.is_running():\n # do not increment the run count if resuming a paused flow\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n if context.run.empirical_policy.resuming:\n return\n\n # increment the run count\n context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator
","text":" Bases: BaseUniversalTransform
Removes the indicator on a flow run that marks it as resuming.
Source code in prefect/server/orchestration/global_policy.py
class RemoveResumingIndicator(BaseUniversalTransform):\n \"\"\"\n Removes the indicator on a flow run that marks it as resuming.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n proposed_state = context.proposed_state\n\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n if proposed_state.is_running() or proposed_state.is_final():\n if context.run.empirical_policy.resuming:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount
","text":" Bases: BaseUniversalTransform
Records the number of times a run enters a running state. For use with retries.
Source code in prefect/server/orchestration/global_policy.py
class IncrementTaskRunCount(BaseUniversalTransform):\n \"\"\"\n Records the number of times a run enters a running state. For use with retries.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state...\n if context.proposed_state.is_running():\n # increment the run count\n context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime
","text":" Bases: BaseUniversalTransform
Estimates the time a state is expected to start running if not set.
For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.
Source code in prefect/server/orchestration/global_policy.py
class SetExpectedStartTime(BaseUniversalTransform):\n \"\"\"\n Estimates the time a state is expected to start running if not set.\n\n For scheduled states, this estimate is simply the scheduled time. For other states,\n this is set to the time the proposed state was created by Prefect.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # set expected start time if this is the first state\n if not context.run.expected_start_time:\n if context.proposed_state.is_scheduled():\n context.run.expected_start_time = (\n context.proposed_state.state_details.scheduled_time\n )\n else:\n context.run.expected_start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime
","text":" Bases: BaseUniversalTransform
Records the scheduled time on a run.
When a run enters a scheduled state, run.next_scheduled_start_time is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time is unset.
Source code in prefect/server/orchestration/global_policy.py
class SetNextScheduledStartTime(BaseUniversalTransform):\n \"\"\"\n Records the scheduled time on a run.\n\n When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n the state's scheduled time. When leaving a scheduled state,\n `run.next_scheduled_start_time` is unset.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # remove the next scheduled start time if exiting a scheduled state\n if context.initial_state and context.initial_state.is_scheduled():\n context.run.next_scheduled_start_time = None\n\n # set next scheduled start time if entering a scheduled state\n if context.proposed_state.is_scheduled():\n context.run.next_scheduled_start_time = (\n context.proposed_state.state_details.scheduled_time\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask
","text":" Bases: BaseUniversalTransform
Whenever a subflow changes state, it must update its parent task run's state.
Source code in prefect/server/orchestration/global_policy.py
class UpdateSubflowParentTask(BaseUniversalTransform):\n \"\"\"\n Whenever a subflow changes state, it must update its parent task run's state.\n \"\"\"\n\n async def after_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # only applies to flow runs with a parent task run id\n if context.run.parent_task_run_id is not None:\n # avoid mutation of the flow run state\n subflow_parent_task_state = context.validated_state.copy(\n reset_fields=True,\n include={\n \"type\",\n \"timestamp\",\n \"name\",\n \"message\",\n \"state_details\",\n \"data\",\n },\n )\n\n # set the task's \"child flow run id\" to be the subflow run id\n subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n await models.task_runs.set_task_run_state(\n session=context.session,\n task_run_id=context.run.parent_task_run_id,\n state=subflow_parent_task_state,\n force=True,\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails
","text":" Bases: BaseUniversalTransform
Update a child subflow state's references to a corresponding tracking task run id in the parent flow run.
Source code in prefect/server/orchestration/global_policy.py
class UpdateSubflowStateDetails(BaseUniversalTransform):\n \"\"\"\n Update a child subflow state's references to a corresponding tracking task run id\n in the parent flow run\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # only applies to flow runs with a parent task run id\n if context.run.parent_task_run_id is not None:\n context.proposed_state.state_details.task_run_id = (\n context.run.parent_task_run_id\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails
","text":" Bases: BaseUniversalTransform
Update a state's references to a corresponding flow- or task-run.
Source code in prefect/server/orchestration/global_policy.py
class UpdateStateDetails(BaseUniversalTransform):\n \"\"\"\n Update a state's references to a corresponding flow- or task- run.\n \"\"\"\n\n async def before_transition(\n self,\n context: OrchestrationContext,\n ) -> None:\n if self.nullified_transition():\n return\n\n if isinstance(context, FlowOrchestrationContext):\n flow_run = await context.flow_run()\n context.proposed_state.state_details.flow_run_id = flow_run.id\n\n elif isinstance(context, TaskOrchestrationContext):\n task_run = await context.task_run()\n context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n context.proposed_state.state_details.task_run_id = task_run.id\n
"},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies
","text":"Policies are collections of orchestration rules and transforms.
Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process.
While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.
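As a sketch of what organizing rules into a policy looks like (assuming the rule classes documented in core_policy above), a custom policy just returns its rules from priority(), with earlier entries firing first:

from prefect.server.orchestration.core_policy import (
    HandleFlowTerminalStateTransitions,
    PreventPendingTransitions,
)
from prefect.server.orchestration.policies import BaseOrchestrationPolicy

class MinimalFlowPolicy(BaseOrchestrationPolicy):
    """Illustrative policy: terminal-state guards run before the PENDING guard."""

    @staticmethod
    def priority():
        return [
            HandleFlowTerminalStateTransitions,
            PreventPendingTransitions,
        ]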
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy
","text":" Bases: ABC
An abstract base class used to organize orchestration rules in priority order.
Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.
Source code in prefect/server/orchestration/policies.py
class BaseOrchestrationPolicy(ABC):\n \"\"\"\n An abstract base class used to organize orchestration rules in priority order.\n\n Different collections of orchestration rules might be used to govern various kinds\n of transitions. For example, flow-run states and task-run states might require\n different orchestration logic.\n \"\"\"\n\n @staticmethod\n @abstractmethod\n def priority():\n \"\"\"\n A list of orchestration rules in priority order.\n \"\"\"\n\n return []\n\n @classmethod\n def compile_transition_rules(cls, from_state=None, to_state=None):\n \"\"\"\n Returns rules in policy that are valid for the specified state transition.\n \"\"\"\n\n transition_rules = []\n for rule in cls.priority():\n if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n transition_rules.append(rule)\n return transition_rules\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority
abstractmethod
staticmethod
","text":"A list of orchestration rules in priority order.
Source code in prefect/server/orchestration/policies.py
@staticmethod\n@abstractmethod\ndef priority():\n \"\"\"\n A list of orchestration rules in priority order.\n \"\"\"\n\n return []\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules
classmethod
","text":"Returns rules in policy that are valid for the specified state transition.
Source code in prefect/server/orchestration/policies.py
@classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n \"\"\"\n Returns rules in policy that are valid for the specified state transition.\n \"\"\"\n\n transition_rules = []\n for rule in cls.priority():\n if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n transition_rules.append(rule)\n return transition_rules\n
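For example, asking the MinimalFlowPolicy sketched above which rules could govern a RUNNING -> PENDING flow run transition returns only the rules whose FROM_STATES and TO_STATES cover that pair:

from prefect.server.schemas.states import StateType

rules = MinimalFlowPolicy.compile_transition_rules(
    from_state=StateType.RUNNING,
    to_state=StateType.PENDING,
)
# HandleFlowTerminalStateTransitions is filtered out (RUNNING is not a
# terminal state), leaving only PreventPendingTransitions.
assert rules == [PreventPendingTransitions]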
"},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules
","text":"Prefect's flow and task-run orchestration machinery.
This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points that a Prefect flow or task can observe executing user code and intervene, if necessary. A detailed description of states can be found in our concept documentation.
Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.
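A minimal sketch of such a discrete unit of logic appears below; the weekend policy itself is invented for illustration, but the FROM_STATES / TO_STATES declaration and the reject_transition call mirror the rules documented in core_policy above.

from typing import Optional

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states
from prefect.server.schemas.states import StateType

class RejectWeekendRuns(BaseOrchestrationRule):
    """Illustrative rule: refuse to let any run enter RUNNING on a weekend."""

    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = [StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # state timestamps are datetimes; Saturday and Sunday are weekday() 5 and 6
        if proposed_state.timestamp.weekday() >= 5:
            await self.reject_transition(
                state=None, reason="No new work on weekends."
            )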
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext
","text":" Bases: PrefectBaseModel
A container for a state transition, governed by orchestration rules.
Note: An OrchestrationContext should not be instantiated directly; instead use the flow- or task-specific subclasses, FlowOrchestrationContext and TaskOrchestrationContext.
When a flow or task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.
OrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
- session (Optional[Union[Session, AsyncSession]]): a SQLAlchemy database session
- initial_state (Optional[State]): the initial state of a run
- proposed_state (Optional[State]): the proposed state a run is transitioning into
- validated_state (Optional[State]): a proposed state that has been committed to the database
- rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (SetStateStatus): a SetStateStatus object used to build the API response
- response_details (StateResponseDetails): a StateResponseDetails object used to build the API response
Parameters:
- session: a SQLAlchemy database session (required)
- initial_state: the initial state of a run (required)
- proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class OrchestrationContext(PrefectBaseModel):\n    \"\"\"\n    A container for a state transition, governed by orchestration rules.\n\n    Note:\n        An `OrchestrationContext` should not be instantiated directly, instead\n        use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n        `TaskOrchestrationContext`.\n\n    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `OrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n    initial_state: Optional[states.State] = ...\n    proposed_state: Optional[states.State] = ...\n    validated_state: Optional[states.State]\n    rule_signature: List[str] = Field(default_factory=list)\n    finalization_signature: List[str] = Field(default_factory=list)\n    response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n    response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n    orchestration_error: Optional[Exception] = Field(default=None)\n    parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n    @property\n    def initial_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n        return self.initial_state.type if self.initial_state else None\n\n    @property\n    def proposed_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n        return self.proposed_state.type if self.proposed_state else None\n\n    @property\n    def validated_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n        return self.validated_state.type if self.validated_state else None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Returns:\n            A mutation-safe copy of the `OrchestrationContext`\n        \"\"\"\n\n        safe_copy = self.copy()\n\n        safe_copy.initial_state = (\n            self.initial_state.copy() if self.initial_state else None\n        )\n        safe_copy.proposed_state = (\n            self.proposed_state.copy() if self.proposed_state else None\n        )\n        safe_copy.validated_state = (\n            self.validated_state.copy() if self.validated_state else None\n        )\n        safe_copy.parameters = self.parameters.copy()\n        return safe_copy\n\n    def entry_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks before a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n    def exit_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks after a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.validated_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType]
property
","text":"The state type of self.initial_state
if it exists.
proposed_state_type: Optional[states.StateType]
property
","text":"The state type of self.proposed_state
if it exists.
validated_state_type: Optional[states.StateType]
property
","text":"The state type of self.validated_state
if it exists.
safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.
Returns:
A mutation-safe copy of the OrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Returns:\n A mutation-safe copy of the `OrchestrationContext`\n \"\"\"\n\n safe_copy = self.copy()\n\n safe_copy.initial_state = (\n self.initial_state.copy() if self.initial_state else None\n )\n safe_copy.proposed_state = (\n self.proposed_state.copy() if self.proposed_state else None\n )\n safe_copy.validated_state = (\n self.validated_state.copy() if self.validated_state else None\n )\n safe_copy.parameters = self.parameters.copy()\n return safe_copy\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context
","text":"A convenience method that generates input parameters for orchestration rules.
An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.
prefect/server/orchestration/rules.py
def entry_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks before a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.proposed_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context
","text":"A convenience method that generates input parameters for orchestration rules.
An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.
prefect/server/orchestration/rules.py
def exit_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks after a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.validated_state, safe_context\n
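Taken together, the two helpers give a rule's hooks a consistent calling convention. A hypothetical, heavily simplified driver illustrates the intent (the real orchestration loop manages rules as nested async contexts rather than calling hooks directly):

async def fire_rule(rule, context):
    # entry_context() yields (initial_state, proposed_state, safe context)
    await rule.before_transition(*context.entry_context())
    # commit the (possibly amended) proposed state to the database
    await context.validate_proposed_state()
    # exit_context() yields (initial_state, validated_state, safe context)
    await rule.after_transition(*context.exit_context())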
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext
","text":" Bases: OrchestrationContext
A container for a flow run state transition, governed by orchestration rules.
When a flow run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.
FlowOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
- session: a SQLAlchemy database session
- run (Any): the flow run attempting to change state
- initial_state (Any): the initial state of the run
- proposed_state (Any): the proposed state the run is transitioning into
- validated_state (Any): a proposed state that has been committed to the database
- rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (Any): a SetStateStatus object used to build the API response
- response_details (Any): a StateResponseDetails object used to build the API response
Parameters:
- session: a SQLAlchemy database session (required)
- run: the flow run attempting to change state (required)
- initial_state: the initial state of a run (required)
- proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class FlowOrchestrationContext(OrchestrationContext):\n    \"\"\"\n    A container for a flow run state transition, governed by orchestration rules.\n\n    When a flow- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.FlowRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        \"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `FlowOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None:\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.flow_run_id = self.run.id\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.FlowRunState(\n                flow_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `FlowOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n        \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return None\n\n    async def flow_run(self):\n        return self.run\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict
property
","text":"Run-level settings used to orchestrate the state transition.
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state
async
","text":"Validates a proposed state by committing it to the database.
After the FlowOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.
If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.
Returns:
None
Source code in prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `FlowOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.
Note: self.run is an ORM model, and even when copied is unsafe to mutate.
Returns:
A mutation-safe copy of FlowOrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `FlowOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext
","text":" Bases: OrchestrationContext
A container for a task run state transition, governed by orchestration rules.
When a task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.
TaskOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
- session: a SQLAlchemy database session
- run (Any): the task run attempting to change state
- initial_state (Any): the initial state of the run
- proposed_state (Any): the proposed state the run is transitioning into
- validated_state (Any): a proposed state that has been committed to the database
- rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (Any): a SetStateStatus object used to build the API response
- response_details (Any): a StateResponseDetails object used to build the API response
Parameters:
- session: a SQLAlchemy database session (required)
- run: the task run attempting to change state (required)
- initial_state: the initial state of a run (required)
- proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class TaskOrchestrationContext(OrchestrationContext):\n \"\"\"\n A container for a task run state transition, governed by orchestration rules.\n\n When a task- run attempts to change state, Prefect REST API has an opportunity\n to decide whether this transition can proceed. All the relevant information\n associated with the state transition is stored in an `OrchestrationContext`,\n which is subsequently governed by nested orchestration rules implemented using\n the `BaseOrchestrationRule` ABC.\n\n `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n context of an intended state transition. An initial state can be `None` if a run\n is is attempting to set a state for the first time. The proposed state might be\n `None` if a rule governing the transition determines that no state change\n should occur at all and nothing is written to the database.\n\n Attributes:\n session: a SQLAlchemy database session\n run: the task run attempting to change state\n initial_state: the initial state of the run\n proposed_state: the proposed state the run is transitioning into\n validated_state: a proposed state that has committed to the database\n rule_signature: a record of rules that have fired on entry into a\n managed context, currently only used for debugging purposes\n finalization_signature: a record of rules that have fired on exit from a\n managed context, currently only used for debugging purposes\n response_status: a SetStateStatus object used to build the API response\n response_details:a StateResponseDetails object use to build the API response\n\n Args:\n session: a SQLAlchemy database session\n run: the task run attempting to change state\n initial_state: the initial state of a run\n proposed_state: the proposed state a run is transitioning into\n \"\"\"\n\n # run: db.TaskRun = ...\n run: Any = ...\n\n @inject_db\n async def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `TaskOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. 
The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n\n @inject_db\n async def _validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n if self.proposed_state is None:\n validated_orm_state = self.run.state\n # We cannot access `self.run.state.data` directly for unknown reasons\n state_data = (\n (\n await artifacts.read_artifact(\n self.session, self.run.state.result_artifact_id\n )\n ).data\n if self.run.state.result_artifact_id\n else None\n )\n else:\n state_payload = self.proposed_state.dict(shallow=True)\n state_data = state_payload.pop(\"data\", None)\n\n if state_data is not None:\n state_result_artifact = core.Artifact.from_result(state_data)\n state_result_artifact.task_run_id = self.run.id\n\n if self.run.flow_run_id is not None:\n flow_run = await self.flow_run()\n state_result_artifact.flow_run_id = flow_run.id\n\n await artifacts.create_artifact(self.session, state_result_artifact)\n state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n validated_orm_state = db.TaskRunState(\n task_run_id=self.run.id,\n **state_payload,\n )\n\n self.session.add(validated_orm_state)\n self.run.set_state(validated_orm_state)\n\n await self.session.flush()\n if validated_orm_state:\n self.validated_state = states.State.from_orm_without_result(\n validated_orm_state, with_data=state_data\n )\n else:\n self.validated_state = None\n\n def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `TaskOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n\n @property\n def run_settings(self) -> Dict:\n \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n return self.run.empirical_policy\n\n async def task_run(self):\n return self.run\n\n async def flow_run(self):\n return await flow_runs.read_flow_run(\n session=self.session,\n flow_run_id=self.run.flow_run_id,\n )\n
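Example (an editor's sketch, not part of the Prefect source): the two None cases above can be illustrated with a toy stand-in for the context; every name below is illustrative only.

>>> from dataclasses import dataclass
>>> from typing import Optional
>>>
>>> @dataclass
>>> class ToyContext:
>>>     initial_state: Optional[str]   # None: the run has never had a state set
>>>     proposed_state: Optional[str]  # None: a rule nullified the transition
>>>
>>> # Case 1: a run sets a state for the first time; there is no initial state.
>>> first = ToyContext(initial_state=None, proposed_state="PENDING")
>>>
>>> # Case 2: a rule determined no state change should occur; nothing is written.
>>> nullified = ToyContext(initial_state="RUNNING", proposed_state=None)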
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict
property
","text":"Run-level settings used to orchestrate the state transition.
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state
async
","text":"Validates a proposed state by committing it to the database.
After the TaskOrchestrationContext
is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state
is set to the flushed state. The state on the run is set to the validated state as well.
If the proposed state is None
when this method is called, no state will be written and self.validated_state
will be set to the run's current state.
Returns:
Type DescriptionNone
Source code in prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `TaskOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` is set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
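A hedged sketch of the expected call pattern: it assumes ctx is a TaskOrchestrationContext that the server has already built with a live SQLAlchemy session and a rule-governed proposed state (that setup is elided here).

>>> async def commit_proposed_state(ctx):
>>>     # Flush the governed proposed state (if any) to the database.
>>>     await ctx.validate_proposed_state()
>>>     if ctx.validated_state is None:
>>>         # The proposed state was None and the run had no prior state,
>>>         # so nothing was written.
>>>         print("no state written")
>>>     else:
>>>         # Mirrors what was committed; the run's state now matches it.
>>>         print(f"validated state type: {ctx.validated_state.type}")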
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext
. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy
can be used to pass information to orchestration rules without risking mutation.
self.run
is an ORM model, and even when copied is unsafe to mutate
Returns:
Type DescriptionA mutation-safe copy of TaskOrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `TaskOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule
","text":" Bases: AbstractAsyncContextManager
An abstract base class used to implement a discrete piece of orchestration logic.
An OrchestrationRule
is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext
that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.
A state transition occurs whenever a flow- or task- run changes state, prompting Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext
will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state
, and rules will call the self.after_transition
hook upon exiting the managed context.
Examples:
Create a rule:\n\n>>> class BasicRule(BaseOrchestrationRule):\n>>>     # allowed initial state types\n>>>     FROM_STATES = [StateType.RUNNING]\n>>>     # allowed proposed state types\n>>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>>     async def before_transition(initial_state, proposed_state, ctx):\n>>>         # side effects and proposed state mutation can happen here\n>>>         ...\n>>>\n>>>     async def after_transition(initial_state, validated_state, ctx):\n>>>         # operations on states that have been validated can happen here\n>>>         ...\n>>>\n>>>     async def cleanup(initial_state, validated_state, ctx):\n>>>         # reverts side effects generated by `before_transition` if necessary\n>>>         ...\n\nUse a rule:\n\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>>     # context.proposed_state has been governed by BasicRule\n>>>     ...\n\nUse multiple rules:\n\n>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>>     for rule in rules:\n>>>         await stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>>     # context.proposed_state has been governed by all rules\n>>>     ...\n
Attributes:
Name Type DescriptionFROM_STATES
Iterable
list of valid initial state types this rule governs
TO_STATES
Iterable
list of valid proposed state types this rule governs
context
the orchestration context
from_state_type
the state type a run is currently in
to_state_type
the intended proposed state type prior to any orchestration
Parameters:
Name Type Description Defaultcontext
OrchestrationContext
A FlowOrchestrationContext
or TaskOrchestrationContext
that is passed between rules
from_state_type
Optional[StateType]
The state type of the initial state of a run, if this state type is not contained in FROM_STATES
, no hooks will fire
to_state_type
Optional[StateType]
The state type of the proposed state before orchestration, if this state type is not contained in TO_STATES
, no hooks will fire
Source code in prefect/server/orchestration/rules.py
class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n \"\"\"\n An abstract base class used to implement a discrete piece of orchestration logic.\n\n An `OrchestrationRule` is a stateful context manager that directly governs a state\n transition. Complex orchestration is achieved by nesting multiple rules.\n Each rule runs against an `OrchestrationContext` that contains the transition\n details; this context is then passed to subsequent rules. The context can be\n modified by hooks that fire before and after a new state is validated and committed\n to the database. These hooks will fire as long as the state transition is\n considered \"valid\" and govern a transition by either modifying the proposed state\n before it is validated or by producing a side-effect.\n\n A state transition occurs whenever a flow- or task- run changes state, prompting\n Prefect REST API to decide whether or not this transition can proceed. The current state of\n the run is referred to as the \"initial state\", and the state a run is\n attempting to transition into is the \"proposed state\". Together, the initial state\n transitioning into the proposed state is the intended transition that is governed\n by these orchestration rules. After using rules to enter a runtime context, the\n `OrchestrationContext` will contain a proposed state that has been governed by\n each rule, and at that point can validate the proposed state and commit it to\n the database. The validated state will be set on the context as\n `context.validated_state`, and rules will call the `self.after_transition` hook\n upon exiting the managed context.\n\n Examples:\n\n Create a rule:\n\n >>> class BasicRule(BaseOrchestrationRule):\n >>> # allowed initial state types\n >>> FROM_STATES = [StateType.RUNNING]\n >>> # allowed proposed state types\n >>> TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n >>>\n >>> async def before_transition(initial_state, proposed_state, ctx):\n >>> # side effects and proposed state mutation can happen here\n >>> ...\n >>>\n >>> async def after_transition(initial_state, validated_state, ctx):\n >>> # operations on states that have been validated can happen here\n >>> ...\n >>>\n >>> async def cleanup(intitial_state, validated_state, ctx):\n >>> # reverts side effects generated by `before_transition` if necessary\n >>> ...\n\n Use a rule:\n\n >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n >>> async with BasicRule(context, *intended_transition):\n >>> # context.proposed_state has been governed by BasicRule\n >>> ...\n\n Use multiple rules:\n\n >>> rules = [BasicRule, BasicRule]\n >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n >>> async with contextlib.AsyncExitStack() as stack:\n >>> for rule in rules:\n >>> stack.enter_async_context(rule(context, *intended_transition))\n >>>\n >>> # context.proposed_state has been governed by all rules\n >>> ...\n\n Attributes:\n FROM_STATES: list of valid initial state types this rule governs\n TO_STATES: list of valid proposed state types this rule governs\n context: the orchestration context\n from_state_type: the state type a run is currently in\n to_state_type: the intended proposed state type prior to any orchestration\n\n Args:\n context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n passed between rules\n from_state_type: The state type of the initial state of a run, if this\n state type is not contained in `FROM_STATES`, no hooks will fire\n to_state_type: The state type of the proposed state 
before orchestration, if\n this state type is not contained in `TO_STATES`, no hooks will fire\n \"\"\"\n\n FROM_STATES: Iterable = []\n TO_STATES: Iterable = []\n\n def __init__(\n self,\n context: OrchestrationContext,\n from_state_type: Optional[states.StateType],\n to_state_type: Optional[states.StateType],\n ):\n self.context = context\n self.from_state_type = from_state_type\n self.to_state_type = to_state_type\n self._invalid_on_entry = None\n\n async def __aenter__(self) -> OrchestrationContext:\n \"\"\"\n Enter an async runtime context governed by this rule.\n\n The `with` statement will bind a governed `OrchestrationContext` to the target\n specified by the `as` clause. If the transition proposed by the\n `OrchestrationContext` is considered invalid on entry, entering this context\n will do nothing. Otherwise, `self.before_transition` will fire.\n \"\"\"\n\n if await self.invalid():\n pass\n else:\n try:\n entry_context = self.context.entry_context()\n await self.before_transition(*entry_context)\n self.context.rule_signature.append(str(self.__class__))\n except Exception as before_transition_error:\n reason = (\n f\"Aborting orchestration due to error in {self.__class__!r}:\"\n f\" !{before_transition_error!r}\"\n )\n logger.exception(\n f\"Error running before-transition hook in rule {self.__class__!r}:\"\n f\" !{before_transition_error!r}\"\n )\n\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n self.context.orchestration_error = before_transition_error\n\n return self.context\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"\n Exit the async runtime context governed by this rule.\n\n One of three outcomes can happen upon exiting this rule's context depending on\n the state of the rule. If the rule was found to be invalid on entry, nothing\n happens. If the rule was valid on entry and continues to be valid on exit,\n `self.after_transition` will fire. If the rule was valid on entry but invalid\n on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n any side-effects produced by `self.before_transition`.\n \"\"\"\n\n exit_context = self.context.exit_context()\n if await self.invalid():\n pass\n elif await self.fizzled():\n await self.cleanup(*exit_context)\n else:\n await self.after_transition(*exit_context)\n self.context.finalization_signature.append(str(self.__class__))\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire before a state is committed to the database.\n\n This hook may produce side-effects or mutate the proposed state of a\n transition using one of four methods: `self.reject_transition`,\n `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n Note:\n As currently implemented, the `before_transition` hook is not\n perfectly isolated from mutating the transition. It is a standard instance\n method that has access to `self`, and therefore `self.context`. This should\n never be modified directly. 
Furthermore, `context.run` is an ORM model, and\n mutating the run can also cause unintended writes to the database.\n\n Args:\n initial_state: The initial state of a transition\n proposed_state: The proposed state of a transition\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n The intended use of this method is to revert side-effects produced by\n `self.before_transition` when the transition is found to be invalid on exit.\n This allows multiple rules to be gracefully run in sequence, without logic that\n keeps track of all other rules that might govern a transition.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def invalid(self) -> bool:\n \"\"\"\n Determines if a rule is invalid.\n\n Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n context. 
Rules are invalid if the transition states types are not contained in\n `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n a transition that differs from the transition the rule was instantiated with.\n\n Returns:\n True if the rules in invalid, False otherwise.\n \"\"\"\n # invalid and fizzled states are mutually exclusive,\n # `_invalid_on_entry` holds this statefulness\n if self.from_state_type not in self.FROM_STATES:\n self._invalid_on_entry = True\n if self.to_state_type not in self.TO_STATES:\n self._invalid_on_entry = True\n\n if self._invalid_on_entry is None:\n self._invalid_on_entry = await self.invalid_transition()\n return self._invalid_on_entry\n\n async def fizzled(self) -> bool:\n \"\"\"\n Determines if a rule is fizzled and side-effects need to be reverted.\n\n Rules are fizzled if the transitions were valid on entry (thus firing\n `self.before_transition`) but are invalid upon exiting the governed context,\n most likely caused by another rule mutating the transition.\n\n Returns:\n True if the rule is fizzled, False otherwise.\n \"\"\"\n\n if self._invalid_on_entry:\n return False\n return await self.invalid_transition()\n\n async def invalid_transition(self) -> bool:\n \"\"\"\n Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n If the `OrchestrationContext` is attempting to manage a transition with this\n rule that differs from the transition the rule was instantiated with, the\n transition is considered to be invalid. Depending on the context, a rule with an\n invalid transition is either \"invalid\" or \"fizzled\".\n\n Returns:\n True if the transition is invalid, False otherwise.\n \"\"\"\n\n initial_state_type = self.context.initial_state_type\n proposed_state_type = self.context.proposed_state_type\n return (self.from_state_type != initial_state_type) or (\n self.to_state_type != proposed_state_type\n )\n\n async def reject_transition(self, state: Optional[states.State], reason: str):\n \"\"\"\n Rejects a proposed transition before the transition is validated.\n\n This method will reject a proposed transition, mutating the proposed state to\n the provided `state`. A reason for rejecting the transition is also passed on\n to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n despite the proposed state type changing.\n\n Args:\n state: The new proposed state. 
If `None`, the current run state will be\n returned in the result instead.\n reason: The reason for rejecting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # the current state will be used if a new one is not provided\n if state is None:\n if self.from_state_type is None:\n raise OrchestrationError(\n \"The current run has no state; this transition cannot be \"\n \"rejected without providing a new state.\"\n )\n self.to_state_type = None\n self.context.proposed_state = None\n else:\n # a rule that mutates state should not fizzle itself\n self.to_state_type = state.type\n self.context.proposed_state = state\n\n self.context.response_status = SetStateStatus.REJECT\n self.context.response_details = StateRejectDetails(reason=reason)\n\n async def delay_transition(\n self,\n delay_seconds: int,\n reason: str,\n ):\n \"\"\"\n Delays a proposed transition before the transition is validated.\n\n This method will delay a proposed transition, setting the proposed state to\n `None`, signaling to the `OrchestrationContext` that no state should be\n written to the database. The number of seconds a transition should be delayed is\n passed to the `OrchestrationContext`. A reason for delaying the transition is\n also provided. Rules that delay the transition will not fizzle, despite the\n proposed state type changing.\n\n Args:\n delay_seconds: The number of seconds the transition should be delayed\n reason: The reason for delaying the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.WAIT\n self.context.response_details = StateWaitDetails(\n delay_seconds=delay_seconds, reason=reason\n )\n\n async def abort_transition(self, reason: str):\n \"\"\"\n Aborts a proposed transition before the transition is validated.\n\n This method will abort a proposed transition, expecting no further action to\n occur for this run. The proposed state is set to `None`, signaling to the\n `OrchestrationContext` that no state should be written to the database. A\n reason for aborting the transition is also provided. Rules that abort the\n transition will not fizzle, despite the proposed state type changing.\n\n Args:\n reason: The reason for aborting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n\n async def rename_state(self, state_name):\n \"\"\"\n Sets the \"name\" attribute on a proposed state.\n\n The name of a state is an annotation intended to provide rich, human-readable\n context for how a run is progressing. 
This method only updates the name and not\n the canonical state TYPE, and will not fizzle or invalidate any other rules\n that might govern this state transition.\n \"\"\"\n if self.context.proposed_state is not None:\n self.context.proposed_state.name = state_name\n\n async def update_context_parameters(self, key, value):\n \"\"\"\n Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n This mechanism streamlines the process of passing messages and information\n between orchestration rules if necessary and is simpler and more ephemeral than\n message-passing via the database or some other side-effect. This mechanism can\n be used to break up large rules for ease of testing or comprehension, but note\n that any rules coupled this way (or any other way) are no longer independent and\n the order in which they appear in the orchestration policy priority will matter.\n \"\"\"\n\n self.context.parameters.update({key: value})\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition
async
","text":"Implements a hook that can fire before a state is committed to the database.
This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition
, self.delay_transition
, self.abort_transition
, and self.rename_state
.
As currently implemented, the before_transition
hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self
, and therefore self.context
. This should never be modified directly. Furthermore, context.run
is an ORM model, and mutating the run can also cause unintended writes to the database.
Parameters:
Name Type Description Defaultinitial_state
Optional[State]
The initial state of a transition
requiredproposed_state
Optional[State]
The proposed state of a transition
requiredcontext
OrchestrationContext
A safe copy of the OrchestrationContext
, with the exception of context.run
, mutating this context will have no effect on the broader orchestration environment.
Returns:
Type DescriptionNone
None
Source code in prefect/server/orchestration/rules.py
async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire before a state is committed to the database.\n\n This hook may produce side-effects or mutate the proposed state of a\n transition using one of four methods: `self.reject_transition`,\n `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n Note:\n As currently implemented, the `before_transition` hook is not\n perfectly isolated from mutating the transition. It is a standard instance\n method that has access to `self`, and therefore `self.context`. This should\n never be modified directly. Furthermore, `context.run` is an ORM model, and\n mutating the run can also cause unintended writes to the database.\n\n Args:\n initial_state: The initial state of a transition\n proposed_state: The proposed state of a transition\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition
async
","text":"Implements a hook that can fire after a state is committed to the database.
Parameters:
Name Type Description Defaultinitial_state
Optional[State]
The initial state of a transition
requiredvalidated_state
Optional[State]
The governed state that has been committed to the database
requiredcontext
OrchestrationContext
A safe copy of the OrchestrationContext
, with the exception of context.run
, mutating this context will have no effect on the broader orchestration environment.
Returns:
Type DescriptionNone
None
Source code in prefect/server/orchestration/rules.py
async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup
async
","text":"Implements a hook that can fire after a state is committed to the database.
The intended use of this method is to revert side-effects produced by self.before_transition
when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.
Parameters:
Name Type Description Defaultinitial_state
Optional[State]
The initial state of a transition
requiredvalidated_state
Optional[State]
The governed state that has been committed to the database
requiredcontext
OrchestrationContext
A safe copy of the OrchestrationContext
, with the exception of context.run
, mutating this context will have no effect on the broader orchestration environment.
Returns:
Type DescriptionNone
None
Source code in prefect/server/orchestration/rules.py
async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n The intended use of this method is to revert side-effects produced by\n `self.before_transition` when the transition is found to be invalid on exit.\n This allows multiple rules to be gracefully run in sequence, without logic that\n keeps track of all other rules that might govern a transition.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid
async
","text":"Determines if a rule is invalid.
Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition state types are not contained in self.FROM_STATES
and self.TO_STATES
, or if the context is proposing a transition that differs from the transition the rule was instantiated with.
Returns:
Type Descriptionbool
True if the rule is invalid, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def invalid(self) -> bool:\n    \"\"\"\n    Determines if a rule is invalid.\n\n    Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n    context. Rules are invalid if the transition state types are not contained in\n    `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n    a transition that differs from the transition the rule was instantiated with.\n\n    Returns:\n        True if the rule is invalid, False otherwise.\n    \"\"\"\n    # invalid and fizzled states are mutually exclusive,\n    # `_invalid_on_entry` holds this statefulness\n    if self.from_state_type not in self.FROM_STATES:\n        self._invalid_on_entry = True\n    if self.to_state_type not in self.TO_STATES:\n        self._invalid_on_entry = True\n\n    if self._invalid_on_entry is None:\n        self._invalid_on_entry = await self.invalid_transition()\n    return self._invalid_on_entry\n
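A minimal sketch of the gating behavior; the rule class and the mismatched transition are hypothetical, and the imports assume the module paths shown in this reference.

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class OnlyRunningToCompleted(BaseOrchestrationRule):
>>>     # This rule only governs RUNNING -> COMPLETED transitions.
>>>     FROM_STATES = [StateType.RUNNING]
>>>     TO_STATES = [StateType.COMPLETED]
>>>
>>> # Instantiated for a PENDING -> RUNNING transition instead, the rule is
>>> # invalid: `await rule.invalid()` returns True and none of its hooks fire.
>>> # rule = OnlyRunningToCompleted(context, StateType.PENDING, StateType.RUNNING)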
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled
async
","text":"Determines if a rule is fizzled and side-effects need to be reverted.
Rules are fizzled if the transitions were valid on entry (thus firing self.before_transition
) but are invalid upon exiting the governed context, most likely caused by another rule mutating the transition.
Returns:
Type Descriptionbool
True if the rule is fizzled, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def fizzled(self) -> bool:\n \"\"\"\n Determines if a rule is fizzled and side-effects need to be reverted.\n\n Rules are fizzled if the transitions were valid on entry (thus firing\n `self.before_transition`) but are invalid upon exiting the governed context,\n most likely caused by another rule mutating the transition.\n\n Returns:\n True if the rule is fizzled, False otherwise.\n \"\"\"\n\n if self._invalid_on_entry:\n return False\n return await self.invalid_transition()\n
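A hedged sketch of how fizzling arises when rules are stacked; both rule classes are hypothetical, and context is assumed to be a server-built OrchestrationContext.

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas import states
>>> from prefect.server.schemas.states import StateType
>>>
>>> class Rejector(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.RUNNING]
>>>     TO_STATES = [StateType.COMPLETED]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # Mutates the proposal, so rules already entered for
>>>         # RUNNING -> COMPLETED become invalid on exit.
>>>         await self.reject_transition(states.Failed(), "simulated failure")
>>>
>>> class Bystander(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.RUNNING]
>>>     TO_STATES = [StateType.COMPLETED]
>>>     async def cleanup(self, initial_state, validated_state, context):
>>>         # Fires instead of after_transition: this rule was valid on entry,
>>>         # but Rejector changed the proposed state type before exit.
>>>         print("Bystander fizzled; reverting side effects")
>>>
>>> # Usage sketch: Bystander enters first and is valid; Rejector then mutates
>>> # the proposal, so Bystander fizzles and its cleanup() hook runs on exit.
>>> # async with Bystander(context, StateType.RUNNING, StateType.COMPLETED):
>>> #     async with Rejector(context, StateType.RUNNING, StateType.COMPLETED):
>>> #         ...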
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition
async
","text":"Determines if the transition proposed by the OrchestrationContext
is invalid.
If the OrchestrationContext
is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".
Returns:
Type Descriptionbool
True if the transition is invalid, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def invalid_transition(self) -> bool:\n \"\"\"\n Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n If the `OrchestrationContext` is attempting to manage a transition with this\n rule that differs from the transition the rule was instantiated with, the\n transition is considered to be invalid. Depending on the context, a rule with an\n invalid transition is either \"invalid\" or \"fizzled\".\n\n Returns:\n True if the transition is invalid, False otherwise.\n \"\"\"\n\n initial_state_type = self.context.initial_state_type\n proposed_state_type = self.context.proposed_state_type\n return (self.from_state_type != initial_state_type) or (\n self.to_state_type != proposed_state_type\n )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition
async
","text":"Rejects a proposed transition before the transition is validated.
This method will reject a proposed transition, mutating the proposed state to the provided state
. A reason for rejecting the transition is also passed on to the OrchestrationContext
. Rules that reject the transition will not fizzle, despite the proposed state type changing.
Parameters:
Name Type Description Defaultstate
Optional[State]
The new proposed state. If None
, the current run state will be returned in the result instead.
reason
str
The reason for rejecting the transition
required Source code in prefect/server/orchestration/rules.py
async def reject_transition(self, state: Optional[states.State], reason: str):\n \"\"\"\n Rejects a proposed transition before the transition is validated.\n\n This method will reject a proposed transition, mutating the proposed state to\n the provided `state`. A reason for rejecting the transition is also passed on\n to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n despite the proposed state type changing.\n\n Args:\n state: The new proposed state. If `None`, the current run state will be\n returned in the result instead.\n reason: The reason for rejecting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # the current state will be used if a new one is not provided\n if state is None:\n if self.from_state_type is None:\n raise OrchestrationError(\n \"The current run has no state; this transition cannot be \"\n \"rejected without providing a new state.\"\n )\n self.to_state_type = None\n self.context.proposed_state = None\n else:\n # a rule that mutates state should not fizzle itself\n self.to_state_type = state.type\n self.context.proposed_state = state\n\n self.context.response_status = SetStateStatus.REJECT\n self.context.response_details = StateRejectDetails(reason=reason)\n
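A hedged sketch of a rejecting rule (a hypothetical class; imports as in the examples above). Passing state=None keeps the run in its current state rather than proposing a replacement.

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class BlockCompletion(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.RUNNING]
>>>     TO_STATES = [StateType.COMPLETED]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # With no replacement state the run stays where it is, and the API
>>>         # response carries SetStateStatus.REJECT with this reason.
>>>         await self.reject_transition(
>>>             state=None, reason="completion is blocked by policy"
>>>         )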
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition
async
","text":"Delays a proposed transition before the transition is validated.
This method will delay a proposed transition, setting the proposed state to None
, signaling to the OrchestrationContext
that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext
. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.
Parameters:
Name Type Description Defaultdelay_seconds
int
The number of seconds the transition should be delayed
requiredreason
str
The reason for delaying the transition
required Source code in prefect/server/orchestration/rules.py
async def delay_transition(\n self,\n delay_seconds: int,\n reason: str,\n):\n \"\"\"\n Delays a proposed transition before the transition is validated.\n\n This method will delay a proposed transition, setting the proposed state to\n `None`, signaling to the `OrchestrationContext` that no state should be\n written to the database. The number of seconds a transition should be delayed is\n passed to the `OrchestrationContext`. A reason for delaying the transition is\n also provided. Rules that delay the transition will not fizzle, despite the\n proposed state type changing.\n\n Args:\n delay_seconds: The number of seconds the transition should be delayed\n reason: The reason for delaying the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.WAIT\n self.context.response_details = StateWaitDetails(\n delay_seconds=delay_seconds, reason=reason\n )\n
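A hedged sketch of a delaying rule; the class name, the governed transition, and the delay value are all illustrative.

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class ThrottleRunStarts(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.PENDING]
>>>     TO_STATES = [StateType.RUNNING]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # No state is written; the response tells the client to retry the
>>>         # transition in 30 seconds (SetStateStatus.WAIT).
>>>         await self.delay_transition(
>>>             delay_seconds=30, reason="throttling run starts"
>>>         )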
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition
async
","text":"Aborts a proposed transition before the transition is validated.
This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None
, signaling to the OrchestrationContext
that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.
Parameters:
Name Type Description Defaultreason
str
The reason for aborting the transition
required Source code in prefect/server/orchestration/rules.py
async def abort_transition(self, reason: str):\n \"\"\"\n Aborts a proposed transition before the transition is validated.\n\n This method will abort a proposed transition, expecting no further action to\n occur for this run. The proposed state is set to `None`, signaling to the\n `OrchestrationContext` that no state should be written to the database. A\n reason for aborting the transition is also provided. Rules that abort the\n transition will not fizzle, despite the proposed state type changing.\n\n Args:\n reason: The reason for aborting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n
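A hedged sketch of an aborting rule (hypothetical class and transition):

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class RefuseRunStart(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.PENDING]
>>>     TO_STATES = [StateType.RUNNING]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # Nothing is written, and the client is told that no further action
>>>         # should occur for this run (SetStateStatus.ABORT).
>>>         await self.abort_transition(reason="run can no longer be executed")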
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state
async
","text":"Sets the \"name\" attribute on a proposed state.
The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.
Source code in prefect/server/orchestration/rules.py
async def rename_state(self, state_name):\n \"\"\"\n Sets the \"name\" attribute on a proposed state.\n\n The name of a state is an annotation intended to provide rich, human-readable\n context for how a run is progressing. This method only updates the name and not\n the canonical state TYPE, and will not fizzle or invalidate any other rules\n that might govern this state transition.\n \"\"\"\n if self.context.proposed_state is not None:\n self.context.proposed_state.name = state_name\n
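A hedged sketch of renaming a proposed state without changing its canonical type (the rule and its governed transition are hypothetical):

>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class LabelReruns(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.FAILED]
>>>     TO_STATES = [StateType.RUNNING]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # The run still enters a RUNNING-type state; only the display
>>>         # name is annotated.
>>>         await self.rename_state("Retrying")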
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters
async
","text":"Updates the \"parameters\" dictionary attribute with the specified key-value pair.
This mechanism streamlines the process of passing messages and information between orchestration rules if necessary and is simpler and more ephemeral than message-passing via the database or some other side-effect. This mechanism can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent and the order in which they appear in the orchestration policy priority will matter.
Source code in prefect/server/orchestration/rules.py
async def update_context_parameters(self, key, value):\n \"\"\"\n Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n This mechanism streamlines the process of passing messages and information\n between orchestration rules if necessary and is simpler and more ephemeral than\n message-passing via the database or some other side-effect. This mechanism can\n be used to break up large rules for ease of testing or comprehension, but note\n that any rules coupled this way (or any other way) are no longer independent and\n the order in which they appear in the orchestration policy priority will matter.\n \"\"\"\n\n self.context.parameters.update({key: value})\n
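A hedged sketch of two deliberately coupled rules passing a value through the context's parameters dictionary; all names are hypothetical, and the reading rule must appear after the recording rule in policy order.

>>> import time
>>>
>>> from prefect.server.orchestration.rules import BaseOrchestrationRule
>>> from prefect.server.schemas.states import StateType
>>>
>>> class RecordObservedStart(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.PENDING]
>>>     TO_STATES = [StateType.RUNNING]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         # Stash a value for a later rule in the same policy to read.
>>>         await self.update_context_parameters("observed_start", time.time())
>>>
>>> class ReadObservedStart(BaseOrchestrationRule):
>>>     FROM_STATES = [StateType.PENDING]
>>>     TO_STATES = [StateType.RUNNING]
>>>     async def before_transition(self, initial_state, proposed_state, context):
>>>         started = context.parameters.get("observed_start")
>>>         if started is not None:
>>>             print(f"start observed at {started}")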
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform
","text":" Bases: AbstractAsyncContextManager
An abstract base class used to implement privileged bookkeeping logic.
Warning: In almost all cases, use the BaseOrchestrationRule
base class instead.
Beyond the orchestration rules implemented with the BaseOrchestrationRule
ABC, universal transforms are not stateful and fire their before- and after-transition hooks on every state transition unless the proposed state is None
, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should only be used with care.
Attributes:
Name Type DescriptionFROM_STATES
Iterable
for compatibility with BaseOrchestrationPolicy
TO_STATES
Iterable
for compatibility with BaseOrchestrationPolicy
context
the orchestration context
from_state_type
the state type a run is currently in
to_state_type
the intended proposed state type prior to any orchestration
Parameters:
Name Type Description Defaultcontext
OrchestrationContext
A FlowOrchestrationContext
or TaskOrchestrationContext
that is passed between transforms
Source code in prefect/server/orchestration/rules.py
class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n \"\"\"\n An abstract base class used to implement privileged bookkeeping logic.\n\n Warning:\n In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n Universal transforms are not stateful, and fire their before- and after- transition\n hooks on every state transition unless the proposed state is `None`, indicating that\n no state should be written to the database. Because there are no guardrails in place\n to prevent directly mutating state or other parts of the orchestration context,\n universal transforms should only be used with care.\n\n Attributes:\n FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n context: the orchestration context\n from_state_type: the state type a run is currently in\n to_state_type: the intended proposed state type prior to any orchestration\n\n Args:\n context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n passed between transforms\n \"\"\"\n\n # `BaseUniversalTransform` will always fire on non-null transitions\n FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n def __init__(\n self,\n context: OrchestrationContext,\n from_state_type: Optional[states.StateType],\n to_state_type: Optional[states.StateType],\n ):\n self.context = context\n self.from_state_type = from_state_type\n self.to_state_type = to_state_type\n\n async def __aenter__(self):\n \"\"\"\n Enter an async runtime context governed by this transform.\n\n The `with` statement will bind a governed `OrchestrationContext` to the target\n specified by the `as` clause. If the transition proposed by the\n `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n is `None`, entering this context will do nothing. Otherwise\n `self.before_transition` will fire.\n \"\"\"\n\n await self.before_transition(self.context)\n self.context.rule_signature.append(str(self.__class__))\n return self.context\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"\n Exit the async runtime context governed by this transform.\n\n If the transition has been nullified or errorred upon exiting this transforms's context,\n nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n proposed state.\n \"\"\"\n\n if not self.exception_in_transition():\n await self.after_transition(self.context)\n self.context.finalization_signature.append(str(self.__class__))\n\n async def before_transition(self, context) -> None:\n \"\"\"\n Implements a hook that fires before a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n\n async def after_transition(self, context) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n\n def nullified_transition(self) -> bool:\n \"\"\"\n Determines if the transition has been nullified.\n\n Transitions are nullified if the proposed state is `None`, indicating that\n nothing should be written to the database.\n\n Returns:\n True if the transition is nullified, False otherwise.\n \"\"\"\n\n return self.context.proposed_state is None\n\n def exception_in_transition(self) -> bool:\n \"\"\"\n Determines if the transition has encountered an exception.\n\n Returns:\n True if the transition is encountered an exception, False otherwise.\n \"\"\"\n\n return self.context.orchestration_error is not None\n
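A hedged sketch of a universal transform (a hypothetical class): because there are no guardrails, it checks for nullified transitions itself and avoids mutating the context.

>>> from prefect.server.orchestration.rules import BaseUniversalTransform
>>>
>>> class LogTransitions(BaseUniversalTransform):
>>>     async def before_transition(self, context) -> None:
>>>         if self.nullified_transition():
>>>             return  # proposed state is None; nothing will be written
>>>         print(
>>>             f"run {context.run.id}: "
>>>             f"{context.initial_state_type} -> {context.proposed_state_type}"
>>>         )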
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition
async
","text":"Implements a hook that fires before a state is committed to the database.
Parameters:
Name Type Description Defaultcontext
the OrchestrationContext
that contains transition details
Returns:
Type DescriptionNone
None
Source code in prefect/server/orchestration/rules.py
async def before_transition(self, context) -> None:\n \"\"\"\n Implements a hook that fires before a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition
async
","text":"Implements a hook that can fire after a state is committed to the database.
Parameters:
Name Type Description Defaultcontext
the OrchestrationContext
that contains transition details
Returns:
Type DescriptionNone
None
Source code in prefect/server/orchestration/rules.py
async def after_transition(self, context) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition
","text":"Determines if the transition has been nullified.
Transitions are nullified if the proposed state is None
, indicating that nothing should be written to the database.
Returns:
Type Descriptionbool
True if the transition is nullified, False otherwise.
Source code in prefect/server/orchestration/rules.py
def nullified_transition(self) -> bool:\n \"\"\"\n Determines if the transition has been nullified.\n\n Transitions are nullified if the proposed state is `None`, indicating that\n nothing should be written to the database.\n\n Returns:\n True if the transition is nullified, False otherwise.\n \"\"\"\n\n return self.context.proposed_state is None\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition
","text":"Determines if the transition has encountered an exception.
Returns:
Type Descriptionbool
True if the transition encountered an exception, False otherwise.
Source code in prefect/server/orchestration/rules.py
def exception_in_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition has encountered an exception.\n\n    Returns:\n        True if the transition encountered an exception, False otherwise.\n    \"\"\"\n\n    return self.context.orchestration_error is not None\n
"},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions
","text":"Reduced schemas for accepting API actions.
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create an artifact.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n key: Optional[str] = FieldFrom(schemas.core.Artifact)\n type: Optional[str] = FieldFrom(schemas.core.Artifact)\n description: Optional[str] = FieldFrom(schemas.core.Artifact)\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n task_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n\n _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n validate_artifact_key\n )\n
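A small usage sketch (the field values here are illustrative only):

>>> from prefect.server.schemas.actions import ArtifactCreate
>>>
>>> artifact = ArtifactCreate(
>>>     key="my-report",  # checked by the artifact key format validator
>>>     type="markdown",
>>>     description="An illustrative artifact",
>>>     data="# hello",
>>> )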
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
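A hedged usage sketch of the include_secrets flag, using a toy model (not a real Prefect schema) with a SecretStr field; the base-class import path is an assumption based on the source location shown above.

>>> from pydantic import SecretStr
>>> from prefect.server.utilities.schemas.bases import PrefectBaseModel
>>>
>>> class ToyCredentials(PrefectBaseModel):
>>>     token: SecretStr  # hypothetical field; obfuscated unless revealed
>>>
>>> creds = ToyCredentials(token=SecretStr("shh"))
>>> creds.json()                      # token is obfuscated: "**********"
>>> creds.json(include_secrets=True)  # token is revealed: "shh"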
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update an artifact.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n description: Optional[str] = FieldFrom(schemas.core.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block document.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n name: Optional[str] = FieldFrom(schemas.core.BlockDocument)\n data: dict = FieldFrom(schemas.core.BlockDocument)\n block_schema_id: UUID = FieldFrom(schemas.core.BlockDocument)\n block_type_id: UUID = FieldFrom(schemas.core.BlockDocument)\n is_anonymous: bool = FieldFrom(schemas.core.BlockDocument)\n\n _validate_name_format = validator(\"name\", allow_reuse=True)(\n validate_block_document_name\n )\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # TODO: We should find an elegant way to reuse this logic from the origin model\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate
","text":" Bases: ActionBaseModel
Data used to create block document reference.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n \"\"\"Data used to create block document reference.\"\"\"\n\n id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n parent_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n reference_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n name: str = FieldFrom(schemas.core.BlockDocumentReference)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block document.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n block_schema_id: Optional[UUID] = Field(\n default=None, description=\"A block schema ID\"\n )\n data: dict = FieldFrom(schemas.core.BlockDocument)\n merge_existing_data: bool = True\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block schema.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n fields: dict = FieldFrom(schemas.core.BlockSchema)\n block_type_id: Optional[UUID] = FieldFrom(schemas.core.BlockSchema)\n capabilities: List[str] = FieldFrom(schemas.core.BlockSchema)\n version: str = FieldFrom(schemas.core.BlockSchema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block type.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n name: str = FieldFrom(schemas.core.BlockType)\n slug: str = FieldFrom(schemas.core.BlockType)\n logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n schemas.core.BlockType\n )\n description: Optional[str] = FieldFrom(schemas.core.BlockType)\n code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n # validators\n _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n validate_block_type_slug\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block type.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n schemas.core.BlockType\n )\n description: Optional[str] = FieldFrom(schemas.core.BlockType)\n code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n @classmethod\n def updatable_fields(cls) -> set:\n return get_class_fields_only(cls)\n
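A hedged sketch of how updatable_fields can be used client-side (assumes a Prefect 2.x environment; the payload dict and its keys are invented):

from prefect.server.schemas.actions import BlockTypeUpdate

payload = {"description": "Refreshed docs", "slug": "cannot-change", "name": "nope"}

# updatable_fields() returns only the fields declared on BlockTypeUpdate itself,
# so immutable keys such as name and slug are filtered out before validation.
allowed = BlockTypeUpdate.updatable_fields()
update = BlockTypeUpdate(**{k: v for k, v in payload.items() if k in allowed})
print(update.dict(exclude_unset=True))  # {"description": "Refreshed docs"}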
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a concurrency limit.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n tag: str = FieldFrom(schemas.core.ConcurrencyLimit)\n concurrency_limit: int = FieldFrom(schemas.core.ConcurrencyLimit)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create","title":"ConcurrencyLimitV2Create
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a v2 concurrency limit.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitV2Create(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a v2 concurrency limit.\"\"\"\n\n active: bool = FieldFrom(schemas.core.ConcurrencyLimitV2)\n name: str = FieldFrom(schemas.core.ConcurrencyLimitV2)\n limit: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n active_slots: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n denied_slots: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n slot_decay_per_second: float = FieldFrom(schemas.core.ConcurrencyLimitV2)\n
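A construction sketch (assumes a Prefect 2.x environment; the name, limit, and decay rate are illustrative, and the slot-decay comment paraphrases the field name rather than documented semantics):

from prefect.server.schemas.actions import ConcurrencyLimitV2Create

limit = ConcurrencyLimitV2Create(
    name="database-connections",
    limit=10,                   # maximum slots that may be occupied at once
    active=True,
    slot_decay_per_second=0.5,  # assumed: occupied slots decay back to available over time
)
print(limit.json())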
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update","title":"ConcurrencyLimitV2Update
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a v2 concurrency limit.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitV2Update(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a v2 concurrency limit.\"\"\"\n\n active: Optional[bool] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n name: Optional[str] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n limit: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n active_slots: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n denied_slots: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n slot_decay_per_second: Optional[float] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a deployment.
Source code in prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator\n    def populate_schedules(cls, values):\n        if not values.get(\"schedules\") and values.get(\"schedule\"):\n            values[\"schedules\"] = [\n                DeploymentScheduleCreate(\n                    schedule=values[\"schedule\"],\n                    active=values[\"is_schedule_active\"],\n                )\n            ]\n\n        return values\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_queue_name` are \"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    name: str = FieldFrom(schemas.core.Deployment)\n    flow_id: UUID = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: Optional[bool] = FieldFrom(schemas.core.Deployment)\n    paused: bool = FieldFrom(schemas.core.Deployment)\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: bool = FieldFrom(schemas.core.Deployment)\n    parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(\n        schemas.core.Deployment\n    )\n    parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    pull_steps: Optional[List[dict]] = FieldFrom(schemas.core.Deployment)\n\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        jsonschema.validate(self.infra_overrides, variables_schema)\n\n    @validator(\"parameters\")\n    def _validate_parameters_conform_to_schema(cls, value, values):\n        \"\"\"Validate that the parameters conform to the parameter schema.\"\"\"\n        if values.get(\"enforce_parameter_schema\"):\n            validate_values_conform_to_schema(\n                value, values.get(\"parameter_openapi_schema\"), ignore_required=True\n            )\n        return value\n\n    @validator(\"parameter_openapi_schema\")\n    def _validate_parameter_openapi_schema(cls, value, values):\n        \"\"\"Validate that the parameter_openapi_schema is a valid json schema.\"\"\"\n        if values.get(\"enforce_parameter_schema\"):\n            validate_schema(value)\n        return value\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code in prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n
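The workaround in this method can be seen in isolation with plain jsonschema (a self-contained sketch; the variables schema and overrides below are invented): jsonschema treats a "required" property as required even when it declares a "default", so the validator drops such keys from "required" before validating the overrides.

import copy

import jsonschema

variables_schema = {
    "type": "object",
    "required": ["image", "cpu"],
    "properties": {
        "image": {"type": "string"},               # truly required
        "cpu": {"type": "integer", "default": 1},  # required but defaulted
    },
}

schema = copy.deepcopy(variables_schema)
for key, prop in schema["properties"].items():
    if "default" in prop and key in schema["required"]:
        schema["required"].remove(key)

infra_overrides = {"image": "prefect:2-latest"}  # no "cpu" supplied
jsonschema.validate(infra_overrides, schema)     # passes only after the fix
print("overrides conform to the schema")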
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run from a deployment.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a deployment.
Source code in prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for updating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_queue_name` are \"\n                    \"no longer supported for updating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: bool = FieldFrom(schemas.core.Deployment)\n    paused: bool = FieldFrom(schemas.core.Deployment)\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code in prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n name: str = FieldFrom(schemas.core.Flow)\n tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: str = FieldFrom(schemas.core.FlowRun)\n flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n\n # DEPRECATED\n\n deployment_id: Optional[UUID] = Field(\n None,\n description=(\n \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n \" available.\"\n ),\n deprecated=True,\n )\n\n class Config(ActionBaseModel.Config):\n json_dumps = orjson_dumps_extra_compatible\n
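A hedged usage sketch (assumes a Prefect 2.x environment; the flow_id is a dummy UUID) showing that the initial state must be supplied as a StateCreate rather than a full State object:

from uuid import uuid4

from prefect.server.schemas.actions import FlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

run = FlowRunCreate(
    name="ad-hoc-run",
    flow_id=uuid4(),
    parameters={"x": 1},
    # The state field is typed as Optional[StateCreate], so a lightweight
    # creation schema is passed here, not a persisted State.
    state=StateCreate(type=StateType.SCHEDULED),
)
print(run.json())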
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run notification policy.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n is_active: bool = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n state_names: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n tags: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n block_document_id: UUID = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run notification policy.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n is_active: Optional[bool] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n state_names: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n tags: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n block_document_id: Optional[UUID] = FieldFrom(\n schemas.core.FlowRunNotificationPolicy\n )\n message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a log.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n name: str = FieldFrom(schemas.core.Log)\n level: int = FieldFrom(schemas.core.Log)\n message: str = FieldFrom(schemas.core.Log)\n timestamp: DateTimeTZ = FieldFrom(schemas.core.Log)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n task_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a saved search.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n name: str = FieldFrom(schemas.core.SavedSearch)\n filters: List[schemas.core.SavedSearchFilter] = FieldFrom(schemas.core.SavedSearch)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a new state.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass StateCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n type: schemas.states.StateType = FieldFrom(schemas.states.State)\n name: Optional[str] = FieldFrom(schemas.states.State)\n message: Optional[str] = FieldFrom(schemas.states.State)\n data: Optional[Any] = FieldFrom(schemas.states.State)\n state_details: schemas.states.StateDetails = FieldFrom(schemas.states.State)\n\n # DEPRECATED\n\n timestamp: Optional[DateTimeTZ] = Field(\n default=None,\n repr=False,\n ignored=True,\n )\n id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a task run
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n # TaskRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the task run to create\"\n )\n\n name: str = FieldFrom(schemas.core.TaskRun)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.TaskRun)\n task_key: str = FieldFrom(schemas.core.TaskRun)\n dynamic_key: str = FieldFrom(schemas.core.TaskRun)\n cache_key: Optional[str] = FieldFrom(schemas.core.TaskRun)\n cache_expiration: Optional[DateTimeTZ] = FieldFrom(schemas.core.TaskRun)\n task_version: Optional[str] = FieldFrom(schemas.core.TaskRun)\n empirical_policy: schemas.core.TaskRunPolicy = FieldFrom(schemas.core.TaskRun)\n tags: List[str] = FieldFrom(schemas.core.TaskRun)\n task_inputs: Dict[\n str,\n List[\n Union[\n schemas.core.TaskRunResult,\n schemas.core.Parameter,\n schemas.core.Constant,\n ]\n ],\n ] = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a task run
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n name: str = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a Variable.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n name: str = FieldFrom(schemas.core.Variable)\n value: str = FieldFrom(schemas.core.Variable)\n tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
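A hedged sketch of the name validator's effect (assumes a Prefect 2.x environment; the exact set of allowed characters is defined by validate_variable_name, and the names below are illustrative):

from prefect.server.schemas.actions import VariableCreate

# A well-formed name validates on construction, before any API call is made.
ok = VariableCreate(name="deployment_region", value="us-east-1")

try:
    VariableCreate(name="not a valid name!", value="oops")
except ValueError as exc:  # pydantic's ValidationError subclasses ValueError
    print(exc)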
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a Variable.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=\"The name of the variable\",\n example=\"my_variable\",\n max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n )\n value: Optional[str] = Field(\n default=None,\n description=\"The value of the variable\",\n example=\"my-value\",\n max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n )\n tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work pool.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkPool)\n description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n base_job_template: Dict[str, Any] = FieldFrom(schemas.core.WorkPool)\n is_paused: bool = FieldFrom(schemas.core.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n\n _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n validate_base_job_template\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work pool.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n is_paused: Optional[bool] = FieldFrom(schemas.core.WorkPool)\n base_job_template: Optional[Dict[str, Any]] = FieldFrom(schemas.core.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n\n _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n validate_base_job_template\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work queue.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkQueue)\n description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n priority: Optional[int] = Field(\n default=None,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n\n # DEPRECATED\n\n filter: Optional[schemas.core.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
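A small construction sketch (assumes a Prefect 2.x environment; the queue names and limits are illustrative) showing the priority convention noted above, where lower numbers are served first:

from prefect.server.schemas.actions import WorkQueueCreate

critical = WorkQueueCreate(name="critical", priority=1, concurrency_limit=5)
bulk = WorkQueueCreate(name="bulk", priority=10)

# priority=1 marks the queue workers should drain before lower-priority queues.
print(critical.json())
print(bulk.json())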
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work queue.
Source code in prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkQueue)\n description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n priority: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n last_polled: Optional[DateTimeTZ] = FieldFrom(schemas.core.WorkQueue)\n\n # DEPRECATED\n\n filter: Optional[schemas.core.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core
","text":"Full schemas of Prefect REST API objects.
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent
","text":" Bases: ORMBaseModel
An ORM representation of an agent
Source code in prefect/server/schemas/core.py
class Agent(ORMBaseModel):\n \"\"\"An ORM representation of an agent\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the agent. If a name is not provided, it will be\"\n \" auto-generated.\"\n ),\n )\n work_queue_id: UUID = Field(\n default=..., description=\"The work queue with which the agent is associated.\"\n )\n last_activity_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time this agent polled for work.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument
","text":" Bases: ORMBaseModel
An ORM representation of a block document.
Source code in prefect/server/schemas/core.py
class BlockDocument(ORMBaseModel):\n \"\"\"An ORM representation of a block document.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=(\n \"The block document's name. Not required for anonymous block documents.\"\n ),\n )\n data: dict = Field(default_factory=dict, description=\"The block document's data\")\n block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The associated block schema\"\n )\n block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n block_type_name: Optional[str] = Field(\n default=None, description=\"The associated block type's name\"\n )\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n block_document_references: Dict[str, Dict[str, Any]] = Field(\n default_factory=dict, description=\"Record of the block document's references\"\n )\n is_anonymous: bool = Field(\n default=False,\n description=(\n \"Whether the block is anonymous (anonymous blocks are usually created by\"\n \" Prefect automatically)\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n # the BlockDocumentCreate subclass allows name=None\n # and will inherit this validator\n if v is not None:\n raise_on_name_with_banned_characters(v)\n return v\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # anonymous blocks may have no name prior to actually being\n # stored in the database\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n\n @classmethod\n async def from_orm_model(\n cls,\n session,\n orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n include_secrets: bool = False,\n ):\n data = await orm_block_document.decrypt_data(session=session)\n # if secrets are not included, obfuscate them based on the schema's\n # `secret_fields`. Note this walks any nested blocks as well. If the\n # nested blocks were recovered from named blocks, they will already\n # be obfuscated, but if nested fields were hardcoded into the parent\n # blocks data, this is the only opportunity to obfuscate them.\n if not include_secrets:\n flat_data = dict_to_flatdict(data)\n # iterate over the (possibly nested) secret fields\n # and obfuscate their data\n for secret_field in orm_block_document.block_schema.fields.get(\n \"secret_fields\", []\n ):\n secret_key = tuple(secret_field.split(\".\"))\n if flat_data.get(secret_key) is not None:\n flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n # If a wildcard (*) is in the current secret key path, we take the portion\n # of the path before the wildcard and compare it to the same level of each\n # key. A match means that the field is nested under the secret key and should\n # be obfuscated.\n elif \"*\" in secret_key:\n wildcard_index = secret_key.index(\"*\")\n for data_key in flat_data.keys():\n if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n flat_data[data_key] = obfuscate(flat_data[data_key])\n data = flatdict_to_dict(flat_data)\n\n return cls(\n id=orm_block_document.id,\n created=orm_block_document.created,\n updated=orm_block_document.updated,\n name=orm_block_document.name,\n data=data,\n block_schema_id=orm_block_document.block_schema_id,\n block_schema=orm_block_document.block_schema,\n block_type_id=orm_block_document.block_type_id,\n block_type_name=orm_block_document.block_type_name,\n block_type=orm_block_document.block_type,\n is_anonymous=orm_block_document.is_anonymous,\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference
","text":" Bases: ORMBaseModel
An ORM representation of a block document reference.
Source code in prefect/server/schemas/core.py
class BlockDocumentReference(ORMBaseModel):\n \"\"\"An ORM representation of a block document reference.\"\"\"\n\n parent_block_document_id: UUID = Field(\n default=..., description=\"ID of block document the reference is nested within\"\n )\n parent_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The block document the reference is nested within\"\n )\n reference_block_document_id: UUID = Field(\n default=..., description=\"ID of the nested block document\"\n )\n reference_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The nested block document\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n\n @root_validator\n def validate_parent_and_ref_are_different(cls, values):\n parent_id = values.get(\"parent_block_document_id\")\n ref_id = values.get(\"reference_block_document_id\")\n if parent_id and ref_id and parent_id == ref_id:\n raise ValueError(\n \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n \" the same\"\n )\n return values\n
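For example, the root validator above rejects self-references; a minimal sketch with a hypothetical id (pydantic's ValidationError subclasses ValueError):
from uuid import uuid4\nfrom prefect.server.schemas import core\n\nsame_id = uuid4()\ntry:\n    core.BlockDocumentReference(\n        parent_block_document_id=same_id,\n        reference_block_document_id=same_id,\n        name=\"credentials\",\n    )\nexcept ValueError as exc:\n    print(exc) # parent and reference ids cannot be the same\n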
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema
","text":" Bases: ORMBaseModel
An ORM representation of a block schema.
Source code in prefect/server/schemas/core.py
class BlockSchema(ORMBaseModel):\n \"\"\"An ORM representation of a block schema.\"\"\"\n\n checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n fields: dict = Field(\n default_factory=dict, description=\"The block schema's field schema\"\n )\n block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n capabilities: List[str] = Field(\n default_factory=list,\n description=\"A list of Block capabilities\",\n )\n version: str = Field(\n default=DEFAULT_BLOCK_SCHEMA_VERSION,\n description=\"Human readable identifier for the block schema\",\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference
","text":" Bases: ORMBaseModel
An ORM representation of a block schema reference.
Source code in prefect/server/schemas/core.py
class BlockSchemaReference(ORMBaseModel):\n \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n parent_block_schema_id: UUID = Field(\n default=..., description=\"ID of block schema the reference is nested within\"\n )\n parent_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The block schema the reference is nested within\"\n )\n reference_block_schema_id: UUID = Field(\n default=..., description=\"ID of the nested block schema\"\n )\n reference_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The nested block schema\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType
","text":" Bases: ORMBaseModel
An ORM representation of a block type
Source code in prefect/server/schemas/core.py
class BlockType(ORMBaseModel):\n \"\"\"An ORM representation of a block type\"\"\"\n\n name: str = Field(default=..., description=\"A block type's name\")\n slug: str = Field(default=..., description=\"A block type's slug\")\n logo_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's logo\"\n )\n documentation_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's documentation\"\n )\n description: Optional[str] = Field(\n default=None,\n description=\"A short blurb about the corresponding block's intended use\",\n )\n code_example: Optional[str] = Field(\n default=None,\n description=\"A code snippet demonstrating use of the corresponding block\",\n )\n is_protected: bool = Field(\n default=False, description=\"Protected block types cannot be modified via API.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit
","text":" Bases: ORMBaseModel
An ORM representation of a concurrency limit.
Source code in prefect/server/schemas/core.py
class ConcurrencyLimit(ORMBaseModel):\n \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n tag: str = Field(\n default=..., description=\"A tag the concurrency limit is applied to.\"\n )\n concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: List[UUID] = Field(\n default_factory=list,\n description=\"A list of active run ids using a concurrency slot\",\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimitV2","title":"ConcurrencyLimitV2
","text":" Bases: ORMBaseModel
An ORM representation of a v2 concurrency limit.
Source code in prefect/server/schemas/core.py
class ConcurrencyLimitV2(ORMBaseModel):\n \"\"\"An ORM representation of a v2 concurrency limit.\"\"\"\n\n active: bool = Field(\n default=True, description=\"Whether the concurrency limit is active.\"\n )\n name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: int = Field(default=0, description=\"The number of active slots.\")\n denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n slot_decay_per_second: float = Field(\n default=0,\n description=\"The decay rate for active slots when used as a rate limit.\",\n )\n avg_slot_occupancy_seconds: float = Field(\n default=2.0, description=\"The average amount of time a slot is occupied.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
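An illustrative sketch (hypothetical name) of a v2 limit used as a rate limit via slot decay:
from prefect.server.schemas.core import ConcurrencyLimitV2\n\n# Non-zero slot decay releases occupied slots over time, i.e. a rate limit\nlimit = ConcurrencyLimitV2(name=\"api-calls\", limit=10, slot_decay_per_second=0.5)\n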
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration
","text":" Bases: ORMBaseModel
An ORM representation of account info.
Source code in prefect/server/schemas/core.py
class Configuration(ORMBaseModel):\n \"\"\"An ORM representation of account info.\"\"\"\n\n key: str = Field(default=..., description=\"Account info key\")\n value: dict = Field(default=..., description=\"Account info\")\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment
","text":" Bases: ORMBaseModel
An ORM representation of deployment data.
Source code in prefect/server/schemas/core.py
class Deployment(ORMBaseModel):\n \"\"\"An ORM representation of deployment data.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the deployment.\")\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"A description for the deployment.\"\n )\n flow_id: UUID = Field(\n default=..., description=\"The flow id associated with the deployment.\"\n )\n schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n default=None, description=\"A schedule for the deployment.\"\n )\n is_schedule_active: bool = Field(\n default=True, description=\"Whether or not the deployment schedule is active.\"\n )\n paused: bool = Field(\n default=False, description=\"Whether or not the deployment is paused.\"\n )\n schedules: List[DeploymentSchedule] = Field(\n default_factory=list, description=\"A list of schedules for the deployment.\"\n )\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n parameters: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n pull_steps: Optional[List[dict]] = Field(\n default=None,\n description=\"Pull steps for cloning and running this deployment.\",\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the deployment\",\n example=[\"tag-1\", \"tag-2\"],\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The work queue for the deployment. If no work queue is set, work will not\"\n \" be scheduled.\"\n ),\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The last time the deployment was polled for status updates.\",\n )\n parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n storage_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining storage used for this flow.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use for flow runs.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this deployment.\",\n )\n updated_by: Optional[UpdatedBy] = Field(\n default=None,\n description=\"Optional information about the updater of this deployment.\",\n )\n work_queue_id: UUID = Field(\n default=None,\n description=(\n \"The id of the work pool queue to which this deployment is assigned.\"\n ),\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
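A minimal construction sketch (hypothetical names); only name and flow_id are required:
from uuid import uuid4\nfrom prefect.server.schemas.core import Deployment\n\ndeployment = Deployment(name=\"etl\", flow_id=uuid4(), tags=[\"prod\"])\n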
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow
","text":" Bases: ORMBaseModel
An ORM representation of flow data.
Source code in prefect/server/schemas/core.py
class Flow(ORMBaseModel):\n \"\"\"An ORM representation of flow data.\"\"\"\n\n name: str = Field(\n default=..., description=\"The name of the flow\", example=\"my-flow\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of flow tags\",\n example=[\"tag-1\", \"tag-2\"],\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun
","text":" Bases: ORMBaseModel
An ORM representation of flow run data.
Source code in prefect/server/schemas/core.py
class FlowRun(ORMBaseModel):\n \"\"\"An ORM representation of flow run data.\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the flow run. Defaults to a random slug if not specified.\"\n ),\n example=\"my-flow-run\",\n )\n flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the flow run's current state.\"\n )\n deployment_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"The id of the deployment associated with this flow run, if available.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None, description=\"The work queue that handled this flow run.\"\n )\n flow_version: Optional[str] = Field(\n default=None,\n description=\"The version of the flow executed in this flow run.\",\n example=\"1.0\",\n )\n parameters: dict = Field(\n default_factory=dict, description=\"Parameters for the flow run.\"\n )\n idempotency_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n \" run is not created multiple times.\"\n ),\n )\n context: dict = Field(\n default_factory=dict,\n description=\"Additional context for the flow run.\",\n example={\"my_var\": \"my_val\"},\n )\n empirical_policy: FlowRunPolicy = Field(\n default_factory=FlowRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags on the flow run\",\n example=[\"tag-1\", \"tag-2\"],\n )\n parent_task_run_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n \" flow used to track subflow state.\"\n ),\n )\n\n state_type: Optional[states.StateType] = Field(\n default=None, description=\"The type of the current flow run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current flow run state.\"\n )\n run_count: int = Field(\n default=0, description=\"The number of times the flow run was executed.\"\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The flow run's expected start time.\",\n )\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the flow run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. If the flow run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of the total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n auto_scheduled: bool = Field(\n default=False,\n description=\"Whether or not the flow run was automatically scheduled.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use this flow run.\",\n )\n infrastructure_pid: Optional[str] = Field(\n default=None,\n description=\"The id of the flow run as returned by an infrastructure block.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this flow run.\",\n )\n work_queue_id: Optional[UUID] = Field(\n default=None, description=\"The id of the run's work pool queue.\"\n )\n\n # relationships\n # flow: Flow = None\n # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n state: Optional[states.State] = Field(\n default=None, description=\"The current state of the flow run.\"\n )\n # parent_task_run: \"TaskRun\" = None\n\n job_variables: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"Variables used as overrides in the base job template\",\n )\n\n @validator(\"name\", pre=True)\n def set_name(cls, name):\n return name or generate_slug(2)\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRun):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
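As the __eq__ override notes, rolling time estimates are ignored when comparing runs; a small sketch:
import datetime\nfrom uuid import uuid4\nfrom prefect.server.schemas.core import FlowRun\n\nrun_a = FlowRun(flow_id=uuid4(), name=\"demo\")\nrun_b = run_a.copy(update={\"estimated_run_time\": datetime.timedelta(seconds=5)})\nassert run_a == run_b # estimated fields are excluded from the comparison\n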
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy
","text":" Bases: ORMBaseModel
An ORM representation of a flow run notification.
Source code in prefect/server/schemas/core.py
class FlowRunNotificationPolicy(ORMBaseModel):\n \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n is_active: bool = Field(\n default=True, description=\"Whether the policy is currently active\"\n )\n state_names: List[str] = Field(\n default=..., description=\"The flow run states that trigger notifications\"\n )\n tags: List[str] = Field(\n default=...,\n description=\"The flow run tags that trigger notifications (set [] to disable)\",\n )\n block_document_id: UUID = Field(\n default=..., description=\"The block document ID used for sending notifications\"\n )\n message_template: Optional[str] = Field(\n default=None,\n description=(\n \"A templatable notification message. Use {braces} to add variables.\"\n \" Valid variables include:\"\n f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n ),\n example=(\n \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n \" {flow_run_state_name}.\"\n ),\n )\n\n @validator(\"message_template\")\n def validate_message_template_variables(cls, v):\n if v is not None:\n try:\n v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n except KeyError as exc:\n raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n return v\n
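An illustrative sketch of a valid policy (hypothetical block document id); an unknown variable in message_template would raise a ValueError:
from uuid import uuid4\nfrom prefect.server.schemas.core import FlowRunNotificationPolicy\n\npolicy = FlowRunNotificationPolicy(\n    state_names=[\"Failed\"],\n    tags=[],\n    block_document_id=uuid4(),\n    message_template=\"Flow run {flow_run_name} entered state {flow_run_state_name}.\",\n)\n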
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy
","text":" Bases: PrefectBaseModel
Defines how a flow run should retry.
Source code in prefect/server/schemas/core.py
class FlowRunPolicy(PrefectBaseModel):\n \"\"\"Defines of how a flow run should retry.\"\"\"\n\n # TODO: Determine how to separate between infrastructure and within-process level\n # retries\n max_retries: int = Field(\n default=0,\n description=(\n \"The maximum number of retries. Field is not used. Please use `retries`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retry_delay_seconds: float = Field(\n default=0,\n description=(\n \"The delay between retries. Field is not used. Please use `retry_delay`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n retry_delay: Optional[int] = Field(\n default=None, description=\"The delay time between retries, in seconds.\"\n )\n pause_keys: Optional[set] = Field(\n default_factory=set, description=\"Tracks pauses this run has observed.\"\n )\n resuming: Optional[bool] = Field(\n default=False, description=\"Indicates if this run is resuming from a pause.\"\n )\n\n @root_validator\n def populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code in prefect/server/schemas/core.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
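A small sketch of the mapping this validator performs when only the deprecated fields are supplied:
from prefect.server.schemas.core import FlowRunPolicy\n\npolicy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)\nassert policy.retries == 3 # copied from the deprecated max_retries\nassert policy.retry_delay == 10 # copied from the deprecated retry_delay_seconds\n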
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunnerSettings","title":"FlowRunnerSettings
","text":" Bases: PrefectBaseModel
An API schema for passing details about the flow runner.
This schema is agnostic to the types and configuration provided by clients
Source code in prefect/server/schemas/core.py
class FlowRunnerSettings(PrefectBaseModel):\n \"\"\"\n An API schema for passing details about the flow runner.\n\n This schema is agnostic to the types and configuration provided by clients\n \"\"\"\n\n type: Optional[str] = Field(\n default=None,\n description=(\n \"The type of the flow runner which can be used by the client for\"\n \" dispatching.\"\n ),\n )\n config: Optional[dict] = Field(\n default=None, description=\"The configuration for the given flow runner type.\"\n )\n\n # The following is required for composite compatibility in the ORM\n\n def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n # Pydantic does not support positional arguments so they must be converted to\n # keyword arguments\n super().__init__(type=type, config=config, **kwargs)\n\n def __composite_values__(self):\n return self.type, self.config\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log
","text":" Bases: ORMBaseModel
An ORM representation of log data.
Source code in prefect/server/schemas/core.py
class Log(ORMBaseModel):\n \"\"\"An ORM representation of log data.\"\"\"\n\n name: str = Field(default=..., description=\"The logger name.\")\n level: int = Field(default=..., description=\"The log level.\")\n message: str = Field(default=..., description=\"The log message.\")\n timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run ID associated with the log.\"\n )\n task_run_id: Optional[UUID] = Field(\n default=None, description=\"The task run ID associated with the log.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter
","text":" Bases: PrefectBaseModel
Filter criteria definition for a work queue.
Source code in prefect/server/schemas/core.py
class QueueFilter(PrefectBaseModel):\n \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n tags: Optional[List[str]] = Field(\n default=None,\n description=\"Only include flow runs with these tags in the work queue.\",\n )\n deployment_ids: Optional[List[UUID]] = Field(\n default=None,\n description=\"Only include flow runs from these deployments in the work queue.\",\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch
","text":" Bases: ORMBaseModel
An ORM representation of saved search data. Represents a set of filter criteria.
Source code in prefect/server/schemas/core.py
class SavedSearch(ORMBaseModel):\n \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the saved search.\")\n filters: List[SavedSearchFilter] = Field(\n default_factory=list, description=\"The filter set for the saved search.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter
","text":" Bases: PrefectBaseModel
A filter for a saved search model. Intended for use by the Prefect UI.
Source code in prefect/server/schemas/core.py
class SavedSearchFilter(PrefectBaseModel):\n \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n object: str = Field(default=..., description=\"The object over which to filter.\")\n property: str = Field(\n default=..., description=\"The property of the object on which to filter.\"\n )\n type: str = Field(default=..., description=\"The type of the property.\")\n operation: str = Field(\n default=...,\n description=\"The operator to apply to the object. For example, `equals`.\",\n )\n value: Any = Field(\n default=..., description=\"A JSON-compatible value for the filter.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun
","text":" Bases: ORMBaseModel
An ORM representation of task run data.
Source code in prefect/server/schemas/core.py
class TaskRun(ORMBaseModel):\n \"\"\"An ORM representation of task run data.\"\"\"\n\n name: str = Field(default_factory=lambda: generate_slug(2), example=\"my-task-run\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run id of the task run.\"\n )\n task_key: str = Field(\n default=..., description=\"A unique identifier for the task being run.\"\n )\n dynamic_key: str = Field(\n default=...,\n description=(\n \"A dynamic key used to differentiate between multiple runs of the same task\"\n \" within the same flow run.\"\n ),\n )\n cache_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional cache key. If a COMPLETED state associated with this cache key\"\n \" is found, the cached COMPLETED state will be used instead of executing\"\n \" the task run.\"\n ),\n )\n cache_expiration: Optional[DateTimeTZ] = Field(\n default=None, description=\"Specifies when the cached state should expire.\"\n )\n task_version: Optional[str] = Field(\n default=None, description=\"The version of the task being run.\"\n )\n empirical_policy: TaskRunPolicy = Field(\n default_factory=TaskRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the task run.\",\n example=[\"tag-1\", \"tag-2\"],\n )\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the current task run state.\"\n )\n task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n default_factory=dict,\n description=(\n \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n ),\n )\n state_type: Optional[states.StateType] = Field(\n default=None, description=\"The type of the current task run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current task run state.\"\n )\n run_count: int = Field(\n default=0, description=\"The number of times the task run has been executed.\"\n )\n flow_run_run_count: int = Field(\n default=0,\n description=(\n \"If the parent flow has retried, this indicates the flow retry this run is\"\n \" associated with.\"\n ),\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The task run's expected start time.\",\n )\n\n # the next scheduled start time will be populated\n # whenever the run is in a scheduled state\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the task run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. If the task run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n\n # relationships\n # flow_run: FlowRun = None\n # subflow_runs: List[FlowRun] = Field(default_factory=list)\n state: Optional[states.State] = Field(\n default=None, description=\"The current task run state.\"\n )\n\n @validator(\"name\", pre=True)\n def set_name(cls, name):\n return name or generate_slug(2)\n\n @validator(\"cache_key\")\n def validate_cache_key_length(cls, cache_key):\n if cache_key and len(cache_key) > PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value():\n raise ValueError(\n \"Cache key exceeded maximum allowed length of\"\n f\" {PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value()} characters.\"\n )\n return cache_key\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool
","text":" Bases: ORMBaseModel
An ORM representation of a work pool
Source code in prefect/server/schemas/core.py
class WorkPool(ORMBaseModel):\n \"\"\"An ORM representation of a work pool\"\"\"\n\n name: str = Field(\n description=\"The name of the work pool.\",\n )\n description: Optional[str] = Field(\n default=None, description=\"A description of the work pool.\"\n )\n type: str = Field(description=\"The work pool type.\")\n base_job_template: Dict[str, Any] = Field(\n default_factory=dict, description=\"The work pool's base job template.\"\n )\n is_paused: bool = Field(\n default=False,\n description=\"Pausing the work pool stops the delivery of all work.\",\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"A concurrency limit for the work pool.\"\n )\n # this required field has a default of None so that the custom validator\n # below will be called and produce a more helpful error message\n default_queue_id: UUID = Field(\n None, description=\"The id of the pool's default queue.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n\n @validator(\"default_queue_id\", always=True)\n def helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id
","text":"Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate
model in that case.
prefect/server/schemas/core.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue
","text":" Bases: ORMBaseModel
An ORM representation of a work queue
Source code in prefect/server/schemas/core.py
class WorkQueue(ORMBaseModel):\n \"\"\"An ORM representation of a work queue\"\"\"\n\n name: str = Field(default=..., description=\"The name of the work queue.\")\n description: Optional[str] = Field(\n default=\"\", description=\"An optional description for the work queue.\"\n )\n is_paused: bool = Field(\n default=False, description=\"Whether or not the work queue is paused.\"\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"An optional concurrency limit for the work queue.\"\n )\n priority: conint(ge=1) = Field(\n default=1,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n # Will be required after a future migration\n work_pool_id: Optional[UUID] = Field(\n default=None, description=\"The work pool with which the queue is associated.\"\n )\n filter: Optional[QueueFilter] = Field(\n default=None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time an agent polled this queue for work.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy
","text":" Bases: PrefectBaseModel
Source code in prefect/server/schemas/core.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n maximum_late_runs: Optional[int] = Field(\n default=0,\n description=(\n \"The maximum number of late runs in the work queue before it is deemed\"\n \" unhealthy. Defaults to `0`.\"\n ),\n )\n maximum_seconds_since_last_polled: Optional[int] = Field(\n default=60,\n description=(\n \"The maximum number of time in seconds elapsed since work queue has been\"\n \" polled before it is deemed unhealthy. Defaults to `60`.\"\n ),\n )\n\n def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n ) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status
","text":"Given empirical information about the state of the work queue, evaluate its health status.
Parameters:
late_runs: the count of late runs for the work queue. (required)
last_polled (Optional[DateTimeTZ]): the last time the work queue was polled, if available. Default: None
Returns:
bool: whether or not the work queue is healthy.
Source code in prefect/server/schemas/core.py
def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
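A minimal usage sketch (pendulum is the same datetime library used by the implementation above):
import pendulum\nfrom prefect.server.schemas.core import WorkQueueHealthPolicy\n\npolicy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)\nassert policy.evaluate_health_status(late_runs_count=0, last_polled=pendulum.now(\"UTC\"))\nassert not policy.evaluate_health_status(late_runs_count=5) # late runs and no poll time\n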
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker
","text":" Bases: ORMBaseModel
An ORM representation of a worker
Source code in prefect/server/schemas/core.py
class Worker(ORMBaseModel):\n \"\"\"An ORM representation of a worker\"\"\"\n\n name: str = Field(description=\"The name of the worker.\")\n work_pool_id: UUID = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n last_heartbeat_time: datetime.datetime = Field(\n None, description=\"The last time the worker process sent a heartbeat.\"\n )\n heartbeat_interval_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to expect between heartbeats sent by the worker.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters
","text":"Schemas that define Prefect REST API filtering operations.
Each filter schema includes logic for transforming itself into a SQL where clause.
ArtifactCollectionFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter artifact collections. Only artifact collections matching all criteria will be returned
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactCollectionFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactCollectionFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.latest_id is not None:\n filters.append(self.latest_id.as_sql_filter(db))\n if self.key is not None:\n filters.append(self.key.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
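An illustrative sketch (hypothetical values) that combines sub-filters; the composite is rendered to a SQL where clause via as_sql_filter:
from prefect.server.schemas import filters\n\nartifact_filter = filters.ArtifactCollectionFilter(\n    key=filters.ArtifactCollectionFilterKey(like_=\"report\"),\n    type=filters.ArtifactCollectionFilterType(any_=[\"table\"]),\n)\n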
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.flow_run_id.
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.key.
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key. Should return all rows in \"\n \"the ArtifactCollection table if specified.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.key.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n if self.exists_ is not None:\n filters.append(\n db.ArtifactCollection.key.isnot(None)\n if self.exists_\n else db.ArtifactCollection.key.is_(None)\n )\n return filters\n
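For example (hypothetical pattern), note the implementation above wraps the like_ value with % on both sides:
from prefect.server.schemas import filters\n\nkey_filter = filters.ArtifactCollectionFilterKey(like_=\"my-artifact\") # matches %my-artifact%\nnon_null_keys = filters.ArtifactCollectionFilterKey(exists_=True) # only non-null keys\n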
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.latest_id.
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.task_run_id.
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.type.
Source code in prefect/server/schemas/filters.py
class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.type.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.
Source code in prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter artifacts. Only artifacts matching all criteria will be returned
Source code in prefect/server/schemas/filters.py
class ArtifactFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n id: Optional[ArtifactFilterId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.key is not None:\n filters.append(self.key.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.flow_run_id
.
prefect/server/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.id
.
prefect/server/schemas/filters.py
class ArtifactFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.key
.
prefect/server/schemas/filters.py
class ArtifactFilterKey(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.key.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n if self.exists_ is not None:\n filters.append(\n db.Artifact.key.isnot(None)\n if self.exists_\n else db.Artifact.key.is_(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.task_run_id
.
prefect/server/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.type
.
prefect/server/schemas/filters.py
class ArtifactFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.type.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.Artifact.type.notin_(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n id: Optional[BlockDocumentFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.id`\"\n )\n is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n # default is to exclude anonymous blocks\n BlockDocumentFilterIsAnonymous(eq_=False),\n description=(\n \"Filter criteria for `BlockDocument.is_anonymous`. \"\n \"Defaults to excluding anonymous blocks.\"\n ),\n )\n block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n )\n name: Optional[BlockDocumentFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.is_anonymous is not None:\n filters.append(self.is_anonymous.as_sql_filter(db))\n if self.block_type_id is not None:\n filters.append(self.block_type_id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.block_type_id
.
prefect/server/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.id
.
prefect/server/schemas/filters.py
class BlockDocumentFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.is_anonymous
.
prefect/server/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter block documents for only those that are or are not anonymous.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.name
.
prefect/server/schemas/filters.py
class BlockDocumentFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of block names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match block names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-block%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.BlockDocument.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter BlockSchemas
Source code inprefect/server/schemas/filters.py
class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter BlockSchemas\"\"\"\n\n block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n )\n block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n )\n id: Optional[BlockSchemaFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.id`\"\n )\n version: Optional[BlockSchemaFilterVersion] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.version`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.block_type_id is not None:\n filters.append(self.block_type_id.as_sql_filter(db))\n if self.block_capabilities is not None:\n filters.append(self.block_capabilities.as_sql_filter(db))\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.version is not None:\n filters.append(self.version.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.block_type_id
.
prefect/server/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.capabilities
prefect/server/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"write-storage\", \"read-storage\"],\n description=(\n \"A list of block capabilities. Block entities will be returned only if an\"\n \" associated block schema has a superset of the defined capabilities.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.id
Source code inprefect/server/schemas/filters.py
class BlockSchemaFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by BlockSchema.id\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockSchema.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.version.
prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"2.0.0\", \"2.1.0\"],\n        description=\"A list of block schema versions.\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.version.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter
","text":" Bases: PrefectFilterBaseModel
Filter BlockTypes
Source code inprefect/server/schemas/filters.py
class BlockTypeFilter(PrefectFilterBaseModel):\n \"\"\"Filter BlockTypes\"\"\"\n\n name: Optional[BlockTypeFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockType.name`\"\n )\n\n slug: Optional[BlockTypeFilterSlug] = Field(\n default=None, description=\"Filter criteria for `BlockType.slug`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.slug is not None:\n filters.append(self.slug.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by BlockType.name
prefect/server/schemas/filters.py
class BlockTypeFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockType.name`\"\"\"\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.like_ is not None:\n filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug
","text":" Bases: PrefectFilterBaseModel
Filter by BlockType.slug
prefect/server/schemas/filters.py
class BlockTypeFilterSlug(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockType.slug`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of slugs to match\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockType.slug.in_(self.any_))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for deployments. Only deployments matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class DeploymentFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n id: Optional[DeploymentFilterId] = Field(\n default=None, description=\"Filter criteria for `Deployment.id`\"\n )\n name: Optional[DeploymentFilterName] = Field(\n default=None, description=\"Filter criteria for `Deployment.name`\"\n )\n paused: Optional[DeploymentFilterPaused] = Field(\n default=None, description=\"Filter criteria for `Deployment.paused`\"\n )\n is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n )\n tags: Optional[DeploymentFilterTags] = Field(\n default=None, description=\"Filter criteria for `Deployment.tags`\"\n )\n work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.paused is not None:\n filters.append(self.paused.as_sql_filter(db))\n if self.is_schedule_active is not None:\n filters.append(self.is_schedule_active.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.work_queue_name is not None:\n filters.append(self.work_queue_name.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.id
.
prefect/server/schemas/filters.py
class DeploymentFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of deployment ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive
","text":" Bases: PrefectFilterBaseModel
Legacy filter on Deployment.is_schedule_active, which is always the opposite of Deployment.paused.
prefect/server/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n \"\"\"Legacy filter to filter by `Deployment.is_schedule_active` which\n is always the opposite of `Deployment.paused`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.Deployment.paused.is_not(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.name
.
prefect/server/schemas/filters.py
class DeploymentFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of deployment names to include\",\n example=[\"my-deployment-1\", \"my-deployment-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused","title":"DeploymentFilterPaused
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.paused
.
prefect/server/schemas/filters.py
class DeploymentFilterPaused(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.paused`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment is/is not paused\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.Deployment.paused.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Deployment.tags
.
prefect/server/schemas/filters.py
class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Deployments will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include deployments without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.work_queue_name
.
prefect/server/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.work_queue_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter","title":"DeploymentScheduleFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class DeploymentScheduleFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.\"\"\"\n\n    active: Optional[DeploymentScheduleFilterActive] = Field(\n        default=None, description=\"Filter criteria for `DeploymentSchedule.active`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.active is not None:\n            filters.append(self.active.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive","title":"DeploymentScheduleFilterActive
","text":" Bases: PrefectFilterBaseModel
Filter by DeploymentSchedule.active
.
prefect/server/schemas/filters.py
class DeploymentScheduleFilterActive(PrefectFilterBaseModel):\n \"\"\"Filter by `DeploymentSchedule.active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.DeploymentSchedule.active.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet
","text":" Bases: PrefectBaseModel
A collection of filters for common objects
Source code inprefect/server/schemas/filters.py
class FilterSet(PrefectBaseModel):\n \"\"\"A collection of filters for common objects\"\"\"\n\n flows: FlowFilter = Field(\n default_factory=FlowFilter, description=\"Filters that apply to flows\"\n )\n flow_runs: FlowRunFilter = Field(\n default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n )\n task_runs: TaskRunFilter = Field(\n default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n )\n deployments: DeploymentFilter = Field(\n default_factory=DeploymentFilter,\n description=\"Filters that apply to deployments\",\n )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for flows. Only flows matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class FlowFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n id: Optional[FlowFilterId] = Field(\n default=None, description=\"Filter criteria for `Flow.id`\"\n )\n deployment: Optional[FlowFilterDeployment] = Field(\n default=None, description=\"Filter criteria for Flow deployments\"\n )\n name: Optional[FlowFilterName] = Field(\n default=None, description=\"Filter criteria for `Flow.name`\"\n )\n tags: Optional[FlowFilterTags] = Field(\n default=None, description=\"Filter criteria for `Flow.tags`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.deployment is not None:\n filters.append(self.deployment.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment","title":"FlowFilterDeployment
","text":" Bases: PrefectOperatorFilterBaseModel
Filter flows by deployment
Source code inprefect/server/schemas/filters.py
class FlowFilterDeployment(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter flows by deployment\"\"\"\n\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flows without deployments\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.is_null_ is not None:\n            deployments_subquery = (\n                sa.select(db.Deployment.flow_id).distinct().subquery()\n            )\n\n            if self.is_null_:\n                filters.append(\n                    db.Flow.id.not_in(sa.select(deployments_subquery.c.flow_id))\n                )\n            else:\n                filters.append(\n                    db.Flow.id.in_(sa.select(deployments_subquery.c.flow_id))\n                )\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Flow.id
.
prefect/server/schemas/filters.py
class FlowFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Flow.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Flow.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Flow.name
.
prefect/server/schemas/filters.py
class FlowFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Flow.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow names to include\",\n example=[\"my-flow-1\", \"my-flow-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Flow.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Flow.tags
.
prefect/server/schemas/filters.py
class FlowFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Flow.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flows will be returned only if their tags are a superset\"\n \" of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flows without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n return filters\n
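A minimal sketch of the superset and null semantics described above; the variable names are hypothetical:

from prefect.server.schemas.filters import FlowFilterTags

# Flows whose tag set contains both "tag-1" and "tag-2"
tagged = FlowFilterTags(all_=["tag-1", "tag-2"])

# Flows with no tags at all
untagged = FlowFilterTags(is_null_=True)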
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter flow runs. Only flow runs matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class FlowRunFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n id: Optional[FlowRunFilterId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.id`\"\n )\n name: Optional[FlowRunFilterName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.name`\"\n )\n tags: Optional[FlowRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `FlowRun.tags`\"\n )\n deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n )\n work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.work_queue_name`\"\n )\n state: Optional[FlowRunFilterState] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state`\"\n )\n flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n )\n start_time: Optional[FlowRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n )\n expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n )\n next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n default=None,\n description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n )\n parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n default=None, description=\"Filter criteria for subflows of the given flow runs\"\n )\n parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n )\n idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n )\n\n def only_filters_on_id(self):\n return (\n self.id is not None\n and (self.id.any_ and not self.id.not_any_)\n and self.name is None\n and self.tags is None\n and self.deployment_id is None\n and self.work_queue_name is None\n and self.state is None\n and self.flow_version is None\n and self.start_time is None\n and self.expected_start_time is None\n and self.next_scheduled_start_time is None\n and self.parent_flow_run_id is None\n and self.parent_task_run_id is None\n and self.idempotency_key is None\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.deployment_id is not None:\n filters.append(self.deployment_id.as_sql_filter(db))\n if self.work_queue_name is not None:\n filters.append(self.work_queue_name.as_sql_filter(db))\n if self.flow_version is not None:\n filters.append(self.flow_version.as_sql_filter(db))\n if self.state is not None:\n filters.append(self.state.as_sql_filter(db))\n if self.start_time is not None:\n filters.append(self.start_time.as_sql_filter(db))\n if self.expected_start_time is not None:\n filters.append(self.expected_start_time.as_sql_filter(db))\n if self.next_scheduled_start_time is not None:\n filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n if self.parent_flow_run_id is not None:\n filters.append(self.parent_flow_run_id.as_sql_filter(db))\n if self.parent_task_run_id is not None:\n filters.append(self.parent_task_run_id.as_sql_filter(db))\n if self.idempotency_key is not None:\n filters.append(self.idempotency_key.as_sql_filter(db))\n\n return filters\n
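As a hedged end-to-end sketch, a composite filter passed to the client. This assumes the server-side schemas are accepted where the client expects filter models; recent releases also expose equivalent models under prefect.client.schemas.filters:

import asyncio

from prefect import get_client
from prefect.server.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
)
from prefect.server.schemas.states import StateType


async def list_failed_runs():
    # Only flow runs whose current state type is FAILED
    flow_run_filter = FlowRunFilter(
        state=FlowRunFilterState(
            type=FlowRunFilterStateType(any_=[StateType.FAILED])
        )
    )
    async with get_client() as client:
        return await client.read_flow_runs(flow_run_filter=flow_run_filter)


failed = asyncio.run(list_failed_runs())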
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.deployment_id
.
prefect/server/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run deployment ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without deployment ids\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.deployment_id.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.deployment_id.is_(None)\n if self.is_null_\n else db.FlowRun.deployment_id.is_not(None)\n )\n return filters\n
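For illustration, a one-line sketch; the variable name is hypothetical:

from prefect.server.schemas.filters import FlowRunFilterDeploymentId

# Runs that were not created from any deployment (ad-hoc runs)
adhoc_runs = FlowRunFilterDeploymentId(is_null_=True)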
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.expected_start_time
.
prefect/server/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or after this time\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.expected_start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.expected_start_time >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.flow_version
.
prefect/server/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run flow_versions to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.flow_version.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.id
.
prefect/server/schemas/filters.py
class FlowRunFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n not_any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.id.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.FlowRun.id.not_in(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.idempotency_key.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.name
.
prefect/server/schemas/filters.py
class FlowRunFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow run names to include\",\n example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.next_scheduled_start_time
.
prefect/server/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time at or before this\"\n \" time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time at or after this\"\n \" time\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for subflows of a given flow run.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterParentFlowRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for subflows of a given flow run\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of parent flow run ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(\n db.FlowRun.id.in_(\n sa.select(db.FlowRun.id)\n .join(\n db.TaskRun,\n sa.and_(\n db.TaskRun.id == db.FlowRun.parent_task_run_id,\n ),\n )\n .where(db.TaskRun.flow_run_id.in_(self.any_))\n )\n )\n return filters\n
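Note from the source above that this matches flow runs whose parent task run belongs to one of the given flow runs, i.e. their subflow runs. A minimal sketch with a hypothetical id:

from uuid import UUID

from prefect.server.schemas.filters import FlowRunFilterParentFlowRunId

parent_id = UUID("00000000-0000-0000-0000-000000000000")  # hypothetical
subflow_runs = FlowRunFilterParentFlowRunId(any_=[parent_id])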
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.parent_task_run_id
.
prefect/server/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parent_task_run_ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without parent_task_run_id\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.parent_task_run_id.is_(None)\n if self.is_null_\n else db.FlowRun.parent_task_run_id.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.start_time
.
prefect/server/schemas/filters.py
class FlowRunFilterStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return flow runs without a start time\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.start_time >= self.after_)\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.start_time.is_(None)\n if self.is_null_\n else db.FlowRun.start_time.is_not(None)\n )\n return filters\n
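A minimal sketch of a time-window query; the DateTimeTZ fields accept timezone-aware datetimes, and the variable names are hypothetical:

from datetime import datetime, timedelta, timezone

from prefect.server.schemas.filters import FlowRunFilterStartTime

# Runs that started within the last 24 hours
recent = FlowRunFilterStartTime(
    after_=datetime.now(timezone.utc) - timedelta(hours=24)
)

# Runs that never started
never_started = FlowRunFilterStartTime(is_null_=True)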
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.state_type
and FlowRun.state_name
.
prefect/server/schemas/filters.py
class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n type: Optional[FlowRunFilterStateType] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state_type`\"\n )\n name: Optional[FlowRunFilterStateName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state_name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.type is not None:\n filters.extend(self.type._get_filter_list(db))\n if self.name is not None:\n filters.extend(self.name._get_filter_list(db))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.state_name
.
prefect/server/schemas/filters.py
class FlowRunFilterStateName(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run state names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.state_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.state_type
.
prefect/server/schemas/filters.py
class FlowRunFilterStateType(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n any_: Optional[List[schemas.states.StateType]] = Field(\n default=None, description=\"A list of flow run state types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.state_type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.tags
.
prefect/server/schemas/filters.py
class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flow runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flow runs without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.work_queue_name
.
prefect/server/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without work queue names\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.work_queue_name.is_(None)\n if self.is_null_\n else db.FlowRun.work_queue_name.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter
","text":" Bases: PrefectFilterBaseModel
Filter FlowRunNotificationPolicies.
Source code inprefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.is_active is not None:\n filters.append(self.is_active.as_sql_filter(db))\n\n return filters\n
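Note that the default shown above is eq_=False, i.e. inactive policies; pass eq_=True explicitly to select active ones. A minimal sketch:

from prefect.server.schemas.filters import (
    FlowRunNotificationPolicyFilter,
    FlowRunNotificationPolicyFilterIsActive,
)

active_policies = FlowRunNotificationPolicyFilter(
    is_active=FlowRunNotificationPolicyFilterIsActive(eq_=True)
)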
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRunNotificationPolicy.is_active
.
prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter notification policies for only those that are or are not active.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter logs. Only logs matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class LogFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n level: Optional[LogFilterLevel] = Field(\n default=None, description=\"Filter criteria for `Log.level`\"\n )\n timestamp: Optional[LogFilterTimestamp] = Field(\n default=None, description=\"Filter criteria for `Log.timestamp`\"\n )\n flow_run_id: Optional[LogFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n )\n task_run_id: Optional[LogFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Log.task_run_id`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.level is not None:\n filters.append(self.level.as_sql_filter(db))\n if self.timestamp is not None:\n filters.append(self.timestamp.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n\n return filters\n
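A hedged sketch composing two criteria; the id is hypothetical, and a filter of this shape is what the client accepts as log_filter in recent releases:

from uuid import UUID

from prefect.server.schemas.filters import (
    LogFilter,
    LogFilterFlowRunId,
    LogFilterLevel,
)

run_id = UUID("00000000-0000-0000-0000-000000000000")  # hypothetical

# WARNING-and-above logs (level >= 30) for a single flow run
log_filter = LogFilter(
    level=LogFilterLevel(ge_=30),
    flow_run_id=LogFilterFlowRunId(any_=[run_id]),
)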
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Log.flow_run_id
.
prefect/server/schemas/filters.py
class LogFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel
","text":" Bases: PrefectFilterBaseModel
Filter by Log.level
.
prefect/server/schemas/filters.py
class LogFilterLevel(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.level`.\"\"\"\n\n ge_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level greater than or equal to this level\",\n example=20,\n )\n\n le_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level less than or equal to this level\",\n example=50,\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.ge_ is not None:\n filters.append(db.Log.level >= self.ge_)\n if self.le_ is not None:\n filters.append(db.Log.level <= self.le_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Log.name
.
prefect/server/schemas/filters.py
class LogFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of log names to include\",\n example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Log.task_run_id
.
prefect/server/schemas/filters.py
class LogFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp
","text":" Bases: PrefectFilterBaseModel
Filter by Log.timestamp
.
prefect/server/schemas/filters.py
class LogFilterTimestamp(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or after this time\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.Log.timestamp <= self.before_)\n if self.after_ is not None:\n filters.append(db.Log.timestamp >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator
","text":" Bases: AutoEnum
Operators for combining filter criteria.
Source code in prefect/server/schemas/filters.py
class Operator(AutoEnum):\n \"\"\"Operators for combining filter criteria.\"\"\"\n\n and_ = AutoEnum.auto()\n or_ = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel
","text":" Bases: PrefectBaseModel
Base model for Prefect filters.
Source code in prefect/server/schemas/filters.py
class PrefectFilterBaseModel(PrefectBaseModel):\n \"\"\"Base model for Prefect filters\"\"\"\n\n class Config:\n extra = \"forbid\"\n\n def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n \"\"\"Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.\"\"\"\n filters = self._get_filter_list(db)\n if not filters:\n return True\n return sa.and_(*filters)\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n \"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n raise NotImplementedError(\"_get_filter_list must be implemented\")\n
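For illustration, a hedged sketch of a custom subclass following the pattern above. FlowFilterCreatedAfter is hypothetical and assumes the ORM flow model exposes a created timestamp column:

from datetime import datetime
from typing import List, Optional

from pydantic import Field

from prefect.server.schemas.filters import PrefectFilterBaseModel


class FlowFilterCreatedAfter(PrefectFilterBaseModel):
    """Hypothetical filter: flows created at or after a given time."""

    after_: Optional[datetime] = Field(
        default=None,
        description="Only include flows created at or after this time",
    )

    def _get_filter_list(self, db: "PrefectDBInterface") -> List:
        filters = []
        if self.after_ is not None:
            # Assumes db.Flow has a `created` timestamp column
            filters.append(db.Flow.created >= self.after_)
        return filters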
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel
","text":" Bases: PrefectFilterBaseModel
Base model for Prefect filters that combines criteria with a user-provided operator.
Source code in prefect/server/schemas/filters.py
class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n operator: Operator = Field(\n default=Operator.and_,\n description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n )\n\n def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n filters = self._get_filter_list(db)\n if not filters:\n return True\n return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
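A minimal sketch of switching the combining operator on an operator-aware filter; the variable name is hypothetical:

from prefect.server.schemas.filters import FlowRunFilterTags, Operator

# The default ANDs the criteria; with `or_`, match runs that are untagged
# OR tagged with both "tag-1" and "tag-2"
tags = FlowRunFilterTags(
    all_=["tag-1", "tag-2"],
    is_null_=True,
    operator=Operator.or_,
)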
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter task runs. Only task runs matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class TaskRunFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n id: Optional[TaskRunFilterId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.id`\"\n )\n name: Optional[TaskRunFilterName] = Field(\n default=None, description=\"Filter criteria for `TaskRun.name`\"\n )\n tags: Optional[TaskRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `TaskRun.tags`\"\n )\n state: Optional[TaskRunFilterState] = Field(\n default=None, description=\"Filter criteria for `TaskRun.state`\"\n )\n start_time: Optional[TaskRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n )\n subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n )\n flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.state is not None:\n filters.append(self.state.as_sql_filter(db))\n if self.start_time is not None:\n filters.append(self.start_time.as_sql_filter(db))\n if self.subflow_runs is not None:\n filters.append(self.subflow_runs.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n\n return filters\n
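A minimal sketch composing two criteria; the id is hypothetical:

from uuid import UUID

from prefect.server.schemas.filters import (
    TaskRunFilter,
    TaskRunFilterFlowRunId,
    TaskRunFilterName,
)

run_id = UUID("00000000-0000-0000-0000-000000000000")  # hypothetical

# Task runs whose name contains "transform", within a single flow run
task_run_filter = TaskRunFilter(
    name=TaskRunFilterName(like_="transform"),
    flow_run_id=TaskRunFilterFlowRunId(any_=[run_id]),
)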
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.flow_run_id
.
prefect/server/schemas/filters.py
class TaskRunFilterFlowRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run flow run ids to include\"\n )\n\n is_null_: bool = Field(\n default=False, description=\"Filter for task runs with None as their flow run id\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.is_null_ is True:\n filters.append(db.TaskRun.flow_run_id.is_(None))\n else:\n if self.any_ is not None:\n filters.append(db.TaskRun.flow_run_id.in_(self.any_))\n return filters\n
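Note from the source above that is_null_=True short-circuits the filter: any_ is ignored when it is set. A one-line sketch with a hypothetical name:

from prefect.server.schemas.filters import TaskRunFilterFlowRunId

# Task runs with no flow run id at all
orphan_task_runs = TaskRunFilterFlowRunId(is_null_=True)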
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.id
.
prefect/server/schemas/filters.py
class TaskRunFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.name
.
prefect/server/schemas/filters.py
class TaskRunFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of task run names to include\",\n example=[\"my-task-run-1\", \"my-task-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n return filters\n
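A minimal sketch of the two matching modes: any_ is exact set membership, while like_ is wrapped as %value% and matched case-insensitively, so it behaves as a contains-match:

```python
from prefect.server.schemas.filters import TaskRunFilterName

# Exact membership: only these two names match.
exact = TaskRunFilterName(any_=["my-task-run-1", "my-task-run-2"])

# Case-insensitive partial match: "marvin" matches "marvin",
# "sad-Marvin", and "marvin-robot".
partial = TaskRunFilterName(like_="marvin")
```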
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.start_time
.
prefect/server/schemas/filters.py
class TaskRunFilterStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return task runs without a start time\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.TaskRun.start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.TaskRun.start_time >= self.after_)\n if self.is_null_ is not None:\n filters.append(\n db.TaskRun.start_time.is_(None)\n if self.is_null_\n else db.TaskRun.start_time.is_not(None)\n )\n return filters\n
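As a sketch of a common use, a time window over TaskRun.start_time with inclusive bounds; pendulum datetimes are assumed here, which satisfy the DateTimeTZ field type:

```python
import pendulum

from prefect.server.schemas.filters import TaskRunFilterStartTime

# Task runs that started within the last 24 hours.
last_day = TaskRunFilterStartTime(
    after_=pendulum.now("UTC").subtract(days=1),
    before_=pendulum.now("UTC"),
)

# Task runs that have no start time yet.
not_started = TaskRunFilterStartTime(is_null_=True)
```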
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.state_type and TaskRun.state_name, via the type and name sub-filters.
prefect/server/schemas/filters.py
class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `TaskRun.type` and `TaskRun.name`.\"\"\"\n\n type: Optional[TaskRunFilterStateType]\n name: Optional[TaskRunFilterStateName]\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.type is not None:\n filters.extend(self.type._get_filter_list(db))\n if self.name is not None:\n filters.extend(self.name._get_filter_list(db))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.state_name
.
prefect/server/schemas/filters.py
class TaskRunFilterStateName(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of task run state names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.state_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.state_type
.
prefect/server/schemas/filters.py
class TaskRunFilterStateType(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n any_: Optional[List[schemas.states.StateType]] = Field(\n default=None, description=\"A list of task run state types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.state_type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.subflow_run
.
prefect/server/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If true, only include task runs that are subflow run parents; if false,\"\n \" exclude parent task runs\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.exists_ is True:\n filters.append(db.TaskRun.subflow_run.has())\n elif self.exists_ is False:\n filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n return filters\n
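A brief sketch of the tri-state exists_ switch; leaving it unset applies no filter at all:

```python
from prefect.server.schemas.filters import TaskRunFilterSubFlowRuns

# Only task runs that are the parent of a subflow run.
parents_only = TaskRunFilterSubFlowRuns(exists_=True)

# Exclude subflow-parent task runs entirely.
no_parents = TaskRunFilterSubFlowRuns(exists_=False)
```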
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.tags
.
prefect/server/schemas/filters.py
class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Task runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include task runs without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n )\n return filters\n
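A minimal sketch of the superset semantics described above: a task run matches all_ only if it carries every listed tag, and is_null_ selects untagged runs:

```python
from prefect.server.schemas.filters import TaskRunFilterTags

# Matches only task runs tagged with BOTH "tag-1" and "tag-2".
tagged = TaskRunFilterTags(all_=["tag-1", "tag-2"])

# Matches only task runs with no tags.
untagged = TaskRunFilterTags(is_null_=True)
```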
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter variables. Only variables matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class VariableFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n id: Optional[VariableFilterId] = Field(\n default=None, description=\"Filter criteria for `Variable.id`\"\n )\n name: Optional[VariableFilterName] = Field(\n default=None, description=\"Filter criteria for `Variable.name`\"\n )\n value: Optional[VariableFilterValue] = Field(\n default=None, description=\"Filter criteria for `Variable.value`\"\n )\n tags: Optional[VariableFilterTags] = Field(\n default=None, description=\"Filter criteria for `Variable.tags`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.value is not None:\n filters.append(self.value.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n return filters\n
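For illustration, a minimal sketch of a compound variable filter; the name pattern and tag below are hypothetical examples, and like_ accepts SQL wildcards as documented on the name sub-filter:

```python
from prefect.server.schemas.filters import (
    VariableFilter,
    VariableFilterName,
    VariableFilterTags,
)

# Variables whose name matches my_variable_% AND that carry the
# "config" tag; compound criteria are ANDed together.
variable_filter = VariableFilter(
    name=VariableFilterName(like_="my_variable_%"),
    tags=VariableFilterTags(all_=["config"]),
)
```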
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.id
.
prefect/server/schemas/filters.py
class VariableFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Variable.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of variable ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.name
.
prefect/server/schemas/filters.py
class VariableFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Variable.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variables names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my_variable_%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Variable.tags
.
prefect/server/schemas/filters.py
class VariableFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Variable.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Variables will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include Variables without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.value
.
prefect/server/schemas/filters.py
class VariableFilterValue(PrefectFilterBaseModel):\n \"\"\"Filter by `Variable.value`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variables value to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable value against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-value-%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.value.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter work pools. Only work pools matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n id: Optional[WorkPoolFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkPool.id`\"\n )\n name: Optional[WorkPoolFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkPool.name`\"\n )\n type: Optional[WorkPoolFilterType] = Field(\n default=None, description=\"Filter criteria for `WorkPool.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
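A minimal sketch of filtering work pools by name and type; the type strings here are illustrative values, not an exhaustive list:

```python
from prefect.server.schemas.filters import (
    WorkPoolFilter,
    WorkPoolFilterName,
    WorkPoolFilterType,
)

# Work pools named "my-work-pool" whose type is one of the listed values.
work_pool_filter = WorkPoolFilter(
    name=WorkPoolFilterName(any_=["my-work-pool"]),
    type=WorkPoolFilterType(any_=["process", "kubernetes"]),
)
```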
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.id
.
prefect/server/schemas/filters.py
class WorkPoolFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.name
.
prefect/server/schemas/filters.py
class WorkPoolFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.type
.
prefect/server/schemas/filters.py
class WorkPoolFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter work queues. Only work queues matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter work queues. Only work queues matching all criteria will be\n returned\"\"\"\n\n id: Optional[WorkQueueFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.id`\"\n )\n\n name: Optional[WorkQueueFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by WorkQueue.id
.
prefect/server/schemas/filters.py
class WorkQueueFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None,\n description=\"A list of work queue ids to include\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkQueue.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by WorkQueue.name
.
prefect/server/schemas/filters.py
class WorkQueueFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"wq-1\", \"wq-2\"],\n )\n\n startswith_: Optional[List[str]] = Field(\n default=None,\n description=(\n \"A list of case-insensitive starts-with matches. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n ),\n example=[\"marvin\", \"Marvin-robot\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkQueue.name.in_(self.any_))\n if self.startswith_ is not None:\n filters.append(\n sa.or_(\n *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n )\n )\n return filters\n
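A short sketch of the prefix matching described above: startswith_ entries are case-insensitive prefix matches ORed together, so "marvin" matches "marvin" and "Marvin-robot" but not "sad-marvin":

```python
from prefect.server.schemas.filters import WorkQueueFilterName

# Queues whose name starts with "marvin" OR "wq-".
queue_names = WorkQueueFilterName(startswith_=["marvin", "wq-"])
```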
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter workers. Only workers matching all criteria will be returned; the only currently supported criterion is Worker.last_heartbeat_time.
prefect/server/schemas/filters.py
class WorkerFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n # default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n # )\n\n last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n default=None,\n description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.last_heartbeat_time is not None:\n filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n return filters\n
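As a usage sketch, selecting "live" workers by recency of heartbeat; pendulum datetimes are assumed, and the five-minute threshold is an arbitrary example:

```python
import pendulum

from prefect.server.schemas.filters import (
    WorkerFilter,
    WorkerFilterLastHeartbeatTime,
)

# Workers whose last heartbeat was within the past five minutes.
live_workers = WorkerFilter(
    last_heartbeat_time=WorkerFilterLastHeartbeatTime(
        after_=pendulum.now("UTC").subtract(minutes=5),
    ),
)
```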
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime
","text":" Bases: PrefectFilterBaseModel
Filter by Worker.last_heartbeat_time
.
prefect/server/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or before this time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or after this time\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.Worker.last_heartbeat_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.Worker.last_heartbeat_time >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId
","text":" Bases: PrefectFilterBaseModel
Filter by Worker.worker_config_id
.
prefect/server/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Worker.worker_config_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses
","text":"Schemas for special responses from the Prefect REST API.
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse
","text":" Bases: ORMBaseModel
prefect/server/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ORMBaseModel):\n name: str = FieldFrom(schemas.core.FlowRun)\n flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n state_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n empirical_policy: FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n state_type: Optional[schemas.states.StateType] = FieldFrom(schemas.core.FlowRun)\n state_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n run_count: int = FieldFrom(schemas.core.FlowRun)\n expected_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n end_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n total_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n estimated_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n estimated_start_time_delta: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n auto_scheduled: bool = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n created_by: Optional[CreatedBy] = FieldFrom(schemas.core.FlowRun)\n work_pool_id: Optional[UUID] = Field(\n default=None,\n description=\"The id of the flow run's work pool.\",\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[schemas.states.State] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n\n @classmethod\n def from_orm(cls, orm_flow_run: \"prefect.server.database.orm_models.ORMFlowRun\"):\n response = super().from_orm(orm_flow_run)\n if orm_flow_run.work_queue:\n response.work_queue_id = orm_flow_run.work_queue.id\n response.work_queue_name = orm_flow_run.work_queue.name\n if orm_flow_run.work_queue.work_pool:\n response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n return response\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRunResponse):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse
","text":" Bases: PrefectBaseModel
Represents a history of aggregation states over an interval.
Source code in prefect/server/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n interval_start: DateTimeTZ = Field(\n default=..., description=\"The start date of the interval.\"\n )\n interval_end: DateTimeTZ = Field(\n default=..., description=\"The end date of the interval.\"\n )\n states: List[HistoryResponseState] = Field(\n default=..., description=\"A list of state histories during the interval.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState
","text":" Bases: PrefectBaseModel
Represents a single state's history over an interval.
Source code in prefect/server/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n \"\"\"Represents a single state's history over an interval.\"\"\"\n\n state_type: schemas.states.StateType = Field(\n default=..., description=\"The state type.\"\n )\n state_name: str = Field(default=..., description=\"The state name.\")\n count_runs: int = Field(\n default=...,\n description=\"The number of runs in the specified state during the interval.\",\n )\n sum_estimated_run_time: datetime.timedelta = Field(\n default=...,\n description=\"The total estimated run time of all runs during the interval.\",\n )\n sum_estimated_lateness: datetime.timedelta = Field(\n default=...,\n description=(\n \"The sum of differences between actual and expected start time during the\"\n \" interval.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult
","text":" Bases: PrefectBaseModel
A container for the output of state orchestration.
Source code in prefect/server/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n \"\"\"\n A container for the output of state orchestration.\n \"\"\"\n\n state: Optional[schemas.states.State]\n status: SetStateStatus\n details: StateResponseDetails\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus
","text":" Bases: AutoEnum
Enumerates return statuses for setting run states.
Source code in prefect/server/schemas/responses.py
class SetStateStatus(AutoEnum):\n \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n ACCEPT = AutoEnum.auto()\n REJECT = AutoEnum.auto()\n ABORT = AutoEnum.auto()\n WAIT = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails
","text":" Bases: PrefectBaseModel
Details associated with an ABORT state transition.
Source code in prefect/server/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n type: Literal[\"abort_details\"] = Field(\n default=\"abort_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was aborted.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails
","text":" Bases: PrefectBaseModel
Details associated with an ACCEPT state transition.
Source code in prefect/server/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n type: Literal[\"accept_details\"] = Field(\n default=\"accept_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails
","text":" Bases: PrefectBaseModel
Details associated with a REJECT state transition.
Source code in prefect/server/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n type: Literal[\"reject_details\"] = Field(\n default=\"reject_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was rejected.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails
","text":" Bases: PrefectBaseModel
Details associated with a WAIT state transition.
Source code in prefect/server/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n type: Literal[\"wait_details\"] = Field(\n default=\"wait_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n delay_seconds: int = Field(\n default=...,\n description=(\n \"The length of time in seconds the client should wait before transitioning\"\n \" states.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition should wait.\"\n )\n
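To show how these response schemas fit together, here is a hedged sketch of dispatching on an OrchestrationResult from a set-state call; the result value is assumed to have been obtained elsewhere, and this is not the official client's handling logic:

```python
import time

from prefect.server.schemas.responses import OrchestrationResult, SetStateStatus

def handle(result: OrchestrationResult) -> None:
    # Dispatch on the orchestration decision for a proposed state transition.
    if result.status == SetStateStatus.ACCEPT:
        print(f"Accepted; new state: {result.state}")
    elif result.status == SetStateStatus.REJECT:
        # StateRejectDetails carries an optional reason.
        print(f"Rejected: {result.details.reason}")
    elif result.status == SetStateStatus.ABORT:
        # StateAbortDetails carries an optional reason.
        print(f"Aborted: {result.details.reason}")
    elif result.status == SetStateStatus.WAIT:
        # StateWaitDetails tells the client how long to wait before retrying.
        time.sleep(result.details.delay_seconds)
```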
"},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules
","text":"Schedule schemas
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule
","text":" Bases: PrefectBaseModel
Cron schedule
NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.
Parameters:

- cron (str): a valid cron string. Required.
- timezone (str): a valid timezone string in IANA tzdata format (for example, America/New_York). Required.
- day_or (bool): Control how croniter handles day and day_of_week entries. Defaults to True, matching cron, which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on the second Friday of a month by setting the days of the month and the weekday.

Source code in prefect/server/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n \"\"\"\n Cron schedule\n\n NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n itself appropriately. Cron's rules for DST are based on schedule times, not\n intervals. This means that an hourly cron schedule will fire on every new\n schedule hour, not every elapsed hour; for example, when clocks are set back\n this will result in a two-hour pause as the schedule will fire *the first\n time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n Longer schedules, such as one that fires at 9am every morning, will\n automatically adjust for DST.\n\n Args:\n cron (str): a valid cron string\n timezone (str): a valid timezone string in IANA tzdata format (for example,\n America/New_York).\n day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n entries. Defaults to True, matching cron which connects those values using\n OR. If the switch is set to False, the values are connected using AND. This\n behaves like fcron and enables you to e.g. define a job that executes each\n 2nd friday of a month by setting the days of month and the weekday.\n\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n cron: str = Field(default=..., example=\"0 0 * * *\")\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n day_or: bool = Field(\n default=True,\n description=(\n \"Control croniter behavior for handling day and day_of_week entries.\"\n ),\n )\n\n @validator(\"timezone\")\n def valid_timezone(cls, v):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n if v and v not in timezones:\n raise ValueError(\n f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n \" America/New_York)\"\n )\n return v\n\n @validator(\"cron\")\n def valid_cron_string(cls, v):\n # croniter allows \"random\" and \"hashed\" expressions\n # which we do not support https://github.com/kiorky/croniter\n if not croniter.is_valid(v):\n raise ValueError(f'Invalid cron string: \"{v}\"')\n elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n raise ValueError(\n f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n )\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to the current date. If a timezone-naive\n datetime is provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): No returned date will exceed this date.\n If a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if start is None:\n start = pendulum.now(\"UTC\")\n\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up to\n # MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n elif self.timezone:\n start = start.in_tz(self.timezone)\n\n # subtract one second from the start date, so that croniter returns it\n # as an event (if it meets the cron criteria)\n start = start.subtract(seconds=1)\n\n # Respect microseconds by rounding up\n if start.microsecond > 0:\n start += datetime.timedelta(seconds=1)\n\n # croniter's DST logic interferes with all other datetime libraries except pytz\n start_localized = pytz.timezone(start.tz.name).localize(\n datetime.datetime(\n year=start.year,\n month=start.month,\n day=start.day,\n hour=start.hour,\n minute=start.minute,\n second=start.second,\n microsecond=start.microsecond,\n )\n )\n start_naive_tz = start.naive()\n\n cron = croniter(self.cron, start_naive_tz, day_or=self.day_or) # type: ignore\n dates = set()\n counter = 0\n\n while True:\n # croniter does not handle DST properly when the start time is\n # in and around when the actual shift occurs. To work around this,\n # we use the naive start time to get the next cron date delta, then\n # add that time to the original scheduling anchor.\n next_time = cron.get_next(datetime.datetime)\n delta = next_time - start_naive_tz\n next_date = pendulum.instance(start_localized + delta)\n\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n
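As a usage sketch of the schema defined above: invalid cron strings, random or hashed expressions, and unknown timezones are all rejected by the validators at construction time:

```python
from prefect.server.schemas.schedules import CronSchedule

# Fires at 09:00 every morning; DST-aware per the note above.
daily = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

# Both of these raise pydantic validation errors:
# CronSchedule(cron="not a cron")                           # invalid cron string
# CronSchedule(cron="0 0 * * *", timezone="Mars/Olympus")   # invalid timezone
```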
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
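A minimal usage sketch (not part of the source above; the cron string and timezone are illustrative) showing how get_dates can preview upcoming occurrences:
import asyncio\nfrom prefect.server.schemas.schedules import CronSchedule\n\n# fire at 09:00 every morning, observing DST in New York\nschedule = CronSchedule(cron=\"0 9 * * *\", timezone=\"America/New_York\")\n\n# get_dates is a coroutine, so drive it with an event loop\ndates = asyncio.run(schedule.get_dates(n=3))\nfor date in dates:\n    print(date.isoformat())\n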
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule
","text":" Bases: PrefectBaseModel
A schedule formed by adding interval
increments to an anchor_date
. If no anchor_date
is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date
, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone
can be provided to determine localization behaviors like DST boundary handling. If none is provided, it will be inferred from the anchor date.
NOTE: If the IntervalSchedule
anchor_date
or timezone
is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
Parameters:
Name Type Description Defaultinterval
timedelta
an interval to schedule on.
requiredanchor_date
DateTimeTZ
an anchor date to schedule increments against; if not provided, the current timestamp will be used.
None timezone
str
a valid timezone string.
None Source code inprefect/server/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n \"\"\"\n A schedule formed by adding `interval` increments to an `anchor_date`. If no\n `anchor_date` is supplied, the current UTC time is used. If a\n timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n anchor dates are always stored as UTC offsets, so a `timezone` can be\n provided to determine localization behaviors like DST boundary handling. If\n none is provided it will be inferred from the anchor date.\n\n NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n DST-observing timezone, then the schedule will adjust itself appropriately.\n Intervals greater than 24 hours will follow DST conventions, while intervals\n of less than 24 hours will follow UTC intervals. For example, an hourly\n schedule will fire every UTC hour, even across DST boundaries. When clocks\n are set back, this will result in two runs that *appear* to both be\n scheduled for 1am local time, even though they are an hour apart in UTC\n time. For longer intervals, like a daily schedule, the interval schedule\n will adjust for DST boundaries so that the clock-hour remains constant. This\n means that a daily schedule that always fires at 9am will observe DST and\n continue to fire at 9am in the local time zone.\n\n Args:\n interval (datetime.timedelta): an interval to schedule on.\n anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n if not provided, the current timestamp will be used.\n timezone (str, optional): a valid timezone string.\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n exclude_none = True\n\n interval: datetime.timedelta\n anchor_date: DateTimeTZ = None\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"interval\")\n def interval_must_be_positive(cls, v):\n if v.total_seconds() <= 0:\n raise ValueError(\"The interval must be positive\")\n return v\n\n @validator(\"anchor_date\", always=True)\n def default_anchor_date(cls, v):\n if v is None:\n return pendulum.now(\"UTC\")\n return pendulum.instance(v)\n\n @validator(\"timezone\", always=True)\n def default_timezone(cls, v, *, values, **kwargs):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n # if was provided, make sure its a valid IANA string\n if v and v not in timezones:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n\n # otherwise infer the timezone from the anchor date\n elif v is None and values.get(\"anchor_date\"):\n tz = values[\"anchor_date\"].tz.name\n if tz in timezones:\n return tz\n # sometimes anchor dates have \"timezones\" that are UTC offsets\n # like \"-04:00\". This happens when parsing ISO8601 strings.\n # In this case we, the correct inferred localization is \"UTC\".\n else:\n return \"UTC\"\n\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. 
If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up to\n # MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n if start is None:\n start = pendulum.now(\"UTC\")\n\n anchor_tz = self.anchor_date.in_tz(self.timezone)\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n # compute the offset between the anchor date and the start date to jump to the\n # next date\n offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n # break the interval into `days` and `seconds` because pendulum\n # will handle DST boundaries properly if days are provided, but not\n # if we add `total seconds`. Therefore, `next_date + self.interval`\n # fails while `next_date.add(days=days, seconds=seconds)` works.\n interval_days = self.interval.days\n interval_seconds = self.interval.total_seconds() - (\n interval_days * 24 * 60 * 60\n )\n\n # daylight saving time boundaries can create a situation where the next date is\n # before the start date, so we advance it if necessary\n while next_date < start:\n next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n counter = 0\n dates = set()\n\n while True:\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n\n next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
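An illustrative sketch (not part of the source above; the anchor date is arbitrary) of an interval schedule that fires every six hours from an explicit anchor:
import asyncio\nimport datetime\n\nimport pendulum\nfrom prefect.server.schemas.schedules import IntervalSchedule\n\nschedule = IntervalSchedule(\n    interval=datetime.timedelta(hours=6),\n    anchor_date=pendulum.datetime(2024, 1, 1, tz=\"America/Chicago\"),\n)\n\n# the next four occurrences on or after the anchor\ndates = asyncio.run(schedule.get_dates(n=4, start=schedule.anchor_date))\n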
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule
","text":" Bases: PrefectBaseModel
RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule
.
RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, weekday or day-of-month adjustments, and more.
Note that as a calendar-oriented standard, RRuleSchedules
are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.
Parameters:
Name Type Description Defaultrrule
str
a valid RRule string
requiredtimezone
str
a valid timezone string
None Source code inprefect/server/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n \"\"\"\n RRule schedule, based on the iCalendar standard\n ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n implemented in `dateutils.rrule`.\n\n RRules are appropriate for any kind of calendar-date manipulation, including\n irregular intervals, repetition, exclusions, week day or day-of-month\n adjustments, and more.\n\n Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n to the initial timezone provided. A 9am daily schedule with a daylight saving\n time-aware start date will maintain a local 9am time through DST boundaries;\n a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n Args:\n rrule (str): a valid RRule string\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n rrule: str\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"rrule\")\n def validate_rrule_str(cls, v):\n # attempt to parse the rrule string as an rrule object\n # this will error if the string is invalid\n try:\n dateutil.rrule.rrulestr(v, cache=True)\n except ValueError as exc:\n # rrules errors are a mix of cryptic and informative\n # so reraise to be clear that the string was invalid\n raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n if len(v) > MAX_RRULE_LENGTH:\n raise ValueError(\n f'Invalid RRule string \"{v[:40]}...\"\\n'\n f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n )\n return v\n\n @classmethod\n def from_rrule(cls, rrule: dateutil.rrule.rrule):\n if isinstance(rrule, dateutil.rrule.rrule):\n if rrule._dtstart.tzinfo is not None:\n timezone = rrule._dtstart.tzinfo.name\n else:\n timezone = \"UTC\"\n return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n if len(unique_timezones) > 1:\n raise ValueError(\n f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n )\n\n if len(unique_dstarts) > 1:\n raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n if unique_dstarts and unique_timezones:\n timezone = dtstarts[0].tzinfo.name\n else:\n timezone = \"UTC\"\n\n rruleset_string = \"\"\n if rrule._rrule:\n rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n if rrule._exrule:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n \"RRULE\", \"EXRULE\"\n )\n if rrule._rdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"RDATE:\" + \",\".join(\n rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n )\n if rrule._exdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"EXDATE:\" + \",\".join(\n exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n )\n return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n else:\n raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n 
kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n\n @validator(\"timezone\", always=True)\n def valid_timezone(cls, v):\n if v and v not in pytz.all_timezones_set:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n elif v is None:\n return \"UTC\"\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to the current date. 
If a timezone-naive\n datetime is provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): No returned date will exceed this date.\n If a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if start is None:\n start = pendulum.now(\"UTC\")\n\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up\n # to MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n dates = set()\n counter = 0\n\n # pass count = None to account for discrepancies with duplicates around DST\n # boundaries\n for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n
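A brief sketch (the RRULE string is illustrative): a 9am weekday schedule expressed in iCalendar syntax:
import asyncio\nfrom prefect.server.schemas.schedules import RRuleSchedule\n\n# 9:00 Monday through Friday, evaluated in the given timezone\nschedule = RRuleSchedule(\n    rrule=\"FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9;BYMINUTE=0\",\n    timezone=\"America/New_York\",\n)\ndates = asyncio.run(schedule.get_dates(n=5))\n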
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule
","text":"Since rrule doesn't properly serialize/deserialize timezones, we localize dates here
Source code inprefect/server/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n
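For instance (a sketch using an arbitrary rule), round-tripping between dateutil objects and this schema with from_rrule and to_rrule:
from dateutil import rrule\nfrom prefect.server.schemas.schedules import RRuleSchedule\n\ndaily = rrule.rrule(rrule.DAILY, count=3)\nschedule = RRuleSchedule.from_rrule(daily)\n\n# to_rrule re-localizes the stored string in the schedule's timezone\nrestored = schedule.to_rrule()\n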
"},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting
","text":"Schemas for sorting Prefect REST API objects.
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort
","text":" Bases: AutoEnum
Defines artifact collection sorting options.
Source code inprefect/server/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n \"\"\"Defines artifact collection sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifact collections\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n \"ID_DESC\": db.ArtifactCollection.id.desc(),\n \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort artifact collections
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifact collections\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n \"ID_DESC\": db.ArtifactCollection.id.desc(),\n \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n }\n return sort_mapping[self.value]\n
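A hypothetical sketch of applying a sort option to a query; the db argument is assumed to be a PrefectDBInterface, as in the method above:
import sqlalchemy as sa\nfrom prefect.server.schemas.sorting import ArtifactCollectionSort\n\ndef latest_collections_query(db):\n    # db is assumed to be a PrefectDBInterface\n    sort = ArtifactCollectionSort.UPDATED_DESC\n    return sa.select(db.ArtifactCollection).order_by(sort.as_sql_sort(db))\n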
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort
","text":" Bases: AutoEnum
Defines artifact sorting options.
Source code inprefect/server/schemas/sorting.py
class ArtifactSort(AutoEnum):\n \"\"\"Defines artifact sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifacts\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Artifact.created.desc(),\n \"UPDATED_DESC\": db.Artifact.updated.desc(),\n \"ID_DESC\": db.Artifact.id.desc(),\n \"KEY_DESC\": db.Artifact.key.desc(),\n \"KEY_ASC\": db.Artifact.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort artifacts
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifacts\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Artifact.created.desc(),\n \"UPDATED_DESC\": db.Artifact.updated.desc(),\n \"ID_DESC\": db.Artifact.id.desc(),\n \"KEY_DESC\": db.Artifact.key.desc(),\n \"KEY_ASC\": db.Artifact.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort
","text":" Bases: AutoEnum
Defines block document sorting options.
Source code inprefect/server/schemas/sorting.py
class BlockDocumentSort(AutoEnum):\n \"\"\"Defines block document sorting options.\"\"\"\n\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n BLOCK_TYPE_AND_NAME_ASC = \"BLOCK_TYPE_AND_NAME_ASC\"\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort block documents\"\"\"\n sort_mapping = {\n \"NAME_DESC\": db.BlockDocument.name.desc(),\n \"NAME_ASC\": db.BlockDocument.name.asc(),\n \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort block documents
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort block documents\"\"\"\n sort_mapping = {\n \"NAME_DESC\": db.BlockDocument.name.desc(),\n \"NAME_ASC\": db.BlockDocument.name.asc(),\n \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort
","text":" Bases: AutoEnum
Defines deployment sorting options.
Source code inprefect/server/schemas/sorting.py
class DeploymentSort(AutoEnum):\n \"\"\"Defines deployment sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort deployments\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Deployment.created.desc(),\n \"UPDATED_DESC\": db.Deployment.updated.desc(),\n \"NAME_ASC\": db.Deployment.name.asc(),\n \"NAME_DESC\": db.Deployment.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort deployments
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort deployments\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Deployment.created.desc(),\n \"UPDATED_DESC\": db.Deployment.updated.desc(),\n \"NAME_ASC\": db.Deployment.name.asc(),\n \"NAME_DESC\": db.Deployment.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort
","text":" Bases: AutoEnum
Defines flow run sorting options.
Source code inprefect/server/schemas/sorting.py
class FlowRunSort(AutoEnum):\n    \"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort flow runs\"\"\"\n        from sqlalchemy.sql.functions import coalesce\n\n        sort_mapping = {\n            \"ID_DESC\": db.FlowRun.id.desc(),\n            \"START_TIME_ASC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).asc(),\n            \"START_TIME_DESC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).desc(),\n            \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.FlowRun.name.asc(),\n            \"NAME_DESC\": db.FlowRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort
","text":" Bases: AutoEnum
Defines flow sorting options.
Source code inprefect/server/schemas/sorting.py
class FlowSort(AutoEnum):\n \"\"\"Defines flow sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort flows\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Flow.created.desc(),\n \"UPDATED_DESC\": db.Flow.updated.desc(),\n \"NAME_ASC\": db.Flow.name.asc(),\n \"NAME_DESC\": db.Flow.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort flows
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort flows\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Flow.created.desc(),\n \"UPDATED_DESC\": db.Flow.updated.desc(),\n \"NAME_ASC\": db.Flow.name.asc(),\n \"NAME_DESC\": db.Flow.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort
","text":" Bases: AutoEnum
Defines log sorting options.
Source code inprefect/server/schemas/sorting.py
class LogSort(AutoEnum):\n    \"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort logs\"\"\"\n        sort_mapping = {\n            \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n            \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort task runs
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort logs\"\"\"\n    sort_mapping = {\n        \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n        \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort
","text":" Bases: AutoEnum
Defines task run sorting options.
Source code inprefect/server/schemas/sorting.py
class TaskRunSort(AutoEnum):\n \"\"\"Defines task run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"ID_DESC\": db.TaskRun.id.desc(),\n \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n \"NAME_ASC\": db.TaskRun.name.asc(),\n \"NAME_DESC\": db.TaskRun.name.desc(),\n \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort task runs
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"ID_DESC\": db.TaskRun.id.desc(),\n \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n \"NAME_ASC\": db.TaskRun.name.asc(),\n \"NAME_DESC\": db.TaskRun.name.desc(),\n \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort
","text":" Bases: AutoEnum
Defines variables sorting options.
Source code inprefect/server/schemas/sorting.py
class VariableSort(AutoEnum):\n \"\"\"Defines variables sorting options.\"\"\"\n\n CREATED_DESC = \"CREATED_DESC\"\n UPDATED_DESC = \"UPDATED_DESC\"\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort variables\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Variable.created.desc(),\n \"UPDATED_DESC\": db.Variable.updated.desc(),\n \"NAME_DESC\": db.Variable.name.desc(),\n \"NAME_ASC\": db.Variable.name.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort variables
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort variables\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Variable.created.desc(),\n \"UPDATED_DESC\": db.Variable.updated.desc(),\n \"NAME_DESC\": db.Variable.name.desc(),\n \"NAME_ASC\": db.Variable.name.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states
","text":"State schemas.
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State
","text":" Bases: StateBaseModel
, Generic[R]
Represents the state of a run.
Source code inprefect/server/schemas/states.py
class State(StateBaseModel, Generic[R]):\n \"\"\"Represents the state of a run.\"\"\"\n\n class Config:\n orm_mode = True\n\n type: StateType\n name: Optional[str] = Field(default=None)\n timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n message: Optional[str] = Field(default=None, example=\"Run started\")\n data: Optional[Any] = Field(\n default=None,\n description=(\n \"Data associated with the state, e.g. a result. \"\n \"Content must be storable as JSON.\"\n ),\n )\n state_details: StateDetails = Field(default_factory=StateDetails)\n\n @classmethod\n def from_orm_without_result(\n cls,\n orm_state: Union[\n \"prefect.server.database.orm_models.ORMFlowRunState\",\n \"prefect.server.database.orm_models.ORMTaskRunState\",\n ],\n with_data: Optional[Any] = None,\n ):\n \"\"\"\n During orchestration, ORM states can be instantiated prior to inserting results\n into the artifact table and the `data` field will not be eagerly loaded. In\n these cases, sqlalchemy will attempt to lazily load the the relationship, which\n will fail when called within a synchronous pydantic method.\n\n This method will construct a `State` object from an ORM model without a loaded\n artifact and attach data passed using the `with_data` argument to the `data`\n field.\n \"\"\"\n\n field_keys = cls.schema()[\"properties\"].keys()\n state_data = {\n field: getattr(orm_state, field, None)\n for field in field_keys\n if field != \"data\"\n }\n state_data[\"data\"] = with_data\n return cls(**state_data)\n\n @validator(\"name\", always=True)\n def default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n\n @root_validator\n def default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n\n def is_scheduled(self) -> bool:\n return self.type == StateType.SCHEDULED\n\n def is_pending(self) -> bool:\n return self.type == StateType.PENDING\n\n def is_running(self) -> bool:\n return self.type == StateType.RUNNING\n\n def is_completed(self) -> bool:\n return self.type == StateType.COMPLETED\n\n def is_failed(self) -> bool:\n return self.type == StateType.FAILED\n\n def is_crashed(self) -> bool:\n return self.type == StateType.CRASHED\n\n def is_cancelled(self) -> bool:\n return self.type == StateType.CANCELLED\n\n def is_cancelling(self) -> bool:\n return self.type == StateType.CANCELLING\n\n def is_final(self) -> bool:\n return self.type in TERMINAL_STATES\n\n def is_paused(self) -> bool:\n return self.type == StateType.PAUSED\n\n def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n \"\"\"\n Copying API models should return an object that could be inserted into the\n database again. 
The 'timestamp' is reset using the default factory.\n \"\"\"\n update = update or {}\n update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n # Backwards compatible `result` handling on the server-side schema\n from prefect.states import State\n\n warnings.warn(\n (\n \"`result` is no longer supported by\"\n \" `prefect.server.schemas.states.State` and will be removed in a future\"\n \" release. When result retrieval is needed, use `prefect.states.State`.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n state = State.parse_obj(self)\n return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n def to_state_create(self):\n # Backwards compatibility for `to_state_create`\n from prefect.client.schemas import State\n\n warnings.warn(\n (\n \"Use of `prefect.server.schemas.states.State` from the client is\"\n \" deprecated and support will be removed in a future release. Use\"\n \" `prefect.states.State` instead.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n state = State.parse_obj(self)\n return state.to_state_create()\n\n def __repr__(self) -> str:\n \"\"\"\n Generates a complete state representation appropriate for introspection\n and debugging, including the result:\n\n `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n \"\"\"\n from prefect.deprecated.data_documents import DataDocument\n\n if isinstance(self.data, DataDocument):\n result = self.data.decode()\n else:\n result = self.data\n\n display = dict(\n message=repr(self.message),\n type=str(self.type.value),\n result=repr(result),\n )\n\n return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n def __str__(self) -> str:\n \"\"\"\n Generates a simple state representation appropriate for logging:\n\n `MyCompletedState(\"my message\", type=COMPLETED)`\n \"\"\"\n\n display = []\n\n if self.message:\n display.append(repr(self.message))\n\n if self.type.value.lower() != self.name.lower():\n display.append(f\"type={self.type.value}\")\n\n return f\"{self.name}({', '.join(display)})\"\n\n def __hash__(self) -> int:\n return hash(\n (\n getattr(self.state_details, \"flow_run_id\", None),\n getattr(self.state_details, \"task_run_id\", None),\n self.timestamp,\n self.type,\n )\n )\n
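A small sketch (values are illustrative) of constructing and inspecting a server-side state:
from prefect.server.schemas.states import State, StateType\n\nstate = State(type=StateType.COMPLETED, message=\"All tasks finished\")\n\nassert state.name == \"Completed\"  # defaulted from the type\nassert state.is_completed()\nassert state.is_final()\n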
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result
classmethod
","text":"During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the data
field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.
This method will construct a State
object from an ORM model without a loaded artifact and attach data passed using the with_data
argument to the data
field.
prefect/server/schemas/states.py
@classmethod\ndef from_orm_without_result(\n    cls,\n    orm_state: Union[\n        \"prefect.server.database.orm_models.ORMFlowRunState\",\n        \"prefect.server.database.orm_models.ORMTaskRunState\",\n    ],\n    with_data: Optional[Any] = None,\n):\n    \"\"\"\n    During orchestration, ORM states can be instantiated prior to inserting results\n    into the artifact table and the `data` field will not be eagerly loaded. In\n    these cases, sqlalchemy will attempt to lazily load the relationship, which\n    will fail when called within a synchronous pydantic method.\n\n    This method will construct a `State` object from an ORM model without a loaded\n    artifact and attach data passed using the `with_data` argument to the `data`\n    field.\n    \"\"\"\n\n    field_keys = cls.schema()[\"properties\"].keys()\n    state_data = {\n        field: getattr(orm_state, field, None)\n        for field in field_keys\n        if field != \"data\"\n    }\n    state_data[\"data\"] = with_data\n    return cls(**state_data)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
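For example (a sketch; a state has no secret fields of its own, so the two outputs coincide here), secret values only appear when explicitly requested:
from prefect.server.schemas.states import Completed\n\nstate = Completed(message=\"done\")\n\n# SecretStr/SecretBytes fields, if any, are obfuscated by default\nredacted = state.json()\nrevealed = state.json(include_secrets=True)\n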
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel
","text":" Bases: IDBaseModel
prefect/server/schemas/states.py
class StateBaseModel(IDBaseModel):\n def orm_dict(\n self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n ) -> dict:\n \"\"\"\n This method is used as a convenience method for constructing fixtues by first\n building a `State` schema object and converting it into an ORM-compatible\n format. Because the `data` field is not writable on ORM states, this method\n omits the `data` field entirely for the purposes of constructing an ORM model.\n If state data is required, an artifact must be created separately.\n \"\"\"\n\n schema_dict = self.dict(\n *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n )\n # remove the data field in order to construct a state ORM model\n schema_dict.pop(\"data\", None)\n return schema_dict\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType
","text":" Bases: AutoEnum
Enumeration of state types.
Source code inprefect/server/schemas/states.py
class StateType(AutoEnum):\n \"\"\"Enumeration of state types.\"\"\"\n\n SCHEDULED = AutoEnum.auto()\n PENDING = AutoEnum.auto()\n RUNNING = AutoEnum.auto()\n COMPLETED = AutoEnum.auto()\n FAILED = AutoEnum.auto()\n CANCELLED = AutoEnum.auto()\n CRASHED = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n CANCELLING = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry
","text":"Convenience function for creating AwaitingRetry
states.
Returns:
Name Type DescriptionState
State
an AwaitingRetry state
Source code inprefect/server/schemas/states.py
def AwaitingRetry(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: an AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled
","text":"Convenience function for creating Cancelled
states.
Returns:
Name Type DescriptionState
State
a Cancelled state
Source code inprefect/server/schemas/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelled` states.\n\n Returns:\n State: a Cancelled state\n \"\"\"\n return cls(type=StateType.CANCELLED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling
","text":"Convenience function for creating Cancelling
states.
Returns:
Name Type DescriptionState
State
a Cancelling state
Source code inprefect/server/schemas/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelling` states.\n\n Returns:\n State: a Cancelling state\n \"\"\"\n return cls(type=StateType.CANCELLING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed
","text":"Convenience function for creating Completed
states.
Returns:
Name Type DescriptionState
State
a Completed state
Source code inprefect/server/schemas/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Completed` states.\n\n Returns:\n State: a Completed state\n \"\"\"\n return cls(type=StateType.COMPLETED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed
","text":"Convenience function for creating Crashed
states.
Returns:
Name Type DescriptionState
State
a Crashed state
Source code inprefect/server/schemas/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Crashed` states.\n\n Returns:\n State: a Crashed state\n \"\"\"\n return cls(type=StateType.CRASHED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed
","text":"Convenience function for creating Failed
states.
Returns:
Name Type DescriptionState
State
a Failed state
Source code inprefect/server/schemas/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Failed` states.\n\n Returns:\n State: a Failed state\n \"\"\"\n return cls(type=StateType.FAILED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late
","text":"Convenience function for creating Late
states.
Returns:
Name Type DescriptionState
State
a Late state
Source code inprefect/server/schemas/states.py
def Late(\n scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Late` states.\n\n Returns:\n State: a Late state\n \"\"\"\n return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused
","text":"Convenience function for creating Paused
states.
Returns:
Name Type DescriptionState
State
a Paused state
Source code inprefect/server/schemas/states.py
def Paused(\n cls: Type[State] = State,\n timeout_seconds: int = None,\n pause_expiration_time: datetime.datetime = None,\n reschedule: bool = False,\n pause_key: str = None,\n **kwargs,\n) -> State:\n \"\"\"Convenience function for creating `Paused` states.\n\n Returns:\n State: a Paused state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n if state_details.pause_timeout:\n raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n if pause_expiration_time is not None and timeout_seconds is not None:\n raise ValueError(\n \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n )\n\n if pause_expiration_time is None and timeout_seconds is None:\n pass\n else:\n state_details.pause_timeout = pause_expiration_time or (\n pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n )\n\n state_details.pause_reschedule = reschedule\n state_details.pause_key = pause_key\n\n return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
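A short sketch (the timeout value is arbitrary): a paused state that expires after five minutes, recorded in state_details.pause_timeout:
from prefect.server.schemas.states import Paused\n\nstate = Paused(timeout_seconds=300)\n\n# passing both timeout_seconds and pause_expiration_time raises ValueError\nprint(state.state_details.pause_timeout)\n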
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending
","text":"Convenience function for creating Pending
states.
Returns:
Name Type DescriptionState
State
a Pending state
Source code inprefect/server/schemas/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Pending` states.\n\n Returns:\n State: a Pending state\n \"\"\"\n return cls(type=StateType.PENDING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying
","text":"Convenience function for creating Retrying
states.
Returns:
Name Type DescriptionState
State
a Retrying state
Source code inprefect/server/schemas/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Retrying` states.\n\n Returns:\n State: a Retrying state\n \"\"\"\n return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running
","text":"Convenience function for creating Running
states.
Returns:
Name Type DescriptionState
State
a Running state
Source code inprefect/server/schemas/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Running` states.\n\n Returns:\n State: a Running state\n \"\"\"\n return cls(type=StateType.RUNNING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled
","text":"Convenience function for creating Scheduled
states.
Returns:
Name Type DescriptionState
State
a Scheduled state
Source code inprefect/server/schemas/states.py
def Scheduled(\n scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Scheduled` states.\n\n Returns:\n State: a Scheduled state\n \"\"\"\n # NOTE: `scheduled_time` must come first for backwards compatibility\n\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n elif state_details.scheduled_time:\n raise ValueError(\"An extra scheduled_time was provided in state_details\")\n state_details.scheduled_time = scheduled_time\n\n return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
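For instance (a sketch with an illustrative time), scheduling a run one hour out stores the time in state_details:
import pendulum\nfrom prefect.server.schemas.states import Scheduled\n\nstate = Scheduled(scheduled_time=pendulum.now(\"UTC\").add(hours=1))\n\nassert state.is_scheduled()\nassert state.state_details.scheduled_time is not None\n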
"},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs
","text":"The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS
.
MarkLateRuns
","text":" Bases: LoopService
A simple loop service responsible for identifying flow runs that are \"late\".
A flow run is defined as \"late\" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.
Source code inprefect/server/services/late_runs.py
class MarkLateRuns(LoopService):\n \"\"\"\n A simple loop service responsible for identifying flow runs that are \"late\".\n\n A flow run is defined as \"late\" if has not scheduled within a certain amount\n of time after its scheduled start time. The exact amount is configurable in\n Prefect REST API Settings.\n \"\"\"\n\n def __init__(self, loop_seconds: float = None, **kwargs):\n super().__init__(\n loop_seconds=loop_seconds\n or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n **kwargs,\n )\n\n # mark runs late if they are this far past their expected start time\n self.mark_late_after: datetime.timedelta = (\n PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n )\n\n # query for this many runs to mark as late at once\n self.batch_size = 400\n\n @inject_db\n async def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Mark flow runs as late by:\n\n - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n \"\"\"\n scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n seconds=self.mark_late_after.total_seconds()\n )\n\n while True:\n async with db.session_context(begin_transaction=True) as session:\n query = self._get_select_late_flow_runs_query(\n scheduled_to_start_before=scheduled_to_start_before, db=db\n )\n\n result = await session.execute(query)\n runs = result.all()\n\n # mark each run as late\n for run in runs:\n await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n # if no runs were found, exit the loop\n if len(runs) < self.batch_size:\n break\n\n self.logger.info(\"Finished monitoring for late runs.\")\n\n @inject_db\n def _get_select_late_flow_runs_query(\n self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n ):\n \"\"\"\n Returns a sqlalchemy query for late flow runs.\n\n Args:\n scheduled_to_start_before: the maximum next scheduled start time of\n scheduled flow runs to consider in the returned query\n \"\"\"\n query = (\n sa.select(\n db.FlowRun.id,\n db.FlowRun.next_scheduled_start_time,\n )\n .where(\n # The next scheduled start time is in the past, including the mark late\n # after buffer\n (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n db.FlowRun.state_type == states.StateType.SCHEDULED,\n db.FlowRun.state_name == \"Scheduled\",\n )\n .limit(self.batch_size)\n )\n return query\n\n async def _mark_flow_run_as_late(\n self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n ) -> None:\n \"\"\"\n Mark a flow run as late.\n\n Pass-through method for overrides.\n \"\"\"\n await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run.id,\n state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n flow_policy=MarkLateRunsPolicy, # type: ignore\n )\n
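A hypothetical sketch (assuming a configured server database) of driving the service manually for a single pass:
import asyncio\nfrom prefect.server.services.late_runs import MarkLateRuns\n\nservice = MarkLateRuns(handle_signals=False)\n\n# run exactly one monitoring loop, then stop\nasyncio.run(service.start(loops=1))\n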
"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once
async
","text":"Mark flow runs as late by:
Late
stateprefect/server/services/late_runs.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Mark flow runs as late by:\n\n - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n \"\"\"\n scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n seconds=self.mark_late_after.total_seconds()\n )\n\n while True:\n async with db.session_context(begin_transaction=True) as session:\n query = self._get_select_late_flow_runs_query(\n scheduled_to_start_before=scheduled_to_start_before, db=db\n )\n\n result = await session.execute(query)\n runs = result.all()\n\n # mark each run as late\n for run in runs:\n await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n # if no runs were found, exit the loop\n if len(runs) < self.batch_size:\n break\n\n self.logger.info(\"Finished monitoring for late runs.\")\n
"},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service
","text":"The base class for all Prefect REST API loop services.
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService
","text":"Loop services are relatively lightweight maintenance routines that need to run periodically.
This class makes it straightforward to design and integrate them. Users only need to define the run_once
coroutine to describe the behavior of the service on each loop.
prefect/server/services/loop_service.py
class LoopService:\n    \"\"\"\n    Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n    This class makes it straightforward to design and integrate them. Users only need to\n    define the `run_once` coroutine to describe the behavior of the service on each loop.\n    \"\"\"\n\n    loop_seconds = 60\n\n    def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n        \"\"\"\n        Args:\n            loop_seconds (float): if provided, overrides the loop interval\n                otherwise specified as a class variable\n            handle_signals (bool): if True (default), SIGINT and SIGTERM are\n                gracefully intercepted and shut down the running service.\n        \"\"\"\n        if loop_seconds:\n            self.loop_seconds = loop_seconds  # seconds between runs\n        self._should_stop = False  # flag for whether the service should stop running\n        self._is_running = False  # flag for whether the service is running\n        self.name = type(self).__name__\n        self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n        if handle_signals:\n            _register_signal(signal.SIGINT, self._stop)\n            _register_signal(signal.SIGTERM, self._stop)\n\n    @inject_db\n    async def _on_start(self, db: PrefectDBInterface) -> None:\n        \"\"\"\n        Called prior to running the service\n        \"\"\"\n        # reset the _should_stop flag\n        self._should_stop = False\n        # set the _is_running flag\n        self._is_running = True\n\n    async def _on_stop(self) -> None:\n        \"\"\"\n        Called after running the service\n        \"\"\"\n        # reset the _is_running flag\n        self._is_running = False\n\n    async def start(self, loops=None) -> None:\n        \"\"\"\n        Run the service `loops` time. Pass loops=None to run forever.\n\n        Args:\n            loops (int, optional): the number of loops to run before exiting.\n        \"\"\"\n\n        await self._on_start()\n\n        i = 0\n        while not self._should_stop:\n            start_time = pendulum.now(\"UTC\")\n\n            try:\n                self.logger.debug(f\"About to run {self.name}...\")\n                await self.run_once()\n\n            except NotImplementedError as exc:\n                raise exc from None\n\n            # if an error is raised, log and continue\n            except Exception as exc:\n                self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n            end_time = pendulum.now(\"UTC\")\n\n            # if service took longer than its loop interval, log a warning\n            # that the interval might be too short\n            if (end_time - start_time).total_seconds() > self.loop_seconds:\n                self.logger.warning(\n                    f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                    \" to run, which is longer than its loop interval of\"\n                    f\" {self.loop_seconds} seconds.\"\n                )\n\n            # check if early stopping was requested\n            i += 1\n            if loops is not None and i == loops:\n                self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n                await self.stop(block=False)\n\n            # next run is every \"loop seconds\" after each previous run *started*.\n            # note that if the loop took unexpectedly long, the \"next_run\" time\n            # might be in the past, which will result in an instant start\n            next_run = max(\n                start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n            )\n            self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n            # check the `_should_stop` flag every 1 seconds until the next run time is reached\n            while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n                await asyncio.sleep(\n                    min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n                )\n\n        await self._on_stop()\n\n    async def stop(self, block=True) -> None:\n        \"\"\"\n        Gracefully stops a running LoopService and optionally blocks until the\n        service stops.\n\n        Args:\n            block (bool): if True, blocks until the service is\n                finished running. Otherwise it requests a stop and returns but\n                the service may still be running a final loop.\n\n        \"\"\"\n        self._stop()\n\n        if block:\n            # if block=True, sleep until the service stops running,\n            # but no more than `loop_seconds` to avoid a deadlock\n            with anyio.move_on_after(self.loop_seconds):\n                while self._is_running:\n                    await asyncio.sleep(0.1)\n\n            # if the service is still running after `loop_seconds`, something's wrong\n            if self._is_running:\n                self.logger.warning(\n                    f\"`stop(block=True)` was called on {self.name} but more than one\"\n                    f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                    \" usually means something is wrong. If `stop()` was called from\"\n                    \" inside the loop service, use `stop(block=False)` instead.\"\n                )\n\n    def _stop(self, *_) -> None:\n        \"\"\"\n        Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n        arguments so it can be used as a signal handler.\n        \"\"\"\n        self._should_stop = True\n\n    async def run_once(self) -> None:\n        \"\"\"\n        Represents one loop of the service.\n\n        Users should override this method.\n\n        To actually run the service once, call `LoopService().start(loops=1)`\n        instead of `LoopService().run_once()`, because this method will not invoke setup\n        and teardown methods properly.\n        \"\"\"\n        raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
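As a sketch of the pattern, a subclass only needs to override run_once; the service name, interval, and log message below are hypothetical:
from prefect.server.services.loop_service import LoopService\n\nclass ExampleService(LoopService):\n    # hypothetical service: do one unit of maintenance work per loop\n    loop_seconds = 30\n\n    async def run_once(self) -> None:\n        self.logger.info(\"ExampleService completed one loop\")\n\n# run a single iteration, including setup and teardown:\n# asyncio.run(ExampleService(handle_signals=False).start(loops=1))\n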
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once
async
","text":"Represents one loop of the service.
Users should override this method.
To actually run the service once, call LoopService().start(loops=1)
instead of LoopService().run_once()
, because this method will not invoke setup and teardown methods properly.
prefect/server/services/loop_service.py
async def run_once(self) -> None:\n \"\"\"\n Represents one loop of the service.\n\n Users should override this method.\n\n To actually run the service once, call `LoopService().start(loops=1)`\n instead of `LoopService().run_once()`, because this method will not invoke setup\n and teardown methods properly.\n \"\"\"\n raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start
async
","text":"Run the service loops
times. Pass loops=None to run forever.
Parameters:
Name Type Description Defaultloops
int
the number of loops to run before exiting.
None
Source code in prefect/server/services/loop_service.py
async def start(self, loops=None) -> None:\n \"\"\"\n Run the service `loops` time. Pass loops=None to run forever.\n\n Args:\n loops (int, optional): the number of loops to run before exiting.\n \"\"\"\n\n await self._on_start()\n\n i = 0\n while not self._should_stop:\n start_time = pendulum.now(\"UTC\")\n\n try:\n self.logger.debug(f\"About to run {self.name}...\")\n await self.run_once()\n\n except NotImplementedError as exc:\n raise exc from None\n\n # if an error is raised, log and continue\n except Exception as exc:\n self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n end_time = pendulum.now(\"UTC\")\n\n # if service took longer than its loop interval, log a warning\n # that the interval might be too short\n if (end_time - start_time).total_seconds() > self.loop_seconds:\n self.logger.warning(\n f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n \" to run, which is longer than its loop interval of\"\n f\" {self.loop_seconds} seconds.\"\n )\n\n # check if early stopping was requested\n i += 1\n if loops is not None and i == loops:\n self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n await self.stop(block=False)\n\n # next run is every \"loop seconds\" after each previous run *started*.\n # note that if the loop took unexpectedly long, the \"next_run\" time\n # might be in the past, which will result in an instant start\n next_run = max(\n start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n )\n self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n # check the `_should_stop` flag every 1 seconds until the next run time is reached\n while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n await asyncio.sleep(\n min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n )\n\n await self._on_stop()\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop
async
","text":"Gracefully stops a running LoopService and optionally blocks until the service stops.
Parameters:
Name Type Description Defaultblock
bool
if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop.
True
Source code in prefect/server/services/loop_service.py
async def stop(self, block=True) -> None:\n \"\"\"\n Gracefully stops a running LoopService and optionally blocks until the\n service stops.\n\n Args:\n block (bool): if True, blocks until the service is\n finished running. Otherwise it requests a stop and returns but\n the service may still be running a final loop.\n\n \"\"\"\n self._stop()\n\n if block:\n # if block=True, sleep until the service stops running,\n # but no more than `loop_seconds` to avoid a deadlock\n with anyio.move_on_after(self.loop_seconds):\n while self._is_running:\n await asyncio.sleep(0.1)\n\n # if the service is still running after `loop_seconds`, something's wrong\n if self._is_running:\n self.logger.warning(\n f\"`stop(block=True)` was called on {self.name} but more than one\"\n f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n \" usually means something is wrong. If `stop()` was called from\"\n \" inside the loop service, use `stop(block=False)` instead.\"\n )\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services
async
","text":"Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.
Source code inprefect/server/services/loop_service.py
async def run_multiple_services(loop_services: List[LoopService]):\n \"\"\"\n Only one signal handler can be active at a time, so this function takes a list\n of loop services and runs all of them with a global signal handler.\n \"\"\"\n\n def stop_all_services(self, *_):\n for service in loop_services:\n service._stop()\n\n signal.signal(signal.SIGINT, stop_all_services)\n signal.signal(signal.SIGTERM, stop_all_services)\n await asyncio.gather(*[service.start() for service in loop_services])\n
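For example, a minimal sketch of running two of the built-in services together; passing handle_signals=False lets each service defer to the global handler registered by run_multiple_services:
import asyncio\n\nfrom prefect.server.services.late_runs import MarkLateRuns\nfrom prefect.server.services.loop_service import run_multiple_services\nfrom prefect.server.services.scheduler import Scheduler\n\nasyncio.run(\n    run_multiple_services(\n        [Scheduler(handle_signals=False), MarkLateRuns(handle_signals=False)]\n    )\n)\n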
"},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler
","text":"The Scheduler service.
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler
","text":" Bases: Scheduler
A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.
Note that scheduling is idempotent, so it's OK for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.
Source code inprefect/server/services/scheduler.py
class RecentDeploymentsScheduler(Scheduler):\n \"\"\"\n A scheduler that only schedules deployments that were updated very recently.\n This scheduler can run on a tight loop and ensure that runs from\n newly-created or updated deployments are rapidly scheduled without having to\n wait for the \"main\" scheduler to complete its loop.\n\n Note that scheduling is idempotent, so its ok for this scheduler to attempt\n to schedule the same deployments as the main scheduler. It's purpose is to\n accelerate scheduling for any deployments that users are interacting with.\n \"\"\"\n\n # this scheduler runs on a tight loop\n loop_seconds = 5\n\n @inject_db\n def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n \"\"\"\n Returns a sqlalchemy query for selecting deployments to schedule\n \"\"\"\n query = (\n sa.select(db.Deployment.id)\n .where(\n sa.and_(\n db.Deployment.paused.is_not(True),\n # use a slightly larger window than the loop interval to pick up\n # any deployments that were created *while* the scheduler was\n # last running (assuming the scheduler takes less than one\n # second to run). Scheduling is idempotent so picking up schedules\n # multiple times is not a concern.\n db.Deployment.updated\n >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n (\n # Only include deployments that have at least one\n # active schedule.\n sa.select(db.DeploymentSchedule.deployment_id)\n .where(\n sa.and_(\n db.DeploymentSchedule.deployment_id == db.Deployment.id,\n db.DeploymentSchedule.active.is_(True),\n )\n )\n .exists()\n ),\n )\n )\n .order_by(db.Deployment.id)\n .limit(self.deployment_batch_size)\n )\n return query\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler
","text":" Bases: LoopService
A loop service that schedules flow runs from deployments.
Source code inprefect/server/services/scheduler.py
class Scheduler(LoopService):\n    \"\"\"\n    A loop service that schedules flow runs from deployments.\n    \"\"\"\n\n    # the main scheduler takes its loop interval from\n    # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n    loop_seconds = None\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=(\n                loop_seconds\n                or self.loop_seconds\n                or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n            ),\n            **kwargs,\n        )\n        self.deployment_batch_size: int = (\n            PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n        )\n        self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n        self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n        self.max_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n        self.min_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n        )\n        self.insert_batch_size = (\n            PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n        )\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n        \"\"\"\n        Schedule flow runs by:\n\n        - Querying for deployments with active schedules\n        - Generating the next set of flow runs based on each deployments schedule\n        - Inserting all scheduled flow runs into the database\n\n        All inserted flow runs are committed to the database at the termination of the\n        loop.\n        \"\"\"\n        total_inserted_runs = 0\n\n        last_id = None\n        while True:\n            async with db.session_context(begin_transaction=False) as session:\n                query = self._get_select_deployments_to_schedule_query()\n\n                # use cursor based pagination\n                if last_id:\n                    query = query.where(db.Deployment.id > last_id)\n\n                result = await session.execute(query)\n                deployment_ids = result.scalars().unique().all()\n\n                # collect runs across all deployments\n                try:\n                    runs_to_insert = await self._collect_flow_runs(\n                        session=session, deployment_ids=deployment_ids\n                    )\n                except TryAgain:\n                    continue\n\n            # bulk insert the runs based on batch size setting\n            for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n                async with db.session_context(begin_transaction=True) as session:\n                    inserted_runs = await self._insert_scheduled_flow_runs(\n                        session=session, runs=batch\n                    )\n                    total_inserted_runs += len(inserted_runs)\n\n            # if this is the last page of deployments, exit the loop\n            if len(deployment_ids) < self.deployment_batch_size:\n                break\n            else:\n                # record the last deployment ID\n                last_id = deployment_ids[-1]\n\n        self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n        \"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule.\n\n        The query gets the IDs of any deployments with:\n\n        - an active schedule\n        - EITHER:\n            - fewer than `min_runs` auto-scheduled runs\n            - OR the max scheduled time is less than `max_scheduled_time` in the future\n        \"\"\"\n        now = pendulum.now(\"UTC\")\n        query = (\n            sa.select(db.Deployment.id)\n            .select_from(db.Deployment)\n            # TODO: on Postgres, this could be replaced with a lateral join that\n            # sorts by `next_scheduled_start_time desc` and limits by\n            # `self.min_runs` for a ~ 50% speedup. At the time of writing,\n            # performance of this universal query appears to be fast enough that\n            # this optimization is not worth maintaining db-specific queries\n            .join(\n                db.FlowRun,\n                # join on matching deployments, only picking up future scheduled runs\n                sa.and_(\n                    db.Deployment.id == db.FlowRun.deployment_id,\n                    db.FlowRun.state_type == StateType.SCHEDULED,\n                    db.FlowRun.next_scheduled_start_time >= now,\n                    db.FlowRun.auto_scheduled.is_(True),\n                ),\n                isouter=True,\n            )\n            .where(\n                sa.and_(\n                    db.Deployment.paused.is_not(True),\n                    (\n                        # Only include deployments that have at least one\n                        # active schedule.\n                        sa.select(db.DeploymentSchedule.deployment_id)\n                        .where(\n                            sa.and_(\n                                db.DeploymentSchedule.deployment_id == db.Deployment.id,\n                                db.DeploymentSchedule.active.is_(True),\n                            )\n                        )\n                        .exists()\n                    ),\n                )\n            )\n            .group_by(db.Deployment.id)\n            # having EITHER fewer than three runs OR runs not scheduled far enough out\n            .having(\n                sa.or_(\n                    sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n                    sa.func.max(db.FlowRun.next_scheduled_start_time)\n                    < now + self.min_scheduled_time,\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n\n    async def _collect_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_ids: List[UUID],\n    ) -> List[Dict]:\n        runs_to_insert = []\n        for deployment_id in deployment_ids:\n            now = pendulum.now(\"UTC\")\n            # guard against erroneously configured schedules\n            try:\n                runs_to_insert.extend(\n                    await self._generate_scheduled_flow_runs(\n                        session=session,\n                        deployment_id=deployment_id,\n                        start_time=now,\n                        end_time=now + self.max_scheduled_time,\n                        min_time=self.min_scheduled_time,\n                        min_runs=self.min_runs,\n                        max_runs=self.max_runs,\n                    )\n                )\n            except Exception:\n                self.logger.exception(\n                    f\"Error scheduling deployment {deployment_id!r}.\",\n                )\n            finally:\n                connection = await session.connection()\n                if connection.invalidated:\n                    # If the error we handled above was the kind of database error that\n                    # causes underlying transaction to rollback and the connection to\n                    # become invalidated, rollback this session. Errors that may cause\n                    # this are connection drops, database restarts, and things of the\n                    # sort.\n                    #\n                    # This rollback _does not rollback a transaction_, since that has\n                    # actually already happened due to the error above. It brings the\n                    # Python session in sync with underlying connection so that when we\n                    # exec the outer with block, the context manager will not attempt to\n                    # commit the session.\n                    #\n                    # Then, raise TryAgain to break out of these nested loops, back to\n                    # the outer loop, where we'll begin a new transaction with\n                    # session.begin() in the next loop iteration.\n                    await session.rollback()\n                    raise TryAgain()\n        return runs_to_insert\n\n    @inject_db\n    async def _generate_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_id: UUID,\n        start_time: datetime.datetime,\n        end_time: datetime.datetime,\n        min_time: datetime.timedelta,\n        min_runs: int,\n        max_runs: int,\n        db: PrefectDBInterface,\n    ) -> List[Dict]:\n        \"\"\"\n        Given a `deployment_id` and schedule params, generates a list of flow run\n        objects and associated scheduled states that represent scheduled flow runs.\n\n        Pass-through method for overrides.\n\n\n        Args:\n            session: a database session\n            deployment_id: the id of the deployment to schedule\n            start_time: the time from which to start scheduling runs\n            end_time: runs will be scheduled until at most this time\n            min_time: runs will be scheduled until at least this far in the future\n            min_runs: a minimum amount of runs to schedule\n            max_runs: a maximum amount of runs to schedule\n\n        This function will generate the minimum number of runs that satisfy the min\n        and max times, and the min and max counts. Specifically, the following order\n        will be respected:\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time + min_time` is reached\n\n        \"\"\"\n        return await models.deployments._generate_scheduled_flow_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            end_time=end_time,\n            min_time=min_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n\n    @inject_db\n    async def _insert_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        runs: List[Dict],\n        db: PrefectDBInterface,\n    ) -> List[UUID]:\n        \"\"\"\n        Given a list of flow runs to schedule, as generated by\n        `_generate_scheduled_flow_runs`, inserts them into the database. Note this is a\n        separate method to facilitate batch operations on many scheduled runs.\n\n        Pass-through method for overrides.\n        \"\"\"\n        return await models.deployments._insert_scheduled_flow_runs(\n            session=session, runs=runs\n        )\n
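The scheduler's loop interval, batch sizes, and scheduling horizons are driven entirely by the settings referenced above; for example, to poll more frequently and keep at least five runs scheduled per deployment (a sketch; the values are illustrative):
prefect config set PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS=30\nprefect config set PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS=5\n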
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once
async
","text":"Schedule flow runs by:
All inserted flow runs are committed to the database at the termination of the loop.
Source code inprefect/server/services/scheduler.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Schedule flow runs by:\n\n - Querying for deployments with active schedules\n - Generating the next set of flow runs based on each deployments schedule\n - Inserting all scheduled flow runs into the database\n\n All inserted flow runs are committed to the database at the termination of the\n loop.\n \"\"\"\n total_inserted_runs = 0\n\n last_id = None\n while True:\n async with db.session_context(begin_transaction=False) as session:\n query = self._get_select_deployments_to_schedule_query()\n\n # use cursor based pagination\n if last_id:\n query = query.where(db.Deployment.id > last_id)\n\n result = await session.execute(query)\n deployment_ids = result.scalars().unique().all()\n\n # collect runs across all deployments\n try:\n runs_to_insert = await self._collect_flow_runs(\n session=session, deployment_ids=deployment_ids\n )\n except TryAgain:\n continue\n\n # bulk insert the runs based on batch size setting\n for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n async with db.session_context(begin_transaction=True) as session:\n inserted_runs = await self._insert_scheduled_flow_runs(\n session=session, runs=batch\n )\n total_inserted_runs += len(inserted_runs)\n\n # if this is the last page of deployments, exit the loop\n if len(deployment_ids) < self.deployment_batch_size:\n break\n else:\n # record the last deployment ID\n last_id = deployment_ids[-1]\n\n self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain
","text":" Bases: Exception
Internal control-flow exception used to retry the Scheduler's main loop
Source code inprefect/server/services/scheduler.py
class TryAgain(Exception):\n \"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
"},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database
","text":"Utilities for interacting with Prefect REST API database and ORM layer.
Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID
","text":" Bases: FunctionElement
Platform-independent UUID default generator. Note the actual functionality for this class is specified in the compiles
-decorated functions below
prefect/server/utilities/database.py
class GenerateUUID(FunctionElement):\n \"\"\"\n Platform-independent UUID default generator.\n Note the actual functionality for this class is specified in the\n `compiles`-decorated functions below\n \"\"\"\n\n name = \"uuid_default\"\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON
","text":" Bases: TypeDecorator
JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.
The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation
Source code inprefect/server/utilities/database.py
class JSON(TypeDecorator):\n \"\"\"\n JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n possible. Uses generic JSON otherwise.\n\n The \"base\" type is postgresql.JSONB to expose useful methods prior\n to SQL compilation\n \"\"\"\n\n impl = postgresql.JSONB\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n elif dialect.name == \"sqlite\":\n return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n else:\n return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n def process_bind_param(self, value, dialect):\n \"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n if not value:\n return value\n\n # PostgreSQL does not support the floating point extrema values `NaN`,\n # `-Infinity`, or `Infinity`\n # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n #\n # SQLite supports storing and retrieving full JSON values that include\n # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n # the value (like `json_extract`) will fail.\n #\n # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n # returned value. See more about `parse_constant` at\n # https://docs.python.org/3/library/json.html#json.load.\n return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic
","text":" Bases: TypeDecorator
A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.
Source code inprefect/server/utilities/database.py
class Pydantic(TypeDecorator):\n \"\"\"\n A pydantic type that converts inserted parameters to\n json and converts read values to the pydantic type.\n \"\"\"\n\n impl = JSON\n cache_ok = True\n\n def __init__(self, pydantic_type, sa_column_type=None):\n super().__init__()\n self._pydantic_type = pydantic_type\n if sa_column_type is not None:\n self.impl = sa_column_type\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n # parse the value to ensure it complies with the schema\n # (this will raise validation errors if not)\n value = pydantic.parse_obj_as(self._pydantic_type, value)\n # sqlalchemy requires the bind parameter's value to be a python-native\n # collection of JSON-compatible objects. we achieve that by dumping the\n # value to a json string using the pydantic JSON encoder and re-parsing\n # it into a python-native form.\n return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n def process_result_value(self, value, dialect):\n if value is not None:\n # load the json object into a fully hydrated typed object\n return pydantic.parse_obj_as(self._pydantic_type, value)\n
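A minimal sketch of declaring a column with this type; the model and table here are hypothetical:
import pydantic\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import Pydantic\n\nclass Point(pydantic.BaseModel):\n    x: int\n    y: int\n\n# values are validated against Point, stored as JSON, and re-hydrated on read\npoints = sa.Table(\n    \"points\",\n    sa.MetaData(),\n    sa.Column(\"coords\", Pydantic(Point)),\n)\n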
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp
","text":" Bases: TypeDecorator
TypeDecorator that ensures that timestamps have a timezone.
For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.
Source code inprefect/server/utilities/database.py
class Timestamp(TypeDecorator):\n \"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n For SQLite, all timestamps are converted to UTC (since they are stored\n as naive timestamps without timezones) and recovered as UTC.\n \"\"\"\n\n impl = sa.TIMESTAMP(timezone=True)\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n elif dialect.name == \"sqlite\":\n return dialect.type_descriptor(\n sqlite.DATETIME(\n # SQLite is very particular about datetimes, and performs all comparisons\n # as alphanumeric comparisons without regard for actual timestamp\n # semantics or timezones. Therefore, it's important to have uniform\n # and sortable datetime representations. The default is an ISO8601-compatible\n # string with NO time zone and a space (\" \") delimiter between the date\n # and the time. The below settings can be used to add a \"T\" delimiter but\n # will require all other sqlite datetimes to be set similarly, including\n # the custom default value for datetime columns and any handwritten SQL\n # formed with `strftime()`.\n #\n # store with \"T\" separator for time\n # storage_format=(\n # \"%(year)04d-%(month)02d-%(day)02d\"\n # \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n # ),\n # handle ISO 8601 with \"T\" or \" \" as the time separator\n # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n )\n )\n else:\n return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n else:\n if value.tzinfo is None:\n raise ValueError(\"Timestamps must have a timezone.\")\n elif dialect.name == \"sqlite\":\n return pendulum.instance(value).in_timezone(\"UTC\")\n else:\n return value\n\n def process_result_value(self, value, dialect):\n # retrieve timestamps in their native timezone (or UTC)\n if value is not None:\n return pendulum.instance(value).in_timezone(\"utc\")\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID
","text":" Bases: TypeDecorator
Platform-independent UUID type.
Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.
Source code inprefect/server/utilities/database.py
class UUID(TypeDecorator):\n \"\"\"\n Platform-independent UUID type.\n\n Uses PostgreSQL's UUID type, otherwise uses\n CHAR(36), storing as stringified hex values with\n hyphens.\n \"\"\"\n\n impl = TypeEngine\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.UUID())\n else:\n return dialect.type_descriptor(CHAR(36))\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n elif dialect.name == \"postgresql\":\n return str(value)\n elif isinstance(value, uuid.UUID):\n return str(value)\n else:\n return str(uuid.UUID(value))\n\n def process_result_value(self, value, dialect):\n if value is None:\n return value\n else:\n if not isinstance(value, uuid.UUID):\n value = uuid.UUID(value)\n return value\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add
","text":" Bases: FunctionElement
Platform-independent way to add a date and an interval.
Source code inprefect/server/utilities/database.py
class date_add(FunctionElement):\n \"\"\"\n Platform-independent way to add a date and an interval.\n \"\"\"\n\n type = Timestamp()\n name = \"date_add\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, dt, interval):\n self.dt = dt\n self.interval = interval\n super().__init__()\n
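For instance, a sketch of building a portable "one hour from now" expression:
import datetime\n\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import date_add, now\n\n# compiles to the appropriate datetime arithmetic on both SQLite and Postgres\none_hour_from_now = date_add(now(), datetime.timedelta(hours=1))\nquery = sa.select(one_hour_from_now)\n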
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff
","text":" Bases: FunctionElement
Platform-independent difference of dates. Computes d1 - d2.
Source code inprefect/server/utilities/database.py
class date_diff(FunctionElement):\n \"\"\"\n Platform-independent difference of dates. Computes d1 - d2.\n \"\"\"\n\n type = sa.Interval()\n name = \"date_diff\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, d1, d2):\n self.d1 = d1\n self.d2 = d2\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add
","text":" Bases: FunctionElement
Platform-independent way to add two intervals.
Source code inprefect/server/utilities/database.py
class interval_add(FunctionElement):\n \"\"\"\n Platform-independent way to add two intervals.\n \"\"\"\n\n type = sa.Interval()\n name = \"interval_add\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, i1, i2):\n self.i1 = i1\n self.i2 = i2\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains
","text":" Bases: FunctionElement
Platform independent json_contains operator, tests if the left
expression contains the right
expression.
On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_contains(FunctionElement):\n \"\"\"\n Platform independent json_contains operator, tests if the\n `left` expression contains the `right` expression.\n\n On postgres this is equivalent to the @> containment operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_contains\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, left, right):\n self.left = left\n self.right = right\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys
","text":" Bases: FunctionElement
Platform independent json_has_all_keys operator.
On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_has_all_keys(FunctionElement):\n \"\"\"Platform independent json_has_all_keys operator.\n\n On postgres this is equivalent to the ?& existence operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_has_all_keys\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, json_expr, values: List):\n self.json_expr = json_expr\n if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n raise ValueError(\n \"json_has_all_key values must be strings if provided as a literal list\"\n )\n self.values = values\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key
","text":" Bases: FunctionElement
Platform independent json_has_any_key operator.
On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_has_any_key(FunctionElement):\n \"\"\"\n Platform independent json_has_any_key operator.\n\n On postgres this is equivalent to the ?| existence operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_has_any_key\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, json_expr, values: List):\n self.json_expr = json_expr\n if not all(isinstance(v, str) for v in values):\n raise ValueError(\"json_has_any_key values must be strings\")\n self.values = values\n super().__init__()\n
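A short sketch of using it in a filter; the table here is hypothetical:
import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import JSON, json_has_any_key\n\nruns = sa.Table(\n    \"runs\",\n    sa.MetaData(),\n    sa.Column(\"tags\", JSON),\n)\n\n# select rows whose tags include either \"prod\" or \"staging\"\nquery = sa.select(runs).where(json_has_any_key(runs.c.tags, [\"prod\", \"staging\"]))\n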
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now
","text":" Bases: FunctionElement
Platform-independent \"now\" generator.
Source code inprefect/server/utilities/database.py
class now(FunctionElement):\n \"\"\"\n Platform-independent \"now\" generator.\n \"\"\"\n\n type = Timestamp()\n name = \"now\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = True\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect
","text":"Get the dialect of a session, engine, or connection url.
Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.
Examplefrom prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect.name == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
Source code in prefect/server/utilities/database.py
def get_dialect(\n obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n \"\"\"\n Get the dialect of a session, engine, or connection url.\n\n Primary use case is figuring out whether the Prefect REST API is communicating with\n SQLite or Postgres.\n\n Example:\n ```python\n import prefect.settings\n from prefect.server.utilities.database import get_dialect\n\n dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n if dialect == \"sqlite\":\n print(\"Using SQLite!\")\n else:\n print(\"Using Postgres!\")\n ```\n \"\"\"\n if isinstance(obj, sa.orm.Session):\n url = obj.bind.url\n elif isinstance(obj, sa.engine.Engine):\n url = obj.url\n else:\n url = sa.engine.url.make_url(obj)\n\n return url.get_dialect()\n
"},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas
","text":""},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server
","text":"Utilities for the Prefect REST API server.
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute
","text":" Bases: APIRoute
A FastAPIRoute class which attaches an async stack to requests that exits before a response is returned.
Requests already have request.scope['fastapi_astack']
which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack
.
prefect/server/utilities/server.py
class PrefectAPIRoute(APIRoute):\n \"\"\"\n A FastAPIRoute class which attaches an async stack to requests that exits before\n a response is returned.\n\n Requests already have `request.scope['fastapi_astack']` which is an async stack for\n the full scope of the request. This stack is used for managing contexts of FastAPI\n dependencies. If we want to close a dependency before the request is complete\n (i.e. before returning a response to the user), we need a stack with a different\n scope. This extension adds this stack at `request.state.response_scoped_stack`.\n \"\"\"\n\n def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n default_handler = super().get_route_handler()\n\n async def handle_response_scoped_depends(request: Request) -> Response:\n # Create a new stack scoped to exit before the response is returned\n async with AsyncExitStack() as stack:\n request.state.response_scoped_stack = stack\n response = await default_handler(request)\n\n return response\n\n return handle_response_scoped_depends\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter
","text":" Bases: APIRouter
A base class for Prefect REST API routers.
Source code inprefect/server/utilities/server.py
class PrefectRouter(APIRouter):\n \"\"\"\n A base class for Prefect REST API routers.\n \"\"\"\n\n def __init__(self, **kwargs: Any) -> None:\n kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n super().__init__(**kwargs)\n\n def add_api_route(\n self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n ) -> None:\n \"\"\"\n Add an API route.\n\n For routes that return content and have not specified a `response_model`,\n use return type annotation to infer the response model.\n\n For routes that return No-Content status codes, explicitly set\n a `response_class` to ensure nothing is returned in the response body.\n \"\"\"\n if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n # any routes that return No-Content status codes must\n # explicitly set a response_class that will handle status codes\n # and not return anything in the body\n kwargs[\"response_class\"] = Response\n if kwargs.get(\"response_model\") is None:\n kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n return super().add_api_route(path, endpoint, **kwargs)\n
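A sketch of how a route module can use it; the prefix and handler below are hypothetical:
from prefect.server.utilities.server import PrefectRouter\n\nrouter = PrefectRouter(prefix=\"/example\", tags=[\"Example\"])\n\n@router.get(\"/hello\")\nasync def hello() -> dict:\n    # the response model is inferred from the return annotation\n    return {\"message\": \"hello\"}\n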
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route
","text":"Add an API route.
For routes that return content and have not specified a response_model
, use return type annotation to infer the response model.
For routes that return No-Content status codes, explicitly set a response_class
to ensure nothing is returned in the response body.
prefect/server/utilities/server.py
def add_api_route(\n self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n \"\"\"\n Add an API route.\n\n For routes that return content and have not specified a `response_model`,\n use return type annotation to infer the response model.\n\n For routes that return No-Content status codes, explicitly set\n a `response_class` to ensure nothing is returned in the response body.\n \"\"\"\n if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n # any routes that return No-Content status codes must\n # explicitly set a response_class that will handle status codes\n # and not return anything in the body\n kwargs[\"response_class\"] = Response\n if kwargs.get(\"response_model\") is None:\n kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes
","text":"Generate a set of strings describing the given routes in the format:
<method> <path>. For example, "GET /logs/"
Source code inprefect/server/utilities/server.py
def method_paths_from_routes(routes: Sequence[BaseRoute]) -> Set[str]:\n \"\"\"\n Generate a set of strings describing the given routes in the format: <method> <path>\n\n For example, \"GET /logs/\"\n \"\"\"\n method_paths = set()\n for route in routes:\n if isinstance(route, (APIRoute, StarletteRoute)):\n for method in route.methods:\n method_paths.add(f\"{method} {route.path}\")\n\n return method_paths\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency
","text":"Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.
Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.
Do not use a response-scoped dependency within a FastAPI background task. Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.
Parameters:
Name Type Description Defaultdependency
Callable
An async callable. FastAPI dependencies may still be used.
requiredReturns:
Type DescriptionA wrapped dependency
which will push the dependency
context manager onto
a stack when called.
Source code inprefect/server/utilities/server.py
def response_scoped_dependency(dependency: Callable):\n \"\"\"\n Ensure that this dependency closes before the response is returned to the client. By\n default, FastAPI closes dependencies after sending the response.\n\n Uses an async stack that is exited before the response is returned. This is\n particularly useful for database sessions which must be committed before the client\n can do more work.\n\n NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n Background tasks run after FastAPI sends the response, so a response-scoped\n dependency will already be closed. Use a normal FastAPI dependency instead.\n\n Args:\n dependency: An async callable. FastAPI dependencies may still be used.\n\n Returns:\n A wrapped `dependency` which will push the `dependency` context manager onto\n a stack when called.\n \"\"\"\n signature = inspect.signature(dependency)\n\n async def wrapper(*args, request: Request, **kwargs):\n # Replicate FastAPI behavior of auto-creating a context manager\n if inspect.isasyncgenfunction(dependency):\n context_manager = asynccontextmanager(dependency)\n else:\n context_manager = dependency\n\n # Ensure request is provided if requested\n if \"request\" in signature.parameters:\n kwargs[\"request\"] = request\n\n # Enter the route handler provided stack that is closed before responding,\n # return the value yielded by the wrapped dependency\n return await request.state.response_scoped_stack.enter_async_context(\n context_manager(*args, **kwargs)\n )\n\n # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n # inject the request as a dependency; maintain the old signature so those depends\n # work\n request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n functools.update_wrapper(wrapper, dependency)\n\n if \"request\" not in signature.parameters:\n new_parameters = signature.parameters.copy()\n new_parameters[\"request\"] = request_parameter\n wrapper.__signature__ = signature.replace(\n parameters=tuple(new_parameters.values())\n )\n\n return wrapper\n
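A sketch of the usage pattern; the dependency itself is hypothetical:
from prefect.server.utilities.server import response_scoped_dependency\n\n@response_scoped_dependency\nasync def example_resource():\n    resource = {\"open\": True}  # stand-in for e.g. a database session\n    try:\n        yield resource\n    finally:\n        # runs before the response is returned, unlike a normal dependency\n        resource[\"open\"] = False\n
The wrapped function is then passed to FastAPI's Depends like any other dependency.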
"},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"Prefect Cloud is a hosted workflow application framework that provides all the capabilities of Prefect server plus additional features, such as:
Getting Started with Prefect Cloud
Ready to jump right in and start running with Prefect Cloud? See the Quickstart and follow the instructions on the Cloud tabs to write and deploy your first Prefect Cloud-monitored flow run.
Prefect Cloud includes all the features in the open-source Prefect server plus the following:
Prefect Cloud features
Failed
and Crashed
flow runs into actionable information.When you sign up for Prefect Cloud, an account and a user profile are automatically provisioned for you.
Your profile is the place where you'll manage settings related to yourself as a user, including:
As an account Admin, you will also have access to account settings from the Account Settings page, such as:
As an account Admin you can create a workspace and invite other individuals to your workspace.
Upgrading from a Prefect Cloud Free tier plan to a Pro or Enterprise tier plan enables additional functionality for adding workspaces, managing teams, and running higher volume workloads.
Workspace Admins have the ability to use single sign-on (SSO), set role-based access controls (RBAC), view Audit Logs, and configure service accounts.
Enterprise add custom roles, object-level access control lists, teams, and Directory Sync/SCIM provisioning for SSO.
Prefect Cloud plans for teams of every size
See the Prefect Cloud plans for details on Pro and Enterprise account tiers.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.
Each workspace keeps track of its own:
Prefect Cloud allows you to see your events. Events provide information about the state of your workflows, and can be used as automation triggers.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"Prefect Cloud automations provide additional notification capabilities beyond those in a self-hosted open-source Prefect server. Automations also enable you to create event-driven workflows, toggle resources such as schedules and work pools, and declare incidents.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#incidents","title":"Incidents","text":"Prefect Cloud's incidents help teams identify, rectify, and document issues in mission-critical workflows. Incidents are formal declarations of disruptions to a workspace. With automations), activity in that workspace can be paused when an incident is created and resumed when it is resolved.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#error-summaries","title":"Error summaries","text":"Prefect Cloud error summaries, enabled by Marvin AI, distill the error logs of Failed
and Crashed
flow runs into actionable information. To enable this feature and others powered by Marvin AI, visit the Settings page for your account.
Service accounts enable you to create Prefect Cloud API keys that are not associated with a user account. Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. See the service accounts documentation for more information about creating and managing service accounts.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"Role-based access controls (RBAC) enable you to assign users a role with permissions to perform certain activities within an account or a workspace. See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud account.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"Prefect Cloud's Pro and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can bet set up with identity providers that support OIDC and SAML. Directory Sync and SCIM provisioning is also available with Enterprise plans.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit log","text":"Prefect Cloud's Pro and Enterprise plans offer Audit Logs for compliance and security. Audit logs provide a chronological record of activities performed by users in an account.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server for orchestration and monitoring. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.
Prefect Cloud REST API interactive documentation
Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.
Then follow the steps in the UI to deploy your first Prefect Cloud-monitored flow run. For more details, see the Prefect Quickstart and follow the instructions on the Cloud tabs.
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/cloud-quickstart/","title":"Getting Started with Prefect Cloud","text":"Get started with Prefect Cloud in just a few steps:
To sign in with an existing account or register an account, go to https://app.prefect.cloud/.
You can create an account with any of the following:
A workspace is an isolated environment within Prefect Cloud for your flows and deployments. You can use workspaces to organize or compartmentalize your workflows.
When you register a new account, you'll be prompted to provide a name and description for your workspace.
Note that the Owner setting applies only to users who are members of Prefect Cloud accounts and have permission to create workspaces within account.
Select Create to create the workspace. If you change your mind, select Edit from the options menu to modify the workspace details or to delete it.
The Workspace Settings page for your new workspace displays the commands that enable you to install Prefect and log into Prefect Cloud in a local execution environment.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#install-prefect","title":"Install Prefect","text":"Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.
Open a new terminal session.
Install Prefect in the environment in which you want to execute flow runs.
pip install -U prefect\n
Installation requirements
Prefect requires Python 3.8 or later. If you have any questions about Prefect installations requirements or dependencies in your preferred development environment, check out the Installation documentation.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"Use the prefect cloud login
Prefect CLI command to log into Prefect Cloud from your environment.
prefect cloud login\n
The prefect cloud login
command, used on its own, provides an interactive login experience. Using this command, you may log in with either an API key or through a browser.
? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser \n Paste an API key \nOpening browser...\nWaiting for response...\nAuthenticated with Prefect Cloud! Using workspace 'jeffdc/prod'.\n
If you choose to log in via the browser, Prefect opens a new tab in your default browser and enables you to log in and authenticate the session.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#run-a-flow-with-prefect-cloud","title":"Run a flow with Prefect Cloud","text":"You're all set to run a flow locally, orchestrated with Prefect Cloud.
In your local environment, where you configured the previous steps, create a file named quickstart_flow.py
with the following contents:
from prefect import flow\n\n@flow(log_prints=True)\ndef quickstart_flow():\n print(\"Local quickstart flow is running!\")\n\nif __name__ == \"__main__\":\n quickstart_flow()\n
Now run quickstart_flow.py
. You'll see log messages like this in your terminal, indicating that the flow is running correctly:
17:18:09.863 | INFO | prefect.engine - Created flow run 'fragrant-quetzal' for flow 'quickstart-flow'\n17:18:09.864 | INFO | Flow run 'fragrant-quetzal' - View at https://app.prefect.cloud/account/my_workspace_id/workspace/my_flow_id/flow-runs/flow-run/my_flow_run_id\n17:18:10.010 | INFO | Flow run 'fragrant-quetzal' - Local quickstart flow is running!\n17:18:10.144 | INFO | Flow run 'fragrant-quetzal' - Finished in state Completed()\n
Go to the Flow Runs page in your workspace in Prefect Cloud. You'll see the flow run results right there in Prefect Cloud!
Prefect Cloud automatically tracks flow runs from any local execution environment that is logged into Prefect Cloud.
Select the name of the flow run to see details about this run.
Congratulations! You successfully ran a local flow and, because you're logged into Prefect Cloud, the local flow run results were captured by Prefect Cloud.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#next-steps","title":"Next steps","text":"If you're new to Prefect, learn more about writing and running flows in the Prefect Flows First Steps tutorial. If you're already familiar with flows, try creating a deployment and triggering flow runs with Prefect Cloud by following the Deployments tutorial.
Want to learn more about the features available in Prefect Cloud? Start with the Prefect Cloud Overview.
If you ran into any issues getting your first flow run with Prefect Cloud working, please join our community to ask questions or provide feedback:
Prefect's Slack Community is helpful, friendly, and fast growing - come say hi!
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to
Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.
$ pip install -U prefect\n
prefect cloud login
Prefect CLI command to log into Prefect Cloud from your environment.$ prefect cloud login\n
The prefect cloud login
command, used on its own, provides an interactive login experience. Using this command, you can log in with either an API key or through a browser.
$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n
You can also log in by providing a Prefect Cloud API key that you create.
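For example, a non-interactive login with an API key (the key value is a placeholder):
$ prefect cloud login -k '<my-api-key>'\n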
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"If you need to change which workspace you're syncing with, use the prefect cloud workspace set
Prefect CLI command while logged in, passing the account handle and workspace name.
$ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n
If no workspace is provided, you will be prompted to select one.
Workspace Settings also shows you the prefect cloud workspace set
Prefect CLI command you can use to sync a local execution environment with a given workspace.
You may also use the prefect cloud login
command with the --workspace
or -w
option to set the current workspace.
$ prefect cloud login --workspace \"prefect/my-workspace\"\n
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"You can also manually configure the PREFECT_API_URL
setting to specify the Prefect Cloud API.
For Prefect Cloud, you can configure the PREFECT_API_URL
and PREFECT_API_KEY
settings to authenticate with Prefect Cloud by using an account ID, workspace ID, and API key.
$ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n
When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL
value directly from the page URL.
In this example, we configured PREFECT_API_URL
and PREFECT_API_KEY
in the default profile. You can use prefect profile
CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start
. See Settings for details.
Environment variables
You can also set PREFECT_API_URL
and PREFECT_API_KEY
as you would any other environment variable. See Overriding defaults with environment variables for more information.
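For example, in a Bash-like shell, using the same placeholder values as above:
$ export PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ export PREFECT_API_KEY=\"[API-KEY]\"\n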
See the Flow orchestration with Prefect tutorial for examples.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"This section provides tips that may be helpful if you run into problems using Prefect Cloud.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"Proxies intermediate network requests between a server and a client.
To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx
Python library. httpx
respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.
To enable communication via proxies, simply set the HTTPS_PROXY
and SSL_CERT_FILE
environment variables as appropriate in your execution environment, and the Prefect client will route its requests through the proxy.
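For example, a sketch in a Bash-like shell; the proxy address and certificate bundle path are hypothetical:
$ export HTTPS_PROXY=\"http://proxy.example.com:3128\"\n$ export SSL_CERT_FILE=\"/path/to/internal-ca-bundle.pem\"\n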
See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.
URLs that should be whitelisted for outbound communication in a secure environment include the UI, the API, Authentication, and the current OCSP server:
If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.
Use the prefect config view
CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.
$ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/...' (from profile)\n
Make sure PREFECT_API_URL
is configured to use https://api.prefect.cloud/api/...
.
Make sure PREFECT_API_KEY
is configured to use a valid API key.
You can use the prefect cloud workspace ls
CLI command to view or set the active workspace.
$ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 g-gadflow/g-workspace \u2502\n\u2502 * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n * active workspace\n
You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL
match those shown in the URL bar for your Prefect Cloud workspace.
If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue or provide more information to share with the support channel.
Other tips to help with login difficulties:
None of this worked?
Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.
Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.
Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.
Events enable observability into your data stack via the event feed, and the configuration of Prefect's reactivity via automations.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event specification","text":"Events adhere to a structured specification.
Name Type Required? Description occurred String yes When the event happened event String yes The name of the event that happened resource Object yes The primary Resource this event concerns related Array no A list of additional Resources involved in this event payload Object no An open-ended set of data describing what happened id String yes The client-provided identifier of this event follows String no The ID of an event that is known to have occurred prior to this one.","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-grammar","title":"Event grammar","text":"Generally, events have a consistent and informative grammar - an event describes a resource and an action that the resource took or that was taken on that resource. For example, events emitted by Prefect objects take the form of:
prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event sources","text":"Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect
or prefect-cloud
resource prefix. Events can also be sent to the Prefect events API via an authenticated HTTP request.
The Prefect Python SDK provides an emit_event
function that emits a Prefect event when called. The function can be called inside or outside of a task or flow. Running the following code will emit an event to Prefect Cloud, which will validate and ingest the event data.
from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n print(f\"hi {name}!\")\n emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n
Note that the emit_event
arguments shown above are required: event
represents the name of the event and resource={\"prefect.resource.id\": \"my_string\"}
is the resource id. To get data into an event for use in an automation action, you can specify a dictionary of values for the payload
parameter.
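For example, a minimal sketch that attaches a payload; the event name, resource ID, and payload fields are illustrative:
from prefect.events import emit_event\n\n# The event name, resource ID, and payload fields below are illustrative\nemit_event(\n    event=\"order.processed\",\n    resource={\"prefect.resource.id\": \"acme.order.12345\"},\n    payload={\"item_count\": 3, \"total\": 42.50},\n)\n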
Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.
Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of matching events in the future.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:
prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n
Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:
\"resource\": {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n \"prefect-cloud.action.type\": \"call-webhook\"\n }\n
Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:
\"resource\": {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n \"prefect-cloud.action.type\": \"call-webhook\"\n },\n\"related\": [\n {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n \"prefect.resource.role\": \"automation\",\n \"prefect-cloud.name\": \"webhook_body_demo\",\n \"prefect-cloud.posture\": \"Reactive\"\n }\n]\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"Prefect Cloud provides an interactive dashboard to analyze and take action on events that occurred in your workspace on the event feed page.
The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.
You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"From an event page, you can configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:
The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/incidents/","title":"Incidents","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#overview","title":"Overview","text":"Incidents are a Prefect Cloud feature to help your team manage workflow disruptions. Incidents help you identify, resolve, and document issues with mission-critical workflows. This system enhances operational efficiency by automating the incident management process and providing a centralized platform for collaboration and compliance.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#what-are-incidents","title":"What are incidents?","text":"Incidents are formal declarations of disruptions to a workspace. With automations, activity in a workspace can be paused when an incident is created and resumed when it is resolved.
Incidents vary in nature and severity, ranging from minor glitches to critical system failures. Prefect Cloud enables users to effectively and automatically track and manage these incidents, ensuring minimal impact on operational continuity.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#why-use-incident-management","title":"Why use incident management?","text":"Automated detection and reporting: Incidents can be automatically identified based on specific triggers or manually reported by team members, facilitating prompt response.
Collaborative problem-solving: The platform fosters collaboration, allowing team members to share insights, discuss resolutions, and track contributions.
Comprehensive impact assessment: Users gain insights into the incident's influence on workflows, helping in prioritizing response efforts.
Compliance with incident management processes: Detailed documentation and reporting features support compliance with incident management systems.
Enhanced operational transparency: The system provides a transparent view of both ongoing and resolved incidents, promoting accountability and continuous improvement.
There are several ways to create an incident:
From the Incidents page:
From a flow run, work pool, or block:
Via an automation:
Automations can be used for triggering an incident and for selecting actions to take when an incident is triggered. For example, a work pool status change could trigger the declaration of an incident, or a critical level incident could trigger a notification action.
To automatically take action when an incident is declared, set up a custom trigger that listens for declaration events.
{\n \"match\": {\n \"prefect.resource.id\": \"prefect-cloud.incident.*\"\n },\n \"expect\": [\n \"prefect-cloud.incident.declared\"\n ],\n \"posture\": \"Reactive\",\n \"threshold\": 1,\n \"within\": 0\n}\n
Building custom triggers
To get started with incident automations, you only need to specify two fields in your trigger:
match: The resource emitting your event of interest. You can match on specific resource IDs, use wildcards to match on all resources of a given type, and even match on other resource attributes, like prefect.resource.name
.
expect: The event type to listen for. For example, you could listen for any (or all) of the following event types:
prefect-cloud.incident.declared
prefect-cloud.incident.resolved
prefect-cloud.incident.updated.severity
See Event Triggers for more information on custom triggers, and check out your Event Feed to see the event types emitted by your incidents and other resources (i.e. events that you can react to).
When an incident is declared, any actions you configure, such as pausing work pools or sending notifications, will execute immediately.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#managing-an-incident","title":"Managing an incident","text":"API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.
Prefect Cloud rate limits are subject to change
The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.
Prefect Cloud enforces the following rate limits:
Prefect Cloud limits the flow_runs
, task_runs
, and flows
endpoints and their subroutes at the following levels:
The Prefect Cloud API will return a 429
response with an appropriate Retry-After
header if these limits are triggered.
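If you call the API directly, you can honor that header when retrying. A minimal sketch using httpx, with a hypothetical helper name and placeholder URL and headers:
import time\n\nimport httpx\n\ndef get_with_retry(url: str, headers: dict, max_attempts: int = 5) -> httpx.Response:\n    # Retry 429 responses, sleeping for the duration requested by the Retry-After header\n    response = httpx.get(url, headers=headers)\n    attempts = 1\n    while response.status_code == 429 and attempts < max_attempts:\n        time.sleep(int(response.headers.get(\"Retry-After\", 1)))\n        response = httpx.get(url, headers=headers)\n        attempts += 1\n    return response\n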
Prefect Cloud limits the number of logs accepted:
The Prefect Cloud API will return a 429
response if these limits are triggered.
Prefect Cloud feature
The Flow Run Retention Policy setting is only applicable in Prefect Cloud.
Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy set by your account tier. The policy setting applies to all workspaces owned by the account.
The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.
Flow Run Retention Policy keys on terminal state
Note that, because the Flow Run Retention Policy keys on terminal state, if two flow runs start at the same time but reach terminal states at different times, they will be removed at different times according to when each reached its terminal state.
This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.
If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.
","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/workspaces/","title":"Workspaces","text":"A workspace is a discrete environment within Prefect Cloud for your workflows and blocks. Workspaces are available to Prefect Cloud accounts only.
Workspaces can be used to organize and compartmentalize your workflows. For example, you can use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.
When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.
Select a workspace name in the navigation menu to see all workspaces you can access.
Your list of available workspaces may include:
Workspace-specific features
Each workspace keeps track of its own:
Your user permissions within workspaces may vary. Account admins can assign roles and permissions at the workspace level.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"On the Account Workspaces dropdown or the Workspaces page select the + icon to create a new workspace.
You'll be prompted to configure:
Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"Within a workspace, select Settings -> General to view or edit workspace details.
On this page you can edit workspace details or delete the workspace.
Deleting a workspace
Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-access","title":"Workspace access","text":"Within a Prefect Cloud Pro or Enterprise tier account, Workspace Owners can invite other people to be members and provision service accounts to a workspace. In addition to giving the user access to the workspace, a Workspace Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.
As a Workspace Owner, select Workspaces -> Sharing to manage members and service accounts for the workspace.
If you've previously invited individuals to your account or provisioned service accounts, you'll see them listed here.
To invite someone to an account, select the Members + icon. You can select from a list of existing account members.
Select a Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.
Select Send to initiate the invitation.
To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of configured service accounts. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.
To remove a workspace member or service account, select Remove from the menu on the right side of the user or service account information on this page.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"Workspace transfer enables you to move an existing workspace from one account to another.
Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.
Workspace transfer permissions
Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.
To initiate a workspace transfer between personal accounts, contact support@prefect.io.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"To transfer a workspace, select Settings -> General within the workspace. Then, from the three dot menu in the upper right of the page, select Transfer.
The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account for the workspace on the right.
Workspace transfer impact on accounts
Workspace transfer may impact resource usage and costs for source and target accounts.
When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. The audit log will no longer track activity on the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.
You may also incur new charges in the target account to accommodate the transferred workspace.
The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"User accounts","text":"Sign up for a Prefect Cloud account at app.prefect.cloud.
An individual user can be invited to become a member of other accounts.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User settings","text":"Users can access their personal settings in the profile menu, including:
Users who are part of an account can hold the role of Admin or Member. Admins can invite other users to join the account and manage the account's workspaces and teams.
Admins on Pro and Enterprise tier Prefect Cloud accounts can grant members of the account roles in a workspace, such as Runner or Viewer. Custom roles are available on Enterprise tier accounts.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#api-keys","title":"API keys","text":"API keys enable you to authenticate an environment to work with Prefect Cloud.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#service-accounts","title":"Service accounts","text":"Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#single-sign-on-sso","title":"Single sign-on (SSO)","text":"Pro and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. Enterprise tier accounts provide additional options with directory sync and SCIM provisioning.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#audit-log","title":"Audit log","text":"Audit logs provide a chronological record of activities performed by Prefect Cloud users who are members of an account.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#object-level-access-control-lists-acls","title":"Object-level access control lists (ACLs)","text":"Prefect Cloud's Enterprise plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#teams","title":"Teams","text":"Users of Enterprise tier Prefect Cloud accounts can be added to Teams to simplify access control governance.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"API keys enable you to authenticate a local environment to work with Prefect Cloud.
If you run prefect cloud login
from your CLI, you'll have the choice to authenticate through your browser or by pasting an API key.
If you choose to authenticate through your browser, you'll be directed to an authorization page. After you grant approval to connect, you'll be redirected to the CLI and the API key will be saved to your local Prefect profile.
If you choose to authenticate by pasting an API key, you'll need to create an API key in the Prefect Cloud UI first.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"To create an API key, select the account icon at the bottom-left corner of the UI.
Select API Keys. The page displays a list of previously generated keys and lets you create new API keys or delete keys.
Select the + button to create a new API key. Provide a name for the key and an expiration date.
Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"prefect cloud login -k '<my-api-key>'\n
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"Service accounts are a feature of Prefect Cloud Pro and Enterprise tier plans that enable you to create a Prefect Cloud API key that is not associated with a user account.
Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.
See the service accounts documentation for more information about creating and managing service accounts in Prefect Cloud.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"Prefect Cloud's Pro and Enterprise plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by members in your account, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.
Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud account. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.
Audit logs can be used to identify changes in:
See the Prefect Cloud plan information to learn more about options for supporting audit logs.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"From your Pro or Enterprise account settings page, select the Audit Log page to view audit logs.
Pro and Enterprise account tier admins can view audit logs for:
Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.
Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/object-access-control-lists/","title":"Object Access Control Lists","text":"Prefect Cloud's Enterprise plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace. ACLs are supported for blocks and deployments.
Organization Admins and Workspace Owners can configure access control lists by navigating to an object and clicking manage access. When an ACL is added, all users and service accounts with access to an object via their workspace role will lose access if not explicitly added to the ACL.
ACLs and visibility
Objects not governed by access control lists such as flow runs, flows, and artifacts will be visible to a user within a workspace even if an associated block or deployment has been restricted for that user.
See the Prefect Cloud plans to learn more about options for supporting object-level access control.
","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"Prefect Cloud's Pro and Enterprise tiers allow you to set team member access to the appropriate level within specific workspaces.
Role-based access controls (RBAC) enable you to assign users granular permissions to perform certain activities.
To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, Enterprise account Admins can create custom roles for users.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"Roles give users abilities at either the account level or at the individual workspace level.
The following sections outline the abilities of the built-in, Prefect-defined ac and workspace roles.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#account-level-roles","title":"Account-level roles","text":"The following built-in roles have permissions across an account in Prefect Cloud.
Role Abilities Owner \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Bypass SSO. Admin \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Cannot bypass SSO. Member \u2022 View account profile settings. \u2022 View workspaces I have access to in the account. \u2022 View account members and their roles. \u2022 View service accounts in the account.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"The following built-in roles have permissions within a given workspace in Prefect Cloud.
Role Abilities Viewer \u2022 View flow runs within a workspace. \u2022 View deployments within a workspace. \u2022 View all work pools within a workspace. \u2022 View all blocks within a workspace. \u2022 View all automations within a workspace. \u2022 View workspace handle and description. Runner All Viewer abilities, plus: \u2022 Run deployments within a workspace. Developer All Runner abilities, plus: \u2022 Run flows within a workspace. \u2022 Delete flow runs within a workspace. \u2022 Create, edit, and delete deployments within a workspace. \u2022 Create, edit, and delete work pools within a workspace. \u2022 Create, edit, and delete all blocks and their secrets within a workspace. \u2022 Create, edit, and delete automations within a workspace. \u2022 View all workspace settings. Owner All Developer abilities, plus: \u2022 Add and remove account members, and set their role within a workspace. \u2022 Set the workspace\u2019s default workspace role for all users in the account. \u2022 Set, view, edit workspace settings. Worker The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.
Custom roles can inherit permissions from a built-in role. This enables tweaks to the role to meet your team\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.
Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.
See Role permissions for details of permissions you may set for custom roles.
After you create a new role, it become available in the account Members page and the Workspace Sharing page for you to apply to users.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.
Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.
To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"The following permissions are available for custom roles.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running workers or executing deployment flow runs on remote infrastructure.
Service accounts are non-user accounts that have the following features:
Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting other user accounts.
Select Service Accounts to view, create, or edit service accounts.
Service accounts are created at the account level, but individual workspaces may be shared with the service account. See workspace sharing for more information.
Service account credentials
When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.
Service account roles
Service accounts are created at the account level, and can then be added to workspaces within the account.
A service account may only be a Member of an account. It can never be an account Admin. You may apply any valid workspace-level role to a service account.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"Within your account, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:
Service account roles
A service account may only be a Member of an account. You may apply any valid workspace-level role to a service account when it is added to a workspace.
Select Create to create the new service account.
Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.
You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page.
To delete a service account, select Remove from the menu on the left side of the service account's information.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"Prefect Cloud's Pro and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can bet set up with any identity provider that supports:
When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud account will instead log in and authenticate using your identity provider.
Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing account resources.
See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"Within your account, select the SSO page to enable SSO for users.
If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud and save it.
Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.
Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML
or Open ID Connect
choices instead.
Once you complete SSO configuration, your users will be required to authenticate via your identity provider when accessing account resources, giving you full control over application access.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"Directory sync automatically provisions and de-provisions users for your account.
Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.
When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the app.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#scim-provisioning","title":"SCIM Provisioning","text":"Enterprise accounts have access to SCIM for user provisioning. The SSO tab provides access to enable SCIM provisioning.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/teams/","title":"Teams","text":"Prefect Cloud's Enterprise plan offers team management to simplify access control governance.
Account Admins can configure teams and team membership from the account settings menu by clicking Teams. Teams are composed of users and service accounts. Teams can be added to workspaces or object access control lists just like users and service accounts.
If SCIM is enabled on your account, the set of teams and the users within them is governed by your IDP. Prefect Cloud service accounts, which are not governed by your IDP, can still be added to your existing set of teams.
See the Prefect Cloud plans to learn more about options for supporting teams.
","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"community/","title":"Community","text":"There are many ways to get involved with the Prefect community
Many features specific to Prefect Cloud are in their own navigation subheading.
","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/agents/","title":"Agents","text":"Workers are recommended
Agents are part of the block-based deployment model. Work Pools and Workers simplify the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-overview","title":"Agent overview","text":"Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.
Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL
setting.
It is possible for multiple agent processes to be started for a single work pool. Each agent process sends a unique ID to the server to help disambiguate themselves and let users know how many agents are active.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-options","title":"Agent options","text":"Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.
Configuration parameters you can specify when starting an agent include:
Option Description--api
The API URL for the Prefect server. Default is the value of PREFECT_API_URL
. --hide-welcome
Do not display the startup ASCII art for the agent process. --limit
Maximum number of flow runs to start simultaneously. [default: None] --match
, -m
Dynamically matches work queue names with the specified prefix for the agent to pull from,for example dev-
will match all work queues with a name that starts with dev-
. [default: None] --pool
, -p
A work pool name for the agent to pull from. [default: None] --prefetch-seconds
The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS
. --run-once
Only run agent polling once. By default, the agent runs forever. [default: no-run-once] --work-queue
, -q
One or more work queue names for the agent to pull from. [default: None] You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.
Prefect must be installed in execution environments
Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.
PREFECT_API_URL
and PREFECT_API_KEY
settings for agents
PREFECT_API_URL
must be set for the environment in which your agent is running or specified when starting the agent with the --api
flag. You must also have a user or service account with the Worker
role, which can be configured by setting the PREFECT_API_KEY
.
If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
Use the prefect agent start
CLI command to start an agent. You must pass at least one work pool name or match string that the agent will poll for work. If the work pool does not exist, it will be created.
prefect agent start -p [work pool name]\n
For example:
Starting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n\u00a0|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n\u00a0|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n
By default, the agent polls the API specified by the PREFECT_API_URL
environment variable. To configure the agent to poll from a different server location, use the --api
flag, specifying the URL of the server.
In addition, agents can match multiple queues in a work pool by providing a --match
string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.
For example:
prefect agent start --match \"foo-\"\n
This example will poll every work queue that starts with \"foo-\".
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#configuring-prefetch","title":"Configuring prefetch","text":"By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds
option or the PREFECT_AGENT_PREFETCH_SECONDS
setting.
Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.
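For example, to give slow-starting infrastructure a one-minute head start (the pool name my-pool is hypothetical):
prefect agent start -p my-pool --prefetch-seconds 60\n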
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#troubleshooting","title":"Troubleshooting","text":"","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-crash-or-keyboard-interrupt","title":"Agent crash or keyboard interrupt","text":"If the agent process is ended abruptly, you can sometimes have left over flows that were destined for the agent whose process was ended. In the UI, these will show up as pending. You will need to delete these flows in order for the restarted agent to begin processing the work queue again. Take note of the flows you deleted, you might need to set them to run manually.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/artifacts/","title":"Artifacts","text":"Artifacts are persisted outputs such as tables, Markdown, or links. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.
Published artifacts may be associated with a particular task run or flow run. Artifacts can also be created outside of any flow run context.
Whether you're publishing links, Markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflows.
With artifacts, you can easily manage and share information with your team, providing valuable insights and context.
Common use cases for artifacts include:
Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, Markdown, and tables.
Artifacts render individually
Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact()
or create_markdown_artifact()
generates a distinct artifact.
Unlike the print()
command, where output from repeated calls accumulates into a single report, each call to these artifact functions creates a separate artifact; within a task, call them multiple times only if you want multiple artifacts.
To create artifacts like reports or summaries using create_markdown_artifact()
, compile your message string separately and then pass it to create_markdown_artifact()
to create the complete artifact.
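For instance, here is a minimal sketch of that pattern (the key and report contents are illustrative), assembling the full string first and then making a single call:
from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef quality_report_task():\n    rows_processed = 1000\n    rows_rejected = 7\n    # Compile the complete message first...\n    report = \"# Data Quality Report\\n\\n\"\n    report += f\"- Rows processed: {rows_processed}\\n\"\n    report += f\"- Rows rejected: {rows_rejected}\\n\"\n    # ...then publish it with one call, producing a single artifact\n    create_markdown_artifact(key=\"quality-report\", markdown=report)\n\n@flow\ndef quality_flow():\n    quality_report_task()\n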
To create a link artifact, use the create_link_artifact()
function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_link_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.
from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n create_link_artifact(\n key=\"irregular-data\",\n link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n description=\"## Highly variable data\",\n )\n\n@task\ndef my_second_task():\n create_link_artifact(\n key=\"irregular-data\",\n link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n description=\"# Low prediction accuracy\",\n )\n\n@flow\ndef my_flow():\n my_first_task()\n my_second_task()\n\nif __name__ == \"__main__\":\n my_flow()\n
Tip
You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.
After running the above flows, you can find your new artifacts in the Artifacts page of the UI. Click into the \"irregular-data\" artifact and see all versions of it, along with custom descriptions and links to the relevant data.
Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a Markdown-rendered link), an optional Markdown description, and when the artifact was created or updated.
To make the links more readable for you and your collaborators, you can pass in a link_text
argument for your link artifacts:
from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n create_link_artifact(\n key=\"my-important-link\",\n link=\"https://www.prefect.io/\",\n link_text=\"Prefect\",\n )\n\nif __name__ == \"__main__\":\n my_flow()\n
In the above example, the create_link_artifact
method is used within a flow to create a link artifact with a key of my-important-link
. The link
parameter is used to specify the external resource to be linked to, and link_text
is used to specify the text to be displayed for the link. An optional description
could also be added for context.
To create a Markdown artifact, you can use the create_markdown_artifact()
function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_markdown_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.
Don't indent Markdown
Markdown in multi-line strings must be unindented to be interpreted correctly.
from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n na_revenue = 500000\n markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe | $250,000 |\n| Asia | $150,000 |\n| South America | $75,000 |\n| Africa | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n create_markdown_artifact(\n key=\"gtm-report\",\n markdown=markdown_report,\n description=\"Quarterly Sales Report\",\n )\n\n@flow()\ndef my_flow():\n markdown_task()\n\n\nif __name__ == \"__main__\":\n my_flow()\n
After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI.
As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered Markdown data, and your optional Markdown description.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create table artifacts","text":"You can create a table artifact by calling create_table_artifact()
. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_table_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the artifacts tab of the associated flow run or task run.
Note
The create_table_artifact()
function accepts a table
argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.
from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n highest_churn_possibility = [\n {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n ]\n\n create_table_artifact(\n key=\"personalized-reachout\",\n table=highest_churn_possibility,\n description= \"# Marvin, please reach out to these customers today!\"\n )\n\nif __name__ == \"__main__\":\n my_fn()\n
As you can see, you don't need to create an artifact in a flow run context. You can create one anywhere in a Python script and see it in the Prefect UI.
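As a sketch of the other accepted shapes, the same data could be passed as a dictionary of lists, with one key per column:
from prefect.artifacts import create_table_artifact\n\ncreate_table_artifact(\n    key=\"personalized-reachout\",\n    table={\n        \"customer_id\": [\"12345\", \"56789\"],\n        \"name\": [\"John Smith\", \"Jane Jones\"],\n        \"churn_probability\": [0.85, 0.65],\n    },\n    description=\"# Marvin, please reach out to these customers today!\"\n)\n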
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading artifacts","text":"In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key by running:
prefect artifact inspect <my_key>\n
or view all artifacts by running:
prefect artifact ls\n
You can also use the Prefect REST API to programmatically filter your results.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting artifacts","text":"You can delete an artifact directly using the CLI to delete specific artifacts with a given key or id:
prefect artifact delete <my_key>\n
prefect artifact delete --id <my_id>\n
Alternatively, you can delete artifacts using the Prefect REST API.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.
For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following:
import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n \"sort\": \"CREATED_DESC\",\n \"limit\": 5,\n \"artifacts\": {\n \"key\": {\n \"exists_\": True\n }\n }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n print(artifact)\n
If you don't specify a key, or don't require that a key exists, the query will also return results, which are a type of key-less artifact.
See the rest of the Prefect REST API documentation on artifacts for more information!
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/automations/","title":"Automations","text":"Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions.
Potential triggers include the occurrence of events from changes in a flow run's state - or the absence of such events. You can even define your own custom trigger to fire based on an event created from a webhook or a custom event defined in Python code.
Potential actions include kicking off flow runs, pausing schedules, and sending custom notifications.
Automations are only available in Prefect Cloud
Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.
Automations provide a flexible and powerful framework for automatically taking action in response to events.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automations-overview","title":"Automations overview","text":"The Automations page provides an overview of all configured automations for your workspace.
Selecting the toggle next to an automation pauses execution of the automation.
The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.
Select the name of an automation to view Details about it and relevant Events.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation","title":"Create an automation","text":"On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:
Triggers specify the conditions under which your action should be performed. Triggers can be of several types, including triggers based on:
OR
criteriaAutomations API
The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.
Importantly, triggers can be configured not only in reaction to events, but also proactively: to fire in the absence of an expected event.
For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as canceling or restarting it. Or the action could take the form of a notification so someone can take manual remediation steps. Or you could set both actions to to take place when the trigger occurs.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#actions","title":"Actions","text":"Actions specify what your automation does when its trigger criteria are met. Current action types include:
Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.
Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.
Inferred targets are deduced from the trigger itself.
For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.
Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event.
Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.
Specify a name and, optionally, a description for the automation.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#custom-triggers","title":"Custom triggers","text":"Custom triggers allow advanced configuration of the conditions on which an automation executes its actions. Several custom trigger fields accept values that end with trailing wildcards, like \"prefect.flow-run.*\"
.
The schema that defines a trigger is as follows:
Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. after array of strings Event(s), one of which must have first been seen to start this automation. expect array of strings The event(s) this automation is expecting to see. If empty, this automation will evaluate any matched event. for_each array of strings Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifyingrelated:<role>:<label>
. This will use the value of that label for the first related resource in that role. posture string enum N/A The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. threshold integer N/A The number of events required for this Automation to trigger (for Reactive automations), or the number of events expected (for Proactive automations) within number N/A The time period over which the events must occur. For Reactive triggers, this may be as low as 0 seconds, but must be at least 10 seconds for Proactive triggers","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#resource-matching","title":"Resource matching","text":"match
and match_related
control which events a trigger considers for evaluation by filtering on the contents of their resource
and related
fields, respectively. Each label added to a match
filter is AND
ed with the other labels, and can accept a single value or a list of multiple values that are OR
ed together.
Consider the resource
and related
fields on the following prefect.flow-run.Completed
event, truncated for the sake of example. Its primary resource is a flow run, and since that flow run was started via a deployment, it is related to both its flow and its deployment:
\"resource\": {\n \"prefect.resource.id\": \"prefect.flow-run.925eacce-7fe5-4753-8f02-77f1511543db\",\n \"prefect.resource.name\": \"cute-kittiwake\"\n}\n\"related\": [\n {\n \"prefect.resource.id\": \"prefect.flow.cb6126db-d528-402f-b439-96637187a8ca\",\n \"prefect.resource.role\": \"flow\",\n \"prefect.resource.name\": \"hello\"\n },\n {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\",\n \"prefect.resource.role\": \"deployment\",\n \"prefect.resource.name\": \"example\"\n }\n]\n
There are a number of valid ways to select the above event for evaluation, and the approach depends on the purpose of the automation.
The following configuration will filter for any events whose primary resource is a flow run, and that flow run has a name starting with cute-
or radical-
.
\"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\",\n \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {},\n...\n
This configuration, on the other hand, will filter for any events for which this specific deployment is a related resource.
\"match\": {},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n
Both of the above approaches will select the example prefect.flow-run.Completed
event, but will permit additional, possibly undesired events through the filter as well. match
and match_related
can be combined for more restrictive filtering:
\"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\",\n \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n
Now this trigger will filter only for events whose primary resource is a flow run started by a specific deployment, and that flow run has a name starting with cute-
or radical-
.
Once an event has passed through the match
filters, it must be decided if this event should be counted toward the trigger's threshold
. Whether that is the case is determined by the event names present in expect
.
This configuration informs the trigger to evaluate only prefect.flow-run.Completed
events that have passed the match
filters.
\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n...\n
threshold
decides the quantity of expect
ed events needed to satisfy the trigger. Increasing the threshold
above 1 will also require use of within
to define a range of time in which multiple events are seen. The following configuration will expect two occurrences of prefect.flow-run.Completed
within 60 seconds.
\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n\"threshold\": 2,\n\"within\": 60,\n...\n
after
can be used to handle scenarios that require more complex event reactivity.
Take, for example, this flow which emits an event indicating the table it operates on is missing or empty:
from prefect import flow\nfrom prefect.events import emit_event\nfrom db import Table\n\n\n@flow\ndef transform(table_name: str):\n    table = Table(table_name)\n\n    if not table.exists():\n        emit_event(\n            event=\"table-missing\",\n            resource={\"prefect.resource.id\": \"etl-events.transform\"}\n        )\n    elif table.is_empty():\n        emit_event(\n            event=\"table-empty\",\n            resource={\"prefect.resource.id\": \"etl-events.transform\"}\n        )\n    else:\n        ...  # transform data\n
The following configuration uses after
to prevent this automation from firing unless either a table-missing
or a table-empty
event has occurred before a flow run of this deployment completes.
Tip
Note how match
and match_related
are used to ensure the trigger only evaluates events that are relevant to its purpose.
\"match\": {\n \"prefect.resource.id\": [\n \"prefect.flow-run.*\",\n \"etl-events.transform\"\n ]\n},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n}\n\"after\": [\n \"table-missing\",\n \"table-empty\"\n]\n\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n...\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#evaluation-strategy","title":"Evaluation strategy","text":"All of the previous examples were designed around a reactive posture
- that is, count up events toward the threshold
until it is met, then execute actions. To respond to the absence of events, use a proactive posture
. A proactive trigger will fire when its threshold
has not been met by the end of the window of time defined by within
. Proactive triggers must have a within
of at least 10 seconds.
The following trigger will fire if a prefect.flow-run.Completed
event is not seen within 60 seconds after a prefect.flow-run.Running
event is seen.
{\n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n },\n \"match_related\": {},\n \"after\": [\n \"prefect.flow-run.Running\"\n ],\n \"expect\": [\n \"prefect.flow-run.Completed\"\n ],\n \"for_each\": [],\n \"posture\": \"Proactive\",\n \"threshold\": 1,\n \"within\": 60\n}\n
However, without for_each
, a prefect.flow-run.Completed
event from a different flow run than the one that started this trigger with its prefect.flow-run.Running
event could satisfy the condition. Adding a for_each
of prefect.resource.id
will cause this trigger to be evaluated separately for each flow run id associated with these events. {\n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n },\n \"match_related\": {},\n \"after\": [\n \"prefect.flow-run.Running\"\n ],\n \"expect\": [\n \"prefect.flow-run.Completed\"\n ],\n \"for_each\": [\n \"prefect.resource.id\"\n ],\n \"posture\": \"Proactive\",\n \"threshold\": 1,\n \"within\": 60\n}\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"To enable the simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.
# prefect.yaml\ndeployments:\n - name: my-deployment\n entrypoint: path/to/flow.py:decorated_fn\n work_pool:\n name: my-process-pool\n triggers:\n - enabled: true\n match:\n prefect.resource.id: my.external.resource\n expect:\n - external.resource.pinged\n parameters:\n param_1: \"{{ event }}\"\n
At deployment time, this will create a linked automation that is triggered by events matching your chosen grammar, which will pass the templatable event
as a parameter to the deployment's flow run.
prefect deploy
","text":"You can pass one or more --trigger
arguments to prefect deploy
, which can be either a JSON string or a path to a .yaml
or .json
file.
# Pass a trigger as a JSON string\nprefect deploy -n test-deployment \\\n --trigger '{\n \"enabled\": true, \n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n }, \n \"expect\": [\"prefect.flow-run.Completed\"]\n }'\n\n# Pass a trigger using a JSON/YAML file\nprefect deploy -n test-deployment --trigger triggers.yaml\nprefect deploy -n test-deployment --trigger my_stuff/triggers.json\n
For example, a triggers.yaml
file could have many triggers defined:
triggers:\n - enabled: true\n match:\n prefect.resource.id: my.external.resource\n expect:\n - external.resource.pinged\n parameters:\n param_1: \"{{ event }}\"\n - enabled: true\n match:\n prefect.resource.id: my.other.external.resource\n expect:\n - some.other.event\n parameters:\n param_1: \"{{ event }}\"\n
Both of the above triggers would be attached to test-deployment
after running prefect deploy
. Triggers passed to prefect deploy
will override any triggers defined in prefect.yaml
While you can define triggers in prefect.yaml
for a given deployment, triggers passed to prefect deploy
will take precedence over those defined in prefect.yaml
.
Note that deployment triggers contribute to the total number of automations in your workspace.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automation-notifications","title":"Automation notifications","text":"Notifications enable you to set up automation actions that send a message.
Automation notifications support sending notifications via any predefined block that is capable of and configured to send a message. That includes, for example:
Automation actions can access templated variables through Jinja syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name.
Jinja templated variable syntax wraps the variable name in double curly brackets, like this: {{ variable }}
.
You can access properties of the underlying flow run objects including:
In addition to its native properties, each object includes an id
along with created
and updated
timestamps.
The flow_run|ui_url
token returns the URL for viewing the flow run in Prefect Cloud.
Here\u2019s an example for something that would be relevant to a flow run state-based notification:
Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. \n\n Timestamp: {{ flow_run.state.timestamp }}\n Flow ID: {{ flow_run.flow_id }}\n Flow Run ID: {{ flow_run.id }}\n State message: {{ flow_run.state.message }}\n
The resulting Slack webhook notification would look something like this:
You could include flow
and deployment
properties.
Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n
An automation that reports on work pool status might include notifications using work_pool
properties.
Work pool status alert!\n\nName: {{ work_pool.name }}\nLast polled: {{ work_pool.last_polled }}\n
In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.
Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in related %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n
Note that this example also illustrates the ability to use Jinja features such as iterator and for loop control structures when templating notifications.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.
With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.
Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.
You may configure blocks through code or via the Prefect Cloud and the Prefect server UI.
You can access blocks for both configuring flow deployments and directly from within your flow code.
Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks you can pip install
the package, then register the blocks you want to use with Prefect Cloud or a Prefect server.
Prefect Cloud and the Prefect server UI display a library of block types available for you to configure blocks that may be used by your flows.
Blocks and parameters
Blocks are useful for configuration that needs to be shared across flow runs and between flows.
For configuration that will change between flow runs, we recommend using parameters.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.
Block Slug Description Azureazure
Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time
A block that represents a datetime. Docker Container docker-container
Runs a command in a container. Docker Registry docker-registry
Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs
Store data as a file on Google Cloud Storage. GitHub github
Interact with files stored on public GitHub repositories. JSON json
A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config
Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job
Runs a command as a Kubernetes Job. Local File System local-file-system
Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook
Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook
Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook
Enables sending notifications via a provided PagerDuty webhook. Process process
Run a command in a new process. Remote File System remote-file-system
Store data as a file on a remote file system. Supports any remote file system supported by fsspec
. S3 s3
Store data as a file on AWS S3. Secret secret
A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook
Enables sending notifications via a provided Slack webhook. SMB smb
Store data as a file on a SMB share. String string
A block that represents a string. Twilio SMS twilio-sms
Enables sending notifications via Twilio SMS. Webhook webhook
Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.
Integration Block Slug prefect-airbyte Airbyte Connectionairbyte-connection
prefect-airbyte Airbyte Server airbyte-server
prefect-aws AWS Credentials aws-credentials
prefect-aws ECS Task ecs-task
prefect-aws MinIO Credentials minio-credentials
prefect-aws S3 Bucket s3-bucket
prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials
prefect-azure Azure Container Instance Credentials azure-container-instance-credentials
prefect-azure Azure Container Instance Job azure-container-instance-job
prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials
prefect-azure AzureML Credentials azureml-credentials
prefect-bitbucket BitBucket Credentials bitbucket-credentials
prefect-bitbucket BitBucket Repository bitbucket-repository
prefect-census Census Credentials census-credentials
prefect-census Census Sync census-sync
prefect-databricks Databricks Credentials databricks-credentials
prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs
prefect-dbt dbt CLI Profile dbt-cli-profile
prefect-dbt dbt Cloud Credentials dbt-cloud-credentials
prefect-dbt dbt CLI Global Configs dbt-cli-global-configs
prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs
prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs
prefect-dbt dbt CLI Target Configs dbt-cli-target-configs
prefect-docker Docker Host docker-host
prefect-docker Docker Registry Credentials docker-registry-credentials
prefect-email Email Server Credentials email-server-credentials
prefect-firebolt Firebolt Credentials firebolt-credentials
prefect-firebolt Firebolt Database firebolt-database
prefect-gcp BigQuery Warehouse bigquery-warehouse
prefect-gcp GCP Cloud Run Job cloud-run-job
prefect-gcp GCP Credentials gcp-credentials
prefect-gcp GcpSecret gcpsecret
prefect-gcp GCS Bucket gcs-bucket
prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job
prefect-github GitHub Credentials github-credentials
prefect-github GitHub Repository github-repository
prefect-gitlab GitLab Credentials gitlab-credentials
prefect-gitlab GitLab Repository gitlab-repository
prefect-hex Hex Credentials hex-credentials
prefect-hightouch Hightouch Credentials hightouch-credentials
prefect-kubernetes Kubernetes Credentials kubernetes-credentials
prefect-monday Monday Credentials monday-credentials
prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials
prefect-openai OpenAI Completion Model openai-completion-model
prefect-openai OpenAI Image Model openai-image-model
prefect-openai OpenAI Credentials openai-credentials
prefect-slack Slack Credentials slack-credentials
prefect-slack Slack Incoming Webhook slack-incoming-webhook
prefect-snowflake Snowflake Connector snowflake-connector
prefect-snowflake Snowflake Credentials snowflake-credentials
prefect-sqlalchemy Database Credentials database-credentials
prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector
prefect-twitter Twitter Credentials twitter-credentials
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"Blocks are classes that subclass the Block
base class. They can be instantiated and used like normal classes.
For example, to instantiate a block that stores a JSON value, use the JSON
block:
from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"If this JSON value needs to be retrieved later to be used within a flow or task, we can use the .save()
method on the block to store the value in a block document on the Prefect database for retrieval later:
json_block.save(name=\"life-the-universe-everything\")\n
If you'd like to update the block value stored for a given name
, you can overwrite the existing block document by setting overwrite=True
:
json_block.save(overwrite=True)\n
Tip
in the above example, the name \"life-the-universe-everything\"
is inferred from the existing block document
... or save the same block value as a new block document by setting the name
parameter to a new value:
json_block.save(name=\"actually-life-the-universe-everything\")\n
Utilizing the UI
Blocks documents can also be created and updated via the Prefect UI.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:
from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n json_block = JSON.load(\"life-the-universe-everything\")\n print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n
Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.
To load our JSON block document from before, we can run the following:
from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"])  # 42\n
Sharing Blocks
Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"You can delete a block by using the .delete()
method on the block:
from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n
You can also use the CLI to delete specific blocks with a given slug or id:
prefect block delete json/life-the-universe-everything\n
prefect block delete --id <my-id>\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"To create a custom block type, define a class that subclasses Block
. The Block
base class builds off of Pydantic's BaseModel
, so custom blocks can be declared in same manner as a Pydantic model.
Here's a block that represents a cube and holds information about the length of each edge in inches:
from prefect.blocks.core import Block\n\nclass Cube(Block):\n edge_length_inches: float\n
You can also include methods on a block include useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:
from prefect.blocks.core import Block\n\nclass Cube(Block):\n edge_length_inches: float\n\n def get_volume(self):\n return self.edge_length_inches**3\n\n def get_surface_area(self):\n return 6 * self.edge_length_inches**2\n
Now the Cube
block can be used to store different cube configuration that can later be used in a flow:
from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n cube = Cube.load(cube_name)\n print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr
field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.
Here's an example of an AWSCredentials
block that uses SecretStr
:
from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n aws_access_key_id: Optional[str] = None\n aws_secret_access_key: Optional[SecretStr] = None\n aws_session_token: Optional[str] = None\n profile_name: Optional[str] = None\n region_name: Optional[str] = None\n
Because aws_secret_access_key
has the SecretStr
type hint assigned to it, the value of that field will not be exposed if the object is logged:
aws_credentials_block = AWSCredentials(\n aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n
There's also use the SecretDict
field type provided by Prefect. This type will allow you to add a dictionary field to your block that will have values at all levels automatically obfuscated in the UI or in logs. This is useful for blocks where typing or structure of secret fields is not known until configuration time.
Here's an example of a block that uses SecretDict
:
from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n system_secrets: SecretDict\n system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n system_secrets={\n \"password\": \"p@ssw0rd\",\n \"api_token\": \"token_123456789\",\n \"private_key\": \"<private key here>\",\n },\n system_variables={\n \"self_destruct_countdown_seconds\": 60,\n \"self_destruct_countdown_stop_time\": 7,\n },\n)\n
system_secrets
will be obfuscated when system_configuration_block
is displayed, but system_variables
will be shown in plain-text: print(system_configuration_block)\n# SystemConfiguration(\n# system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n# system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
Available metadata fields include:
Property Description _block_type_name Display name of the block in the UI. Defaults to the class name. _block_type_slug Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name. _logo_url URL pointing to an image that should be displayed for the block type in the UI. Default toNone
. _description Short description of block type. Defaults to docstring, if provided. _code_example Short code snippet shown in UI for how to load/use block type. Default to first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"Block are composable. This means that you can create a block that uses functionality from another block by declaring it as an attribute on the block that you're creating. It also means that configuration can be changed for each block independently, which allows configuration that may change on different time frames to be easily managed and configuration can be shared across multiple use cases.
To illustrate, here's a an expanded AWSCredentials
block that includes the ability to get an authenticated session via the boto3
library:
from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            # Unwrap the SecretStr so boto3 receives a plain string\n            aws_secret_access_key=(\n                self.aws_secret_access_key.get_secret_value()\n                if self.aws_secret_access_key\n                else None\n            ),\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n
The AWSCredentials
block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:
import io\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n
You can use this S3Bucket
block with previously saved AWSCredentials
block values in order to interact with the configured S3 bucket:
my_s3_bucket = S3Bucket(\n bucket_name=\"my_s3_bucket\",\n credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n
Saving block values like this links the values of the two blocks so that any changes to the values stored for the AWSCredentials
block with the name my_aws_credentials
will be seen the next time that block values for the S3Bucket
block named my_s3_bucket
is loaded.
Values for nested blocks can also be hard coded by not first saving child blocks:
my_s3_bucket = S3Bucket(\n bucket_name=\"my_s3_bucket\",\n credentials=AWSCredentials(\n aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n aws_secret_access_key=\"secret_access_key\"\n )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n
In the above example, the values for AWSCredentials
are saved with my_s3_bucket
and will not be usable with any other blocks.
Block
types","text":"Let's say that you now want to add a bucket_folder
field to your custom S3Bucket
block that represents the default path to read and write objects from (this field exists on our implementation).
We can add the new field to the class definition:
class S3Bucket(Block):\n bucket_name: str\n credentials: AWSCredentials\n bucket_folder: str = None\n ...\n
Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.
If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:
# Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_path = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:
$ prefect block register --module prefect_aws.credentials\n
This command is useful for registering all blocks found in the credentials module within Prefect Integrations.
Or, if a block has been created in a .py
file, the block can also be registered with the CLI command:
$ prefect block register --file my_block.py\n
The registered block will then be available in the Prefect UI for configuration.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-block-based/","title":"Block Based Deployments","text":"Workers are recommended
This page is about the block-based deployment model. The Work Pools and Workers based deployment model simplifies the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
We encourage you to check out the new deployment experience with guided command line prompts and convenient CI/CD with prefect.yaml
files.
With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.
To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using --infra
flag during the build process).
Work queue affinity improved starting from Prefect 2.0.5
Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.
Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.
See Agents & Work Pools for details.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployments-and-flows","title":"Deployments and flows","text":"Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.
Deployments are uniquely identified by the combination of: flow_name/deployment_name
.
graph LR\n F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n F -.-> B(\"Deployment 'weekly'\"):::gold --> X(\"my_flow/weekly\"):::green\n F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n classDef yellow fill:gold,stroke:gold,stroke-width:4px\n classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n classDef green fill:green,stroke:green,stroke-width:4px,color:white\n classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white
This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-definition","title":"Deployment definition","text":"A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create the deployment definition by:
prefect deployment build
CLI command with deployment options to create a deployment.yaml
deployment definition file, then run prefect deployment apply
to create a deployment on the API using the settings in deployment.yaml
.Deployment
Python object, specifying the deployment options as properties of the object, then building and applying the object using methods of Deployment
.The minimum required information to create a deployment includes:
You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"To create a deployment on the CLI, there are two steps:
deployment.yaml
. This step includes uploading your flow to its configured remote storage location, if one is specified.To build the deployment definition file deployment.yaml
, run the prefect deployment build
Prefect CLI command from the folder containing your flow script and any dependencies of the script.
$ prefect deployment build [OPTIONS] PATH\n
Path to the flow is specified in the format path-to-script:flow-function-name
\u2014 The path and filename of the flow script file, a colon, then the name of the entrypoint flow function.
For example:
$ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n
When you run this command, Prefect:
marvin_flow-deployment.yaml
file for your deployment based on your flow code and options.test
. The work queue test
will be created if it doesn't exist.Uploading files may require storage filesystem libraries
Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs
library.
Ignore files or directories from a deployment
By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.
If you want to omit certain files or directories from your deployments, add a .prefectignore
file to the root directory. .prefectignore
enables users to omit certain files or directories from their deployments.
Similar to other .ignore
files, the syntax supports pattern matching, so an entry of *.pyc
will ensure all .pyc
files are ignored by the deployment call when uploading to remote storage.
You may specify additional options to further customize your deployment.
Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply
, -a
When provided, automatically registers the resulting deployment with the API. --cron TEXT
A cron string that will be used to set a CronSchedule
on the deployment. For example, --cron \"*/1 * * * *\"
to create flow runs from that deployment every minute. --help
Display help for available commands and options. --infra-block TEXT
, -ib
The infrastructure block to use, in block-type/block-name
format. --infra
, -i
The infrastructure type to use. (Default is Process
) --interval INTEGER
An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule
on the deployment. For example, --interval 60
to create flow runs from that deployment every minute. --name TEXT
, -n
The name of the deployment. --output TEXT
, -o
Optional location for the YAML manifest generated as a result of the build
step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT
One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value
. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]'
(note the string format). --param
An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42
. --params
An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'
. --path
An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT
, -p
The work pool that will handle this deployment's runs. --rrule TEXT
An RRule
that will be used to set an RRuleSchedule
on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17'
to create flow runs from that deployment every hour but only during business hours. --skip-upload
When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT
, -sb
The storage block to use, in block-type/block-name
or block-type/block-name/path
format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT
, -t
One or more optional tags to apply to the deployment. --version TEXT
, -v
An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT
, -q
The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None
. Note that if a work queue is not set, work will not be scheduled.","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#block-identifiers","title":"Block identifiers","text":"
When specifying a storage block with the -sb
or --storage-block
flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name
.
For example, s3/example-block
is the slug for an S3 block named example-block
.
In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.
block-type/block-name
indicates just the block, including any path included in the block configuration.
block-type/block-name/path
indicates a storage path in addition to any path included in the block configuration.
When specifying an infrastructure block with the -ib
or --infra-block
flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name
.
Block name Block class Block slug
Azure Azure azure
Docker Container DockerContainer docker-container
GitHub GitHub github
GCS GCS gcs
Kubernetes Job KubernetesJob kubernetes-job
Process Process process
Remote File System RemoteFileSystem remote-file-system
S3 S3 s3
SMB SMB smb
GitLab Repository GitLabRepository gitlab-repository
Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs
library. See Storage for more information.
A deployment's YAML file configures additional settings needed to create a deployment on the server.
A single flow may have multiple deployments created for it, with different schedules, tags, and so on. A single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.
The default {flow-name}-deployment.yaml
filename may be edited as needed with the --output
flag to prefect deployment build
.
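For example, to write the manifest to a custom filename (the flow path and names here are illustrative):
$ prefect deployment build flows/marvin.py:say_hi -n marvin -q test --output marvin-deployment.yaml\n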
###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n type: process\n env: {}\n labels: {}\n name: null\n command:\n - python\n - -m\n - prefect.engine\n stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n title: Parameters\n type: object\n properties:\n url:\n title: url\n required:\n - url\n definitions: null\n
Editing deployment.yaml
Note the big DO NOT EDIT comment in your deployment's YAML: in practice, anything above this block can be freely edited before running prefect deployment apply
to create the deployment on the API.
We recommend editing most of these fields from the CLI or Prefect UI for convenience.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#parameters-in-deployments","title":"Parameters in deployments","text":"You may provide default parameter values in the deployment.yaml
configuration, and these parameter values will be used for flow runs based on the deployment.
To configure default parameter values, add them to the parameters: {}
line of deployment.yaml
as JSON key-value pairs. The parameter list configured in deployment.yaml
must match the parameters expected by the entrypoint flow function.
parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n
Passing **kwargs as flow parameters
You may pass **kwargs
as a deployment parameter as a \"kwargs\":{}
JSON object containing the key-value pairs of any passed keyword arguments.
parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n
You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.
To edit parameters in the Prefect UI, go to the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.
To create an ad-hoc flow run with different parameter values, go to the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.
If you want the Prefect API to verify the parameter values passed to a flow run against the schema defined by parameter_openapi_schema
, set enforce_parameter_schema
to true
.
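For example, assuming your version of the deployment.yaml schema includes this field, the setting is a single line:
enforce_parameter_schema: true\n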
When you've configured deployment.yaml
for a deployment, you can create the deployment on the API by running the prefect deployment apply
Prefect CLI command.
$ prefect deployment apply catfacts_flow-deployment.yaml\n
For example:
$ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n
prefect deployment apply
accepts an optional --upload
flag that, when provided, uploads this deployment's files to remote storage.
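For example:
$ prefect deployment apply --upload catfacts_flow-deployment.yaml\n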
Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.
$ prefect deployment ls\n Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
When you run a deployed flow with Prefect, the following happens:
Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs you need to configure at least one agent.
Scheduled flow runs
Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start
.
Scheduled flow runs will not run unless an appropriate agent and work pool are configured.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"You can also create deployments from Python scripts by using the prefect.deployments.Deployment
class.
Create a new deployment using configuration defaults for an imported flow:
from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n flow=my_flow,\n name=\"example-deployment\", \n version=1, \n work_queue_name=\"demo\",\n work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n
Create a new deployment with a pre-defined storage block and an infrastructure override:
from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n flow=my_flow,\n name=\"s3-example\",\n version=2,\n work_queue_name=\"aws\",\n work_pool_name=\"default-agent-pool\",\n storage=storage,\n infra_overrides={\n \"env\": {\n \"ENV_VAR\": \"value\"\n }\n },\n)\n\ndeployment.apply()\n
If you have settings that you want to share from an existing deployment you can load those settings:
deployment = Deployment(\n name=\"a-name-you-used\", \n flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n
Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
View all of the parameters for the Deployment
object in the Python API documentation.
When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.
Deployment properties include:
Property Descriptionid
An auto-generated UUID ID value identifying the deployment. created
A datetime
timestamp indicating when the deployment was created. updated
A datetime
timestamp indicating when the deployment was last changed. name
The name of the deployment. version
The version of the deployment.
description
A description of the deployment. flow_id
The id of the flow associated with the deployment. schedule
An optional schedule for the deployment. is_schedule_active
Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides
One or more optional infrastructure overrides.
parameters
An optional dictionary of parameters for flow runs scheduled by the deployment. tags
An optional list of tags for the deployment. work_queue_name
The optional work queue that will handle the deployment's runs.
parameter_openapi_schema
JSON schema for flow parameters. enforce_parameter_schema
Whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema
path
The path to the deployment.yaml file.
entrypoint
The path to a flow entry point.
storage_document_id
Storage block configured for the deployment. infrastructure_document_id
Infrastructure block configured for the deployment. You can inspect a deployment using the CLI with the prefect deployment inspect
command, referencing the deployment with <flow_name>/<deployment_name>
.
$ prefect deployment inspect 'Cat Facts/catfact'\n{\n 'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n 'created': '2022-07-26T03:48:14.723328+00:00',\n 'updated': '2022-07-26T03:50:02.043238+00:00',\n 'name': 'catfact',\n 'version': '899b136ebc356d58562f48d8ddce7c19',\n 'description': None,\n 'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n 'schedule': None,\n 'is_schedule_active': True,\n 'infra_overrides': {},\n 'parameters': {},\n 'tags': [],\n 'work_queue_name': 'test',\n 'parameter_openapi_schema': {\n 'title': 'Parameters',\n 'type': 'object',\n 'properties': {'url': {'title': 'url'}},\n 'required': ['url']\n },\n 'path': '/Users/terry/test/testflows/catfact',\n 'entrypoint': 'catfact.py:catfacts_flow',\n 'manifest_path': None,\n 'storage_document_id': None,\n 'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n 'infrastructure': {\n 'type': 'process',\n 'env': {},\n 'labels': {},\n 'name': None,\n 'command': ['python', '-m', 'prefect.engine'],\n 'stream_output': True\n }\n}\n
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates schedules flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"deployment triggers are only available in Prefect Cloud
Deployments can optionally take a trigger specification, which will configure an automation to run the deployment based on the presence or absence of events, and optionally pass event data into the deployment run as parameters via Jinja templating.
triggers:\n - enabled: true\n match:\n prefect.resource.id: prefect.flow-run.*\n expect:\n - prefect.flow-run.Completed\n match_related:\n prefect.resource.name: prefect.flow.etl-flow\n prefect.resource.role: flow\n parameters:\n param_1: \"{{ event }}\"\n
When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related
key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.
In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.
The prefect deployment
CLI command provides commands for managing and running deployments locally.
apply
Create or update a deployment from a YAML file. build
Generate a deployment YAML from /path/to/file.py:flow_function. delete
Delete a deployment. inspect
View details about a deployment. ls
View all deployments or deployments for specific flows. pause-schedule
Pause schedule of a given deployment. resume-schedule
Resume schedule of a given deployment. run
Create a flow run for the given flow and deployment. schedule
Commands for interacting with your deployment's schedules. set-schedule
Set schedule for a given deployment.
Deprecated Schedule Commands
The pause-schedule, resume-schedule, and set-schedule commands are deprecated due to the introduction of multi-schedule support for deployments. Use the new prefect deployment schedule
command for enhanced flexibility and control over your deployment schedules.
You can create a flow run from a deployment in a Python script with the run_deployment
function.
from prefect.deployments import run_deployment\n\n\ndef main():\n response = run_deployment(name=\"flow-name/deployment-name\")\n print(response)\n\n\nif __name__ == \"__main__\":\n main()\n
PREFECT_API_URL
setting for agents
You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL
must be set for the environment in which your agent is running.
If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
Deployments are server-side representations of flows. They store the crucial metadata needed for remote orchestration including when, where, and how a workflow should run. Deployments elevate workflows from functions that you must call manually to API-managed entities that can be triggered remotely.
Here we will focus largely on the metadata that defines a deployment and how it is used. Different ways of creating a deployment populate these fields differently.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#overview","title":"Overview","text":"Every Prefect deployment references one and only one \"entrypoint\" flow (though that flow may itself call any number of subflows). Different deployments may reference the same underlying flow, a useful pattern when developing or promoting workflow changes through staged environments.
The complete schema that defines a deployment is as follows:
class Deployment:\n \"\"\"\n Structure of the schema defining a deployment\n \"\"\"\n\n # required defining data\n name: str \n flow_id: UUID\n entrypoint: str\n path: str = None\n\n # workflow scheduling and parametrization\n parameters: dict = None\n parameter_openapi_schema: dict = None\n schedules: list[Schedule] = None\n paused: bool = False\n trigger: Trigger = None\n\n # metadata for bookkeeping\n version: str = None\n description: str = None\n tags: list = None\n\n # worker-specific fields\n work_pool_name: str = None\n work_queue_name: str = None\n infra_overrides: dict = None\n pull_steps: dict = None\n
All methods for creating Prefect deployments are interfaces for populating this schema. Let's look at each section in turn.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#required-data","title":"Required data","text":"Deployments universally require both a name
and a reference to an underlying Flow
. In almost all instances of deployment creation, users do not need to concern themselves with the flow_id
as most interfaces will only need the flow's name. Note that the deployment name is not required to be unique across all deployments but is required to be unique for a given flow ID. As a consequence, you will often see references to the deployment's unique identifying name {FLOW_NAME}/{DEPLOYMENT_NAME}
. For example, triggering a run of a deployment from the Prefect CLI can be done via:
prefect deployment run my-first-flow/my-first-deployment\n
The other two fields are less obvious:
path
: the path can generally be interpreted as the runtime working directory for the flow. For example, if a deployment references a workflow defined within a Docker image, the path
will be the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems.entrypoint
: the entrypoint of a deployment is a relative reference to a function decorated as a flow that exists on some filesystem. It is always specified relative to the path
. Entrypoints use Python's standard path-to-object syntax (e.g., path/to/file.py:function_name
or simply path:object
).The entrypoint must reference the same flow as the flow ID.
Note that Prefect requires that deployments reference flows defined within Python files. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them.
Deployments do not contain code definitions
Deployment metadata references code that exists in potentially diverse locations within your environment. This separation of concerns means that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.
This is the heart of the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including Prefect Cloud).
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#scheduling-and-parametrization","title":"Scheduling and parametrization","text":"One of the primary motivations for creating deployments of flows is to remotely schedule and trigger them. Just as flows can be called as functions with different input values, so can deployments be triggered or scheduled with different values through the use of parameters.
The six fields here capture the necessary metadata to perform such actions:
schedules
: a list of schedule objects. Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when updating a deployment schedule in the UI basic information such as a cron string or interval is all that's required.trigger
(Cloud-only): triggers allow you to define event-based rules for running a deployment. For more information see Automations.parameter_openapi_schema
: an OpenAPI compatible schema that defines the types and defaults for the flow's parameters. This is used by both the UI and the backend to expose options for creating manual runs as well as type validation.parameters
: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.enforce_parameter_schema
: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema
.Scheduling is asynchronous and decoupled
Because deployments are nothing more than metadata, runs can be created at anytime. Note that pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#running-a-deployed-flow-from-within-python-flow-code","title":"Running a deployed flow from within Python flow code","text":"Prefect provides a run_deployment
function that can be used to schedule the run of an existing deployment when your Python code executes.
from prefect.deployments import run_deployment\n\ndef main():\n run_deployment(name=\"my_flow_name/my_deployment_name\")\n
Run a deployment without blocking
By default, run_deployment
blocks until the scheduled flow run finishes executing. Pass timeout=0
to return immediately and not block.
If you call run_deployment
from within a flow or task, the scheduled flow run will be linked to the calling flow run (or the calling task's flow run) as a subflow run by default.
Subflow runs have different behavior than regular flow runs. For example, a subflow run can't be suspended independently of its parent flow. If you'd rather not link the scheduled flow run to the calling flow or task run, you can disable this behavior by passing as_subflow=False
:
from prefect import flow\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef my_flow():\n # The scheduled flow run will not be linked to this flow as a subflow.\n run_deployment(name=\"my_other_flow/my_deployment_name\", as_subflow=False)\n
The return value of run_deployment
is a FlowRun object containing metadata about the scheduled run. You can use this object to retrieve information about the run after calling run_deployment
:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.deployments import run_deployment\n\nasync def main():\n    flow_run = await run_deployment(name=\"my_flow_name/my_deployment_name\")\n    flow_run_id = flow_run.id\n\n    # If you save the flow run's ID, you can use it later to retrieve\n    # flow run metadata again, e.g. to check if it's completed.\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        print(f\"Current state of the flow run: {flow_run.state}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
Using the Prefect client
For more information on using the Prefect client to interact with Prefect's REST API, see our guide.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#versioning-and-bookkeeping","title":"Versioning and bookkeeping","text":"Versions, descriptions and tags are omnipresent fields throughout Prefect that can be easy to overlook. However, putting some extra thought into how you use these fields can pay dividends down the road.
version
: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle. For example if you leverage git
to manage code changes, use either a tag or commit hash in this field. If you don't set a value for the version, Prefect will compute a hashdescription
: the description field of a deployment is a place to provide rich reference material for downstream stakeholders such as intended use and parameter documentation. Markdown formatting will be rendered in the Prefect UI, allowing for section headers, links, tables, and other formatting. If not provided explicitly, Prefect will use the docstring of your flow function as a default value.tags
: tags are a mechanism for grouping related work together across a diverse set of objects. Tags set on a deployment will be inherited by that deployment's flow runs. These tags can then be used to filter what runs are displayed on the primary UI dashboard, allowing you to customize different views into your work. In addition, in Prefect Cloud you can easily find objects through searching by tag.All of these bits of metadata can be leveraged to great effect by injecting them into the processes that Prefect is orchestrating. For example you can use both run ID and versions to organize files that you produce from your workflows, or by associating your flow run's tags with the metadata of a job it orchestrates. This metadata is available during execution through Prefect runtime.
Everything has a version
Deployments aren't the only entity in Prefect with a version attached; both flows and tasks also have versions that can be set through their respective decorators. These versions will be sent to the API anytime the flow or task is run and thereby allow you to audit your changes across all levels.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#workers-and-work-pools","title":"Workers and Work Pools","text":"Workers and work pools are an advanced deployment pattern that allow you to dynamically provision infrastructure for each flow run. In addition, the work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure. To do this, a deployment using workers needs to evaluate the following fields:
work_pool_name
: the name of the work pool this deployment will be associated with. Work pool types mirror infrastructure types and therefore the decision here affects the options available for the other fields.work_queue_name
: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.infra_overrides
: often called job_variables
within various interfaces, this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for things such as Docker image names, Kubernetes annotations and limits, and environment variables.pull_steps
: a JSON description of steps that should be performed to retrieve flow code or configuration and prepare the runtime environment for workflow execution.Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of their deployment.
For more information see the guide to deploying with a worker.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#two-approaches-to-deployments","title":"Two approaches to deployments","text":"There are two primary ways to deploy flows with Prefect, differentiated by how much control Prefect has over the infrastructure in which the flows run.
In one setup, deploying Prefect flows is analogous to deploying a webserver - users author their workflows and then start a long-running process (often within a Docker container) that is responsible for managing all of the runs for the associated deployment(s).
In the other setup, you do a little extra up-front work to set up a work pool and a base job template that defines how individual flow runs will be submitted to infrastructure.
Prefect provides several types of work pools corresponding to different types of infrastructure. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.
Some work pool types require a client-side worker to submit job definitions to the appropriate infrastructure with each run.
Each of these setups can support production workloads. The choice ultimately boils down to your use case and preferences. Read further to decide which setup is best for your situation.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#serving-flows-on-long-lived-infrastructure","title":"Serving flows on long-lived infrastructure","text":"When you have several flows running regularly, the serve
method of the Flow
object or the serve
utility is a great option for managing multiple flows simultaneously.
Once you have authored your flow and decided on its deployment settings as described above, all that's left is to run this long-running process in a location of your choosing. The process will stay in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Note that because runs are submitted to subprocesses, any external infrastructure configuration will need to be setup beforehand and kept associated with this process.
This approach has many benefits:
However, there are a few reasons you might consider running flows on dynamically provisioned infrastructure with work pools instead:
Work pools allow Prefect to exercise greater control of the infrastructure on which flows run. Options for serverless work pools allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to provision cloud infrastructure via a single CLI command, if you use a Prefect Cloud push work pool option.
With work pools:
You don't have to commit to one approach
You are not required to use only one of these approaches for your deployments. You can mix and match approaches based on the needs of each flow. Further, you can change the deployment approach for a particular flow as its needs evolve. For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.
LocalFileSystem
RemoteFileSystem
Azure
GitHub
GitLab
GCS
S3
SMB
Additional file system types are available in Prefect Collections.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"The LocalFileSystem
block enables interaction with the files in your current development environment.
LocalFileSystem
properties include:
from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n
Limited access to local file system
Be aware that LocalFileSystem
access is limited to the exact path provided. This file system may not be ideal for some use cases. The execution environment for your workflows may not have the same file system as the environment you are writing and deploying your code on.
Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"The RemoteFileSystem
block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem
uses fsspec
and supports any file system that fsspec
supports.
RemoteFileSystem
properties include:
The file system is specified using a protocol:
s3://my-bucket/my-folder/
will use S3gcs://my-bucket/my-folder/
will use GCSaz://my-bucket/my-folder/
will use AzureFor example, to use it with Amazon S3:
from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n
You may need to install additional libraries to use some remote storage types.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"How can we use RemoteFileSystem
to store our flow code? The following is a use case where we use MinIO as a storage backend:
from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n basepath=\"s3://my-bucket\",\n settings={\n \"key\": \"MINIO_ROOT_USER\",\n \"secret\": \"MINIO_ROOT_PASSWORD\",\n \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n },\n)\nminio_block.save(\"minio\")\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#azure","title":"Azure","text":"The Azure
file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure
block uses adlfs
.
Azure
properties include:
DefaultAzureCredential
. To create a block:
from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n
You need to install adlfs
to use it.
The GitHub
filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.
GitHub
properties include:
repo
scope. To create a block:
from prefect.filesystems import GitHub\n\nblock = GitHub(\n repository=\"https://github.com/my-repo/\",\n access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"The GitLabRepository
block is read-only and works with private GitLab repositories.
GitLabRepository
properties include:
GitLabCredentials
block with Personal Access Token (PAT) with read_repository
scope. To create a block:
from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n reference=\"main\",\n credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n
To use it in a deployment (and apply):
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n
Note that to use this block, you need to install the prefect-gitlab
collection.
The GCS
file system block enables interaction with Google Cloud Storage. Under the hood, GCS
uses gcsfs
.
GCS
properties include:
To create a block:
from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n
You need to install gcsfs
to use it.
The S3
file system block enables interaction with Amazon S3. Under the hood, S3
uses s3fs
.
S3
properties include:
To create a block:
from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n
You need to install s3fs
to use this block.
The SMB
file system block enables interaction with SMB shared network storage. Under the hood, SMB
uses smbprotocol
. Used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block is able to copy files, but cannot create directories.
SMB
properties include:
To create a block:
from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n
You need to install smbprotocol
to use it.
If you leverage S3
, GCS
, or Azure
storage blocks, and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure to set those either explicitly on the block or as environment variables, configuration files, or IAM roles within both the build and runtime environment for your deployments.
A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs
, gcsfs
or adlfs
. This includes Prefect base Docker images.
You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.
In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES
environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.
In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.
Here is an example from a deployment YAML file showing how to specify the installation of s3fs
into your image:
infrastructure:\n type: docker-container\n env:\n EXTRA_PIP_PACKAGES: s3fs # could be gcsfs, adlfs, etc.\n
You may specify multiple dependencies by providing a comma-delimited list.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"Configuration for a file system can be saved to the Prefect API. For example:
fs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n
This file system can be retrieved for later use with load
.
fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\") # b'hello'\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"Prefect provides two abstract file system types, ReadableFileSystem
and WriteableFileSystem
.
read_path
, which takes a file path to read content from and returns bytes. write_path
, which takes a file path to read content from and returns bytes.
write_path
,
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/flows/","title":"Flows","text":"Flows are the most central Prefect object. A flow is a container for workflow logic as-code and allows users to configure how their workflows behave. Flows are defined as Python functions, and any Python function is eligible to be a flow.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"Flows can be thought of as special types of functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow
decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:
Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time and final state.
Flows can include calls to tasks as well as to other flows, which Prefect calls \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.
Deployments elevate individual workflows from functions that you call manually to API-managed entities.
Tasks must be called from flows
All tasks must be called from within a flow. Tasks may not be called from other tasks.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"A flow run represents a single execution of the flow.
You can create a flow run by calling the flow manually. For example, by running a Python script or importing the flow into an interactive session and calling it.
You can also create a flow run by:
cron
to invoke a flow functionHowever you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.
When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"The @flow
decorator is used to designate a flow:
from prefect import flow\n\n@flow\ndef my_flow():\n return\n
There are no rigid rules for what code you include within a flow definition - all valid Python is acceptable.
Flows are uniquely identified by name. You can provide a name
parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.
@flow(name=\"My Flow\")\ndef my_flow():\n return\n
Flows can call tasks to allow Prefect to orchestrate and track more granular units of work:
from prefect import flow, task\n\n@task\ndef print_hello(name):\n print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n print_hello(name)\n
Flows and tasks
There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!
However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.
In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.
You may call any number of other tasks, subflows, and even regular Python functions within your flow. You can pass parameters to your flow function that will be used elsewhere in the workflow, and Prefect will report on the progress and final state of any invocation.
Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.
Argument Descriptiondescription
An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function. name
An optional name for the flow. If not provided, the name will be inferred from the function. retries
An optional number of times to retry on flow run failure. retry_delay_seconds
An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero. flow_run_name
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string. task_runner
An optional task runner to use for task execution within the flow when you .submit()
tasks. If not provided and you .submit()
tasks, the ConcurrentTaskRunner
will be used. timeout_seconds
An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. validate_parameters
Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True
. version
An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null. For example, you can provide a name
value for the flow. Here we've also used the optional description
argument and specified a non-default task runner.
from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n description=\"My flow using SequentialTaskRunner\",\n task_runner=SequentialTaskRunner())\ndef my_flow():\n return\n
You can also provide the description as the docstring on the flow function.
@flow(name=\"My Flow\",\n task_runner=SequentialTaskRunner())\ndef my_flow():\n \"\"\"My flow using SequentialTaskRunner\"\"\"\n return\n
You can distinguish runs of this flow by providing a flow_run_name
. This setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:
import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n
Additionally this setting also accepts a function that returns a string for the flow run name:
import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n date = datetime.datetime.now(datetime.timezone.utc)\n\n return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nmy_flow(name=\"marvin\")\n
If you need access to information about the flow, use the prefect.runtime
module. For example:
from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n flow_name = flow_run.flow_name\n\n parameters = flow_run.parameters\n name = parameters[\"name\"]\n limit = parameters[\"limit\"]\n\n return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nmy_flow(name=\"marvin\")\n
Note that validate_parameters
will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int
and \"5\" is passed, it will be resolved to 5
. If set to False
, no validation will be performed on flow parameters.
The simplest workflow is just a @flow
function that does all the work of the workflow.
from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n print(f\"Hello {name}!\")\n\nhello_world(\"Marvin\")\n
When you run this flow, you'll see output like the following:
$ python hello.py\n15:11:23.594 | INFO | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO | Flow run 'benevolent-donkey' - Finished in state Completed()\n
A better practice is to create @task
functions that do the specific work of your flow, and use your @flow
function as the conductor that orchestrates the flow of your application:
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n\nhello_world(\"Marvin\")\n
When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.
$ python hello.py\n15:15:58.673 | INFO | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#visualizing-flow-structure","title":"Visualizing flow structure","text":"You can get a quick sense of the structure of your flow using the .visualize()
method on your flow. Calling this method will attempt to produce a schematic diagram of your flow and tasks without actually running your flow code.
Functions and code not inside of flows or tasks will still be run when calling .visualize()
. This may have unintended consequences. Place your code into tasks to avoid unintended execution.
To use the visualize()
method, Graphviz must be installed and on your PATH. Please install Graphviz from http://www.graphviz.org/download/. And note: just installing the graphviz
python package is not sufficient.
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@task(name=\"Print Hello Again\")\ndef print_hello_again(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n message2 = print_hello_again(message)\n\nhello_world.visualize()\n
Prefect cannot automatically produce a schematic for dynamic workflows, such as those with loops or if/else control flow. In this case, you can provide tasks with mock return values for use in the visualize()
call.
from prefect import flow, task\n@task(viz_return_value=[4])\ndef get_list():\n return [1, 2, 3]\n\n@task\ndef append_one(n):\n return n.append(6)\n\n@flow\ndef viz_return_value_tracked():\n l = get_list()\n for num in range(3):\n l.append(5)\n append_one(l)\n\nviz_return_value_tracked.visualize()\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"
Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.
Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run concurrently by using AnyIO task groups or asyncio.gather.
Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.
The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.
A task that represents a subflow will be annotated as such in its state_details
via the presence of a child_flow_run_id
field. A subflow can be identified via the presence of a parent_task_run_id
on state_details
.
You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.
Cancelling subflow runs
Inline subflow runs, specifically those created without run_deployment
, cannot be cancelled without cancelling their parent flow run. If you may need to cancel a subflow run independent of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n my_subflow(message)\n\nhello_world(\"Marvin\")\n
You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:
from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n print(f\"Subflow says: {msg}\")\n
Here's a parent flow that imports and uses my_subflow()
as a subflow:
from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n my_subflow(message)\n\nhello_world(\"Marvin\")\n
Running the hello_world()
flow (in this example from the file hello.py
) creates a flow run like this:
$ python hello.py\n15:19:21.651 | INFO | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n
Subflows or tasks?
In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question is:
\"When should I use a subflow instead of a task?\"
We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.
Subflows enable you to group related tasks within your workflow, and are a good choice when you want to run, observe, or retry a group of tasks as a single unit rather than calling tasks individually.
Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.
Prefect API requires keyword arguments
When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.
Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:
from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n a: int\n b: float\n c: str\n\n@flow\ndef model_validator(model: Model):\n print(model)\n
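For example, calling the flow with a raw dictionary should be coerced into a Model instance before the flow body runs:
model_validator({\"a\": 42, \"b\": 4.2, \"c\": \"hello\"})\n# a=42 b=4.2 c='hello'\n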
Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.
For example, to automatically convert something to a datetime:
from prefect import flow\nfrom datetime import datetime, timezone\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.now(timezone.utc)\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nwhat_day_is_it(\"2021-01-01T02:00:19.180906\")\n# It was Friday on 2021-01-01T02:00:19.180906\n
Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed
state. If a flow run for a deployment receives invalid parameters, it will move from a Pending
state to a Failed
without entering a Running
state.
Flow run parameters cannot exceed 512kb
in size
Prerequisite
Read the documentation about states before proceeding with this section.
The final state of the flow is determined by its return value. The following rules apply:
If an exception is raised directly in the flow function, the flow run is marked as failed.
If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it. If any task run or subflow run failed, the final flow run state is marked as FAILED. If any task run was cancelled, the final flow run state is marked as CANCELLED.
If the flow returns a manually created state, it is used as the state of the final flow run.
If the flow run returns any other object, it is marked as completed.
The following examples illustrate each of these cases:
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"If an exception is raised within the flow function, the flow is immediately marked as failed.
from prefect import flow\n\n@flow\ndef always_fails_flow():\n raise ValueError(\"This flow immediately fails\")\n\nalways_fails_flow()\n
Running this flow produces the following result:
22:22:36.864 | INFO | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return none
","text":"A flow with no return statement is determined by the state of all of its task runs.
from prefect import flow, task\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_fails_flow():\n always_fails_task.submit().result(raise_on_failure=False)\n always_succeeds_task()\n\nif __name__ == \"__main__\":\n always_fails_flow()\n
Running this flow produces the following result:
18:32:05.345 | INFO | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n18:32:05.638 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"If a flow returns one or more futures, the final state is determined based on the underlying states.
from prefect import flow, task\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_succeeds_flow():\n x = always_fails_task.submit().result(raise_on_failure=False)\n y = always_succeeds_task.submit(wait_for=[x])\n return y\n\nif __name__ == \"__main__\":\n always_succeeds_flow()\n
Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:
18:35:24.965 | INFO | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n18:35:25.265 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED
.
from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n    return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n    return \"bar\"\n\n@flow\ndef always_fails_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    z = always_succeeds_flow(return_state=True)\n    return x, y, z\n
Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed
, but the states of each of the returned futures is included in the flow state:
20:57:51.547 | INFO | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n
Returning multiple states
When returning multiple states, they must be contained in a set
, list
, or tuple
. If other collection types are used, the result of the contained states will not be checked.
If a flow returns a manually created state, the final state is determined based on the return value.
from prefect import task, flow\nfrom prefect.states import Completed, Failed\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_succeeds_flow():\n x = always_fails_task.submit()\n y = always_succeeds_task.submit()\n if y.result() == \"success\":\n return Completed(message=\"I am happy with this result\")\n else:\n return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n always_succeeds_flow()\n
Running this flow produces the following result.
18:37:42.844 | INFO | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"If the flow run returns any other object, then it is marked as completed.
from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n
Running this flow produces the following result.
21:02:45.715 | INFO | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I fail successfully\n21:02:45.879 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-a-flow","title":"Serving a flow","text":"The simplest way to create a deployment for your flow is by calling its serve
method. This method creates a deployment for the flow and starts a long-running process that monitors for work from the Prefect server. When work is found, it is executed within its own isolated subprocess.
from prefect import flow\n\n\n@flow(log_prints=True)\ndef hello_world(name: str = \"world\", goodbye: bool = False):\n print(f\"Hello {name} from Prefect! \ud83e\udd17\")\n\n if goodbye:\n print(f\"Goodbye {name}!\")\n\n\nif __name__ == \"__main__\":\n # creates a deployment and stays running to monitor for work instructions generated on the server\n\n hello_world.serve(name=\"my-first-deployment\",\n tags=[\"onboarding\"],\n parameters={\"goodbye\": True},\n interval=60)\n
This interface provides all of the configuration needed for a deployment, such as a name, tags, parameters, and a schedule, with no strong infrastructure requirements.
Schedules are auto-paused on shutdown
By default, stopping the process running flow.serve
will pause the schedule for the deployment (if it has one). When running this in environments where restarts are expected use the pause_on_shutdown=False
flag to prevent this behavior:
if __name__ == \"__main__\":\n hello_world.serve(name=\"my-first-deployment\",\n tags=[\"onboarding\"],\n parameters={\"goodbye\": True},\n pause_on_shutdown=False,\n interval=60)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-multiple-flows-at-once","title":"Serving multiple flows at once","text":"You can take this further and serve multiple flows with the same process using the serve
utility along with the to_deployment
method of flows:
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n serve(slow_deploy, fast_deploy)\n
The behavior and interfaces are identical to the single flow case.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#retrieve-a-flow-from-remote-storage","title":"Retrieve a flow from remote storage","text":"Flows can be retrieved from remote storage using the flow.from_source
method.
flow.from_source
accepts a git repository URL and an entrypoint pointing to the flow to load from the repository:
from prefect import flow\n\nmy_flow = flow.from_source(\n source=\"https://github.com/PrefectHQ/prefect.git\",\n entrypoint=\"flows/hello_world.py:hello\"\n)\n\nif __name__ == \"__main__\":\n my_flow()\n
16:40:33.818 | INFO | prefect.engine - Created flow run 'muscular-perch' for flow 'hello'\n16:40:34.048 | INFO | Flow run 'muscular-perch' - Hello world!\n16:40:34.706 | INFO | Flow run 'muscular-perch' - Finished in state Completed()\n
A flow entrypoint is the path to the file the flow is located in and the name of the flow function separated by a colon.
If you need additional configuration, such as specifying a private repository, you can provide a GitRepository
instead of URL:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/private-repo.git\",\n branch=\"dev\",\n credentials={\n \"access_token\": Secret.load(\"github-access-token\").get()\n }\n ),\n entrypoint=\"flows.py:my_flow\"\n)\n\nif __name__ == \"__main__\":\n my_flow()\n
You can serve loaded flows
Flows loaded from remote storage can be served using the same serve
method as local flows:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\"\n ).serve(name=\"my-deployment\")\n
When you serve a flow loaded from remote storage, the serving process will periodically poll your remote storage for updates to the flow's code. This pattern allows you to update your flow code without restarting the serving process.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-or-suspending-a-flow-run","title":"Pausing or suspending a flow run","text":"Prefect provides you with the ability to halt a flow run with two functions that are similar, but slightly different. When a flow run is paused, code execution is stopped and the process continues to run. When a flow run is suspended, code execution is stopped and so is the process.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-a-flow-run","title":"Pausing a flow run","text":"Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run
and resume_flow_run
functions.
Timeouts
Paused flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it paused and never resumed. You can specify a different timeout period in seconds using the timeout
parameter.
Most simply, pause_flow_run
can be called inside a flow:
from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n return \"a raft of ducks walk into a bar...\"\n\n\n@task\nasync def marvin_punchline():\n return \"it's a wonder none of them ducked!\"\n\n\n@flow\nasync def inspiring_joke():\n await marvin_setup()\n await pause_flow_run(timeout=600) # pauses for 10 minutes\n await marvin_punchline()\n
You can also implement conditional pauses:
from time import sleep\nfrom prefect import task, flow, pause_flow_run\n\n@task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n        print(i)\n\n@flow\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    # is_completed() checks the terminal state without needing a StateType import\n    if terminal_state.is_completed():\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n
Calling the inspiring_joke flow above will block code execution after the first task and wait for resumption to deliver the punchline.
await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n
Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run
utility via client code.
resume_flow_run(FLOW_RUN_ID)\n
The paused flow run will then finish!
> \"it's a wonder none of them ducked!\"\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#suspending-a-flow-run","title":"Suspending a flow run","text":"Similar to pausing a flow run, Prefect enables suspending an in-progress flow run.
The difference between pausing and suspending a flow run
There is an important difference between pausing and suspending a flow run. When you pause a flow run, the flow code is still running but is blocked until someone resumes the flow. This is not the case with suspending a flow run! When you suspend a flow run, the flow exits completely and the infrastructure running it (e.g., a Kubernetes Job) tears down.
This means that you can suspend flow runs to save costs instead of paying for long-running infrastructure. However, when the flow run resumes, the flow code will execute again from the beginning of the flow, so you should use tasks and task caching to avoid recomputing expensive operations.
Prefect exposes this functionality via the suspend_flow_run
and resume_flow_run
functions, as well as the Prefect UI.
When called inside of a flow suspend_flow_run
will immediately suspend execution of the flow run. The flow run will be marked as Suspended
and will not be resumed until resume_flow_run
is called.
Timeouts
Suspended flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it suspended and never resumed. You can specify a different timeout period in seconds using the timeout
parameter or pass timeout=None
for no timeout.
Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.
from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n x = foo.submit()\n pause_flow_run(timeout=30, reschedule=True)\n y = foo.submit()\n z = foo(wait_for=[x])\n alpha = foo(wait_for=[y])\n omega = foo(wait_for=[x, y])\n
Flow runs can be suspended out-of-process by calling suspend_flow_run(flow_run_id=<ID>)
or selecting the Suspend button in the Prefect UI or Prefect Cloud.
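For example, a minimal sketch of requesting suspension from outside the flow run's process (the flow run ID is a placeholder, and this assumes suspend_flow_run is importable from the top-level prefect package like its pause and resume counterparts):
from prefect import suspend_flow_run\n\n# Placeholder ID; in practice, take this from the UI or the API\nsuspend_flow_run(flow_run_id=\"<flow-run-id>\")\n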
Suspended flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run
utility via client code.
resume_flow_run(FLOW_RUN_ID)\n
Subflows can't be suspended independently of their parent run
You can't suspend a subflow run independently of its parent flow run.
If you use a flow to schedule a flow run with run_deployment
, the scheduled flow run will be linked to the calling flow as a subflow run by default. This means you won't be able to suspend the scheduled flow run independently of the calling flow. Call run_deployment
with as_subflow=False
to disable this linking if you need to be able to suspend the scheduled flow run independently of the calling flow.
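A sketch of that pattern (the deployment name is hypothetical):
from prefect import flow\nfrom prefect.deployments import run_deployment\n\n@flow\ndef orchestrator():\n    # as_subflow=False prevents linking, so the scheduled run can be\n    # suspended independently of this calling flow\n    run_deployment(name=\"my-flow/my-deployment\", as_subflow=False)\n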
Experimental
The wait_for_input
parameter used in the pause_flow_run
or suspend_flow_run
functions is an experimental feature. The interface or behavior of this feature may change without warning in future releases.
If you encounter any issues, please let us know in Slack or with a Github issue.
When pausing or suspending a flow run you may want to wait for input from a user. Prefect provides a way to do this by leveraging the pause_flow_run
and suspend_flow_run
functions. These functions accept a wait_for_input
argument, the value of which should be a subclass of prefect.input.RunInput
, a pydantic model. When resuming the flow run, users are required to provide data for this model. Upon successful validation, the flow run resumes, and the return value of the pause_flow_run
or suspend_flow_run
is an instance of the model containing the provided data.
Here is an example of a flow that pauses and waits for input from a user:
from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass UserNameInput(RunInput):\n name: str\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user_input = await pause_flow_run(\n wait_for_input=UserNameInput\n )\n\n logger.info(f\"Hello, {user_input.name}!\")\n
Running this flow will create a flow run. The flow run will advance until code execution reaches pause_flow_run
, at which point it will move into a Paused
state. Execution will block and wait for resumption.
When resuming the flow run, users will be prompted to provide a value for the name
field of the UserNameInput
model. Upon successful validation, the flow run will resume, and the return value of the pause_flow_run
will be an instance of the UserNameInput
model containing the provided data.
For more in-depth information on receiving input from users when pausing and suspending flow runs, see the Creating interactive workflows guide.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#canceling-a-flow-run","title":"Canceling a flow run","text":"You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.
When cancellation is requested, the flow run is moved to a "Cancelling" state. If the deployment is a work pool-based deployment with a worker, then the worker monitors the state of flow runs and detects that cancellation has been requested. The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.
A deployment is required
Flow run cancellation requires the flow run to be associated with a deployment. A monitoring process must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment
, cannot be cancelled without cancelling the parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.
Cancellation is robust to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid
or infrastructure identifier. Generally, this is composed of two parts: a scope identifying where the infrastructure is running, and a unique ID for the infrastructure within that scope.
The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope.
The identifiers for the common infrastructure types are: processes use the machine hostname and the PID; Docker containers use the Docker API URL and the container ID; Kubernetes jobs use the Kubernetes cluster name and the job name.
While the cancellation process is robust, there are a few issues that can occur:
If the infrastructure_pid
is missing from the flow run, it will be marked as cancelled but cancellation cannot be enforced.Enhanced cancellation
We are working on improving cases where cancellation can fail. You can try the improved cancellation experience by enabling the PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION
setting on your worker or agents:
prefect config set PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION=True\n
If you encounter any issues, please let us know in Slack or with a Github issue.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel
CLI command, passing the ID of the flow run.
prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel
button in the upper right corner.
Workers are recommended
Infrastructure blocks are part of the agent based deployment model. Work Pools and Workers simplify the specification of each flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.
Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"Prefect uses infrastructure to create the environment for a user's flow to execute.
Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs: create infrastructure for the flow run, and run a Python command to start the
prefect.engine
in the infrastructure, which retrieves the flow from storage and executes the flow.The engine acquires and calls the flow. Infrastructure doesn't know anything about how the flow is stored, it's just passing a flow run ID to the engine.
Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:
Process
runs flows in a local subprocess.DockerContainer
runs flows in a Docker container.KubernetesJob
runs flows in a Kubernetes Job.ECSTask
runs flows in an Amazon ECS Task.Cloud Run
runs flows in a Google Cloud Run Job.Container Instance
runs flows in an Azure Container Instance.What about tasks?
Flows and tasks can both use configuration objects to manage the environment in which code runs.
Flows use infrastructure.
Tasks use task runners. For more on how task runners work, see Task Runners.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page or create them in code and save them to the API using the blocks .save()
method.
Once created, there are two distinct ways to use infrastructure in a deployment:
-i
or --infra
flag and provide a type when building deployment files.-ib
or --infra-block
and a block slug when building deployment files.For example, when creating your deployment files, the supported Prefect infrastrucure types are:
process
docker-container
kubernetes-job
ecs-task
cloud-run-job
container-instance-job
$ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n
In this example we specify the DockerContainer
infrastructure in addition to a preconfigured AWS S3 bucket storage block.
The default deployment YAML filename may be edited as needed to add an infrastructure type or infrastructure settings.
###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedules: []\npaused: true\ninfra_overrides:\n env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n type: docker-container\n env: {}\n labels: {}\n name: null\n command:\n - python\n - -m\n - prefect.engine\n image: prefecthq/prefect:2-latest\n image_pull_policy: null\n networks: []\n network_mode: null\n auto_remove: false\n volumes: []\n stream_output: true\n memswap_limit: null\n mem_limit: null\n privileged: false\n block_type_slug: docker-container\n _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n bucket_path: bucket-full-of-sunshine\n aws_access_key_id: '**********'\n aws_secret_access_key: '**********'\n _is_anonymous: true\n _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n block_type_slug: s3\n _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n title: Parameters\n type: object\n properties: {}\n required: null\n definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n
Editing deployment YAML
Note the big DO NOT EDIT comment in the deployment YAML: In practice, anything above this block can be freely edited before running prefect deployment apply
to create the deployment on the API.
Once the deployment exists, any flow runs that this deployment starts will use DockerContainer
infrastructure.
You can also create custom infrastructure blocks \u2014 either in the Prefect UI or in code via the API \u2014 and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev
.
from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n namespace=\"dev\",\n image=\"prefecthq/prefect:2.0.0-python3.9\",\n image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n
Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev
as the infrastructure type when building a deployment:
prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n
See Deployments for more information about deployment build options.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"Every infrastrcture type has type-specific options.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#process","title":"Process","text":"Process
infrastructure runs a command in a new process.
Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
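As a rough sketch, a Process block can be created and saved in code (block name hypothetical):
from prefect.infrastructure import Process\n\n# Run flows in a local subprocess with an extra environment variable\nprocess = Process(env={\"MY_ENV_VAR\": \"value\"})\nprocess.save(\"my-process\")\n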
Process
supports the following settings:
DockerContainer
infrastructure executes flow runs in a container.
Requirements for DockerContainer
:
localhost
and 127.0.0.1
will be replaced with host.docker.internal
.DockerContainer
supports the following settings:
DockerRegistry
or ensure otherwise that your execution layer is authenticated to pull the image from the image registry. image_pull_policy Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'. image_registry A DockerRegistry
block containing credentials to use if image
is stored in a private image registry. labels An optional dictionary of labels, mapping name to value. name An optional name for the container. networks An optional list of strings specifying Docker networks to connect the container to. network_mode Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set. stream_output Bool indicating whether to stream output from the subprocess to local standard output. volumes An optional list of volume mount strings in the format of \"local_path:container_path\". Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"KubernetesJob
infrastructure executes flow runs in a Kubernetes Job.
Requirements for KubernetesJob
:
kubectl
must be available.The Prefect CLI command prefect kubernetes manifest server
automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.
KubernetesJob
supports the following settings:
When creating deployments using KubernetesJob
infrastructure, the infra_overrides
parameter expects a dictionary. For a KubernetesJob
, the customizations
parameter expects a list.
Containers expect a list of objects, even if there is only one. For any patches applying to the container, the path value should be a list, for example: /spec/templates/spec/containers/0/resources
A Kubernetes-Job
infrastructure block defined in Python:
from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n    {\n        \"op\": \"add\",\n        \"path\": \"/spec/template/spec/containers/0/resources\",\n        \"value\": {\n            \"requests\": {\n                \"cpu\": \"2000m\",\n                \"memory\": \"4Gi\"\n            },\n            \"limits\": {\n                \"cpu\": \"4000m\",\n                \"memory\": \"8Gi\",\n                \"nvidia.com/gpu\": \"1\"\n            }\n        },\n    }\n]\n\nk8s_job = KubernetesJob(\n    namespace=\"dev\",\n    image=\"prefecthq/prefect:2-latest\",\n    image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n    finished_job_ttl=300,\n    job_watch_timeout_seconds=600,\n    pod_watch_timeout_seconds=600,\n    service_account_name=\"prefect-server\",\n    customizations=customizations,\n)\nk8s_job.save(\"devk8s\")\n
A Deployment
with infra-overrides defined in Python:
from prefect.deployments import Deployment\nfrom prefect.infrastructure import KubernetesJob\n\ninfra_overrides = {\n    \"customizations\": [\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\n                \"requests\": {\n                    \"cpu\": \"2000m\",\n                    \"memory\": \"4Gi\"\n                },\n                \"limits\": {\n                    \"cpu\": \"4000m\",\n                    \"memory\": \"8Gi\",\n                    \"nvidia.com/gpu\": \"1\"\n                }\n            },\n        }\n    ]\n}\n\n# Load the already created K8s block\nk8sjob = KubernetesJob.load(\"devk8s\")\n\n# my_flow and storage are assumed to be defined elsewhere\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    infrastructure=k8sjob,\n    storage=storage,\n    infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"ECSTask
infrastructure runs your flow in an ECS Task.
Requirements for ECSTask
:
prefect-aws
collection must be installed within the agent environment: pip install prefect-aws
ECSTask
and AwsCredentials
blocks must be registered within the agent environment: prefect block register -m prefect_aws.ecs
ECSTask
is S3. If you leverage that type of block, make sure that s3fs
is installed within your agent and flow run environment. The easiest way to satisfy all the installation-related points mentioned above is to include the following commands in your Dockerfile: FROM prefecthq/prefect:2-python3.9 # example base image \nRUN pip install s3fs prefect-aws\n
Make sure to allocate enough CPU and memory to your agent, and consider adding retries
When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending
state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.
Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError
, HTTPClientError
, or RequestTimeout
), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS
(can be set to an integer value such as 10) and AWS_RETRY_MODE
(can be set to a string value including standard
or adaptive
modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask
infrastructure block.
Learn about options for Prefect-maintained Docker images in the Docker guide.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/results/","title":"Results","text":"Results represent the data returned by a flow or a task.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"When calling flows or tasks, the result is returned directly:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n task_result = my_task()\n return task_result + 1\n\nresult = my_flow()\nassert result == 2\n
When working with flow and task states, the result can be retrieved with the State.result()
method:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n
When submitting tasks to a runner, the result can be retrieved with the Future.result()
method:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n future = my_task.submit()\n return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.
When calling flows or tasks, the exceptions are raised as in normal Python:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n try:\n my_task()\n except ValueError:\n print(\"Oh no! The task failed.\")\n\n return True\n\nmy_flow()\n
If you would prefer to check for a failed task without using try/except
, you may ask Prefect to return the state:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n if state.is_failed():\n print(\"Oh no! The task failed. Falling back to '1'.\")\n result = 1\n else:\n result = state.result()\n\n return result + 1\n\nresult = my_flow()\nassert result == 2\n
If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n try:\n result = state.result()\n except ValueError:\n print(\"Oh no! The state raised the error!\")\n\n return True\n\nmy_flow()\n
When retrieving the result from a state, you can ask Prefect not to raise exceptions:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n maybe_result = state.result(raise_on_failure=False)\n if isinstance(maybe_result, ValueError):\n print(\"Oh no! The task failed. Falling back to '1'.\")\n result = 1\n else:\n result = maybe_result\n\n return result + 1\n\nresult = my_flow()\nassert result == 2\n
When submitting tasks to a runner, Future.result()
works the same as State.result()
:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n future = my_task.submit()\n\n try:\n future.result()\n except ValueError:\n print(\"Ah! Futures will raise the failure as well.\")\n\n # You can ask it not to raise the exception too\n maybe_result = future.result(raise_on_failure=False)\n print(f\"Got {type(maybe_result)}\")\n\n return True\n\nmy_flow()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"When calling flows or tasks, the result is returned directly:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n task_result = await my_task()\n return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
When working with flow and task states, the result can be retrieved with the State.result()
method:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n state = await my_task(return_state=True)\n result = await state.result(fetch=True)\n return result + 1\n\nasync def main():\n state = await my_flow(return_state=True)\n assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n
Resolving results
Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result()
did not require an await
. For backwards compatibility, when used from an asynchronous context, State.result()
returns a raw result type.
You may opt-in to the new behavior by passing fetch=True
as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT
setting. If you do not opt-in to this behavior, you will see a warning.
You may also opt-out by setting fetch=False
. This will silence the warning, but you will need to retrieve your result manually from the result type.
When submitting tasks to a runner, the result can be retrieved with the Future.result()
method:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n future = await my_task.submit()\n result = await future.result()\n return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.
The following Prefect features require results to be persisted:
If results are not persisted, these features may not be usable.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them until you want to customize behavior. You can configure results on the flow
and task
decorators with the following options:
persist_result
: Whether the result should be persisted to storage.result_storage
: Where to store the result when persisted.result_serializer
: How to convert the result to a storable form.Persistence of the result of a task or flow can be configured with the persist_result
option. The persist_result
option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.
For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:
from prefect import flow, task\n\n@task\ndef my_task():\n return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n # This task does not have persistence toggled off and it is needed for the flow feature,\n # so Prefect will persist its result at runtime\n my_task()\n
Flow retries do not require the flow's result to be persisted, so it will not be.
In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:
from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n # This task uses caching so its result will be persisted by default\n return \"hello world!\"\n\n\n@task\ndef my_other_task():\n ...\n\n@flow\ndef my_flow():\n # This task uses a feature that requires result persistence\n my_task()\n\n # This task does not use a feature that requires result persistence and the\n # flow does not use any features that require task result persistence so its\n # result will not be persisted by default\n my_other_task()\n
Persistence of results can be manually toggled on or off:
from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n # This flow will persist its result even if not necessary for a feature.\n ...\n\n@task(persist_result=False)\ndef my_task():\n # This task will never persist its result.\n # If persistence needed for a feature, an error will be raised.\n ...\n
Toggling persistence manually will always override any behavior that Prefect would infer.
You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT
setting. To persist results by default, even if they are not needed for a feature change the value to a truthy value:
prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n
Task and flows with persist_result=False
will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT
is true
.
The result storage location can be configured with the result_storage
option. The result_storage
option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH
setting (defaults to ~/.prefect/storage
).
from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n my_task() # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n ...\n\nmy_flow() # The flow has no result storage configured and no parent, the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow() # The flow and task within it will use S3 for result storage.\n
You can configure this to use a specific storage using one of the following:
LocalFileSystem(basepath=\".my-results\")
's3/dev-s3-block'
The path of the result file in the result storage can be configured with the result_storage_key
. The result_storage_key
option defaults to a null value, which generates a unique identifier for each result.
from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n ...\n\nmy_flow() # The task's result will be persisted to 's3://my-bucket/my_task.json'\n
Result storage keys are formatted with access to all of the modules in prefect.runtime
and the run's parameters
. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name
parameter.
from prefect import flow, task\n\n@flow()\ndef my_flow():\n hello_world()\n hello_world(name=\"foo\")\n hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n return f\"hello {name}\"\n\nmy_flow()\n
After running the flow, we can see three persisted result files in our storage directory:
$ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n
In the next example, we include metadata about the flow run from the prefect.runtime.flow_run
module:
from prefect import flow, task\n\n@flow\ndef my_flow():\n hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n return f\"hello {name}\"\n\nmy_flow()\n
After running this flow, we can see a result file templated with the name of the flow and the flow run:
\u276f ls ~/.prefect/storage | grep \"my-flow\" \nmy-flow_industrious-trout_hello.json\n
If a result exists at a given storage key in the storage location, it will be overwritten.
Result storage keys can only be configured on tasks at this time.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"The result serializer can be configured with the result_serializer
option. The result_serializer
option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER
will be used. This setting defaults to Prefect's pickle serializer.
You may configure the result serializer using:
\"json\"
or \"pickle\"
\u2014 this corresponds to an instance with default valuesJSONSerializer(jsonlib=\"orjson\")
Prefect provides a CompressedSerializer
which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma
compression by default. We test other compression schemes provided in the Python standard library such as bz2
and zlib
, but you should be able to use any compression library that provides compress
and decompress
methods.
You may configure compression of results using:
compressed/
e.g. \"compressed/json\"
or \"compressed/pickle\"
CompressedSerializer(serializer=\"pickle\", compressionlib=\"lzma\")
Note that the \"compressed/<serializer-type>\"
shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.
The Prefect API does not store your results in most cases for the following reasons:
Results can be large and slow to send to and from the API.
Results often contain private information or data.
Results would need to be stored in the database or complex logic implemented to hydrate from another source.
There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.
The following data types will be stored by the API without persistence to storage:
booleans (True, False)
nulls (None
)If persist_result
is set to False
, these values will never be stored.
The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.
Prefect tracks the following result metadata:
Data type
Storage location (if persisted)
When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.
Flows and tasks both include an option to drop the result from memory with cache_result_in_memory
:
@flow(cache_result_in_memory=False)\ndef foo():\n return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n return \"pretend this is biiiig data\"\n
When cache_result_in_memory
is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.
@flow\ndef foo():\n result = bar()\n state = bar(return_state=True)\n\n # The result will be retrieved from storage here\n state.result()\n\n future = bar.submit()\n # The result will be retrieved from storage here\n future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n # This result will be persisted\n return \"pretend this is biiiig data\"\n
If both cache_result_in_memory
and persistence are disabled, your results will not be available downstream.
@task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n # Raises an error\n result = bar()\n\n # This is okay\n state = bar(return_state=True)\n\n # Raises an error\n state.result()\n\n # This is okay\n future = bar.submit()\n\n # Raises an error\n future.result()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer
. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:
from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n
Benefits of the pickle serializer:
Drawbacks of the pickle serializer:
We supply a custom JSON serializer at prefect.serializers.JSONSerializer
. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.
By default, we use the standard Python json
library. Alternative JSON libraries can be specified:
from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n
Benefits of the JSON serializer:
Drawbacks of the JSON serializer:
Prefect uses internal result types to capture information about the result attached to a state. The following types are used:
UnpersistedResult: Stores result metadata but the value is only available when created.
LiteralResult: Stores simple values inline.
PersistedResult: Stores a reference to a result persisted to storage.
method that can be called to return the value of the result. This is done behind the scenes when the result()
method is used on states or futures.
Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.
Example:
result = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n
Literal results reduce the overhead required to persist simple results.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"The persisted result type contains all of the information needed to retrieve the result from storage. This includes:
Persisted result types also contain metadata for inspection without retrieving the result:
The get()
method on result references retrieves the data from storage, deserializes it, and returns the original object. The get()
operation will cache the resolved object to reduce the overhead of subsequent calls.
When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob
type. The document contains:
The serialized data of the result
A full description of the result serializer that can be used to deserialize the result data
The Prefect version the result was created with
Scheduling is one of the primary reasons for using an orchestrator such as Prefect. Prefect allows you to use schedules to automatically create new flow runs for deployments.
Prefect Cloud can also schedule flow runs through event-driven automations.
Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.
You can add a schedule to any deployment. The Prefect Scheduler
service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.
Support for multiple schedules
We are currently rolling out support for multiple schedules per deployment. You can now assign multiple schedules to deployments in the Prefect UI, the CLI via prefect deployment schedule
commands, the Deployment
class, and in block-based deployment YAML files.
Support for multiple schedules in flow.serve
, flow.deploy
, serve
, and worker-based deployments with prefect deploy
will arrive soon.
Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:
Cron is most appropriate for users who are already familiar with cron from previous use.
Interval is best suited for deployments that need to run at some consistent cadence that isn't related to absolute time.
RRule is best suited for deployments that rely on calendar logic for simple recurring schedules, irregular intervals, exclusions, or day-of-month adjustments.
Schedules can be inactive
When you create or edit a schedule, you can set the active
property to False
in Python (or false
in a YAML file) to deactivate the schedule. This is useful if you want to keep the schedule configuration but temporarily stop the schedule from creating new flow runs.
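For example, a minimal sketch of a deactivated schedule in a deployment YAML file (the cron value is illustrative):
schedules:\n - cron: \"0 0 * * *\"\n active: false\n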
A schedule may be specified with a cron
pattern. Users may also provide a timezone to enforce DST behaviors.
Cron
uses croniter
to specify datetime iteration with a cron
-like format.
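For example, a minimal sketch of a cron schedule in a block-based deployment YAML file (values illustrative):
schedule:\n cron: 0 0 * * *\n timezone: America/Chicago\n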
Cron properties include:
cron: A valid cron string. (Required)
day_or: Boolean indicating how croniter handles day and day_of_week entries. Default is True.
timezone: String name of a time zone. (See the IANA Time Zone Database for valid time zones.)","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#how-the-day_or-property-works","title":"How the day_or
property works","text":"The day_or
property defaults to True
, matching the behavior of cron
. In this mode, if you specify a day
(of the month) entry and a day_of_week
entry, the schedule will run a flow on both the specified day of the month and on the specified day of the week. The \"or\" in day_or
refers to the fact that the two entries are treated like an OR
statement, so the schedule should include both, as in the SQL statement SELECT * FROM employees WHERE first_name = 'Xiāng' OR last_name = 'Brookins';
.
For example, with day_or
set to True
, the cron schedule * * 3 1 2
runs a flow every minute on the 3rd day of the month (whatever that is) and on Tuesday (the second day of the week) in January (the first month of the year).
With day_or
set to False
, the day
(of the month) and day_of_week
entries are joined with the more restrictive AND
operation, as in the SQL statement SELECT * from employees WHERE first_name = 'Andrew' AND last_name = 'Brookins';
. For example, the same schedule, when day_or
is False
, runs a flow every minute on the 3rd Tuesday in January. This behavior matches fcron
instead of cron
.
Supported croniter
features
While Prefect supports most features of croniter
for creating cron
-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.
Daylight saving time considerations
If the timezone
is a DST-observing one, then the schedule will adjust itself appropriately.
The cron
rules for DST are based on schedule times, not intervals. This means that an hourly cron
schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.
Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"An Interval
schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed using an optional anchor_date
. For example, here's how you can create a schedule for every 10 minutes in a block-based deployment YAML file:
schedule:\n interval: 600\n timezone: America/Chicago \n
Interval properties include:
interval: A datetime.timedelta indicating the time between flow runs. (Required)
anchor_date: A datetime.datetime indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used.
timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.)
Note that the anchor_date does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval from it. The sketch below imports the Pendulum Python package for easy datetime manipulation. Pendulum isn't required, but it's a useful tool for specifying dates.
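A minimal sketch of such a schedule in Python (the anchor date is illustrative):
from datetime import timedelta\nimport pendulum\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\n# Pendulum makes it easy to build a timezone-aware anchor date\nschedule = IntervalSchedule(\n interval=timedelta(minutes=10),\n anchor_date=pendulum.datetime(2023, 1, 1, tz=\"America/Chicago\"),\n)\n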
Daylight saving time considerations
If the schedule's anchor_date
or timezone
are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.
For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.
For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"An RRule
scheduling supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.
RRule
uses the dateutil rrule module to specify iCal recurrence rules.
RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:
RRule
properties include:
rrulestr
examples for syntax. timezone String name of a time zone. See the IANA Time Zone Database for valid time zones. You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.
For example, the following RRule schedule in a block-based deployment YAML file creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.
schedule:\n rrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n
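A hedged sketch of the equivalent schedule in Python, assuming the same recurrence rule:
from prefect.client.schemas.schedules import RRuleSchedule\n\nschedule = RRuleSchedule(rrule=\"FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z\")\n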
RRule restrictions
Note the max supported character length of an rrulestr
is 6500 characters
Note that COUNT
is not supported. Please use UNTIL
or the /deployments/{id}/runs
endpoint to schedule a fixed number of flow runs.
Daylight saving time considerations
Note that as a calendar-oriented standard, RRules
are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.
There are several ways to create a schedule for a deployment:
Through the cron, interval, or rrule parameters if building your deployment via the serve method of the Flow object or the serve utility for managing multiple flows simultaneously
Through the interactive prefect deploy command
Through the deployments -> schedule section of the prefect.yaml file
Through the schedules section of a block-based deployment YAML file
By passing schedules into the Deployment class or Deployment.build_from_flow
You can add schedules in the Schedules section on a Deployment page in the UI.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#locating-the-schedules-section","title":"Locating the Schedules section","text":"On larger displays, the Schedules section will appear in a sidebar on the right side of the page. On smaller displays, it will appear on the \"Details\" tab of the page.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#adding-a-schedule","title":"Adding a schedule","text":"Under Schedules, select the + Schedule button. A modal dialog will open. Choose Interval or Cron to create a schedule.
What about RRule?
The UI does not support creating RRule schedules. However, the UI will display RRule schedules that you've created via the command line.
The new schedule will appear on the Deployment page where you created it. In addition, the schedule will be viewable in human-friendly text in the list of deployments on the Deployments page.
After you create a schedule, new scheduled flow runs will be visible in the Upcoming tab of the Deployment page where you created it.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#editing-schedules","title":"Editing schedules","text":"You can edit a schedule by selecting Edit from the three-dot menu next to a schedule on a Deployment page.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-a-python-deployment-creation-file","title":"Creating schedules with a Python deployment creation file","text":"When you create a deployment in a Python file with flow.serve()
, serve
, flow.deploy()
, or deploy
you can specify the schedule. Just add the keyword argument cron
, interval
, or rrule
.
interval: An interval on which to execute the deployment. Accepts a number or a \n timedelta object to create a single schedule. If a number is given, it will be \n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create \n multiple schedules.\ncron: A cron schedule string of when to execute runs of this deployment. \n Also accepts an iterable of cron schedule strings to create multiple schedules.\nrrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\nschedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\nschedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n
Here's an example of creating a cron schedule with serve
for a deployment flow that will run every minute of every day:
my_flow.serve(name=\"flowing\", cron=\"* * * * *\")\n
Here's an example of creating an interval schedule with serve
for a deployment flow that will run every 10 minutes with an anchor date and a timezone:
from datetime import timedelta, datetime\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\nmy_flow.serve(name=\"flowing\", schedule=IntervalSchedule(interval=timedelta(minutes=10), anchor_date=datetime(2023, 1, 1, 0, 0), timezone=\"America/Chicago\"))\n
Block and agent-based deployments with Python files are not a recommended way to create deployments. However, if you are using that deployment creation method you can create a schedule by passing a schedule
argument to the Deployment.build_from_flow
method.
Here's how you create the equivalent schedule in a Python deployment file, with a timezone specified.
from prefect.deployments import Deployment\nfrom prefect.server.schemas.schedules import CronSchedule\n\n# \"pipeline\" is the flow object being deployed\ncron_demo = Deployment.build_from_flow(\n pipeline,\n \"etl\",\n schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n
IntervalSchedule
and RRuleSchedule
are the other two Python class schedule options.
prefect deploy
command","text":"If you are using worker-based deployments, you can create a schedule through the interactive prefect deploy
command. You will be prompted to choose which type of schedule to create.
prefect.yaml
file's deployments
-> schedule
section","text":"If you save the prefect.yaml
file from the prefect deploy
command, you will see it has a schedules
section for your deployment. Alternatively, you can create a prefect.yaml
file from a recipe or from scratch and add a schedules
section to it.
deployments:\n ...\n schedules:\n - cron: \"0 0 * * *\"\n timezone: \"America/Chicago\"\n active: false\n - cron: \"0 12 * * *\"\n timezone: \"America/New_York\"\n active: true\n - cron: \"0 18 * * *\"\n timezone: \"Europe/London\"\n active: true\n
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler
service","text":"The Scheduler
service is started automatically when prefect server start
is run and it is a built-in service of Prefect Cloud.
By default, the Scheduler
service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler
evaluates each deployment's schedules and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next 3 would all start in the next hour.
More specifically, the Scheduler
tries to create the smallest number of runs that satisfy the following constraints, in order:
No more than PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS runs will be scheduled.
Runs will not be scheduled more than PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME in the future.
At least PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS runs will be scheduled.
Runs will be scheduled until at least PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME in the future.
These behaviors can all be adjusted through the relevant settings that can be viewed with the terminal command prefect config view --show-defaults
:
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n
See the Settings docs for more information on altering your settings.
These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).
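For example, one hedged sketch of raising the minimum number of scheduled runs via the CLI (assuming the default profile):
prefect config set PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='5'\n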
The Scheduler
does not affect execution
The Prefect Scheduler
service only creates new flow runs and places them in Scheduled
states. It is not involved in flow or task execution.
If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.
To remove all scheduled runs for a flow deployment, you can remove the schedule via the UI.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of the states to use Prefect, you can give your workflows superpowers by taking advantage of it.
At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you that a task:
is scheduled to make a third run attempt in an hour
succeeded and what data it produced
was scheduled to run, but later cancelled
used the cached result of a previous run instead of re-running
failed because it timed out
By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.
Only runs have states
Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name, is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING
. However, if the task retries, that same task run will have the name Retrying and the type RUNNING
. Each time the task run transitions into the RUNNING
state, the same orchestration rules are applied.
There are terminal state types from which there are no orchestrated transitions to any other state type.
COMPLETED
CANCELLED
FAILED
CRASHED
The full complement of states and state types includes:
| Name | Type | Terminal? | Description |
| --- | --- | --- | --- |
| Scheduled | SCHEDULED | No | The run will begin at a particular time in the future. |
| Late | SCHEDULED | No | The run's scheduled start time has passed, but it has not transitioned to PENDING (5 seconds by default). |
| AwaitingRetry | SCHEDULED | No | The run did not complete successfully because of a code issue and had remaining retry attempts. |
| Pending | PENDING | No | The run has been submitted to run, but is waiting on necessary preconditions to be satisfied. |
| Running | RUNNING | No | The run code is currently executing. |
| Retrying | RUNNING | No | The run code is currently executing after previously failing to complete successfully. |
| Paused | PAUSED | No | The run code has stopped executing until it receives manual approval to proceed. |
| Cancelling | CANCELLING | No | The infrastructure on which the code was running is being cleaned up. |
| Cancelled | CANCELLED | Yes | The run did not complete because a user determined that it should not. |
| Completed | COMPLETED | Yes | The run completed successfully. |
| Failed | FAILED | Yes | The run did not complete because of a code issue and had no remaining retry attempts. |
| Crashed | CRASHED | Yes | The run did not complete because of an infrastructure issue. |
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"When calling a task or a flow, there are three types of returned values:
Data: A Python object (such as int, str, dict, list, and so on).
State: A Prefect object indicating the state of a flow or task run.
PrefectFuture: A Prefect object that contains both data and State.
.
Returning Prefect State
occurs anytime you call your task or flow with the argument return_state=True
.
Returning PrefectFuture
is achieved by calling your_task.submit()
.
By default, running a task will return data:
from prefect import flow, task \n\n@task \ndef add_one(x):\n return x + 1\n\n@flow \ndef my_flow():\n result = add_one(1) # return int\n
The same rule applies for a subflow:
@flow \ndef subflow():\n return 42 \n\n@flow \ndef my_flow():\n result = subflow() # return data\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"To return a State
instead, add return_state=True
as a parameter of your task call.
@flow \ndef my_flow():\n state = add_one(1, return_state=True) # return State\n
To get data from a State
, call .result()
.
@flow \ndef my_flow():\n state = add_one(1, return_state=True) # return State\n result = state.result() # return int\n
The same rule applies for a subflow:
@flow \ndef subflow():\n return 42 \n\n@flow \ndef my_flow():\n state = subflow(return_state=True) # return State\n result = state.result() # return int\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"To get a PrefectFuture
, add .submit()
to your task call.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n
To get data from a PrefectFuture
, call .result()
.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n result = future.result() # return data\n
To get a State
from a PrefectFuture
, call .wait()
.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n state = future.wait() # return State\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"The final state of a flow is determined by its return value. The following rules apply:
FAILED
.None
), its state is determined by the states of all of the tasks and subflows within it.FAILED
.CANCELLED
.See the Final state determination section of the Flows documentation for further details and examples.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n return 42\n\nmy_flow()\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion
\u2713 \u2713 Executes when a flow or task run enters a Completed
state. on_failure
\u2713 \u2713 Executes when a flow or task run enters a Failed
state. on_cancellation
\u2713 - Executes when a flow run enters a Cancelling
state. on_crashed
\u2713 - Executes when a flow run enters a Crashed
state. on_running
\u2713 - Executes when a flow run enters a Running
state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n \"\"\"This is the required signature for a flow run state\n change hook. This hook can only be passed into flows.\n \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"def my_task_hook(task: Task, task_run: TaskRun, state: State):\n \"\"\"This is the required signature for a task run state change\n hook. This hook can only be passed into tasks.\n \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:
def my_success_hook(task, task_run, state):\n print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n on_completion=[my_success_hook, my_succeed_or_fail_hook],\n on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#pass-kwargs-to-your-hooks","title":"Pass kwargs
to your hooks","text":"The Prefect engine will call your hooks for you upon the state change, passing in the flow, flow run, and state objects.
However, you can define your hook to have additional default arguments:
from prefect import flow\n\ndata = {}\n\ndef my_hook(flow, flow_run, state, my_arg=\"custom_value\"):\n data.update(my_arg=my_arg, state=state)\n\n@flow(on_completion=[my_hook])\ndef lazy_flow():\n pass\n\nstate = lazy_flow(return_state=True)\n\nassert data == {\"my_arg\": \"custom_value\", \"state\": state}\n
... or define your hook to accept arbitrary keyword arguments:
from functools import partial\nfrom prefect import flow, task\n\ndata = {}\n\ndef my_hook(task, task_run, state, **kwargs):\n data.update(state=state, **kwargs)\n\n@task\ndef bad_task():\n raise ValueError(\"meh\")\n\n@flow\ndef ok_with_failure_flow(x: str = \"foo\", y: int = 42):\n bad_task_with_a_hook = bad_task.with_options(\n on_failure=[partial(my_hook, **dict(x=x, y=y))]\n )\n # return a tuple of \"bar\" and the task run state\n # to avoid raising the task's exception\n return \"bar\", bad_task_with_a_hook(return_state=True)\n\n_, task_run_state = ok_with_failure_flow()\n\nassert data == {\"x\": \"foo\", \"y\": 42, \"state\": task_run_state}\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"Storage blocks are not recommended
Storage blocks are part of the legacy block-based deployment model. Instead, using serve
or runner
-based Python creation methods or workers and work pools with prefect deploy
via the CLI are the recommended options for creating a deployment. Flow code storage can be specified in the Python file with serve
or runner
-based Python creation methods; alternatively, with the work pools and workers style of flow deployment, you can specify flow code storage during the interactive prefect deploy
CLI experience and in its resulting prefect.yaml
file.
Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect workers (or legacy agents). Anytime you build a block-based deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.
If no storage is explicitly configured, Prefect will use LocalFileSystem
storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.
Prefect supports creating multiple storage configurations and switching between storage as needed.
Storage uses blocks
Blocks are the Prefect technology underlying storage, and enables you to do so much more.
In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"When building a deployment for a workflow, you have two options for configuring workflow storage:
Anytime you call prefect deployment build
without providing the --storage-block
flag, a default LocalFileSystem
block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml
file that Prefect creates after calling prefect deployment build
.
While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"Current options for deployment storage blocks include:
Storage Description Required Library Local File System Store code in a run's local file system. Remote File System Store code in a any filesystem supported byfsspec
. AWS S3 Storage Store code in an AWS S3 bucket. s3fs
Azure Storage Store code in Azure Datalake and Azure Blob Storage. adlfs
GitHub Storage Store code in a GitHub repository. Google Cloud Storage Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. gcsfs
SMB Store code in SMB shared network storage. smbprotocol
GitLab Repository Store code in a GitLab repository. prefect-gitlab
Bitbucket Repository Store code in a Bitbucket repository. prefect-bitbucket
Accessing files may require storage filesystem libraries
Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.
For example, the AWS S3 Storage block requires the s3fs
library.
See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"You can create these blocks either via the UI or via Python.
You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.
To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.
Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.
You can also create blocks using the Prefect Python API:
from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n aws_access_key_id=\"foo\", \n aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n
This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build
command. The storage block slug is formatted as block-type/block-name
. In this case, s3/example-block
for an AWS S3 Bucket block named example-block
. See block identifiers for details.
prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n
This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml
file. For more information, see Deployments.
Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.
Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.
However, that's not the only way to run tasks!
You can use the .submit()
method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.
Using the .submit()
method to submit a task also causes the task run to return a PrefectFuture
, a Prefect object that contains both any data returned by the task function and a State
, a Prefect object indicating the state of the task run.
Prefect currently provides the following built-in task runners:
SequentialTaskRunner
can run tasks sequentially. ConcurrentTaskRunner
can run tasks concurrently, allowing tasks to switch when blocking on IO. Tasks will be submitted to a thread pool maintained by anyio
.In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.
DaskTaskRunner
can run tasks requiring parallel execution using dask.distributed
. RayTaskRunner
can run tasks requiring parallel execution using Ray.Concurrency versus parallelism
The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.
Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.
Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.
To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.
Remember to call .submit()
when using a task runner
Make sure you use .submit()
to run your task with a task runner. Calling the task directly, without .submit()
, from within a flow will run the task sequentially instead of using a specified task runner.
For example, you can use ConcurrentTaskRunner
to allow tasks to switch when they would block.
from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n print(f\"elevator moving to floor {floor}\")\n time.sleep(floor)\n print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n for floor in range(10, 0, -1):\n stop_at_floor.submit(floor)\n
If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner
and RayTaskRunner
.
Default task runner
If you don't specify a task runner for a flow and you call a task with .submit()
within the flow, Prefect uses the default ConcurrentTaskRunner
.
Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner
will force submitted tasks to run sequentially rather than concurrently.
Synchronous and asynchronous tasks
The SequentialTaskRunner
works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def
rather than def
.
The following example demonstrates using the SequentialTaskRunner
to ensure that tasks run sequentially. In the example, the flow glass_tower
runs the task stop_at_floor
for floors one through 38, in that order.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n situation = random.choice([\"on fire\",\"clear\"])\n print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n name=\"towering-infernflow\",\n )\ndef glass_tower():\n for floor in range(1, 39):\n stop_at_floor.submit(floor)\n\nglass_tower()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.
For example, you can have a flow (in the example below called sequential_flow
) that runs its tasks locally using the SequentialTaskRunner
. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow
) to run those tasks using the DaskTaskRunner
.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n print(\"Hello!\")\n\n@task\ndef hello_dask():\n print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n hello_local.submit()\n dask_subflow()\n hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n hello_dask.submit()\n\nif __name__ == \"__main__\":\n sequential_flow()\n
Guarding main
Note that you should guard the main
function by using if __name__ == \"__main__\"
to avoid issues with parallel processing.
This script outputs the following logs demonstrating the use of the Dask task runner:
120:14:29.785 | INFO | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"When you use .submit()
to submit a task to a task runner, the task runner creates a PrefectFuture
for access to the state and result of the task.
A PrefectFuture
is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.
In the following example, we save the return value of calling .submit()
on the task say_hello
to the variable future
, and then we print the type of the variable:
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n future = say_hello.submit(\"Marvin\")\n print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n
When you run this code, you'll see that the variable future
is a PrefectFuture
:
variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n
When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.
This means that the downstream task won't receive the PrefectFuture
you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.
Take a look at how this works in the following example
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n print(type(result))\n print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n future = say_hello.submit(\"Marvin\")\n print_result.submit(future)\n\nhello_world()\n
<class 'str'>\nHello Marvin!\n
Futures have a few useful methods. For example, you can get the return value of the task run with .result()
:
from prefect import flow, task\n\n@task\ndef my_task():\n return 42\n\n@flow\ndef my_flow():\n future = my_task.submit()\n result = future.result()\n print(result)\n\nmy_flow()\n
The .result()
method will wait for the task to complete before returning the result to the caller. If the task run fails, .result()
will raise the task run's exception. You may disable this behavior with the raise_on_failure
option:
from prefect import flow, task\n\n@task\ndef my_task():\n return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n future = my_task.submit()\n result = future.result(raise_on_failure=False)\n if future.get_state().is_failed():\n # `result` is an exception! handle accordingly\n ...\n else:\n # `result` is the expected return value of our task\n ...\n
You can retrieve the current state of the task run associated with the PrefectFuture
using .get_state()
:
@flow\ndef my_flow():\n future = my_task.submit()\n state = future.get_state()\n
You can also wait for a task to complete by using the .wait()
method:
@flow\ndef my_flow():\n future = my_task.submit()\n final_state = future.wait()\n
You can include a timeout in the wait
call to perform logic if the task has not finished in a given amount of time:
@flow\ndef my_flow():\n future = my_task.submit()\n final_state = future.wait(1) # Wait one second max\n if final_state:\n # Take action if the task is done\n result = final_state.result()\n else:\n ... # Task action if the task is still running\n
You may also use the wait_for=[]
parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.
@task\ndef task_a():\n pass\n\n@task\ndef task_b():\n pass\n\n@task\ndef task_c():\n pass\n\n@task\ndef task_d():\n pass\n\n@flow\ndef my_flow():\n a = task_a.submit()\n b = task_b.submit()\n # Wait for task_a and task_b to complete\n c = task_c.submit(wait_for=[a, b])\n # task_d will wait for task_c to complete\n # Note: If waiting for one task it must still be in a list.\n d = task_d(wait_for=[c])\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result()
in flows","text":"The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result()
.
Using only tasks:
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n hello = say_hello.submit(\"Marvin\")\n nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n
Using only Python functions:
from prefect import flow, task\n\ndef say_hello(name):\n return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n # because this is just a Python function, calls will not be tracked\n hello = say_hello(\"Marvin\") \n nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n
Mixing tasks and Python functions:
from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n if hello == \"Hello Marvin!\":\n return \"HI MARVIN!\"\n return hello\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n # run a task and get the result\n hello = say_hello.submit(\"Marvin\").result()\n\n # not calling a task or flow\n special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n # pass our modified greeting back into a task\n nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n print(nice_to_meet_you.result())\n\nhello_world()\n
Note that .result()
also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello
is upstream of nice_to_meet_you
.
Calling .result()
is blocking
When calling .result()
, be mindful your flow function will have to wait until the task run is completed before continuing.
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n # blocks until `say_hello` has finished\n result = say_hello.submit(\"Marvin\").result() \n do_important_stuff.submit()\n\nhello_world()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"The DaskTaskRunner
is a parallel task runner that submits tasks to the dask.distributed
scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address
kwarg.
prefect-dask
collection is installed: pip install prefect-dask
.DaskTaskRunner
from prefect_dask.task_runners
.task_runner=DaskTaskRunner
argument.For example, this flow uses the DaskTaskRunner
configured to access an existing Dask cluster at http://my-dask-cluster
.
from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n ...\n
DaskTaskRunner
accepts the following optional parameters:
\"distributed.LocalCluster\"
), or the class itself. cluster_kwargs Additional kwargs to pass to the cluster_class
when creating a temporary Dask cluster. adapt_kwargs Additional kwargs to pass to cluster.adapt
when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs
are provided. client_kwargs Additional kwargs to use when creating a dask.distributed.Client
. Multiprocessing safety
Note that, because the DaskTaskRunner
uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\":
or you will encounter warnings and errors.
If you don't provide the address
of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers
or threads_per_worker
to cluster_kwargs
.
# Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"The DaskTaskRunner
is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.
To configure, you need to provide a cluster_class
. This can be a string specifying the import path to the cluster class (for example, "dask_cloudprovider.aws.FargateCluster"), the cluster class itself, or a function for creating a custom cluster. You can also configure cluster_kwargs
, which takes a dictionary of keyword arguments to pass to cluster_class
when starting the flow run.
For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster
with 4 workers running with an image named my-prefect-image
:
DaskTaskRunner(\n cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):
That said, you may prefer managing a single long-running cluster.
To configure a DaskTaskRunner
to connect to an existing cluster, pass in the address of the scheduler to the address
argument:
# Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"One nice feature of using a DaskTaskRunner
is the ability to scale adaptively to the workload. Instead of specifying n_workers
as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the dask cluster will scale up and down as needed.
To do this, you can pass adapt_kwargs
to DaskTaskRunner
. This takes the following fields:
maximum
(int
or None
, optional): the maximum number of workers to scale to. Set to None
for no maximum. minimum
(int
or None
, optional): the minimum number of workers to scale to. Set to None
for no minimum. For example, here we configure a flow to run on a FargateCluster
scaling up to at most 10 workers.
DaskTaskRunner(\n cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n adapt_kwargs={\"maximum\": 10}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"Dask annotations can be used to further control the behavior of tasks.
For example, we can set the priority of tasks in the Dask scheduler:
import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n with dask.annotate(priority=-10):\n future = show.submit(1) # low priority task\n\n with dask.annotate(priority=10):\n future = show.submit(2) # high priority task\n
Another common use case is resource annotations:
import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n task_runner=DaskTaskRunner(\n cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n )\n)\ndef my_flow():\n with dask.annotate(resources={'GPU': 1}):\n future = show.submit(0) # this task requires 1 GPU resource on a worker\n\n with dask.annotate(resources={'process': 1}):\n # These tasks each require 1 process on a worker; because we've\n # specified that our cluster has 1 process per worker and 1 worker,\n # these tasks will run sequentially\n future = show.submit(1)\n future = show.submit(2)\n future = show.submit(3)\n\n\nif __name__ == \"__main__\":\n my_flow()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"The RayTaskRunner
\u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address
argument.
Remote storage and Ray tasks
We recommend configuring remote storage for task execution with the RayTaskRunner
. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.
To configure your flow to use the RayTaskRunner
:
Make sure the prefect-ray collection is installed: pip install prefect-ray.
In your flow code, import RayTaskRunner from prefect_ray.task_runners.
Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.
For example, this flow uses the RayTaskRunner
configured to access an existing Ray instance at ray://192.0.2.255:8786
.
from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n ... \n
RayTaskRunner
accepts the following optional parameters:
Parameter Description address Address of a currently running Ray instance; if not provided, a temporary instance is created. init_kwargs Additional kwargs to use when calling ray.init. Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address
of a Ray instance, Prefect creates a temporary instance automatically.
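As a sketch, init_kwargs can shape that temporary instance; the num_cpus value below is an illustrative assumption:
from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n# Kwargs are passed through to ray.init when the temporary instance starts\n@flow(task_runner=RayTaskRunner(init_kwargs={\"num_cpus\": 4}))\ndef my_flow():\n ...\n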
Ray environment limitations
While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:
Ray's support for Python 3.11 is experimental.
Ray does not support non-x86/64 architectures such as ARM/M1 processors with installation from pip alone, and it will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.
See the Ray installation documentation for further compatibility information.
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.
Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.
Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.
You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. All tasks must be called from within a flow. Tasks may not be called from other tasks.
Calling a task from a flow
Use the @task
decorator to designate a function as a task. Calling the task from within a flow function creates a new task run:
from prefect import flow, task\n\n@task\ndef my_task():\n print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n my_task()\n
Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.
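For illustration, the computed key can be inspected on the task object itself (a sketch, assuming the task_key attribute exposed by Prefect's Task class):
from prefect import task\n\n@task(name=\"my-task\")\ndef my_task():\n ...\n\n# Assumed attribute: prints the key derived from the task name and qualified function name\nprint(my_task.task_key)\n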
How big should a task be?
Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.
To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.
Calling a task's function from another task
Prefect does not allow triggering task runs from other tasks. If you want to call your task's function directly, you can use task.fn()
.
from prefect import flow, task\n\n@task\ndef my_first_task(msg):\n print(f\"Hello, {msg}\")\n\n@task\ndef my_second_task(msg):\n my_first_task.fn(msg)\n\n@flow\ndef my_flow():\n my_second_task(\"Trillian\")\n
Note that in the example above you are only calling the task's function without actually generating a task run. Prefect won't track task execution in your Prefect backend if you call the task function this way. You also won't be able to use features such as retries with this function call.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"Tasks allow for customization through optional arguments:
Argument Descriptionname
An optional name for the task. If not provided, the name will be inferred from the function name. description
An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function. tags
An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags
context at task runtime. cache_key_fn
An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again. cache_expiration
An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. retries
An optional number of times to retry on task run failure. retry_delay_seconds
An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries
is nonzero. log_prints
An optional boolean indicating whether to log print statements. persist_result
An optional boolean indicating whether to persist the result of the task run to storage. See all possible parameters in the Python SDK API docs.
For example, you can provide a name
value for the task. Here we've used the optional description
argument as well.
@task(name=\"hello-task\", \n description=\"This task says hello.\")\ndef my_task():\n print(\"Hello, I'm a task\")\n
You can distinguish runs of this task by providing a task_run_name
; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name will be formatted using Python's standard string formatting syntax as can be seen here:
import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n description=\"An example task for a tutorial.\",\n task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n pass\n\n@flow\ndef my_flow():\n # creates a run with a name like \"hello-marvin-on-Thursday\"\n my_task(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n
Additionally this setting also accepts a function that returns a string to be used for the task run name:
import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n date = datetime.datetime.now(datetime.timezone.utc)\n return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n description=\"An example task for a tutorial.\",\n task_run_name=generate_task_name)\ndef my_task(name):\n pass\n\n@flow\ndef my_flow():\n # creates a run with a name like \"Thursday-is-a-lovely-day\"\n my_task(name=\"marvin\")\n
If you need access to information about the task, use the prefect.runtime
module. For example:
from prefect import flow\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n flow_name = flow_run.flow_name\n task_name = task_run.task_name\n\n parameters = task_run.parameters\n name = parameters[\"name\"]\n limit = parameters[\"limit\"]\n\n return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n description=\"An example task for a tutorial.\",\n task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n pass\n\n@flow\ndef my_flow(name: str):\n # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n my_task(name=\"marvin\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for:
Tags may be specified as a keyword argument on the task decorator.
@task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n print(\"Hello, I'm a task\")\n
You can also provide tags as an argument with a tags
context manager, specifying tags when the task is called rather than in its definition.
from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n with tags(\"test\"):\n my_task()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.
To enable retries, pass retries
and retry_delay_seconds
parameters to your task. If the task fails, Prefect will retry it up to retries
times, waiting retry_delay_seconds
seconds between each attempt. If the task still fails after the final retry, Prefect marks the task run as failed.
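A minimal sketch of enabling retries:
from prefect import task\n\n# Retry up to 3 times, waiting 10 seconds between attempts\n@task(retries=3, retry_delay_seconds=10)\ndef flaky_task():\n ...\n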
Retries don't create new task runs
A new task run is not created when a task is retried. A new state is added to the state history of the original task run.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"Consider the real-world problem of making an API request. In this example, we'll use the httpx
library to make an HTTP request.
import httpx\n\nfrom prefect import flow, task\n\n\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n url: str = \"https://api.brittle-service.com/endpoint\"\n) -> dict:\n response = httpx.get(url)\n\n # If the response status code is anything but a 2xx, httpx will raise\n # an exception. This task doesn't handle the exception, so Prefect will\n # catch the exception and will consider the task run failed.\n response.raise_for_status()\n\n return response.json()\n\n\n@flow\ndef get_data_flow():\n get_data_task()\n
In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"The retry_delay_seconds
option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:
from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n ...\n
The retry_condition_fn
option accepts a callable that returns a boolean. If the callable returns True
, the task will be retried. If the callable returns False
, the task will not be retried. The callable accepts three arguments \u2014 the task, the task run, and the state of the task run. The following task will retry on HTTP status codes other than 401 or 404:
import httpx\nfrom prefect import flow, task\n\ndef retry_handler(task, task_run, state) -> bool:\n \"\"\"This is a custom retry handler to handle when we want to retry a task\"\"\"\n try:\n # Attempt to get the result of the task\n state.result()\n except httpx.HTTPStatusError as exc:\n # Retry on any HTTP status code that is not 401 or 404\n do_not_retry_on_these_codes = [401, 404]\n return exc.response.status_code not in do_not_retry_on_these_codes\n except httpx.ConnectError:\n # Do not retry\n return False\n except:\n # For any other exception, retry\n return True\n\n@task(retries=1, retry_condition_fn=retry_handler)\ndef my_api_call_task(url):\n response = httpx.get(url)\n response.raise_for_status()\n return response.json()\n\n@flow\ndef get_data_flow(url):\n my_api_call_task(url=url)\n\nif __name__ == \"__main__\":\n get_data_flow(url=\"https://httpbin.org/status/503\")\n
Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff
utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following task will wait for 10, 20, then 40 seconds before each retry attempt.
from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.
The retry_jitter_factor
option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor
of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor
provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.
from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n retries=3,\n retry_delay_seconds=exponential_backoff(backoff_factor=10),\n retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"You can also set retries and retry delays by using the following global settings. These settings will not override the retries
or retry_delay_seconds
that are set in the flow or task decorator.
prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.
To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.
To enable caching, specify a cache_key_fn
\u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration
timedelta indicating when the cache expires. If you do not specify a cache_expiration
, the cache key does not expire.
You can define a task that is cached based on its inputs by using the Prefect task_input_hash
. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than running the task until the cache expires.
Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash
returns a null key indicating that a cache key could not be generated for the given inputs.
In this example, until the cache_expiration
time ends, as long as the input to hello_task()
remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task()
runs using the new input.
from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n # Doing some work\n print(\"Saying hello\")\n return \"hello \" + name_input\n\n@flow\ndef hello_flow(name_input):\n hello_task(name_input)\n
Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn
is a function that accepts two positional arguments:
The first argument corresponds to the TaskRunContext, which stores task run metadata in the attributes task_run_id, flow_run_id, and task.
The second argument corresponds to a dictionary of input values to the task. For example, if your task is defined with the signature fn(x, y, z)
, \"y\"
, and \"z\"
with corresponding values that can be used to compute your cache key.Note that the cache_key_fn
is not defined as a @task
.
Task cache keys
By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH
setting.
from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n # return a constant\n return \"static cache key\"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n print('running an expensive operation')\n return 42\n\n@flow\ndef test_caching():\n cached_task()\n cached_task()\n cached_task()\n
In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task()
only runs once.
>>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n
When each task run requested to enter a Running
state, it provided its cache key computed from the cache_key_fn
. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.
A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.
from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n return f\"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}\"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n print('running an expensive operation')\n return 42\n
Task results, retries, and caching
Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH
setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.
Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".
The refresh_cache
option can be used to enable this behavior for a specific task:
import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n # return a constant\n return \"static cache key\"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n return random.random()\n
When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.
If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE
setting. Setting PREFECT_TASKS_REFRESH_CACHE=true
will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.
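For example, from the CLI:
prefect config set PREFECT_TASKS_REFRESH_CACHE=true\n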
If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache
to False
. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.
@task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n return random.random()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut
. From the perspective of the flow, the timed-out task will be treated like any other failed task.
Timeout durations are specified using the timeout_seconds
keyword argument.
from prefect import task, get_run_logger\nimport time\n\n@task(timeout_seconds=1)\ndef show_timeouts():\n logger = get_run_logger()\n logger.info(\"I will execute\")\n time.sleep(5)\n logger.info(\"I will not execute\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"Depending on how you call tasks, they can return different types of results and optionally engage the use of a task runner.
Any task can return:
int
, str
, dict
, list
, and so on \u2014 \u200athis is the default behavior any time you call your_task()
.PrefectFuture
\u2014 \u200athis is achieved by calling your_task.submit()
. A PrefectFuture
contains both data and StateState
\u200a\u2014 anytime you call your task or flow with the argument return_state=True
, it will directly return a state you can use to build custom behavior based on a state change you care about, such as task or flow failing or retrying.To run your task with a task runner, you must call the task with .submit()
.
See state returned values for examples.
Task runners are optional
If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for
keyword argument:
@task\ndef task_1():\n pass\n\n@task\ndef task_2():\n pass\n\n@flow\ndef my_flow():\n x = task_1()\n\n # task 2 will wait for task_1 to complete\n y = task_2(wait_for=[x])\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"Prefect provides a .map()
implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual children tasks.
The simplest Prefect map takes a tasks and applies it to each element of its inputs.
from prefect import flow, task\n\n@task\ndef print_nums(nums):\n for n in nums:\n print(n)\n\n@task\ndef square_num(num):\n return num**2\n\n@flow\ndef map_flow(nums):\n print_nums(nums)\n squared_nums = square_num.map(nums) \n print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n
Prefect also supports unmapped
arguments, allowing you to pass static values that don't get mapped over.
from prefect import flow, task\n\n@task\ndef add_together(x, y):\n return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n futures = add_together.map(numbers, static_value)\n return futures\n\nsum_it([1, 2, 3], 5)\n
If your static argument is an iterable, you'll need to wrap it with unmapped
to tell Prefect that it should be treated as a static value.
from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n futures = sum_plus.map(numbers, static_iterable)\n return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:
import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n for value in values:\n await asyncio.sleep(1) # yield\n print(value, end=\" \")\n\n@flow\nasync def async_flow():\n await print_values([1, 2]) # runs immediately\n coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n # asynchronously gather the tasks\n await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n
Note, if you are not using asyncio.gather
, calling .submit()
is required for asynchronous execution on the ConcurrentTaskRunner
.
There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.
Prefect has built-in functionality for achieving this: task concurrency limits.
Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running
state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.
If a task has multiple tags, it will run only if all tags have available concurrency.
Tags without explicit limits are considered to have unlimited concurrency.
0 concurrency limit aborts task runs
Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"Task tag limits are checked whenever a task run attempts to enter a Running
state.
If there are no concurrency slots available for any one of your task's tags, the transition to a Running
state will be delayed and the client is instructed to try entering a Running
state again in 30 seconds (or the value specified by the PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS
setting).
Flow run concurrency limits are set at a work pool and/or work queue level
While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.
You can set concurrency limits on as few or as many tags as you wish. You can set limits through:
PrefectClient
Python clientYou can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit
commands.
prefect concurrency-limit [command] [arguments]\n
Command Description create Create a concurrency limit by specifying a tag and limit. delete Delete the concurrency limit set on the specified tag. inspect View details about a concurrency limit set on the specified tag. ls View all defined concurrency limits. For example, to set a concurrency limit of 10 on the 'small_instance' tag:
prefect concurrency-limit create small_instance 10\n
To delete the concurrency limit on the 'small_instance' tag:
prefect concurrency-limit delete small_instance\n
To view details about the concurrency limit on the 'small_instance' tag:
prefect concurrency-limit inspect small_instance\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit
.
create_concurrency_limit
takes two arguments:
tag
specifies the task tag on which you're setting a limit.concurrency_limit
specifies the maximum number of concurrent task runs for that tag.For example, to set a concurrency limit of 10 on the 'small_instance' tag:
from prefect import get_client\n\nasync with get_client() as client:\n # set a concurrency limit of 10 on the 'small_instance' tag\n limit_id = await client.create_concurrency_limit(\n tag=\"small_instance\", \n concurrency_limit=10\n )\n
To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag
, passing the tag:
async with get_client() as client:\n # remove a concurrency limit on the 'small_instance' tag\n await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n
If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag
, passing the tag:
async with get_client() as client:\n # query the concurrency limit on the 'small_instance' tag\n limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools & Workers","text":"Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require workers (or less ideally, agents) to poll the work pool for flow runs to execute. Push work pools can submit runs directly to your serverless infrastructure providers such as Google Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker. Managed work pools are administered by Prefect and handle the submission and execution of code on your behalf.
Work pools are like pub/sub topics
It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) workers through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the worker that will execute their runs, making it easy to promote runs through environments or even debug locally.
In addition, users can control aspects of work pool behavior, such as how many runs the pool allows to be run concurrently or pausing delivery entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"You can configure work pools by using any of the following:
To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.
You can pause a work pool from this page by using the toggle.
Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.
To create a work pool via the Prefect CLI, use the prefect work-pool create
command:
prefect work-pool create [OPTIONS] NAME\n
NAME
is a required, unique name for the work pool.
Optional configuration parameters you can specify to filter work on the pool include:
Option Description--paused
If provided, the work pool will be created in a paused state. --type
The type of infrastructure that can execute runs from this work pool. --set-as-default
Whether to use the created work pool as the local default for deployment. --base-job-template
The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. For example, to create a work pool called test-pool
, you would run this command:
prefect work-pool create test-pool\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-types","title":"Work pool types","text":"If you don't use the --type
flag to specify an infrastructure type, you are prompted to select from the following options:
On success, the command returns the details of the newly created work pool.
Created work pool with properties:\n name - 'test-pool'\n id - a51adf8c-58bb-4949-abe6-1b87af46eabd\n concurrency limit - None\n\nStart a worker to pick up flows from the work pool:\n prefect worker start -p 'test-pool'\n\nInspect the work pool:\n prefect work-pool inspect 'test-pool'\n
Set a work pool as the default for new deployments by adding the --set-as-default
flag.
This would result in output similar to the following:
Set 'test-pool' as default work pool for profile 'default'\n\nTo change your default work pool, run:\n\n prefect config set PREFECT_DEFAULT_WORK_POOL_NAME=<work-pool-name>\n
To update a work pool via the Prefect CLI, use the prefect work-pool update
command:
prefect work-pool update [OPTIONS] NAME\n
NAME
is the name of the work pool to update.
Optional configuration parameters you can specify to update the work pool include:
Option Description--base-job-template
The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. --description
A description of the work pool. --concurrency-limit
The maximum number of flow runs to run simultaneously in the work pool. Managing work pools in CI/CD
You can version control your base job template by committing it as a JSON file to your repository and control updates to your work pools' base job templates by using the prefect work-pool update
command in your CI/CD pipeline. For example, you could use the following command to update a work pool's base job template to the contents of a file named base-job-template.json
:
prefect work-pool update --base-job-template base-job-template.json my-work-pool\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"Each work pool has a base job template that allows the customization of the behavior of the worker executing flow runs from the work pool.
The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.
A base job template comprises a job_configuration
section and a variables
section.
The variables
section defines the fields available to be customized per deployment. The variables
section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).
The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.
The values in the job_configuration
can use placeholders to reference values provided in the variables
section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}
. job_configuration
values can also be hard-coded if the value should not be customizable.
Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines fields that can be edited on a per-deployment basis or for the entire work pool via the Prefect API and UI.
For example, if we create a process
work pool named 'above-ground' via the CLI:
prefect work-pool create --type process above-ground\n
We see these configuration options available in the Prefect UI:
For a process
work pool with the default base job template, we can set environment variables for spawned processes, set the working directory to execute flows, and control whether the flow run output is streamed to workers' standard output. You can also see an example of JSON formatted base job template with the 'Advanced' tab.
You can examine the default base job template for a given worker type by running:
prefect work-pool get-default-base-job-template --type process\n
{\n \"job_configuration\": {\n \"command\": \"{{ command }}\",\n \"env\": \"{{ env }}\",\n \"labels\": \"{{ labels }}\",\n \"name\": \"{{ name }}\",\n \"stream_output\": \"{{ stream_output }}\",\n \"working_dir\": \"{{ working_dir }}\"\n },\n \"variables\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"title\": \"Name\",\n \"description\": \"Name given to infrastructure created by a worker.\",\n \"type\": \"string\"\n },\n \"env\": {\n \"title\": \"Environment Variables\",\n \"description\": \"Environment variables to set when starting a flow run.\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"labels\": {\n \"title\": \"Labels\",\n \"description\": \"Labels applied to infrastructure created by a worker.\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"command\": {\n \"title\": \"Command\",\n \"description\": \"The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.\",\n \"type\": \"string\"\n },\n \"stream_output\": {\n \"title\": \"Stream Output\",\n \"description\": \"If enabled, workers will stream output from flow run processes to local standard output.\",\n \"default\": true,\n \"type\": \"boolean\"\n },\n \"working_dir\": {\n \"title\": \"Working Directory\",\n \"description\": \"If provided, workers will open flow run processes within the specified path as the working directory. Otherwise, a temporary directory will be created.\",\n \"type\": \"string\",\n \"format\": \"path\"\n }\n }\n }\n}\n
You can override each of these attributes on a per-deployment basis. When deploying a flow, you can specify these overrides in the work_pool.job_variables
section of a deployment.yaml
.
If we wanted to turn off streaming output for a specific deployment, we could add the following to our deployment.yaml
:
work_pool:\n name: above-ground \n job_variables:\n stream_output: false\n
Advanced Customization of the Base Job Template
For advanced use cases, you can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI or when updating a work pool via the Prefect CLI.
Advanced customization is useful anytime the underlying infrastructure supports a high degree of customization. In these scenarios a work pool job template allows you to expose a minimal and easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects for deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes
worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.
For more information and advanced configuration examples, see the Kubernetes Worker documentation.
For more information on overriding a work pool's job variables see this guide.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"At any time, users can see and edit configured work pools in the Prefect UI.
To view work pools with the Prefect CLI, you can:
ls
) all available poolsinspect
) the details of a single poolpreview
) scheduled work for a single poolprefect work-pool ls
lists all configured work pools for the server.
prefect work-pool ls\n
For example:
Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Type \u2503 ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque \u2502 docker \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None \u2502\n\u2502 k8s-pool \u2502 kubernetes \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None \u2502\n\u2502 test-pool \u2502 prefect-agent \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None |\n| my-pool \u2502 process \u2502 cd6ff9e8-bfd8-43be-9be3-69375f7a11cd \u2502 None \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n (**) denotes a paused pool\n
prefect work-pool inspect
provides all configuration metadata for a specific work pool by ID.
prefect work-pool inspect 'test-pool'\n
Outputs information similar to the following:
Workpool(\n id='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n created='2 minutes ago',\n updated='2 minutes ago',\n name='test-pool',\n filter=None,\n)\n
prefect work-pool preview
displays scheduled flow runs for a specific work pool by ID for the upcoming hour. The optional --hours
flag lets you specify the number of hours to look ahead.
prefect work-pool preview 'test-pool' --hours 12\n
\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID \u2503 Name \u2503 Deployment ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n (**) denotes a late run\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-status","title":"Work Pool Status","text":"Work pools have three statuses: READY
, NOT_READY
, and PAUSED
. A work pool is considered ready if it has at least one online worker sending heartbeats to the work pool. If a work pool has no online workers, it is considered not ready to execute work. A work pool can be placed in a paused status manually by a user or via an automation. When a paused work pool is unpaused, it will be reassigned the appropriate status based on whether any workers are sending heartbeats.
A work pool can be paused at any time to stop the delivery of work to workers. Workers will not receive any work when polling a paused pool.
To pause a work pool through the Prefect CLI, use the prefect work-pool pause
command:
prefect work-pool pause 'test-pool'\n
To resume a work pool through the Prefect CLI, use the prefect work-pool resume
command with the work pool name.
To delete a work pool through the Prefect CLI, use the prefect work-pool delete
command with the work pool name.
Each work pool can optionally restrict concurrent runs of matching flows.
For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running
or Pending
state. If 3 runs are Running
or Pending
, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.
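As a sketch, such a limit can be set with the set-concurrency-limit subcommand described below:
prefect work-pool set-concurrency-limit test-pool 5\n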
When using the prefect work-pool
Prefect CLI command to configure a work pool, the following subcommands set concurrency limits:
set-concurrency-limit
sets a concurrency limit on a work pool.
clear-concurrency-limit
clears any concurrency limits from a work pool.
Advanced topic
Prefect will automatically create a default work queue if needed.
Work queues offer advanced control over how runs are executed. Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool to enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1
will always be the highest priority).
Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.
Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10
and no concurrency limit, a \"high\" queue with priority 5
and a concurrency limit of 3
, and a \"critical\" queue with priority 1
and a concurrency limit of 1
. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.
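As a sketch, the queues themselves could be created from the CLI with their concurrency limits (my-pool is a placeholder pool name; queue priority can be assigned in the UI):
prefect work-queue create 'low' --pool 'my-pool'\nprefect work-queue create 'high' --pool 'my-pool' --limit 3\nprefect work-queue create 'critical' --pool 'my-pool' --limit 1\n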
Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs are capable of being executed with no limitation due to concurrency or otherwise, priority is still used to determine order of submission, but there is no impact to execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority is used to determine which queues receive precedence to submit runs for execution.
Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.
If new flow runs are received on the \"critical\" queue while flow runs are still scheduled on the \"high\" and \"low\" queues, flow run submission goes back to ensuring all scheduled work is first satisfied from the highest priority queue, until it is empty, in waterfall fashion.
Work queue status
A work queue has a READY
status when it has been polled by a worker in the last 60 seconds. Pausing a work queue will give it a PAUSED
status, meaning it will accept no new work until it is unpaused. A user can control the work queue's paused status in the UI. Unpausing a work queue will give the work queue a NOT_READY
status unless a worker has polled it in the last 60 seconds.
As long as your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to a worker running on your local machine for debugging by running prefect worker start -p my-local-machine
and updating the deployment's work pool to my-local-machine
.
Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.
Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.
Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to poll work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"Below is a list of available worker types. Note that most worker types will require installation of an additional package.
| Worker Type | Description | Required Package |
| --- | --- | --- |
| process | Executes flow runs in subprocesses | |
| kubernetes | Executes flow runs as Kubernetes jobs | prefect-kubernetes |
| docker | Executes flow runs within Docker containers | prefect-docker |
| ecs | Executes flow runs as ECS tasks | prefect-aws |
| cloud-run | Executes flow runs as Google Cloud Run jobs | prefect-gcp |
| vertex-ai | Executes flow runs as Google Cloud Vertex AI jobs | prefect-gcp |
| azure-container-instance | Executes flow runs in ACI containers | prefect-azure |
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI is able to infer the worker type from the work pool. Alternatively, you can also specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).
Configuration parameters you can specify when starting a worker include:
Option Description--name
, -n
The name to give to the started worker. If not provided, a unique name will be generated. --pool
, -p
The work pool the started worker should poll. --work-queue
, -q
One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool. --type
, -t
The type of worker to start. If not provided, the worker type will be inferred from the work pool. --prefetch-seconds
The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS
. --run-once
Only run worker polling once. By default, the worker runs forever. --limit
, -l
The maximum number of flow runs to start simultaneously. --with-healthcheck
Start a healthcheck server for the worker. --install-policy
Install policy to use workers from Prefect integration packages. You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes
, the worker will deploy flow runs to a Kubernetes cluster.
Prefect must be installed in execution environments
Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.
PREFECT_API_URL
and PREFECT_API_KEY
settings for workers
PREFECT_API_URL
must be set for the environment in which your worker is running. You must also have a user or service account with the Worker
role, which can be configured by setting the PREFECT_API_KEY
.
Workers have two statuses: ONLINE
and OFFLINE
. A worker is online if it sends regular heartbeat messages to the Prefect API. If a worker has missed three heartbeats, it is considered offline. By default, a worker is considered offline a maximum of 90 seconds after it stopped sending heartbeats, but the threshold can be configured via the PREFECT_WORKER_HEARTBEAT_SECONDS
setting.
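For example, to change the heartbeat interval (and therefore the offline threshold) with the standard prefect config set mechanism:
prefect config set PREFECT_WORKER_HEARTBEAT_SECONDS=30\n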
Use the prefect worker start
CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type
flag is used.
prefect worker start -p [work pool name]\n
For example:
prefect worker start -p \"my-pool\"\n
Results in output like this:
Discovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n
In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type
flag:
prefect worker start -p \"my-pool\" --type \"process\"\n
Worker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n
In addition, workers can limit the number of flow runs they will start simultaneously with the --limit
flag. For example, to limit a worker to five concurrent flow runs:
prefect worker start --pool \"my-pool\" --limit 5\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.
In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds
option or the PREFECT_WORKER_PREFETCH_SECONDS
setting.
If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
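For example, to give slow-starting infrastructure a 60 second head start (my-pool is a placeholder pool name):
prefect worker start -p \"my-pool\" --prefetch-seconds 60\n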
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS
setting.
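For example, to poll every 30 seconds instead of the default 15:
prefect config set PREFECT_WORKER_QUERY_SECONDS=30\n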
The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy
option. The following are valid install policies:
| Install Policy | Description |
| --- | --- |
| always | Always install the required package. Will update the required package to the most recent version if already installed. |
| if-not-present | Install the required package if it is not already installed. |
| never | Never install the required package. |
| prompt | Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start is run non-interactively, the prompt install policy will behave the same as never. |
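For example, to start a worker that installs a missing integration package only when it is not already present (my-pool is a placeholder pool name):
prefect worker start -p \"my-pool\" --type \"kubernetes\" --install-policy if-not-present\n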
.","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"See how to daemonize a Prefect worker in this guide.
For more information on overriding a work pool's job variables see this guide.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"Thanks for considering contributing to Prefect!
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"First, you'll need to download the source code and install an editable version of the Python package:
# Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\n\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\n\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\n\npre-commit install\n
If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:
pip install $(./scripts/precommit-versions.py)\n
You'll need to run black
and ruff
before a contribution can be accepted.
After installation, you can run the test suite with pytest
:
# Run all the tests\npytest tests\n\n# Run a subset of tests\n\npytest tests/test_flows.py\n
Building the Prefect UI
If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"The Prefect CLI provides several helpful commands to aid development.
Start all services with hot-reloading on code changes (requires UI dependencies to be installed):
prefect dev start\n
Start a Prefect API that reloads on code changes:
prefect dev api\n
Start a Prefect worker that reloads on code changes:
prefect dev agent\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"Developing the Prefect UI requires that npm is installed.
Start a development UI that reloads on code changes:
prefect dev ui\n
Build the static UI (the UI served by prefect server start
):
prefect dev build-ui\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders
for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.
To build the docs:
mkdocs build\n
To serve the docs locally at http://127.0.0.1:8000/:
mkdocs serve\n
For additional mkdocs help and options:
mkdocs --help\n
We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.
Internal developers can install the production theme by running:
pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"Generate a manifest to deploy a development API to a local kubernetes cluster:
prefect dev kubernetes-manifest\n
To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward
command to forward a port on your local machine to an open port within the cluster. For example:
kubectl port-forward deployment/prefect-dev 4200:4200\n
This forwards port 4200 on the default internal loop IP for localhost to the Prefect server deployment.
To tell the local prefect
command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL
environment variable:
export PREFECT_API_URL=http://localhost:4200/api\n
Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding Database Migrations","text":"To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py
. For example, if you wanted to add a new column to the flow_run
table, you would add a new column to the FlowRun
model:
# src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n \"\"\"SQLAlchemy model of a flow run.\"\"\"\n ...\n new_column = Column(String, nullable=True) # <-- add this line\n
Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL
is set to. See here for how to set the database connection URL for each database type.
To generate a new migration file, run the following command:
prefect server database revision --autogenerate -m \"<migration name>\"\n
Try to make your migration name brief but descriptive. For example:
add_flow_run_new_column
add_flow_run_new_column_idx
rename_flow_run_old_column_to_new_column
The --autogenerate
flag will automatically generate a migration file based on the changes to the models.
Always inspect the output of --autogenerate
--autogenerate
will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.
The new migration can be found in the src/prefect/server/database/migrations/versions/
directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/
.
After you have inspected the migration file, you can apply the migration to your database by running the following command:
prefect server database upgrade -y\n
Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md
to document your additions.
Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"A brief collection of rules and guidelines for how imports should be handled in this repository.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in__init__
files","text":"Leave __init__
files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.
If importing objects from submodules, the __init__
file should use a relative import. This is required for type checkers to understand the exposed interface.
# Correct\nfrom .flows import flow\n
# Wrong\nfrom prefect.flows import flow\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"Generally, submodules should not be imported in the __init__
file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.
For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.
import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n
If exposing a submodule, use a relative import as you would when exposing an object.
# Correct\nfrom . import flows\n
# Wrong\nimport prefect.flows\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"Another use case for importing submodules is perform global side-effects that occur when they are imported.
Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.
We have a couple acceptable use-cases for this currently:
prefect.serializers
.prefect.cli
.The from
syntax should be reserved for importing objects from modules. Modules should not be imported using the from
syntax.
# Correct\nimport prefect.server.schemas # use with the full name\nimport prefect.server.schemas as schemas # use the shorter name\n
# Wrong\nfrom prefect.server import schemas\n
Unless in an __init__.py
file, relative imports should not be used.
# Correct\nfrom prefect.utilities.foo import bar\n
# Wrong\nfrom .utilities.foo import bar\n
Imports dependent on file location should never be used without explicit indication it is relative. This avoids confusion about the source of a module.
# Correct\nfrom . import test\n
# Wrong\nimport test\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.
## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n from prefect.context import get_profile_context\n\n ...\n
Attempt to avoid circular dependencies. This often reveals overentanglement in the design.
When performing deferred imports, they should all be placed at the top of the function.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"If you are just using the imported object for a type signature, you should use the TYPE_CHECKING
flag.
# Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n pass\n
Note that usage of the type within the module will need quotes e.g. \"State\"
since it is not available at runtime.
We do not have a best practice for this yet. See the kubernetes
, docker
, and distributed
implementations for now.
Sometimes, imports are slow. We'd like to keep the prefect
module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
Upon executing a command that creates an object, the output message should offer: - A short description of what the command just did. - A bullet point list, rehashing user inputs, if possible. - Next steps, like the next command to run, if applicable. - Other relevant, pre-formatted commands that can be copied and pasted, if applicable. - A new line before the first line and after the last line.
Output Example:
$ prefect work-queue create testing\n\nCreated work queue with properties:\n name - 'abcde'\n uuid - 940f9828-c820-4148-9526-ea8107082bda\n tags - None\n deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n prefect agent start -q 'abcde'\n\nInspect the created work queue:\n prefect work-queue inspect 'abcde'\n
Additionally:
!r
.textwrap.dedent
to remove extraneous spacing for strings that are written with triple quotes (\"\"\").Placeholder Example:
Create a work queue with tags:\n prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n
Dedent Example:
from textwrap import dedent\n...\noutput_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n uuid - {result}\n tags - {tags or None}\n deployment_ids - {deployment_ids or None}\n\n Start an agent to pick up flows from the created work queue:\n prefect agent start -q {name!r}\n\n Inspect the created work queue:\n prefect work-queue inspect {name!r}\n \"\"\"\n)\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code, (e.g. agents, and the Python client) or any consumer of Prefect metadata, (e.g. the Prefect UI, and CLI). The Prefect server stores this metadata and serves it via the REST API.
Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION
header with major, minor, and patch versions.
For example, a request with the X-PREFECT-API-VERSION=3.2.1
header has a major version of 3
, minor version 2
, and patch version 1
.
This version header can be changed by modifying the API_VERSION
constant in prefect.server.api.server
.
When making a breaking change to the API, we should consider if the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive: such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.
In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved the major version chanes to denote also-backwards compatible change that might be significant in some way, such as a major release milestone.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and patch version of 0.
Occasionally, we will add a suffix to the version such as rc
, a
, or b
. These indicate pre-release versions that users can opt-into installing to test functionality before it is ready for release.
Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero.
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.
Prefect will increase the minor version when:
Prefect will increase the patch version when:
A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.
At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.
Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.
For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.
","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"Prefect requires Python 3.8 or newer.
Python 3.12 support is experimental, as not all dependencies to support it yet. If you encounter any errors, please open an issue.
We recommend installing Prefect using a Python virtual environment manager such as pipenv
, conda
, or virtualenv
/venv
.
Windows and Linux requirements
See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"The following sections describe how to install Prefect in your development or execution environment.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"Prefect is published as a Python package. To install the latest release or upgrade an existing Prefect install, run the following command in your terminal:
pip install -U prefect\n
To install a specific version, specify the version number like this:
pip install -U \"prefect==2.16.2\"\n
See available release versions in the Prefect Release Notes.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-bleeding-edge","title":"Installing the bleeding edge","text":"If you'd like to test with the most up-to-date code, you can install directly off the main
branch on GitHub:
pip install -U git+https://github.com/PrefectHQ/prefect\n
The main
branch may not be stable
Please be aware that this method installs unreleased code and may not be stable.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-for-development","title":"Installing for development","text":"If you'd like to install a version of Prefect for development:
pip install -e
.$ git clone https://github.com/PrefectHQ/prefect.git\n$ cd prefect\n$ pip install -e \".[dev]\"\n$ pre-commit install\n
See our Contributing guide for more details about standards and practices for contributing to Prefect.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"To confirm that Prefect was installed correctly, run the command prefect version
to print the version and environment details to your console.
$ prefect version\n\nVersion: 2.10.21\nAPI version: 0.8.4\nPython version: 3.10.12\nGit commit: da816542\nBuilt: Thu, Jul 13, 2023 2:05 PM\nOS/Arch: darwin/arm64\nProfile: local\nServer type: ephemeral\nServer:\n Database: sqlite\n SQLite version: 3.42.0\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda
. After installation, you may need to manually add the Python local packages Scripts
folder to your Path
environment variable.
The Scripts
folder path looks something like this (the username and Python version may be different on your system):
C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n
Watch the pip install
output messages for the Scripts
folder path on your system.
If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"Linux is a popular operating system for running Prefect. You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.
For development, you can use SQLite 2.24 or newer as your database. Note that certain Linux versions of SQLite can be problematic. Compatible versions include Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.
Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda
virtual environment manager and configure a compatible SQLite version.
If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. You can add the certificate to your system bundle and pointing your tools to use that bundle by configuring the SSL_CERT_FILE
environment variable.
If the certificate is not part of your system bundle, you can set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY
to True
to disable certificate verification altogether.
Note: Disabling certificate validation is insecure and only suggested as an option for testing!
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"Prefect supports communicating via proxies through environment variables. Simply set HTTPS_PROXY
and SSL_CERT_FILE
in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately. Read more about using Prefect Cloud with proxies here.
You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.
By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled as a part of Python.
The Prefect CLI command prefect version
prints environment details to your console, including the server database. For example:
$ prefect version\nVersion: 2.10.21\nAPI version: 0.8.4\nPython version: 3.10.12\nGit commit: a46cbebb\nBuilt: Sat, Jul 15, 2023 7:59 AM\nOS/Arch: darwin/arm64\nProfile: default\nServer type: cloud\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"The following steps are needed to install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL). Note that some RHEL instances have no C compiler, so you may need to check for and install gcc
first:
yum install gcc\n
Download and extract the tarball for SQLite.
wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n
Move to the extracted SQLite directory, then build and install SQLite.
cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n
Add LD_LIBRARY_PATH
to your profile.
echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n
Restart your shell to register these changes.
Now you can install Prefect using pip
.
pip3 install prefect\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-prefect-in-an-environment-with-http-proxies","title":"Using Prefect in an environment with HTTP proxies","text":"If you are using Prefect Cloud or hosting your own Prefect server instance, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY
, HTTPS_PROXY
, or ALL_PROXY
environment variables. You may also use the NO_PROXY
environment variable to specify which hosts should not be sent through the proxy.
For more information about these environment variables, see the cURL documentation.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"Now that you have Prefect installed and your environment configured, you may want to check out the Tutorial to get more familiar with Prefect.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/quickstart/","title":"Quickstart","text":"Prefect is an orchestration and observability platform that empowers developers to build and scale resilient code quickly, turning their Python scripts into resilient, recurring workflows.
In this quickstart, you'll see how you can schedule your code on remote infrastructure and observe the state of your workflows. With Prefect, you can go from a Python script to a production-ready workflow that runs remotely in a few minutes.
Let's get started!
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#setup","title":"Setup","text":"Here's a basic script that fetches statistics about the main Prefect GitHub repository.
import httpx\n\ndef get_repo_info():\n url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n response = httpx.get(url)\n repo = response.json()\n print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
How can we make this script schedulable, observable, resilient, and capable of running anywhere?
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-1-install-prefect","title":"Step 1: Install Prefect","text":"pip install -U prefect\n
See the install guide for more detailed installation instructions, if needed.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"Much of Prefect's functionality is backed by an API. Sign up for a forever free Prefect Cloud account or accept your organization's invite to join their Prefect Cloud account.
prefect cloud login
CLI command to log in to Prefect Cloud from your environment.prefect cloud login\n
Choose Log in with a web browser and click the Authorize button in the browser window that opens.
Self-hosted Prefect server instance
If you would like to host a Prefect server instance on your own infrastructure, see the tutorial and select the \"Self-hosted\" tab. Note that you will need to both host your own server and run your flows on your own infrastructure.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-3-turn-your-function-into-a-prefect-flow","title":"Step 3: Turn your function into a Prefect flow","text":"The fastest way to get started with Prefect is to add a @flow
decorator to your Python function. Flows are the core observable, deployable units in Prefect and are the primary entrypoint to orchestrated work.
import httpx # an HTTP client library and dependency of Prefect\nfrom prefect import flow, task\n\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n \"\"\"Get info about a repo - will retry twice after failing\"\"\"\n url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n api_response = httpx.get(url)\n api_response.raise_for_status()\n repo_info = api_response.json()\n return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n \"\"\"Get contributors for a repo\"\"\"\n contributors_url = repo_info[\"contributors_url\"]\n response = httpx.get(contributors_url)\n response.raise_for_status()\n contributors = response.json()\n return contributors\n\n\n@flow(log_prints=True)\ndef repo_info(repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"):\n \"\"\"\n Given a GitHub repository, logs the number of stargazers\n and contributors for that repo.\n \"\"\"\n repo_info = get_repo_info(repo_owner, repo_name)\n print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n contributors = get_contributors(repo_info)\n print(f\"Number of contributors \ud83d\udc77: {len(contributors)}\")\n\n\nif __name__ == \"__main__\":\n repo_info()\n
Note that we added a log_prints=True
argument to the @flow
decorator so that print
statements within the flow-decorated function will be logged. Also note that our flow calls two tasks, which are defined by the @task
decorator. Tasks are the smallest unit of observed and orchestrated work in Prefect.
python my_gh_workflow.py\n
Now when we run this script, Prefect will automatically track the state of the flow run and log the output where we can see it in the UI and CLI.
14:28:31.099 | INFO | prefect.engine - Created flow run 'energetic-panther' for flow 'repo-info'\n14:28:31.100 | INFO | Flow run 'energetic-panther' - View at https://app.prefect.cloud/account/123/workspace/abc/flow-runs/flow-run/xyz\n14:28:32.178 | INFO | Flow run 'energetic-panther' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n14:28:32.179 | INFO | Flow run 'energetic-panther' - Executing 'get_repo_info-0' immediately...\n14:28:32.584 | INFO | Task run 'get_repo_info-0' - Finished in state Completed()\n14:28:32.599 | INFO | Flow run 'energetic-panther' - Stars \ud83c\udf20 : 13609\n14:28:32.682 | INFO | Flow run 'energetic-panther' - Created task run 'get_contributors-0' for task 'get_contributors'\n14:28:32.682 | INFO | Flow run 'energetic-panther' - Executing 'get_contributors-0' immediately...\n14:28:33.118 | INFO | Task run 'get_contributors-0' - Finished in state Completed()\n14:28:33.134 | INFO | Flow run 'energetic-panther' - Number of contributors \ud83d\udc77: 30\n14:28:33.255 | INFO | Flow run 'energetic-panther' - Finished in state Completed('All states completed.')\n
You should see similar output in your terminal, with your own randomly generated flow run name and your own Prefect Cloud account URL.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-choose-a-remote-infrastructure-location","title":"Step 4: Choose a remote infrastructure location","text":"Let's get this workflow running on infrastructure other than your local machine! We can tell Prefect where we want to run our workflow by creating a work pool.
We can have Prefect Cloud run our flow code for us with a Prefect Managed work pool.
Let's create a Prefect Managed work pool so that Prefect can run our flows for us. We can create a work pool in the UI or from the CLI. Let's use the CLI:
prefect work-pool create my-managed-pool --type prefect:managed\n
You should see a message in the CLI that your work pool was created. Feel free to check out your new work pool on the Work Pools page in the UI.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-make-your-code-schedulable","title":"Step 4: Make your code schedulable","text":"We have a flow function and we have a work pool where we can run our flow remotely. Let's package both of these things, along with the location for where to find our flow code, into a deployment so that we can schedule our workflow to run remotely.
Deployments elevate flows to remotely configurable entities that have their own API.
Let's make a script to build a deployment with the name my-first-deployment and set it to run on a schedule.
create_deployment.pyfrom prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/discdiver/demos.git\",\n entrypoint=\"my_gh_workflow.py:repo_info\",\n ).deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"my-managed-pool\",\n cron=\"0 1 * * *\",\n )\n
Run the script to create the deployment on the Prefect Cloud server. Note that the cron
argument will schedule the deployment to run at 1am every day.
python create_deployment.py\n
You should see a message that your deployment was created, similar to the one below.
Successfully created/updated all deployments!\n\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 repo-info/my-first-deployment \u2502 applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n $ prefect deployment run 'repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: <https://app.prefect.cloud/account/abc/workspace/123/deployments/deployment/xyz>\n
Head to the Deployments page of the UI to check it out.
Code storage options
You can store your flow code in nearly any location. You just need to tell Prefect where to find it. In this example, we use a GitHub repository, but you could bake your code into a Docker image or store it in cloud provider storage. Read more here.
Push your code to GitHub
In the example above, we use an existing GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source
argument to point to your repository.
You can trigger a manual run of this deployment by either clicking the Run button in the top right of the deployment page in the UI, or by running the following CLI command in your terminal:
prefect deployment run 'repo-info/my-first-deployment'\n
The deployment is configured to run on a Prefect Managed work pool, so Prefect will automatically spin up the infrastructure to run this flow. It may take a minute to set up the Docker image in which the flow will run.
After a minute or so, you should see the flow run graph and logs on the Flow Run page in the UI.
Remove the schedule
Click the Remove button in the top right of the Deployment page so that the workflow is no longer scheduled to run once per day.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#next-steps","title":"Next steps","text":"You've seen how to move from a Python script to a scheduled, observable, remotely orchestrated workflow with Prefect.
To learn how to run flows on your own infrastructure, see how to customize the Docker image where your flow runs, and see how to gain lots of orchestration and observation benefits check out the tutorial.
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
Happy building!
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"guides/","title":"How-to Guides","text":"This section of the documentation contains how-to guides for common workflows and use cases.
","tags":["guides","how to"],"boost":2},{"location":"guides/#development","title":"Development","text":"Title Description Hosting Host your own Prefect server instance. Profiles & Settings Configure Prefect and save your settings. Testing Easily test your workflows. Global Concurrency Limits Limit flow runs. Runtime Context Enable a flow to access metadata about itself and its context when it runs. Variables Store and retrieve configuration data. Prefect Client UsePrefectClient
to interact with the API server. Interactive Workflows Create human-in-the-loop workflows by pausing flow runs for input. Automations Configure actions that Prefect executes automatically based on trigger conditions. Webhooks Receive, observe, and react to events from other systems. Terraform Provider Use the Terraform Provider for Prefect Cloud for infrastructure as code. CI/CD Use CI/CD with Prefect. Prefect Recipes Common, extensible examples for setting up Prefect.","tags":["guides","how to"],"boost":2},{"location":"guides/#execution","title":"Execution","text":"Title Description Docker Deploy flows with Docker containers. State Change Hooks Execute code in response to state changes. Dask and Ray Scale your flows with parallel computing frameworks. Read and Write Data Read and write data to and from cloud provider storage. Big Data Handle large data with Prefect. Logging Configure Prefect's logger and aggregate logs from other tools. Troubleshooting Identify and resolve common issues with Prefect. Managed Execution Let prefect run your code.","tags":["guides","how to"],"boost":2},{"location":"guides/#work-pools","title":"Work Pools","text":"Title Description Deploying Flows to Work Pools and Workers Learn how to run you code with dynamic infrastructure. Upgrade from Agents to Workers Why and how to upgrade from agents to workers. Flow Code Storage Where to store your code for deployments. Kubernetes Deploy flows on Kubernetes. Serverless Push Work Pools Run flows on serverless infrastructure without a worker. Serverless Work Pools with Workers Run flows on serverless infrastructure with a worker. Daemonize Processes Set up a systemd service to run a Prefect worker or .serve process. Custom Workers Develop your own worker type. Overriding Work Pool Job Variables Override job variables for a work pool for a given deployment. Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["guides","how to"],"boost":2},{"location":"guides/automations/","title":"Using Automations for Dynamic Responses","text":"From the Automations concept page, we saw what an automation can do and how to configure one within the UI.
In this guide, we will showcase the following common use cases:
Available only on Prefect Cloud
Automations are a Prefect Cloud feature.\n
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#prerequisites","title":"Prerequisites","text":"Please have the following before exploring the guide:
Automations allow you to take actions in response to triggering events recorded by Prefect.
For example, let's try to grab data from an API and send a notification based on the end state.
We can start by pulling hypothetical user data from an endpoint and then performing data cleaning and transformations.
Let's create a simple extract method, that pulls the data from a random user data generator endpoint.
from prefect import flow, task, get_run_logger\nimport requests\nimport json\n\n@task\ndef fetch(url: str):\n logger = get_run_logger()\n response = requests.get(url)\n raw_data = response.json()\n logger.info(f\"Raw response: {raw_data}\")\n return raw_data\n\n@task\ndef clean(raw_data: dict):\n print(raw_data.get('results')[0])\n results = raw_data.get('results')[0]\n logger = get_run_logger()\n logger.info(f\"Cleaned results: {results}\")\n return results['name']\n\n@flow\ndef build_names(num: int = 10):\n df = []\n url = \"https://randomuser.me/api/\"\n logger = get_run_logger()\n copy = num\n while num != 0:\n raw_data = fetch(url)\n df.append(clean(raw_data))\n num -= 1\n logger.info(f\"Built {copy} names: {df}\")\n return df\n\nif __name__ == \"__main__\":\n list_of_names = build_names()\n
The data cleaning workflow has visibility into each step, and we are sending a list of names to our next step of our pipeline.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#create-notification-block-within-the-ui","title":"Create notification block within the UI","text":"Now let's try to send a notification based off a completed state outcome. We can configure a notification to be sent so that we know when to look into our workflow logic.
Prior to creating the automation, let's confirm the notification location. We have to create a notification block to help define where the notification will be sent.
Let's navigate to the blocks page on the UI, and click into creating an email notification block.
Now that we created a notification block, we can go to the automations page to create our first automation.
Next we try to find the trigger type, in this case let's use a flow completion.
Finally, let's create the actions that will be done once the triggered is hit. In this case, let's create a notification to be sent out to showcase the completion.
Now the automation is ready to be triggered from a flow run completion. Let's run the file locally and see that the notification is sent to our inbox after the completion. It may take a few minutes for the notification to arrive.
No deployment created
Keep in mind, we did not need to create a deployment to trigger our automation, where a state outcome of a local flow run helped trigger this notification block. We are not required to create a deployment to trigger a notification.
Now that you've seen how to create an email notification from a flow run completion, let's see how we can kick off a deployment run in response to an event.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#event-based-deployment-automation","title":"Event-based deployment automation","text":"We can create an automation that can kick off a deployment instead of a notification. Let's explore how we can programmatically create this automation. We will take advantage of Prefect's REST API to help create this automation.
See the REST API documentation as a reference for interacting with the Prefect Cloud automation endpoints.
Let's create a deployment where we can kick off some work based on how long a flow is running. For example, if the build_names
flow is taking too long to execute, we can kick off a deployment with the same build_names
flow, but replace the num
value with a lower number to speed up completion. You can create a deployment with a prefect.yaml
file or a Python file that uses flow.deploy
.
Create a prefect.yaml
file like this one for our flow build_names
:
# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: automations-guide\nprefect-version: 2.13.1\n\n# build section allows you to manage and build docker images\nbuild: null\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /Users/src/prefect/Playground/automations-guide\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: deploy-build-names\n  version: null\n  tags: []\n  description: null\n  entrypoint: test-automations.py:build_names\n  parameters: {}\n  work_pool:\n    name: tutorial-process-pool\n    work_queue_name: null\n    job_variables: {}\n  schedule: null\n
To follow a more Python-based approach to creating a deployment, you can use flow.deploy
as in the example below.
# .deploy only needs a name, a valid work pool,\n# and a reference to where the flow code exists\n\nif __name__ == \"__main__\":\n    build_names.deploy(\n        name=\"deploy-build-names\",\n        work_pool_name=\"tutorial-process-pool\",\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n
Now let's grab our deployment_id
from this deployment, and embed it in our automation. There are many ways to obtain the deployment_id
, but the CLI is a quick way to see all of your deployment IDs.
Find deployment_id from the CLI
The quickest way to see the IDs associated with your deployments is to run prefect deployment ls
in an authenticated command prompt.
prefect deployment ls\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Extract islands/island-schedule \u2502 d9d7289c-7a41-436d-8313-80a044e61532 \u2502\n\u2502 build-names/deploy-build-names \u2502 8b10a65e-89ef-4c19-9065-eec5c50242f4 \u2502\n\u2502 ride-duration-prediction-backfill/backfill-deployment \u2502 76dc6581-1773-45c5-a291-7f864d064c57 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
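If you'd rather fetch the ID programmatically, you can use the Prefect client. Here is a minimal sketch, assuming the \"build-names/deploy-build-names\" deployment created above:
import asyncio\nfrom prefect import get_client\n\nasync def get_deployment_id(name: str):\n    # Look up a deployment by \"flow-name/deployment-name\"\n    async with get_client() as client:\n        deployment = await client.read_deployment_by_name(name)\n        return deployment.id\n\nprint(asyncio.run(get_deployment_id(\"build-names/deploy-build-names\")))\n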
We can programmatically create the automation via a POST call to the REST API. Ensure you have your api_key
, account_id
, and workspace_id
.
import requests\n\n# Placeholders: your Prefect Cloud account ID, workspace ID, and API key\naccount_id = \"YOUR-ACCOUNT-ID\"\nworkspace_id = \"YOUR-WORKSPACE-ID\"\nPREFECT_API_KEY = \"YOUR-API-KEY\"\n\ndef create_event_driven_automation():\n    api_url = f\"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}/automations/\"\n    data = {\n        \"name\": \"Event Driven Redeploy\",\n        \"description\": \"Programmatically created an automation to redeploy a flow based on an event\",\n        \"enabled\": True,\n        \"trigger\": {\n            \"after\": [],\n            \"expect\": [\"prefect.flow-run.Running\"],\n            \"for_each\": [\"prefect.resource.id\"],\n            \"posture\": \"Proactive\",\n            \"threshold\": 30,\n            \"within\": 0\n        },\n        \"actions\": [\n            {\n                \"type\": \"run-deployment\",\n                \"source\": \"selected\",\n                \"deployment_id\": \"YOUR-DEPLOYMENT-ID\",\n                \"parameters\": {\"num\": 1}\n            }\n        ]\n    }\n\n    headers = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n    response = requests.post(api_url, headers=headers, json=data)\n\n    print(response.json())\n    return response.json()\n
After running this function, you will see within the UI the changes that came from the POST request. Keep in mind, the trigger context will be \"custom\" in the UI.
Let's run the underlying flow and see the deployment get kicked off after 30 seconds have elapsed. This will result in a new flow run of build_names
, and we are able to see this new deployment get initiated with the custom parameters we outlined above.
In a few quick changes, we are able to programmatically create an automation that deploys workflows with custom parameters.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-an-underlying-yaml-file","title":"Using an underlying .yaml file","text":"We can extend this idea one step further by utilizing our own .yaml version of the automation, and registering that file with our UI. This simplifies the requirements of the automation by declaring it in its own .yaml file, and then registering that .yaml with the API.
Let's start by creating the .yaml file that will house the automation requirements. Here is how it would look:
name: Cancel long running flows\ndescription: Cancel any flow run after an hour of execution\ntrigger:\n match:\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n match_related: {}\n after:\n - \"prefect.flow-run.Failed\"\n expect:\n - \"prefect.flow-run.*\"\n for_each:\n - \"prefect.resource.id\"\n posture: \"Proactive\"\n threshold: 1\n within: 30\nactions:\n - type: \"cancel-flow-run\"\n
We can then write a helper function that applies this YAML file via the REST API.
import yaml\n\n# post and put are small wrappers around the Prefect Cloud REST API;\n# see the example repository linked below for their implementation\nfrom utils import post, put\n\ndef create_or_update_automation(path: str = \"automation.yaml\"):\n    \"\"\"Create or update an automation from a local YAML file\"\"\"\n    # Load the definition\n    with open(path, \"r\") as fh:\n        payload = yaml.safe_load(fh)\n\n    # Find existing automations by name\n    automations = post(\"/automations/filter\")\n    existing_automation = [a[\"id\"] for a in automations if a[\"name\"] == payload[\"name\"]]\n    automation_exists = len(existing_automation) > 0\n\n    # Create or update the automation\n    if automation_exists:\n        print(f\"Automation '{payload['name']}' already exists and will be updated\")\n        put(f\"/automations/{existing_automation[0]}\", payload=payload)\n    else:\n        print(f\"Creating automation '{payload['name']}'\")\n        post(\"/automations/\", payload=payload)\n\nif __name__ == \"__main__\":\n    create_or_update_automation()\n
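The post and put helpers are not part of Prefect itself. As a rough sketch of what post might look like, assuming placeholder Cloud credentials and the same base URL scheme used earlier (put is analogous with requests.put):
import requests\n\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/YOUR-ACCOUNT-ID/workspaces/YOUR-WORKSPACE-ID\"\nPREFECT_API_KEY = \"YOUR-API-KEY\"\n\ndef post(route: str, payload: dict | None = None):\n    # Send an authenticated POST request to a Prefect Cloud route and return the JSON body\n    response = requests.post(\n        PREFECT_API_URL + route,\n        headers={\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"},\n        json=payload or {},\n    )\n    response.raise_for_status()\n    return response.json()\n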
You can find a complete repo with these API examples in this GitHub repository.
In this example, we created the automation by registering the .yaml file with a helper function, which offers yet another way to create an automation.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#custom-webhook-kicking-off-an-automation","title":"Custom webhook kicking off an automation","text":"We can use webhooks to expose the events API which allows us to extend the functionality of deployments and ways to respond to changes in our workflow through a few easy steps.
By exposing a webhook endpoint, we can kick off workflows that can trigger deployments - all from a simple event created from an HTTP request.
Let's create a webhook within the UI. Here is the webhook template we can use to create these dynamic events.
{\n \"event\": \"model-update\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.{{ body.model_id}}\",\n \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n \"run_count\": \"{{body.run_count}}\"\n }\n}\n
From a simple input, we can easily create an exposed webhook endpoint. Each webhook corresponds to a custom event that you can react to downstream with a separate deployment or automation.
For example, we can send a curl request to the endpoint with information such as a run count for our deployment.
curl -X POST https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA -d \"model_id=adhoc\" -d \"run_count=10\" -d \"friendly_name=test-user-input\"\n
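The same request in Python, in case you want to emit the event from another script (the webhook URL is the example one from above; substitute your own):
import requests\n\nresponse = requests.post(\n    \"https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA\",\n    data={\"model_id\": \"adhoc\", \"run_count\": \"10\", \"friendly_name\": \"test-user-input\"},\n)\nprint(response.status_code)\n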
From here, the webhook pulls in the parameters supplied by the curl command and can kick off a deployment that uses those parameters. Let's go into the event feed and automate straight from this event.
This allows us to create automations that respond to these webhook events. With a few clicks in the UI, we can associate an external process with the Prefect events API and use it to trigger downstream deployments.
In the next section, we will explore event triggers that automate the kickoff of a deployment run.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-triggers","title":"Using triggers","text":"Let's take this idea one step further, by creating a deployment that will be triggered when a flow run takes longer than expected. We can take advantage of Prefect's Marvin library that will use an LLM to classify our data. Marvin is great at embedding data science and data analysis applications within your pre-existing data engineering workflows. In this case, we can use Marvin'd AI functions to help make our dataset more information rich.
Install Marvin with pip install marvin
and set your OpenAI API key as shown here
We can add a trigger to run a deployment in response to a specific event.
Let's create an example with Marvin's AI functions. We will take in a pandas DataFrame and use the AI function to analyze it.
Here is an example of pulling in that data and classifying it with Marvin AI. We can generate synthetic data based on the names we built earlier.
from marvin import ai_fn\nfrom prefect import flow\nfrom prefect.artifacts import create_table_artifact\n\n@ai_fn\ndef generate_synthetic_user_data(build_of_names: list[dict]) -> list:\n    \"\"\"\n    Generate additional data for userID (numerical values with 6 digits), location, and timestamp as separate columns and append the data onto 'build_of_names'. Make userID the first column\n    \"\"\"\n\n@flow\ndef create_fake_user_dataset(df):\n    artifact_df = generate_synthetic_user_data(df)\n    print(artifact_df)\n\n    create_table_artifact(\n        key=\"fake-user-data\",\n        table=artifact_df,\n        description=\"Dataset that is comprised of a mix of autogenerated data based on user data\"\n    )\n\nif __name__ == \"__main__\":\n    # Pass in the list of names built earlier, or sample data like this\n    create_fake_user_dataset([{\"name\": \"Sample Name\"}])\n
Let's kick off a deployment with a trigger defined in a prefect.yaml
file, specifying that the trigger should fire when a flow run stays in a running state for longer than 30 seconds.
# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: automations-guide\nprefect-version: 2.13.1\n\n# build section allows you to manage and build docker images\nbuild: null\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /Users/src/prefect/Playground/marvin-extension\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: create-fake-user-dataset\n  triggers:\n    - enabled: true\n      match:\n        prefect.resource.id: \"prefect.flow-run.*\"\n      after:\n        - \"prefect.flow-run.Running\"\n      expect: []\n      for_each:\n        - \"prefect.resource.id\"\n      posture: \"Proactive\"\n      within: 30  # seconds the run may stay Running before the trigger fires\n      parameters:\n        param_1: 10\n  version: null\n  tags: []\n  description: null\n  entrypoint: marvin-extension.py:create_fake_user_dataset\n  parameters: {}\n  work_pool:\n    name: tutorial-process-pool\n    work_queue_name: null\n    job_variables: {}\n  schedule: null\n
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#next-steps","title":"Next steps","text":"You've seen how to create automations via the UI, REST API, and a triggers defined in a prefect.yaml
deployment definition.
To learn more about events that can act as automation triggers, see the events docs. To learn more about event webhooks in particular, see the webhooks guide.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/big-data/","title":"Big Data with Prefect","text":"In this guide you'll learn tips for working with large amounts of data in Prefect.
Big data doesn't have a widely accepted, precise definition. In this guide, we'll discuss methods to reduce the processing time or memory utilization of Prefect workflows, without editing your Python code.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#optimizing-your-python-code-with-prefect-for-big-data","title":"Optimizing your Python code with Prefect for big data","text":"Depending upon your needs, you may want to optimize your Python code for speed, memory, compute, or disk space.
Prefect provides several options that we'll explore in this guide:
Remove task introspection with quote
to save time running your code.When a task is called from a flow, each argument is introspected by Prefect, by default. To speed up your flow runs, you can disable this behavior for a task by wrapping the argument using quote
.
To demonstrate, let's use a basic example that extracts and transforms some New York taxi data.
et_quote.py
from prefect import task, flow\nfrom prefect.utilities.annotations import quote\nimport pandas as pd\n\n\n@task\ndef extract(url: str):\n    \"\"\"Extract data\"\"\"\n    df_raw = pd.read_parquet(url)\n    print(df_raw.info())\n    return df_raw\n\n\n@task\ndef transform(df: pd.DataFrame):\n    \"\"\"Basic transformation\"\"\"\n    df[\"tip_fraction\"] = df[\"tip_amount\"] / df[\"total_amount\"]\n    print(df.info())\n    return df\n\n\n@flow(log_prints=True)\ndef et(url: str):\n    \"\"\"ET pipeline\"\"\"\n    df_raw = extract(url)\n    df = transform(quote(df_raw))\n\n\nif __name__ == \"__main__\":\n    url = \"https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-09.parquet\"\n    et(url)\n
Introspection can take significant time when the object being passed is a large collection, such as a dictionary or DataFrame, where each element needs to be visited. Note that using quote
reduces execution time at the expense of disabling task dependency tracking for the wrapped object.
By default, the results of task runs are stored in memory in your execution environment. This behavior makes flow runs fast for small data, but can be problematic for large data. You can save memory by writing results to disk. In production, you'll generally want to write results to a cloud provider storage such as AWS S3. Prefect lets you to use a storage block from a Prefect cloud integration library such as prefect-aws to save your configuration information. Learn more about blocks here.
Install the relevant library, register the block with the server, and create your storage block. Then you can reference the block in your flow like this:
from prefect import task\nfrom prefect_aws.s3 import S3Bucket\n\nmy_s3_block = S3Bucket.load(\"MY_BLOCK_NAME\")\n\n@task(result_storage=my_s3_block)\ndef my_task():\n    ...\n
Now the result of the task will be written to S3, rather than stored in memory.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#save-data-to-disk-within-a-flow","title":"Save data to disk within a flow","text":"To save memory and time with big data, you don't need to pass results between tasks at all. Instead, you can write and read data to disk directly in your flow code. Prefect has integration libraries for each of the major cloud providers. Each library contains blocks with methods that make it convenient to read and write data to and from cloud object storage. The moving data guide has step-by-step examples for each cloud provider.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#cache-task-results","title":"Cache task results","text":"Caching allows you to avoid re-running tasks when doing so is unnecessary. Caching can save you time and compute. Note that caching requires task result persistence. Caching is discussed in detail in the tasks concept page.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#compress-results-written-to-disk","title":"Compress results written to disk","text":"If you're using Prefect's task result persistence, you can save disk space by compressing the results. You just need to specify the result type with compressed/
prefixed like this:
@task(result_serializer=\"compressed/json\")\n
Read about compressing results with Prefect for more details. The tradeoff of using compression is that it takes time to compress and decompress the data.
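Putting persistence and compression together, a minimal sketch:
from prefect import flow, task\n\n@task(persist_result=True, result_serializer=\"compressed/json\")\ndef make_rows():\n    return [{\"id\": i} for i in range(100_000)]\n\n@flow\ndef etl():\n    return make_rows()\n\nif __name__ == \"__main__\":\n    etl()\n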
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#use-a-task-runner-for-parallelizable-operations","title":"Use a task runner for parallelizable operations","text":"Prefect's task runners allow you to use the Dask and Ray Python libraries to run tasks in parallel and distributed across multiple machines. This can save you time and compute when operating on large data structures. See the guide to working with Dask and Ray Task Runners for details.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/ci-cd/","title":"CI/CD With Prefect","text":"Many organizations deploy Prefect workflows via their CI/CD process. Each organization has their own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect deployments. Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses GitHub Actions to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools.
Note that Prefect's primary ways for creating deployments, a .deploy
flow method or a prefect.yaml
configuration file, are both designed with building and pushing images to a Docker registry in mind.
In this example, you'll write a GitHub Actions workflow that will run each time you push to your repository's main
branch. This workflow will build and push a Docker image containing your flow code to Docker Hub, then deploy the flow to Prefect Cloud.
Your CI/CD process must be able to authenticate with Prefect in order to deploy flows.
Deploying flows securely and non-interactively in your CI/CD process can be accomplished by saving your PREFECT_API_URL
and PREFECT_API_KEY
as secrets in your repository's settings so they can be accessed in your CI/CD runner's environment without exposing them in any scripts or configuration files.
In this scenario, deploying flows involves building and pushing Docker images, so add DOCKER_USERNAME
and DOCKER_PASSWORD
as secrets to your repository as well.
You can create secrets for GitHub Actions in your repository under Settings -> Secrets and variables -> Actions -> New repository secret:
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#writing-a-github-workflow","title":"Writing a GitHub workflow","text":"To deploy your flow via GitHub Actions, you'll need a workflow YAML file. GitHub will look for workflow YAML files in the .github/workflows/
directory in the root of your repository. In their simplest form, GitHub workflow files are made up of triggers and jobs.
The on:
trigger is set to run the workflow each time a push occurs on the main
branch of the repository.
The deploy
job is composed of four steps
:
Checkout
clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps.Log in to Docker Hub
authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. docker/login-action is an existing GitHub action maintained by Docker. with:
passes values into the Action, similar to passing parameters to a function.Setup Python
installs your selected version of Python.Prefect Deploy
installs the dependencies used in your flow, then deploys your flow. env:
makes the PREFECT_API_KEY
and PREFECT_API_URL
secrets from your repository available as environment variables during this step's execution.For reference, the examples below can be found on their respective branches of this repository.
.deployprefect.yaml.\n\u251c\u2500\u2500 .github/\n\u2502 \u2514\u2500\u2500 workflows/\n\u2502 \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u2514\u2500\u2500 requirements.txt\n
flow.py
from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n print(\"Hello!\")\n\nif __name__ == \"__main__\":\n hello.deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my_registry/my_image:my_image_tag\",\n )\n
.github/workflows/deploy-prefect-flow.yaml
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n\n - name: Prefect Deploy\n env:\n PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n run: |\n pip install -r requirements.txt\n python flow.py\n
.\n\u251c\u2500\u2500 .github/\n\u2502 \u2514\u2500\u2500 workflows/\n\u2502 \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 requirements.txt\n
flow.py
from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n print(\"Hello!\")\n
prefect.yaml
name: cicd-example\nprefect-version: 2.14.11\n\nbuild:\n - prefect_docker.deployments.steps.build_docker_image:\n id: build_image\n requires: prefect-docker>=0.3.1\n image_name: my_registry/my_image\n tag: my_image_tag\n dockerfile: auto\n\npush:\n - prefect_docker.deployments.steps.push_docker_image:\n requires: prefect-docker>=0.3.1\n image_name: \"{{ build_image.image_name }}\"\n tag: \"{{ build_image.tag }}\"\n\npull: null\n\ndeployments:\n - name: my-deployment\n entrypoint: flow.py:hello\n work_pool:\n name: my-work-pool\n work_queue_name: default\n job_variables:\n image: \"{{ build-image.image }}\"\n
.github/workflows/deploy-prefect-flow.yaml
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n\n - name: Prefect Deploy\n env:\n PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n run: |\n pip install -r requirements.txt\n prefect deploy -n my-deployment\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#running-a-github-workflow","title":"Running a GitHub workflow","text":"After pushing commits to your repository, GitHub will automatically trigger a run of your workflow. The status of running and completed workflows can be monitored from the Actions tab of your repository.
You can view the logs from each workflow step as they run. The Prefect Deploy
step will include output about your image build and push, and the creation/update of your deployment.
Successfully built image '***/cicd-example:latest'\n\nSuccessfully pushed image '***/cicd-example:latest'\n\nSuccessfully created/updated all deployments!\n\n Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 hello/my-deployment \u2502 applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#advanced-example","title":"Advanced example","text":"In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow:
This example repository demonstrates how each of these considerations can be addressed using a combination of Prefect's and GitHub's capabilities.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#deploying-to-multiple-workspaces","title":"Deploying to multiple workspaces","text":"Which deployment processes should run are automatically selected when changes are pushed depending on two conditions:
on:\n push:\n branches:\n - stg\n - main\n paths:\n - \"project_1/**\"\n
branches:
- which branch has changed. This will ultimately select which Prefect workspace a deployment is created or updated in. In this example, changes on the stg
branch will deploy flows to a staging workspace, and changes on the main
branch will deploy flows to a production workspace.paths:
- which project folders' files have changed. Since each project folder contains its own flows, dependencies, and prefect.yaml
, it represents a complete set of logic and configuration that can be deployed independently. Each project in this repository gets its own GitHub Actions workflow YAML file.The prefect.yaml
file in each project folder depends on environment variables that are dictated by the selected job in each CI/CD workflow, enabling external code storage for Prefect deployments that is clearly separated across projects and environments.
.\n \u251c\u2500\u2500 cicd-example-workspaces-prod # production bucket\n \u2502 \u251c\u2500\u2500 project_1\n \u2502 \u2514\u2500\u2500 project_2\n \u2514\u2500\u2500 cicd-example-workspaces-stg # staging bucket\n \u251c\u2500\u2500 project_1\n \u2514\u2500\u2500 project_2 \n
Since the deployments in this example use S3 for code storage, it's important that push steps place flow files in separate locations depending upon their respective environment and project so no deployment overwrites another deployment's files.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#caching-build-dependencies","title":"Caching build dependencies","text":"Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps.
The setup-python
action offers caching options so Python packages do not have to be downloaded on repeat workflow runs.
- name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: \"3.11\"\n cache: \"pip\"\n
Using cached prefect-2.16.1-py3-none-any.whl (2.9 MB)\nUsing cached prefect_aws-0.4.10-py3-none-any.whl (61 kB)\n
The build-push-action
for building Docker images also offers caching options for GitHub Actions. If you are not using GitHub, other remote cache backends are available as well.
- name: Build and push\n id: build-docker-image\n env:\n GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }}\n uses: docker/build-push-action@v5\n with:\n context: ${{ env.PROJECT_NAME }}/\n push: true\n tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg\n cache-from: type=gha\n cache-to: type=gha,mode=max\n
importing cache manifest from gha:***\nDONE 0.1s\n\n[internal] load build context\ntransferring context: 70B done\nDONE 0.0s\n\n[2/3] COPY requirements.txt requirements.txt\nCACHED\n\n[3/3] RUN pip install -r requirements.txt\nCACHED\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#prefect-github-actions","title":"Prefect GitHub Actions","text":"Prefect provides its own GitHub Actions for authentication and deployment creation. These actions can simplify deploying with CI/CD when using prefect.yaml
, especially in cases where a repository contains flows that are used in multiple deployments across multiple Prefect Cloud workspaces.
Here's an example of integrating these actions into the workflow we created above:
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: \"3.11\"\n\n - name: Prefect Auth\n uses: PrefectHQ/actions-prefect-auth@v1\n with:\n prefect-api-key: ${{ secrets.PREFECT_API_KEY }}\n prefect-workspace: ${{ secrets.PREFECT_WORKSPACE }}\n\n - name: Run Prefect Deploy\n uses: PrefectHQ/actions-prefect-deploy@v3\n with:\n deployment-names: my-deployment\n requirements-file-paths: requirements.txt\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#authenticating-to-other-docker-image-registries","title":"Authenticating to other Docker image registries","text":"The docker/login-action
GitHub Action supports pushing images to a wide variety of image registries.
For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the registry
key in the with:
part of the action and use an AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
as your username
and password
.
- name: Login to ECR\n uses: docker/login-action@v3\n with:\n registry: <aws-account-number>.dkr.ecr.<region>.amazonaws.com\n username: ${{ secrets.AWS_ACCESS_KEY_ID }}\n password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#other-resources","title":"Other resources","text":"Check out the Prefect Cloud Terraform provider if you're using Terraform to manage your infrastructure.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/creating-interactive-workflows/","title":"Creating Interactive Workflows","text":"Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI. Flows can also send and receive type-checked input at any time while running, without pausing or suspending. This guide will show you how to use these features to build interactive workflows.
A note on async Python syntax
Most of the example code in this section uses async Python functions and await
. However, as with other Prefect features, you can call these functions with or without await
.
You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. Such workflows are often called human-in-the-loop (HITL) systems.
What is human-in-the-loop interactivity used for?
Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of machine learning training and artificial intelligence workflows benefit from incorporating HITL design.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#waiting-for-input","title":"Waiting for input","text":"To receive input while paused or suspended use the wait_for_input
parameter in the pause_flow_run
or suspend_flow_run
functions. This parameter accepts one of the following:
int
or str
, or a built-in collection like List[int]
pydantic.BaseModel
subclassprefect.input.RunInput
When to use a RunModel
or BaseModel
instead of a built-in type
There are a few reasons to use a RunModel
or BaseModel
. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users will see in Prefect's UI when they click \"Resume\" on a flow run is named value
and has no help text to suggest what the field is. If you create a RunInput
or BaseModel
, you can change details like the field name, help text, and default value, and users will see those reflected in the \"Resume\" form.
The simplest way to pause or suspend and wait for input is to pass a built-in type:
from prefect import flow, pause_flow_run, get_run_logger\n\n@flow\ndef greet_user():\n logger = get_run_logger()\n\n user = pause_flow_run(wait_for_input=str)\n\n logger.info(f\"Hello, {user}!\")\n
In this example, the flow run will pause until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form.
What types can you pass for wait_for_input
?
When you pass a built-in type such as int
as an argument for the wait_for_input
parameter to pause_flow_run
or suspend_flow_run
, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use any type annotation that Pydantic accepts for model fields with these functions.
Instead of a built-in type, you can pass in a pydantic.BaseModel
class. This is useful if you already have a BaseModel
you want to use:
from prefect import flow, pause_flow_run, get_run_logger\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n name: str\n age: int\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user = await pause_flow_run(wait_for_input=User)\n\n logger.info(f\"Hello, {user.name}!\")\n
BaseModel
classes are upgraded to RunInput
classes automatically
When you pass a pydantic.BaseModel
class as the wait_for_input
argument to pause_flow_run
or suspend_flow_run
, Prefect automatically creates a RunInput
class with the same behavior as your BaseModel
and uses that instead.
RunInput
classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference!
Finally, for advanced use cases like overriding how Prefect stores flow run inputs, you can create a RunInput
class:
from prefect import get_run_logger\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n # Imagine overridden methods here!\n def override_something(self, *args, **kwargs):\n super().override_something(*args, **kwargs)\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user = await pause_flow_run(wait_for_input=UserInput)\n\n logger.info(f\"Hello, {user.name}!\")\n
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-initial-data","title":"Providing initial data","text":"You can set default values for fields in your model by using the with_initial_data
method. This is useful when you want to provide default values for the fields in your own RunInput
class.
Expanding on the example above, you could make the name
field default to \"anonymous\":
from prefect import get_run_logger\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user_input = await pause_flow_run(\n wait_for_input=UserInput.with_initial_data(name=\"anonymous\")\n )\n\n if user_input.name == \"anonymous\":\n logger.info(\"Hello, stranger!\")\n else:\n logger.info(f\"Hello, {user_input.name}!\")\n
When a user sees the form for this input, the name field will contain \"anonymous\" as the default.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-a-description-with-runtime-data","title":"Providing a description with runtime data","text":"You can provide a dynamic, markdown description that will appear in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. Building on the example above:
from datetime import datetime\nfrom prefect import flow, pause_flow_run, get_run_logger\nfrom prefect.input import RunInput\n\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n current_date = datetime.now().strftime(\"%B %d, %Y\")\n\n description_md = f\"\"\"\n**Welcome to the User Greeting Flow!**\nToday's Date: {current_date}\n\nPlease enter your details below:\n- **Name**: What should we call you?\n- **Age**: Just a number, nothing more.\n\"\"\"\n\n user_input = await pause_flow_run(\n wait_for_input=UserInput.with_initial_data(\n description=description_md, name=\"anonymous\"\n )\n )\n\n if user_input.name == \"anonymous\":\n logger.info(\"Hello, stranger!\")\n else:\n logger.info(f\"Hello, {user_input.name}!\")\n
When a user sees the form for this input, the given markdown will appear above the input fields.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#handling-custom-validation","title":"Handling custom validation","text":"Prefect uses the fields and type hints on your RunInput
or BaseModel
class to validate the general structure of input your flow receives, but you might require more complex validation. If you do, you can use Pydantic validators.
Custom validation runs after the flow resumes
Prefect transforms the type annotations in your RunInput
or BaseModel
class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running Python logic defined in your RunInput
class. Because of this, validation happens after the flow resumes, so you'll want to handle it explicitly in your flow. Continue reading for an example best practice.
The following is an example RunInput
class that uses a custom field validator:
from typing import Literal\n\nimport pydantic\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n
In the example, we use Pydantic's validator
decorator to define a custom validation method for the color
field. We can use it in a flow like this:
from typing import Literal\n\nimport pydantic\nfrom prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n\n\n@flow\ndef get_shirt_order():\n    shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n
If a user chooses any size and color combination other than small
and green
, the flow run will resume successfully. However, if the user chooses size small
and color green
, the flow run will resume, and pause_flow_run
will raise a ValidationError
exception. This will cause the flow run to fail and log the error.
However, what if you don't want the flow run to fail? One way to handle this case is to use a while
loop and pause again if the ValidationError
exception is raised:
from typing import Literal\n\nimport pydantic\nfrom prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n color: Literal[\"red\", \"green\", \"black\"]\n\n @pydantic.validator(\"color\")\n def validate_age(cls, value, values, **kwargs):\n if value == \"green\" and values[\"size\"] == \"small\":\n raise ValueError(\n \"Green is only in-stock for medium, large, and XL sizes.\"\n )\n\n return value\n\n\n@flow\ndef get_shirt_order():\n logger = get_run_logger()\n shirt_order = None\n\n while shirt_order is None:\n try:\n shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n except pydantic.ValidationError as exc:\n logger.error(f\"Invalid size and color combination: {exc}\")\n\n logger.info(\n f\"Shirt order: {shirt_order.size}, {shirt_order.color}\"\n )\n
This code will cause the flow run to continually pause until the user enters a valid size and color combination.
As an additional step, you may want to use an automation or notification to alert the user to the error.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-and-receiving-input-at-runtime","title":"Sending and receiving input at runtime","text":"Use the send_input
and receive_input
functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input.
Why would you send or receive input without pausing or suspending?
You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For instance, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another use is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread.
The most important parameter to the send_input
and receive_input
functions is run_type
, which should be one of the following:
int
or str
pydantic.BaseModel
classprefect.input.RunInput
classWhen to use a BaseModel
or RunInput
instead of a built-in type
Most built-in types and collections of built-in types should work with send_input
and receive_input
, but there is a caveat with nested collection types, such as lists of tuples, e.g. List[Tuple[str, float]])
. In this case, validation may happen after your flow receives the data, so calling receive_input
may raise a ValidationError
. You can plan to catch this exception, but also, consider placing the field in an explicit BaseModel
or RunInput
so that your flow only receives exact type matches.
Let's look at some examples! We'll check out receive_input
first, followed by send_input
, and then we'll see the two functions working together.
The following flow uses receive_input
to continually receive names and print a personalized greeting for each name it receives:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(str, timeout=None):\n # Prints \"Hello, andrew!\" if another flow sent \"andrew\"\n print(f\"Hello, {name_input}!\")\n
When you pass a type such as str
into receive_input
, Prefect creates a RunInput
class to manage your input automatically. When a flow sends input of this type, Prefect uses the RunInput
class to validate the input. If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable name_input
would contain the string value.
If, instead, you pass a BaseModel
, Prefect upgrades your BaseModel
to a RunInput
class, and the variable your flow sees \u2014 in this case, name_input
\u2014 is a RunInput
instance that behaves like a BaseModel
. Of course, if you pass in a RunInput
class, no upgrade is needed, and you'll get a RunInput
instance.
If you prefer to keep things simple and pass types such as str
into receive_input
, you can do so. If you need access to the generated RunInput
that contains the received value, pass with_metadata=True
to receive_input
:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(\n str,\n timeout=None,\n with_metadata=True\n ):\n # Input will always be in the field \"value\" on this object.\n print(f\"Hello, {name_input.value}!\")\n
Why would you need to use with_metadata=True
?
The primary uses of accessing the RunInput
object for a receive input are to respond to the sender with the RunInput.respond()
function or to access the unique key for an input. Later in this guide, we'll discuss how and why you might use these features.
Notice that we are now printing name_input.value
. When Prefect generates a RunInput
for you from a built-in type, the RunInput
class has a single field, value
, that uses a type annotation matching the type you specified. So if you call receive_input
like this: receive_input(str, with_metadata=True)
, that's equivalent to manually creating the following RunInput
class and receive_input
call:
from prefect import flow\nfrom prefect.input.run_input import RunInput\n\nclass GreeterInput(RunInput):\n value: str\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(GreeterInput, timeout=None):\n print(f\"Hello, {name_input.value}!\")\n
The type used in receive_input
and send_input
must match
For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving GreeterInput
, the sender must send GreeterInput
. If the receiver is receiving GreeterInput
and the sender sends str
input that Prefect automatically upgrades to a RunInput
class, the types won't match, so the receiving flow run won't receive the input. However, the input will be waiting if the flow ever calls receive_input(str)
!
By default, each time you call receive_input
, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator will keep track of your current position as you iterate over it, or you can call next()
to explicitly get the next input. If you're using the iterator in a loop, you should probably assign it to a variable:
from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n client = get_client()\n\n # Assigning the `receive_input` iterator to a variable\n # outside of the the `while True` loop allows us to continue\n # iterating over inputs in subsequent passes through the\n # while loop without losing our position.\n receiver = receive_input(\n str,\n with_metadata=True,\n timeout=None,\n poll_interval=0.1\n )\n\n while True:\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n\n # Saving the iterator outside of the while loop and\n # calling next() on each iteration of the loop ensures\n # that we're always getting the newest greeting. If we\n # had instead called `receive_input` here, we would\n # always get the _first_ greeting this flow received,\n # print it, and then ask for a new name.\n greeting = await receiver.next()\n print(greeting)\n
So, an iterator helps to keep track of the inputs your flow has already received. But what if you want your flow to suspend and then resume later, picking up where it left off? In that case, you will need to save the keys of the inputs you've seen so that the flow can read them back out when it resumes. You might use a Block, such as a JSONBlock
.
The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure:
from prefect import flow, get_run_logger, suspend_flow_run\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.input.run_input import receive_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n logger = get_run_logger()\n run_context = get_run_context()\n assert run_context.flow_run, \"Could not see my flow run ID\"\n\n block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n try:\n seen_keys_block = await JSON.load(block_name)\n except ValueError:\n seen_keys_block = JSON(\n value=[],\n )\n\n try:\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=30,\n exclude_keys=seen_keys_block.value\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n\n seen_keys_block.value.append(name_input.metadata.key)\n await seen_keys_block.save(\n name=block_name,\n overwrite=True\n )\n except TimeoutError:\n logger.info(\"Suspending greeter after 30 seconds of idle time\")\n await suspend_flow_run(timeout=10000)\n
As this flow processes name input, it adds the key of the flow run input to the seen_keys_block
. When the flow later suspends and then resumes, it reads the keys it has already seen out of the JSON Block and passes them as the exclude_keys
parameter to receive_input
.
When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the respond
method on the RunInput
instance the flow received. There are a couple of requirements:
BaseModel
or RunInput
, or use with_metadata=True
The respond
method is equivalent to calling send_input(..., flow_run_id=sending_flow_run.id)
, but with respond
, your flow doesn't need to know the sending flow run's ID.
Now that we know about respond
, let's make our greeter_flow
respond to name inputs instead of printing them:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter():\n async for name_input in receive_input(\n str,\n with_metadata=True,\n timeout=None\n ):\n await name_input.respond(f\"Hello, {name_input.value}!\")\n
Cool! There's one problem left: this flow runs forever! We need a way to signal that it should exit. Let's keep things simple and teach it to look for a special string:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=None\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n
With a greeter
flow in place, we're ready to create the flow that sends greeter
names!
You can send input to a flow with the send_input
function. This works similarly to receive_input
and, like that function, accepts the same run_input
argument, which can be a built-in type such as str
, or else a BaseModel
or RunInput
subclass.
When can you send input to a flow run?
You can send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send input. If you send a flow input before it is receiving, it will see your input when it calls receive_input
(as long as the types in the send_input
and receive_input
calls match!)
Next, we'll create a sender
flow that starts a greeter
flow run and then enters a loop, continuously getting input from the terminal and sending it to the greeter flow:
@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n receiver = receive_input(str, timeout=None, poll_interval=0.1)\n client = get_client()\n\n while True:\n flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n if not flow_run.state or not flow_run.state.is_running():\n continue\n\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n greeting = await receiver.next()\n print(greeting)\n
There's more going on here than in greeter
, so let's take a closer look at the pieces.
First, we use run_deployment
to start a greeter
flow run. This means we must have a worker or flow.serve()
running in separate process. That process will begin running greeter
while sender
continues to execute. Calling run_deployment(..., timeout=0)
ensures that sender
won't wait for the greeter
flow run to complete, because it's running a loop and will only exit when we send EXIT_SIGNAL
.
Next, we capture the iterator returned by receive_input
as receiver
. This flow works by entering a loop, and on each iteration of the loop, the flow asks for terminal input, sends that to the greeter
flow, and then runs receiver.next()
to wait until it receives the response from greeter
.
Next, we let the terminal user who ran this flow exit by entering the string q
or quit
. When that happens, we send the greeter
flow an exit signal so it will shut down too.
Finally, we send the new name to greeter
. We know that greeter
is going to send back a greeting as a string, so we immediately wait for new string input. When we receive the greeting, we print it and continue the loop that gets terminal input.
Finally, let's see a complete example of using send_input
and receive_input
. Here is what the greeter
and sender
flows look like together:
import asyncio\nimport sys\nfrom prefect import flow, get_client\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n run_context = get_run_context()\n assert run_context.flow_run, \"Could not see my flow run ID\"\n\n block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n try:\n seen_keys_block = await JSON.load(block_name)\n except ValueError:\n seen_keys_block = JSON(\n value=[],\n )\n\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=None\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n\n seen_keys_block.value.append(name_input.metadata.key)\n await seen_keys_block.save(\n name=block_name,\n overwrite=True\n )\n\n\n@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n receiver = receive_input(str, timeout=None, poll_interval=0.1)\n client = get_client()\n\n while True:\n flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n if not flow_run.state or not flow_run.state.is_running():\n continue\n\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n greeting = await receiver.next()\n print(greeting)\n\n\nif __name__ == \"__main__\":\n if sys.argv[1] == \"greeter\":\n asyncio.run(greeter.serve(name=\"send-receive\"))\n elif sys.argv[1] == \"sender\":\n asyncio.run(sender())\n
To run the example, you'll need a Python environment with Prefect installed, pointed at either an open-source Prefect server instance or Prefect Cloud.
With your environment set up, start a flow runner in one terminal with the following command:
python my_file_name greeter\n
For example, with Prefect Cloud, you should see output like this:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Your flow 'greeter' is being served and polling for scheduled runs! \u2502\n\u2502 \u2502\n\u2502 To trigger a run for this flow, use the following command: \u2502\n\u2502 \u2502\n\u2502 $ prefect deployment run 'greeter/send-receive' \u2502\n\u2502 \u2502\n\u2502 You can also run your flow via the Prefect UI: \u2502\n\u2502 https://app.prefect.cloud/account/...(a URL for your account) \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n
Then start the sender process in another terminal:
python my_file_name sender\n
You should see output like this:
11:38:41.800 | INFO | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender'\n11:38:41.802 | INFO | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/...\nWhat is your name?\n
Type a name and press the enter key to see a greeting, and you'll see sending and receiving in action:
What is your name? andrew\nHello, andrew!\n
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray Task Runners","text":"Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.
The default task runner is the ConcurrentTaskRunner
.
Use .submit
to run your tasks asynchronously
To run tasks asynchronously use the .submit
method when you call them. If you call a task as you would normally in Python code it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner
, DaskTaskRunner
, or RayTaskRunner
.
Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.
DaskTaskRunner runs tasks requiring parallel execution using dask.distributed.
RayTaskRunner runs tasks requiring parallel execution using Ray.
These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.
Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.
To show you how they work, let's start small.
Remote storage
We recommend configuring remote file storage for task execution with DaskTaskRunner
or RayTaskRunner
. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
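For example, here's a minimal sketch of pointing a flow at shared remote result storage (the S3 block name my-results is hypothetical and assumes you've already created and saved an S3 filesystem block):
from prefect import flow, task\nfrom prefect.filesystems import S3\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef double(x):\n    return x * 2\n\n# Results are persisted to shared remote storage, so Dask workers and the\n# flow run can all read and write them, even across machines.\n@flow(task_runner=DaskTaskRunner(), result_storage=S3.load("my-results"))\ndef my_flow():\n    return double.submit(21)\n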
You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.
Let's start with the SequentialTaskRunner
. This task runner runs all tasks synchronously and may be useful as a debugging tool in conjunction with async code.
Here's a simple flow. We import the SequentialTaskRunner
, specify a task_runner
on the flow, and call the tasks with .submit()
.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Save this as sequential_flow.py
and run it in a terminal. You'll see output similar to the following:
$ python sequential_flow.py\n16:51:17.967 | INFO | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n
If we take out the log messages and just look at the printed output of the tasks, you see they're executed in sequential order:
$ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner
.
To configure your flow to use the DaskTaskRunner
:
prefect-dask
collection is installed by running pip install prefect-dask
.DaskTaskRunner
from prefect_dask.task_runners
.task_runner=DaskTaskRunner
argument..submit
method when calling functions.This is the same flow as above, with a few minor changes to use DaskTaskRunner
where we previously configured SequentialTaskRunner
. Install prefect-dask
, make these changes, then save the updated code as dask_flow.py
.
from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Note that, because you're using DaskTaskRunner
in a script, you must use if __name__ == \"__main__\":
or you'll see warnings and errors.
Now run dask_flow.py
. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.
$ python dask_flow.py\n16:54:18.465 | INFO | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:54:19.881 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n
DaskTaskRunner
automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.
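If you already have a Dask cluster running, or want more control over the temporary cluster Prefect creates for you, DaskTaskRunner accepts configuration for both. A minimal sketch (the scheduler address and worker counts are placeholders):
from prefect_dask.task_runners import DaskTaskRunner\n\n# Connect to an existing Dask scheduler instead of creating a temporary cluster\nexisting_cluster_runner = DaskTaskRunner(address="tcp://127.0.0.1:8786")\n\n# Or customize the temporary local cluster that Prefect spins up\nlocal_cluster_runner = DaskTaskRunner(\n    cluster_kwargs={"n_workers": 4, "threads_per_worker": 2}\n)\n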
Notice what happens if you do not use the submit
method when calling tasks:
from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello(name)\n say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
$ python dask_flow.py\n\n16:57:34.534 | INFO | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:57:35.787 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n
The tasks are not submitted to the DaskTaskRunner
and are run sequentially.
To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner
where we previously configured DaskTaskRunner
.
To configure your flow to use the RayTaskRunner
:
prefect-ray
collection is installed by running pip install prefect-ray
.RayTaskRunner
from prefect_ray.task_runners
.task_runner=RayTaskRunner
argument.Ray environment limitations
While we're excited about parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:
pip
alone and will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda
. See the Ray documentation for instructions.See the Ray installation documentation for further compatibility information.
Save this code in ray_flow.py
.
from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Now run ray_flow.py.
RayTaskRunner
automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide the address as a parameter to run tasks in the instance. See Running tasks on Ray for details.
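For example, here's a minimal sketch of connecting to an existing Ray cluster (the address shown is a placeholder):
from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f"hello {name}")\n\n# Point the task runner at an existing Ray cluster instead of letting\n# Prefect start a temporary local Ray instance\n@flow(task_runner=RayTaskRunner(address="ray://192.0.2.10:10001"))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n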
Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.
Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.
This example uses the same tasks as the previous examples, but on the parent flow greetings()
we use the default ConcurrentTaskRunner
. Then we call a ray_greetings()
subflow that uses the RayTaskRunner
to execute the same tasks in a Ray instance.
from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n ray_greetings(names)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
If you save this as ray_subflow.py
and run it, you'll see that the flow greetings
runs as you'd expect for a concurrent flow, then flow ray-greetings
spins up a Ray instance to run the tasks again.
In the Deployments tutorial, we looked at serving a flow that enables scheduling or creating flow runs via the Prefect API.
With our Python script in hand, we can build a Docker image for our script, allowing us to serve our flow in various remote environments. We'll use Kubernetes in this guide, but you can use any Docker-compatible infrastructure.
In this guide we'll:
Write a Dockerfile to build a Docker image that stores our Prefect flow code
Build a Docker image for our flow
Deploy and run our Docker image on a Kubernetes cluster
Look at the Prefect-maintained Docker images
Note that in this guide we'll create a Dockerfile from scratch. Alternatively, Prefect makes it convenient to build a Docker image as part of deployment creation. You can even include environment variables and specify additional Python packages to install at runtime.
If creating a deployment with a prefect.yaml
file, the build step makes it easy to customize your Docker image and push it to the registry of your choice. See an example here.
Deployment creation with a Python script that includes flow.deploy
similarly allows you to customize your Docker image with keyword arguments as shown below.
...\n\nif __name__ == \"__main__\":\n hello_world.deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"above-ground\",\n image='my_registry/hello_world:demo',\n job_variables={\"env\": { \"EXTRA_PIP_PACKAGES\": \"boto3\" } }\n )\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prerequisites","title":"Prerequisites","text":"To complete this guide, you'll need the following:
Docker installed and running on your machine
A Prefect Cloud account, or a self-hosted Prefect server instance started with prefect server start
.First let's make a clean directory to work from, prefect-docker-guide
.
mkdir prefect-docker-guide\ncd prefect-docker-guide\n
In this directory, we'll create a sub-directory named flows
and put our flow script from the Deployments tutorial in it.
mkdir flows\ncd flows\ntouch prefect-docker-guide-flow.py\n
Here's the flow code for reference:
prefect-docker-guide-flow.pyimport httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.serve(name=\"prefect-docker-guide\")\n
The next file we'll add to the prefect-docker-guide
directory is a requirements.txt
. We'll include all dependencies required for our prefect-docker-guide-flow.py
script in the Docker image we'll build.
# ensure you run this line from the top level of the `prefect-docker-guide` directory\ntouch requirements.txt\n
Here's what we'll put in our requirements.txt
file:
prefect>=2.12.0\nhttpx\n
Next, we'll create a Dockerfile
that we'll use to create a Docker image that will also store the flow code.
touch Dockerfile\n
We'll add the following content to our Dockerfile
:
# We're using the latest version of Prefect with Python 3.10\nFROM prefecthq/prefect:2-python3.10\n\n# Add our requirements.txt file to the image and install dependencies\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\n\n# Add our flow code to the image\nCOPY flows /opt/prefect/flows\n\n# Run our flow script when the container starts\nCMD [\"python\", \"flows/prefect-docker-guide-flow.py\"]\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-a-docker-image","title":"Building a Docker image","text":"Now that we have a Dockerfile we can build our image by running:
docker build -t prefect-docker-guide-image .\n
We can check that our build worked by running a container from our new image.
CloudSelf-hostedOur container will need an API URL and and API key to communicate with Prefect Cloud.
You can get an API key from the API Keys section of the user settings in the Prefect UI.
You can get your API URL by running prefect config view
and copying the PREFECT_API_URL
value.
We'll provide both these values to our container by passing them as environment variables with the -e
flag.
docker run -e PREFECT_API_URL=YOUR_PREFECT_API_URL -e PREFECT_API_KEY=YOUR_API_KEY prefect-docker-guide-image\n
After running the above command, the container should start up and serve the flow within the container!
Our container will need an API URL and network access to communicate with the Prefect API.
For this guide, we'll assume the Prefect API is running on the same machine that we'll run our container on and the Prefect API was started with prefect server start
. If you're running a different setup, check out the Hosting a Prefect server guide for information on how to connect to your Prefect API instance.
To ensure that our flow container can communicate with the Prefect API, we'll set our PREFECT_API_URL
to http://host.docker.internal:4200/api
. If you're running Linux, you'll need to set your PREFECT_API_URL
to http://localhost:4200/api
and use the --network=\"host\"
option instead.
docker run --network=\"host\" -e PREFECT_API_URL=http://host.docker.internal:4200/api prefect-docker-guide-image\n
After running the above command, the container should start up and serve the flow within the container!
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-to-a-remote-environment","title":"Deploying to a remote environment","text":"Now that we have a Docker image with our flow code embedded, we can deploy it to a remote environment!
For this guide, we'll simulate a remote environment by using Kubernetes locally with Docker Desktop. You can use the instructions provided by Docker to set up Kubernetes locally.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#creating-a-kubernetes-deployment-manifest","title":"Creating a Kubernetes deployment manifest","text":"To ensure the process serving our flow is always running, we'll create a Kubernetes deployment. If our flow's container ever crashes, Kubernetes will automatically restart it, ensuring that we won't miss any scheduled runs.
First, we'll create a deployment-manifest.yaml
file in our prefect-docker-guide
directory:
touch deployment-manifest.yaml\n
And we'll add the following content to our deployment-manifest.yaml
file:
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: prefect-docker-guide\nspec:\n replicas: 1\n selector:\n matchLabels:\n flow: get-repo-info\n template:\n metadata:\n labels:\n flow: get-repo-info\n spec:\n containers:\n - name: flow-container\n image: prefect-docker-guide-image:latest\n env:\n - name: PREFECT_API_URL\n value: YOUR_PREFECT_API_URL\n - name: PREFECT_API_KEY\n value: YOUR_API_KEY\n # Never pull the image because we're using a local image\n imagePullPolicy: Never\n
Keep your API key secret
In the above manifest we are passing in the Prefect API URL and API key as environment variables. This approach is simple, but it is not secure. If you are deploying your flow to a remote cluster, you should use a Kubernetes secret to store your API key.
deployment-manifest.yamlapiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: prefect-docker-guide\nspec:\n replicas: 1\n selector:\n matchLabels:\n flow: get-repo-info\n template:\n metadata:\n labels:\n flow: get-repo-info\n spec:\n containers:\n - name: flow-container\n image: prefect-docker-guide-image:latest\n env:\n - name: PREFECT_API_URL\n value: <http://host.docker.internal:4200/api>\n # Never pull the image because we're using a local image\n imagePullPolicy: Never\n
Linux users
If you're running Linux, you'll need to set your PREFECT_API_URL
to use the IP address of your machine instead of host.docker.internal
.
This manifest defines how our image will run when deployed in our Kubernetes cluster. Note that we will be running a single replica of our flow container. If you want to run multiple replicas of your flow container to keep up with an active schedule, or because your flow is resource-intensive, you can increase the replicas
value.
Now that we have a deployment manifest, we can deploy our flow to the cluster by running:
kubectl apply -f deployment-manifest.yaml\n
We can monitor the status of our Kubernetes deployment by running:
kubectl get deployments\n
Once the deployment has successfully started, we can check the logs of our flow container by running the following:
kubectl logs -l flow=get-repo-info\n
Now that we're serving our flow in our cluster, we can trigger a flow run by running:
prefect deployment run get-repo-info/prefect-docker-guide\n
If we navigate to the URL provided by the prefect deployment run
command, we can follow the flow run via the logs in the Prefect UI!
Every release of Prefect results in several new Docker images. These images are all named prefecthq/prefect and their tags identify their differences.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#image-tags","title":"Image tags","text":"When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.
For example, when release 2.11.5
is published:
prefect:2.1.1-python3.10
and prefect:2.1.1-python3.10-conda
.sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10
2.1.x
release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect.2.1-python3.10
.2.x.y
release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect.2-python3.10
2.x.y
release without specifying a Python version, we update 2-latest
to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.1.1-python3.10
.Choose image versions carefully
It's a good practice to use Docker images with specific Prefect versions in production.
Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.11
or prefecthq/prefect:2-latest
).
Standard Python images are based on the official Python slim
images, e.g. python:3.10-slim
.
Conda flavored images are based on continuumio/miniconda3
. Prefect is installed into a conda environment named prefect
.
If your flow relies on dependencies not found in the default prefecthq/prefect
images, you may want to build your own image. You can either base it off of one of the provided prefecthq/prefect
images, or build your own image. See the Work pool deployment guide for discussion of how Prefect can help you build custom images with dependencies specified in a requirements.txt
file.
By default, Prefect work pools that use containers refer to the 2-latest
image. You can specify another image at work pool creation. The work pool image choice can be overridden in individual deployments.
prefecthq/prefect
image manually","text":"Here we provide an example Dockerfile
for building an image based on prefecthq/prefect:2-latest
, but with scikit-learn
installed.
FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#choosing-an-image-strategy","title":"Choosing an image strategy","text":"The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:
If your flow only makes use of tasks defined in the same file as the flow, or tasks that are part of prefect
itself, then you can rely on the default provided prefecthq/prefect
image.
If your flow requires a few extra dependencies found on PyPI, you can use the default prefecthq/prefect
image and set prefect.deployments.steps.pip_install_requirements:
in the pull
step to install these dependencies at runtime.
If the installation process requires compiling code or other expensive operations, you may be better off building a custom image instead.
If your flow (or flows) require extra dependencies or shared libraries, we recommend building a shared custom image with all the extra dependencies and shared task definitions you need. Your flows can then all rely on the same image, but have their source stored externally. This option can ease development, as the shared image only needs to be rebuilt when dependencies change, not when the flow source changes.
We only served a single flow in this guide, but you can extend this setup to serve multiple flows in a single Docker image by updating your Python script to using flow.to_deployment
and serve
to serve multiple flows or the same flow with different configuration.
To learn more about deploying flows, check out the Deployments concept doc!
For advanced infrastructure requirements, such as executing each flow run within its own dedicated Docker container, learn more in the Work pool deployment guide.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/global-concurrency-limits/","title":"Global Concurrency Limits and Rate Limits","text":"Global concurrency limits allow you to manage execution efficiently, controlling how many tasks, flows, or other operations can run simultaneously. They are ideal when optimizing resource usage, preventing bottlenecks, and customizing task execution are priorities.
Clarification on use of the term 'tasks'
In the context of global concurrency and rate limits, \"tasks\" refers not specifically to Prefect tasks, but to concurrent units of work in general, such as those managed by an event loop or TaskGroup
in asynchronous programming. These general \"tasks\" could include Prefect tasks when they are part of an asynchronous execution environment.
Rate Limits ensure system stability by governing the frequency of requests or operations. They are suitable for preventing overuse, ensuring fairness, and handling errors gracefully.
When selecting between Concurrency and Rate Limits, consider your primary goal. Choose Concurrency Limits for resource optimization and task management. Choose Rate Limits to maintain system stability and fair access to services.
The core difference between a rate limit and a concurrency limit is the way in which slots are released. With a rate limit, slots are released at a controlled rate, controlled by slot_decay_per_second
whereas with a concurrency limit, slots are released when the concurrency manager is exited.
You can create, read, edit, and delete concurrency limits via the Prefect UI.
When creating a concurrency limit, you can specify the following parameters:
/
, %
, &
, >
, <
, are not allowed.rate_limit
function.Global concurrency limits can be in either an active
or inactive
state.
Global concurrency limits can be configured with slot decay. This is used when the concurrency limit is used as a rate limit, and it governs the pace at which slots are released or become available for reuse after being occupied. These slots effectively represent the concurrency capacity within a specific concurrency limit. The concept is best understood as the rate at which these slots \"decay\" or refresh.
To configure slot decay, you can set the slot_decay_per_second
parameter when defining or adjusting a concurrency limit.
For practical use, consider the following:
Higher values: Setting slot_decay_per_second
to a higher value, such as 5.0, results in slots becoming available relatively quickly. In this scenario, a slot that was occupied by a task will free up after just 0.2
(1.0 / 5.0
) seconds.
Lower values: Conversely, setting slot_decay_per_second
to a lower value, like 0.1, causes slots to become available more slowly. In this scenario it would take 10
(1.0 / 0.1
) seconds for a slot to become available again after occupancy
Slot decay provides fine-grained control over the availability of slots, enabling you to optimize the rate of your workflow based on your specific requirements.
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-the-concurrency-context-manager","title":"Using theconcurrency
context manager","text":"The concurrency
context manager allows control over the maximum number of concurrent operations. You can select either the synchronous (sync
) or asynchronous (async
) version, depending on your use case. Here's how to use it:
Concurrency limits are implicitly created
When using the concurrency
context manager, the concurrency limit you use will be created, in an inactive state, if it does not already exist.
Sync
from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\n\n\n@task\ndef process_data(x, y):\n with concurrency(\"database\", occupy=1):\n return x + y\n\n\n@flow\ndef my_flow():\n for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n my_flow()\n
Async
import asyncio\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import concurrency\n\n\n@task\nasync def process_data(x, y):\n async with concurrency(\"database\", occupy=1):\n return x + y\n\n\n@flow\nasync def my_flow():\n for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n await process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n asyncio.run(my_flow())\n
prefect.concurrency.sync
module for sync usage and the prefect.concurrency.asyncio
module for async usage.process_data
task, taking x
and y
as input arguments. Inside this task, the concurrency context manager controls concurrency, using the database
concurrency limit and occupying one slot. If another task attempts to run with the same limit and no slots are available, that task will be blocked until a slot becomes available.my_flow
is defined. Within this flow, it iterates through a list of tuples, each containing pairs of x and y values. For each pair, the process_data
task is submitted with the corresponding x and y values for processing.rate_limit
","text":"The Rate Limit feature provides control over the frequency of requests or operations, ensuring responsible usage and system stability. Depending on your requirements, you can utilize rate_limit
to govern both synchronous (sync) and asynchronous (async) operations. Here's how to make the most of it:
Slot decay
When using the rate_limit
function the concurrency limit you use must have a slot decay configured.
Sync
from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef make_http_request():\n rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n@flow\ndef my_flow():\n for _ in range(10):\n make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n my_flow()\n
Async
import asyncio\n\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import rate_limit\n\n\n@task\nasync def make_http_request():\n await rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n@flow\nasync def my_flow():\n for _ in range(10):\n await make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n asyncio.run(my_flow())\n
rate_limit
function. Use the prefect.concurrency.sync
module for sync usage and the prefect.concurrency.asyncio
module for async usage.make_http_request
task. Inside this task, the rate_limit
function is used to ensure that the requests are made at a controlled pace.my_flow
is defined. Within this flow the make_http_request
task is submitted 10 times.concurrency
and rate_limit
outside of a flow","text":"concurreny
and rate_limit
can be used outside of a flow to control concurrency and rate limits for any operation.
import asyncio\n\nfrom prefect.concurrency.asyncio import rate_limit\n\n\nasync def main():\n for _ in range(10):\n await rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#use-cases","title":"Use cases","text":"","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#throttling-task-submission","title":"Throttling task submission","text":"Throttling task submission to avoid overloading resources, to comply with external rate limits, or ensure a steady, controlled flow of work.
In this scenario the rate_limit
function is used to throttle the submission of tasks. The rate limit acts as a bottleneck, ensuring that tasks are submitted at a controlled rate, governed by the slot_decay_per_second
setting on the associated concurrency limit.
from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef my_task(i):\n return i\n\n\n@flow\ndef my_flow():\n for _ in range(100):\n rate_limit(\"slow-my-flow\", occupy=1)\n my_task.submit(1)\n\n\nif __name__ == \"__main__\":\n my_flow()\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-database-connections","title":"Managing database connections","text":"Managing the maximum number of concurrent database connections to avoid exhausting database resources.
In this scenario we've setup a concurrency limit named database
and given it a maximum concurrency limit that matches the maximum number of database connections we want to allow. We then use the concurrency
context manager to control the number of database connections allowed at any one time.
from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\nimport psycopg2\n\n@task\ndef database_query(query):\n    # Here we request a single slot on the 'database' concurrency limit. This\n    # will block in the case that all of the database connections are in use\n    # ensuring that we never exceed the maximum number of database connections.\n    with concurrency("database", occupy=1):\n        connection = psycopg2.connect("<connection_string>")\n        cursor = connection.cursor()\n        cursor.execute(query)\n        result = cursor.fetchall()\n        connection.close()\n        return result\n\n@flow\ndef my_flow():\n    queries = ["SELECT * FROM table1", "SELECT * FROM table2", "SELECT * FROM table3"]\n\n    for query in queries:\n        database_query.submit(query)\n\nif __name__ == "__main__":\n    my_flow()\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#parallel-data-processing","title":"Parallel data processing","text":"Limiting the maximum number of parallel processing tasks.
In this scenario we want to limit the number of process_data
tasks to five at any one time. We do this by using the concurrency
context manager to request five slots on the data-processing
concurrency limit. This will block until five slots are free and then submit five more tasks, ensuring that we never exceed the maximum number of parallel processing tasks.
import asyncio\nfrom prefect.concurrency.asyncio import concurrency\n\n\nasync def process_data(data):\n    print(f"Processing: {data}")\n    await asyncio.sleep(1)\n    return f"Processed: {data}"\n\n\nasync def main():\n    data_items = list(range(100))\n    processed_data = []\n\n    while data_items:\n        # Request five slots at a time; blocks until all five are free\n        async with concurrency("data-processing", occupy=5):\n            chunk = [data_items.pop() for _ in range(5)]\n            processed_data += await asyncio.gather(\n                *[process_data(item) for item in chunk]\n            )\n\n    print(processed_data)\n\n\nif __name__ == "__main__":\n    asyncio.run(main())\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/host/","title":"Hosting a Prefect server instance","text":"After you install Prefect you have a Python SDK client that can communicate with Prefect Cloud, the platform hosted by Prefect. You also have an API server instance backed by a database and a UI.
In this section you'll learn how to host your own Prefect server instance. If you would like to host a Prefect server instance on Kubernetes, check out the prefect-server Helm chart.
Spin up a local Prefect server UI by running the prefect server start
CLI command in the terminal:
prefect server start\n
Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.
Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#differences-between-a-self-hosted-prefect-server-instance-and-prefect-cloud","title":"Differences between a self-hosted Prefect server instance and Prefect Cloud","text":"
A self-hosted Prefect server instance and Prefect Cloud share a common set of features. Prefect Cloud includes the following additional features:
You can read more about Prefect Cloud in the Cloud section.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configuring-a-prefect-server-instance","title":"Configuring a Prefect server instance","text":"Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:
prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n
PREFECT_API_URL
required when running Prefect inside a container
You must set the API server address to use Prefect within a container, such as a Docker container.
You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint will be be at that address.
See Profiles & Configuration for more information on profiles and configurable Prefect settings.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#prefect-database","title":"Prefect database","text":"The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:
Currently Prefect supports the following databases:
pg_trgm
extension, so it must be installed and enabled.A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db
by default.
To reset your database, run the CLI command:
prefect server database reset -y\n
This command will clear all data and reapply the schema.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#database-settings","title":"Database settings","text":"Prefect provides several settings for configuring the database. Here are the default settings:
PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n
You can save a setting to your active Prefect profile with prefect config set
.
To connect Prefect to a PostgreSQL database, you can set the following environment variable:
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
The above environment variable assumes that:
postgres
yourTopSecretPassword
localhost
5432
prefect
To quickly start a PostgreSQL instance that can be used as your Prefect database, use the following command, which will start a Docker container running PostgreSQL:
docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n
The above command:
postgres
Docker image, which is compatible with Prefect.prefect-postgres
.prefect
with a user postgres
and yourTopSecretPassword
password.prefectdb
to provide persistence if you ever have to restart or rebuild that container.Then you'll want to run the command above to set your current Prefect Profile to the PostgreSQL database instance running in your Docker container.
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#confirming-your-postgresql-database-configuration","title":"Confirming your PostgreSQL database configuration","text":"Inspect your Prefect profile to confirm that the environment variable has been set properly:
prefect config view --show-sources\n
You should see output similar to the following:\n\nPREFECT_PROFILE='my_profile'\nPREFECT_API_DATABASE_CONNECTION_URL='********' (from profile)\nPREFECT_API_URL='http://127.0.0.1:4200/api' (from profile)\n
Start the Prefect server and it should begin to use your PostgreSQL database instance:
prefect server start\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#in-memory-database","title":"In-memory database","text":"One of the benefits of SQLite is in-memory database support.
To use an in-memory SQLite database, set the following environment variable:
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n
Use SQLite database for testing only
SQLite is only supported by Prefect for testing purposes and is not compatible with multiprocessing.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#migrations","title":"Migrations","text":"Prefect uses Alembic to manage database migrations. Alembic is a database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.
To apply migrations to your database you can run the following commands:
To upgrade:
prefect server database upgrade -y\n
To downgrade:
prefect server database downgrade -y\n
You can use the -r
flag to specify a specific migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version you can run:
prefect server database downgrade -y -r -1\n
or to downgrade to a specific revision:
prefect server database downgrade -y -r d20618ce678e\n
To downgrade all migrations, use the base
revision.
See the contributing docs for information on how to create new database migrations.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#notifications","title":"Notifications","text":"When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.
Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs changes state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.
Prefect supports sending notifications via:
Notifications in Prefect Cloud
Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work pool status.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-notifications","title":"Configure notifications","text":"To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.
Notifications are structured just as you would describe them to someone. You can choose:
For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.
For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.
For example, to get a Slack message if a flow with a daily-etl
tag fails, the notification will read:
If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook
When the conditions of the notification are triggered, you\u2019ll receive a message:
The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.
On the Notifications page you can pause, edit, or delete any configured notification.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/logs/","title":"Logging","text":"Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.
Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start
.
You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.
Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-overview","title":"Logging overview","text":"Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.
For example, say you created a simple flow in a file flow.py
. If you create a local flow run with python flow.py
, you'll see an example of the log messages created automatically by Prefect:
16:45:44.534 | INFO | prefect.engine - Created flow run 'gray-dingo' for flow\n'hello-flow'\n16:45:44.534 | INFO | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0'\nfor task 'hello-task'\nHello world!\n16:45:44.650 | INFO | Task run 'hello-task-54135dc1-0' - Finished in state\nCompleted(None)\n16:45:44.672 | INFO | Flow run 'gray-dingo' - Finished in state\nCompleted('All states completed.')\n
You can see logs for a flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.
These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.
Prefect supports the standard Python logging levels CRITICAL
, ERROR
, WARNING
, INFO
, and DEBUG
. By default, Prefect displays INFO
-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.
Prefect provides several settings for configuring logging level and loggers.
By default, Prefect displays INFO
-level and above logging records. You may change this level to DEBUG
and DEBUG
-level logs created by Prefect will be shown as well. You may need to change the log level used by loggers from other libraries to see their log records.
You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY]
, with [PATH]_[TO]_[KEY]
corresponding to the nested address of any setting.
For example, to change the default logging levels for Prefect to DEBUG
, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\"
.
You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING
level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL
.
You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR
to have only ERROR
logs reported to the Prefect API. The console handlers will still default to level INFO
.
There is a logging.yml
file packaged with Prefect that defines the default logging configuration.
You can customize logging configuration by creating your own version of logging.yml
with custom settings, by either creating the file at the default location (/.prefect/logging.yml
) or by specifying the path to the file with PREFECT_LOGGING_SETTINGS_PATH
. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)
See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml
.
To access the Prefect logger, import from prefect import get_run_logger
. You can send messages to the logger in both flows and tasks.
To log from a flow, retrieve a logger instance with get_run_logger()
, then call the standard Python logging methods.
from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n logger = get_run_logger()\n logger.info(\"INFO level log message.\")\n
Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.
15:35:17.304 | INFO | Flow run 'mottled-marten' - INFO level log message.\n
The default flow run log formatter uses the flow run name for log messages.
Note
Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore` or have the logger raise an error by setting the value to `error`.\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-tasks","title":"Logging in tasks","text":"Logging in tasks works much as logging in flows: retrieve a logger instance with get_run_logger()
, then call the standard Python logging methods.
from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n logger = get_run_logger()\n logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n logger_task()\n
Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.
15:33:47.179 | INFO | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n
The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-print-statements","title":"Logging print statements","text":"Prefect provides the log_prints
option to enable the logging of print
statements at the task or flow level. When log_prints=True
for a given task or flow, the Python builtin print
will be patched to redirect to the Prefect logger for the scope of that task or flow.
By default, tasks and subflows will inherit the log_prints
setting from their parent flow, unless opted out with their own explicit log_prints
setting.
from prefect import task, flow\n\n@task\ndef my_task():\n print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n print(\"we're logging print statements from a flow\")\n my_task()\n
Running this flow will output:
15:52:11.244 | INFO | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
from prefect import task, flow\n\n@task(log_prints=False)\ndef my_task():\n    print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n
Using log_prints=False
at the task level will output:
15:52:11.244 | INFO | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n
You can also configure this behavior globally for all Prefect flows, tasks, and subflows.
prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#formatters","title":"Formatters","text":"Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml
. For example, the default formatting for task run log records is:
\"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n
The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.
The flow run logger has the following:
flow_run_name
flow_run_id
flow_name
The task run logger has the following:
task_run_id
flow_run_id
task_run_name
task_name
flow_run_name
flow_name
You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml
file as described earlier. For example, to change the formatting for the flow runs formatter:
PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n
The resulting messages, using the flow run ID instead of name, would look like this:
10:40:01.211 | INFO | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run\n'othertask-1c085beb-3' for task 'othertask'\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#styles","title":"Styles","text":"By default, Prefect highlights specific keywords in the console logs with a variety of colors.
Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS
setting, e.g.
PREFECT_LOGGING_COLORS=False\n
You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml
file. The following are the specific keys built into the PrefectConsoleHighlighter
.
URLs:
log.web_url
log.local_url
Log levels:
log.info_level
log.warning_level
log.error_level
log.critical_level
State types:
log.pending_state
log.running_state
log.scheduled_state
log.completed_state
log.cancelled_state
log.failed_state
log.crashed_state
Flow (run) names:
log.flow_run_name
log.flow_name
Task (run) names:
log.task_run_name
log.task_name
You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:
Save the following code in a file named my_package_or_module.py
(rename as needed) in the same directory as the flow run script, or ideally as part of a Python package so it's available in site-packages
to be accessed anywhere within your environment.import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n base_style = \"log.\"\n highlights = PrefectConsoleHighlighter.highlights + [\n # ?P<email> is naming this expression as `email`\n r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n def __init__(\n self,\n highlighter: Highlighter = CustomConsoleHighlighter,\n styles: Dict[str, str] = None,\n level: Union[int, str] = logging.NOTSET,\n ):\n super().__init__(highlighter=highlighter, styles=styles, level=level)\n
Then update your ~/.prefect/logging.yml
to use my_package_or_module.CustomConsoleHandler
and additionally reference the base_style and named expression: log.email
. console_flow_runs:\n level: 0\n class: my_package_or_module.CustomConsoleHandler\n formatter: flow_runs\n styles:\n log.email: magenta\n # other styles can be appended here, e.g.\n # log.completed_state: green\n
On the next flow run, text matching the new pattern is highlighted; for example, my@email.com
is colored in magenta here.from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n logger = get_run_logger()\n logger.info(\"my@email.com\")\n\nlog_email_flow()\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP
.
PREFECT_LOGGING_MARKUP=True\n
Then, the following will highlight \"fancy\" in red.
from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n logger = get_run_logger()\n logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n
Inaccurate logs could result
Although this can be convenient, the downside is that, if enabled, strings containing square brackets may be misinterpreted and lead to incomplete output. For example, DROP TABLE [dbo].[SomeTable];
outputs DROP TABLE .[SomeTable];
.
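One way to keep such strings intact, sketched here on the assumption that Rich's backslash escape syntax is honored by Prefect's console handler, is to escape the opening brackets:
from prefect import flow, get_run_logger\n\n@flow\ndef sql_log_flow():\n    logger = get_run_logger()\n    # Escape literal opening brackets so Rich does not parse them as markup\n    # (assumes Rich's \\[ escape syntax applies when PREFECT_LOGGING_MARKUP=True).\n    logger.info(r\"DROP TABLE \\[dbo].\\[SomeTable];\")\n\nif __name__ == \"__main__\":\n    sql_log_flow()\n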
Logged events are also persisted to the Prefect database. A log record includes the following data:
id: Primary key ID of the log record. created: Timestamp specifying when the record was created. updated: Timestamp specifying when the record was updated. name: String specifying the name of the logger. level: Integer representation of the logging level. flow_run_id: ID of the flow run associated with the log record; if the log record is for a task run, this is the parent flow of the task. task_run_id: ID of the task run associated with the log record; null if logging a flow run event. message: The log message. timestamp: The client-side timestamp of this logged statement. For more information, see Log schema in the API documentation.
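As a hedged sketch, persisted records can be read back with the Python client; the filter class names below are assumptions based on the prefect.client.schemas.filters module:
import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import LogFilter, LogFilterFlowRunId\n\nasync def fetch_flow_run_logs(flow_run_id):\n    async with get_client() as client:\n        # Query the persisted log records for a single flow run.\n        logs = await client.read_logs(\n            log_filter=LogFilter(flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]))\n        )\n        for log in logs:\n            print(log.timestamp, log.level, log.message)\n\n# asyncio.run(fetch_flow_run_logs(\"<your-flow-run-id>\"))\n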
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS
setting.
To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:
PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n
You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.
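For example, a minimal sketch, assuming PREFECT_LOGGING_EXTRA_LOGGERS=dask is set in the environment before the flow runs:
import logging\nfrom prefect import flow\n\n@flow\ndef flow_with_library_logs():\n    # With PREFECT_LOGGING_EXTRA_LOGGERS=dask set, records emitted by the\n    # \"dask\" logger are attached to this flow run's logs.\n    logging.getLogger(\"dask\").warning(\"This library log is captured by Prefect.\")\n\nif __name__ == \"__main__\":\n    flow_with_library_logs()\n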
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/managed-execution/","title":"Managed Execution","text":"Prefect Cloud can run your flows on your behalf with Prefect Managed work pools. Flows run with this work pool do not require a worker or cloud provider account. Prefect handles the infrastructure and code execution for you.
Managed execution is a great option for users who want to get started quickly, with no infrastructure setup.
Managed Execution is in beta
Managed Execution is currently in beta. Features are likely to change without warning.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-guide","title":"Usage guide","text":"Run a flow with managed infrastructure in three steps.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-1","title":"Step 1","text":"Create a new work pool of type Prefect Managed in the UI or the CLI. Here's the command to create a new work pool using the CLI:
prefect work-pool create my-managed-pool --type prefect:managed\n
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-2","title":"Step 2","text":"Create a deployment using the flow deploy
method or prefect.yaml
.
Specify the name of your managed work pool, as shown in this example that uses the deploy
method:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/desertaxle/demo.git\",\n entrypoint=\"flow.py:my_flow\",\n ).deploy(\n name=\"test-managed-flow\",\n work_pool_name=\"my-managed-pool\",\n )\n
With your CLI authenticated to your Prefect Cloud workspace, run the script to create your deployment:
python managed-execution.py\n
Note that this deployment uses flow code stored in a GitHub repository.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-3","title":"Step 3","text":"Run the deployment from the UI or from the CLI.
That's it! You ran a flow on remote infrastructure without any infrastructure setup, starting a worker, or needing a cloud provider account.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#adding-dependencies","title":"Adding dependencies","text":"Prefect can install Python packages in the container that runs your flow at runtime. You can specify these dependencies in the Pip Packages field in the UI, or by configuring job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}
in your deployment creation like this:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/desertaxle/demo.git\",\n entrypoint=\"flow.py:my_flow\",\n ).deploy(\n name=\"test-managed-flow\",\n work_pool_name=\"my-managed-pool\",\n job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}\n )\n
Alternatively, you can create a requirements.txt
file and reference it in your prefect.yaml
pull_step
.
Managed execution requires Prefect 2.14.4 or newer.
All limitations listed below may change without warning during the beta period. We will update this page as we make changes.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#concurrency-work-pools","title":"Concurrency & work pools","text":"Free tier accounts are limited to:
prefect:managed
pools.Pro tier and above accounts are limited to:
prefect:managed
pools.At this time, managed execution requires that you run the official Prefect Docker image: prefecthq/prefect:2-latest
. However, as noted above, you can install Python package dependencies at runtime. If you need to use your own image, we recommend using another type of work pool.
Flow code must be stored in an accessible remote location. This means git-based cloud providers such as GitHub, Bitbucket, or GitLab are supported. Remote block-based storage is also supported, so S3, GCS, and Azure Blob are additional code storage options.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#resources","title":"Resources","text":"Memory is limited to 2GB of RAM, which includes all operations such as dependency installation. Maximum job run time is 24 hours.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-limits","title":"Usage limits","text":"Free tier accounts are limited to ten compute hours per workspace per month. Pro tier and above accounts are limited to 250 hours per workspace per month. You can view your compute hours quota usage on the Work Pools page in the UI.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#next-steps","title":"Next steps","text":"Read more about creating deployments in the deployment guide.
If you find that you need more control over your infrastructure, such as the ability to run custom Docker images, serverless push work pools might be a good option. Read more here.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"Prefect 2 still:
Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:
Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:
Parameter
tasks: in Prefect 2, inputs to your flow function are automatically treated as parameters of your flow. You can define the parameter values in your flow code when you create your Deployment
, or when you schedule an ad-hoc flow run. One benefit of Prefect parametrization is built-in type validation with pydantic.state_handlers
: in Prefect 2, you can build custom logic that reacts to task-run states within your flow function without the need for state_handlers
. The page \" How to take action on a state change of a task run\" provides a further explanation and code examples.signals
, Prefect 2 allows you to raise an arbitrary exception in your task or flow and return a custom state. For more details and examples, see How can I stop the task run based on a custom logic.case
are no longer required. Use Python native if...else
statements to build a conditional logic. The Discourse tag \"conditional-logic\" provides more resources.resource_manager
is no longer necessary. As long as you point to your flow script in your Deployment
, you can share database connections and any other resources between tasks in your flow. The Discourse page How to clean up resources used in a flow provides a full example.The changes listed below require you to modify your workflow code. The following table shows how Prefect 1 concepts have been implemented in Prefect 2. The last column contains references to additional resources that provide more details and examples.
Concept Prefect 1 Prefect 2 Reference links Flow definition.with Flow(\"flow_name\") as flow:
@flow(name=\"flow_name\")
How can I define a flow? Flow executor that determines how to execute your task runs. Executor such as LocalExecutor
. Task runner such as ConcurrentTaskRunner
. What is the default TaskRunner (executor)? Configuration that determines how and where to execute your flow runs. Run configuration such as flow.run_config = DockerRun()
. Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. How can I run my flow in a Docker container? Assignment of schedules and default parameter values. Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. Schedules and default parameters are assigned to a flow\u2019s Deployment
, rather than to a Flow object. How can I attach a schedule to a flow? Retries @task(max_retries=2, retry_delay=timedelta(seconds=5))
@task(retries=2, retry_delay_seconds=5)
How can I specify the retry behavior for a specific task? Logger syntax. Logger is retrieved from prefect.context
and can only be used within tasks. In Prefect 2, you can log not only from tasks, but also within flows. To get the logger object, use: prefect.get_run_logger()
. How can I add logs to my flow? The syntax and contents of Prefect context. Context is a thread-safe way of accessing variables related to the flow run and task run. The syntax to retrieve it: prefect.context
. Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs. The syntax to retrieve it: prefect.context.get_run_context()
. How to access Prefect context values? Task library. Included in the main Prefect Core repository. Separated into individual repositories per system, cloud provider, or technology. How to migrate Prefect 1 tasks to Prefect 2 integrations.","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.
Completed
, while in Prefect 1, this flow run has a Success
state. You can find more about that topic here.To deploy your Prefect 1 flows, you have to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:
DockerContainer
, KubernetesJob
, or ECSTask
).Interval
, Cron
, or RRule
schedule).parameters
, flow deployment name
, and more).default
is used.The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.
In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and\u00a0tags. This page provides more details on how to assign tags to various Prefect 2 objects.
The role of agents has changed:
The following new components and capabilities are enabled by Prefect 2.
async
support.pydantic
validation.subflows
concept: Prefect 1 only allowed the flow-of-flows orchestrator pattern. With Prefect 2 subflows, you gain a natural and intuitive way of organizing your flows into modular sub-components. For more details, see the following list of resources about subflows.Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.
Every time you run a flow, whether it is tracked by the API server or ad-hoc through a Python script, it is on the same UI page for easier debugging and observability.
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"With Prefect 2, your functions\u00a0are\u00a0your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow
decorator to your main function to transform any Python script into a Prefect workflow.
The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see all your locally executed flow runs in the UI. You don't even need to start an agent.
Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.
Prefect 2 eliminates ambiguities in many ways. For example. there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open source product. This product is also much easier to deploy with no requirement for Docker or docker-compose.
If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.
In Prefect 1, there are several confusing ways you could implement caching
. Prefect 2 resolves those ambiguities by providing a single cache_key_fn
function paired with cache_expiration
, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for
, cache_validator
, or file-based caching using targets
.
For more details on how to configure caching, check out the following resources:
A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks
keyword argument or the imperative methods such as task.set_upstream()
, task.set_downstream()
, or flow.set_dependencies()
. In Prefect 2, there is only the functional API.
We know migrations can be tough. We encourage you to take it step-by-step and experiment with the new features.
To make the migration process easier for you:
Happy Engineering!
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/moving-data/","title":"Read and Write Data to and from Cloud Provider Storage","text":"Writing data to cloud-based storage and reading data from that storage is a common task in data engineering. In this guide we'll learn how to use Prefect to move data to and from AWS, Azure, and GCP blob storage.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#prerequisites","title":"Prerequisites","text":"In the CLI, install the Prefect integration library for your cloud provider:
AWSAzureGCPprefect-aws provides blocks for interacting with AWS services.
pip install -U prefect-aws\n
prefect-azure provides blocks for interacting with Azure services.
pip install -U prefect-azure\n
prefect-gcp provides blocks for interacting with GCP services.
pip install -U prefect-gcp\n
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#register-the-block-types","title":"Register the block types","text":"Register the new block types with Prefect Cloud (or with your self-hosted Prefect server instance):
AWSAzureGCP
prefect block register -m prefect_aws \n
prefect block register -m prefect_azure \n
prefect block register -m prefect_gcp\n
We should see a message in the CLI that several block types were registered. If we check the UI, we should see the new block types listed.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-bucket","title":"Create a storage bucket","text":"Create a storage bucket in the cloud provider account. Ensure the bucket is publicly accessible or create a user or service account with the appropriate permissions to fetch and write data to the bucket.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-credentials-block","title":"Create a credentials block","text":"If the bucket is private, there are several options to authenticate:
If saving credential details in a block we can use a credentials block specific to the cloud provider or use a more generic secret block. We can create blocks via the UI or Python code. Below we'll use Python code to create a credentials block for our cloud provider.
Credentials safety
Reminder, don't store credential values in public locations such as public git platform repositories. In the examples below we use environment variables to store credential values.
AWSAzureGCPimport os\nfrom prefect_aws import AwsCredentials\n\nmy_aws_creds = AwsCredentials(\n aws_access_key_id=\"123abc\",\n aws_secret_access_key=os.environ.get(\"MY_AWS_SECRET_ACCESS_KEY\"),\n)\nmy_aws_creds.save(name=\"my-aws-creds-block\", overwrite=True)\n
import os\nfrom prefect_azure import AzureBlobStorageCredentials\n\nmy_azure_creds = AzureBlobStorageCredentials(\n connection_string=os.environ.get(\"MY_AZURE_CONNECTION_STRING\"),\n)\nmy_azure_creds.save(name=\"my-azure-creds-block\", overwrite=True)\n
We recommend specifying the service account key file contents as a string, rather than the path to the file, because that file might not be available in your production environments.
import os\nfrom prefect_gcp import GCPCredentials\n\nmy_gcp_creds = GCPCredentials(\n service_account_info=os.environ.get(\"GCP_SERVICE_ACCOUNT_KEY_FILE_CONTENTS\"), \n)\nmy_gcp_creds.save(name=\"my-gcp-creds-block\", overwrite=True)\n
Run the code to create the block. We should see a message that the block was created.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-block","title":"Create a storage block","text":"Let's create a block for the chosen cloud provider using Python code or the UI. In this example we'll use Python code.
AWSAzureGCPNote that the S3Bucket
block is not the same as the S3
block that ships with Prefect. The S3Bucket
block we use in this example is part of the prefect-aws
library and provides additional functionality.
We'll reference the credentials block created above.
from prefect_aws import S3Bucket\n\ns3bucket = S3Bucket.create(\n bucket=\"my-bucket-name\",\n credentials=\"my-aws-creds-block\"\n )\ns3bucket.save(name=\"my-s3-bucket-block\", overwrite=True)\n
Note that the AzureBlobStorageCredentials
block is not the same as the Azure block that ships with Prefect. The AzureBlobStorageCredentials
block we use in this example is part of the prefect-azure
library and provides additional functionality.
Azure blob storage doesn't require a separate block, the connection string used in the AzureBlobStorageCredentials
block can encode the information needed.
Note that the GcsBucket
block is not the same as the GCS
block that ships with Prefect. The GcsBucket
block is part of the prefect-gcp
library and provides additional functionality. We'll use it here.
We'll reference the credentials block created above.
from prefect_gcp.cloud_storage import GcsBucket\n\ngcsbucket = GcsBucket(\n bucket=\"my-bucket-name\", \n credentials=\"my-gcp-creds-block\"\n )\ngcsbucket.save(name=\"my-gcs-bucket-block\", overwrite=True)\n
Run the code to create the block. We should see a message that the block was created.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#write-data","title":"Write data","text":"Use your new block inside a flow to write data to your cloud provider.
AWSAzureGCPfrom pathlib import Path\nfrom prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\n@flow()\ndef upload_to_s3():\n \"\"\"Flow function to upload data\"\"\"\n path = Path(\"my_path_to/my_file.parquet\")\n aws_block = S3Bucket.load(\"my-s3-bucket-block\")\n aws_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n upload_to_s3()\n
from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef upload_to_azure():\n \"\"\"Flow function to upload data\"\"\"\n blob_storage_credentials = AzureBlobStorageCredentials.load(\n name=\"my-azure-creds-block\"\n )\n\n with open(\"my_path_to/my_file.parquet\", \"rb\") as f:\n blob_storage_upload(\n data=f.read(),\n container=\"my_container\",\n blob=\"my_path_to/my_file.parquet\",\n blob_storage_credentials=blob_storage_credentials,\n )\n\nif __name__ == \"__main__\":\n upload_to_azure()\n
from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow()\ndef upload_to_gcs():\n \"\"\"Flow function to upload data\"\"\"\n path = Path(\"my_path_to/my_file.parquet\")\n gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n gcs_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n upload_to_gcs()\n
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#read-data","title":"Read data","text":"Use your block to read data from your cloud provider inside a flow.
AWSAzureGCPfrom prefect import flow\nfrom prefect_aws import S3Bucket\n\n@flow\ndef download_from_s3():\n \"\"\"Flow function to download data\"\"\"\n s3_block = S3Bucket.load(\"my-s3-bucket-block\")\n s3_block.get_directory(\n from_path=\"my_path_to/my_file.parquet\", \n local_path=\"my_path_to/my_file.parquet\"\n )\n\nif __name__ == \"__main__\":\n download_from_s3()\n
from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef download_from_azure():\n \"\"\"Flow function to download data\"\"\"\n blob_storage_credentials = AzureBlobStorageCredentials.load(\n name=\"my-azure-creds-block\"\n )\n blob_storage_download(\n blob=\"my_path_to/my_file.parquet\",\n container=\"my_container\",\n blob_storage_credentials=blob_storage_credentials,\n )\n\nif __name__ == \"__main__\":\n download_from_azure()\n
from prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow\ndef download_from_gcs():\n gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n gcs_block.get_directory(\n from_path=\"my_path_to/my_file.parquet\", \n local_path=\"my_path_to/my_file.parquet\"\n )\n\nif __name__ == \"__main__\":\n download_from_gcs()\n
In this guide we've seen how to use Prefect to read data from and write data to cloud providers!
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#next-steps","title":"Next steps","text":"Check out the prefect-aws
, prefect-azure
, and prefect-gcp
docs to see additional methods for interacting with cloud storage providers. Each library also contains blocks for interacting with other cloud-provider services.
In this guide, we will configure a deployment that uses a work pool for dynamically provisioned infrastructure.
All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or on a remote environment and it will be tracked.
A deployment turns your workflow into an application that can be interacted with and managed via the Prefect API. A deployment enables you to:
Deployments created with .serve
A deployment created with the Python flow.serve
method or the serve
function runs flows in a subprocess on the same machine where the deployment is created. It does not use a work pool or worker.
A work pool-based deployment is useful when you want to dynamically scale the infrastructure where your flow code runs. Work pool-based deployments contain information about the infrastructure type and configuration for your workflow execution.
Work pool-based deployment infrastructure options include the following:
.serve
.The following diagram provides a high-level overview of the conceptual elements involved in defining a work-pool based deployment that is polled by a worker and executes a flow run based on that deployment.
%%{\n init: {\n 'theme': 'base',\n 'themeVariables': {\n 'fontSize': '19px'\n }\n }\n}%%\n\nflowchart LR\n F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n end\n subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n end\n subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n end\n\n A --> D\n D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Worker</div>\"):::red\n B -.-> E\n A -.-> B\n E -.-> G\n\n classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:black\n classDef yellow fill:gold,stroke:gold,stroke-width:4px,color:black\n classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n classDef green fill:green,stroke:green,stroke-width:4px,color:white\n classDef red fill:red,stroke:red,stroke-width:4px,color:white\n classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white
The work pool types above require a worker to be running on your infrastructure to poll a work pool for scheduled flow runs.
Additional work pool options available with Prefect Cloud
Prefect Cloud offers other flavors of work pools that don't require a worker:
Push Work Pools - serverless cloud options that don't require a worker because Prefect Cloud submits them to your serverless cloud infrastructure on your behalf. Prefect can auto-provision your cloud infrastructure for you and set it up to use your work pool.
Managed Execution Prefect Cloud submits and runs your deployment on serverless infrastructure. No cloud provider account required.
In this guide, we focus on deployments that require a worker.
Work pool-based deployments that use a worker also allow you to assign a work queue name to prioritize work and allow you to limit concurrent runs at the work pool level.
When creating a deployment that uses a work pool and worker, we must answer two basic questions:
The tutorial shows how you can create a deployment with a long-running process using .serve
and how to move to a work-pool-based deployment setup with .deploy
. See the discussion of when you might want to move to work-pool-based deployments there.
Next, we'll explore how to use .deploy
to create deployments with Python code. If you'd prefer to learn about using a YAML-based alternative for managing deployment configuration, skip to the later section on prefect.yaml
.
.deploy
","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-bake-your-code-into-a-docker-image","title":"Automatically bake your code into a Docker image","text":"You can create a deployment from Python code by calling the .deploy
method on a flow.
from prefect import flow\n\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_registry/my_image:my_image_tag\"\n )\n
Make sure you have the work pool created in the Prefect Cloud workspace you are authenticated to or on your running self-hosted server instance. Then run the script to create a deployment (in future examples this step will be omitted for brevity):
python buy.py\n
You should see messages in your terminal that Docker is building your image. When the deployment build succeeds you will see helpful information in your terminal showing you how to start a worker for your deployment and how to run your deployment. Your deployment will be visible on the Deployments
page in the UI.
By default, .deploy
will build a Docker image with your flow code baked into it and push the image to the Docker Hub registry specified in the image
argument`.
Authentication to Docker Hub
You need your environment to be authenticated to your Docker registry to push an image to it.
You can specify a registry other than Docker Hub by providing the full registry path in the image
argument.
Warning
If building a Docker image, the environment in which you are creating the deployment needs to have Docker installed and running.
To avoid pushing to a registry, set push=False
in the .deploy
method.
if __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_registry/my_image:my_image_tag\",\n push=False\n )\n
To avoid building an image, set build=False
in the .deploy
method.
if __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"discdiver/no-build-image:1.0\",\n build=False\n )\n
The specified image will need to be available in your deployment's execution environment for your flow code to be accessible.
Prefect generates a Dockerfile for you that will build an image based off of one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt
file.
If you want to use a custom Dockerfile, you can specify the path to the Dockerfile with the DeploymentImage
class:
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-custom-dockerfile-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=DeploymentImage(\n            name=\"my_image\",\n            tag=\"deploy-guide\",\n            dockerfile=\"Dockerfile\"\n        ),\n        push=False\n    )\n
The DeploymentImage
object allows for a great deal of image customization.
For example, you can install a private Python package from GCP's artifact registry like this:
Create a custom base Dockerfile.
FROM python:3.10\n\nARG AUTHED_ARTIFACT_REG_URL\nCOPY ./requirements.txt /requirements.txt\n\nRUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt\n
Create our deployment by leveraging the DeploymentImage class.
private-package.pyfrom prefect import flow\nfrom prefect.deployments.runner import DeploymentImage\nfrom prefect.blocks.system import Secret\nfrom my_private_package import do_something_cool\n\n\n@flow(log_prints=True)\ndef my_flow():\n do_something_cool()\n\n\nif __name__ == \"__main__\":\n artifact_reg_url: Secret = Secret.load(\"artifact-reg-url\")\n\n my_flow.deploy(\n name=\"my-deployment\",\n work_pool_name=\"k8s-demo\",\n image=DeploymentImage(\n name=\"my-image\",\n tag=\"test\",\n dockerfile=\"Dockerfile\",\n buildargs={\"AUTHED_ARTIFACT_REG_URL\": artifact_reg_url.get()},\n ),\n )\n
Note that we used a Prefect Secret block to load the URL configuration for the artifact registry above.
See all the optional keyword arguments for the DeploymentImage class here.
Default Docker namespace
You can set the PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE
setting to append a default Docker namespace to all images you build with .deploy
. This is great if you use a private registry to store your images.
To set a default Docker namespace for your current profile run:
prefect config set PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE=<docker-registry-url>/<organization-or-username>\n
Once set, you can omit the namespace from your image name when creating a deployment:
with_default_docker_namespace.pyif __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_image:my_image_tag\"\n )\n
The above code will build an image with the format <docker-registry-url>/<organization-or-username>/my_image:my_image_tag
when PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE
is set.
While baking code into Docker images is a popular deployment option, many teams decide to store their workflow code in git-based storage, such as GitHub, Bitbucket, or Gitlab. Let's see how to do that next.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-git-based-cloud-storage","title":"Store your code in git-based cloud storage","text":"If you don't specify an image
argument for .deploy
, then you need to specify where to pull the flow code from at runtime with the from_source
method.
Here's how we can pull our flow code from a GitHub repository.
git_storage.pyfrom prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n \"https://github.com/my_github_account/my_repo/my_file.git\",\n entrypoint=\"flows/no-image.py:hello_world\",\n ).deploy(\n name=\"no-image-deployment\",\n work_pool_name=\"my_pool\",\n build=False\n )\n
The entrypoint
is the path to the file the flow is located in and the function name, separated by a colon.
Alternatively, you could specify a git-based cloud storage URL for a Bitbucket or Gitlab repository.
Note
If you don't specify an image as part of your deployment creation, the image specified in the work pool will be used to run your flow.
After creating a deployment you might change your flow code. Generally, you can just push your code to GitHub, without rebuilding your deployment. The exception is if something that the server needs to know about changes, such as the flow entrypoint parameters. Rerunning the Python script with .deploy
will update your deployment on the server with the new flow code.
If you need to provide additional configuration, such as specifying a private repository, you can provide a GitRepository
object instead of a URL:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/private-repo.git\",\n branch=\"dev\",\n credentials={\n \"access_token\": Secret.load(\"github-access-token\")\n }\n ),\n entrypoint=\"flows/no-image.py:hello_world\",\n ).deploy(\n name=\"private-git-storage-deployment\",\n work_pool_name=\"my_pool\",\n build=False\n )\n
Note the use of the Secret block to load the GitHub access token. Alternatively, you could provide a username and password to the username
and password
fields of the credentials
argument.
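For reference, a hedged sketch of that username/password form (the Secret block name here is hypothetical):
from prefect.blocks.system import Secret\nfrom prefect.runner.storage import GitRepository\n\n# Hypothetical Secret block name; store the password in a block rather than in code.\nsource = GitRepository(\n    url=\"https://github.com/org/private-repo.git\",\n    credentials={\n        \"username\": \"my-username\",\n        \"password\": Secret.load(\"my-git-password\"),\n    },\n)\n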
Another option for flow code storage is any fsspec-supported storage location, such as AWS S3, GCP GCS, or Azure Blob Storage.
For example, you can pass the S3 bucket path to source
.
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"s3://my-bucket/my-folder\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n name=\"deployment-from-aws-flow\",\n work_pool=\"my_pool\",\n )\n
In the example above your credentials will be auto-discovered from your deployment creation environment and credentials will need to be available in your runtime environment.
If you need additional configuration for your cloud-based storage - for example, with a private S3 Bucket - we recommend using a storage block. A storage block also ensures your credentials will be available in both your deployment creation environment and your execution environment.
Here's an example that uses an S3Bucket
block from the prefect-aws library.
from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=S3Bucket.load(\"my-code-storage\"), entrypoint=\"my_file.py:my_flow\"\n    ).deploy(name=\"test-s3\", work_pool_name=\"my_pool\")\n
If you are familiar with the deployment creation mechanics with .serve
, you will notice that .deploy
is very similar. .deploy
just requires a work pool name and has a number of parameters dealing with flow-code storage for Docker images.
Unlike .serve
, if you don't specify an image to use for your flow, you must to specify where to pull the flow code from at runtime with the from_source
method, whereas from_source
is optional with .serve
.
.deploy
","text":"Our examples thus far have explored options for where to store flow code. Let's turn our attention to other deployment configuration options.
To pass parameters to your flow, you can use the parameters
argument in the .deploy
method. Just pass in a dictionary of key-value pairs.
from prefect import flow\n\n@flow\ndef hello_world(name: str):\n print(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n hello_world.deploy(\n name=\"pass-params-deployment\",\n work_pool_name=\"my_pool\",\n parameters=dict(name=\"Prefect\"),\n image=\"my_registry/my_image:my_image_tag\",\n )\n
The job_variables
parameter allows you to fine-tune the infrastructure settings for a deployment. The values passed in override default values in the specified work pool's base job template.
You can override environment variables, such as image_pull_policy
and image
, for a specific deployment with the job_variables
argument.
if __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-deployment-never-pull\", \n work_pool_name=\"my-docker-pool\", \n job_variables={\"image_pull_policy\": \"Never\"},\n image=\"my-image:my-tag\"\",\n push=False\n )\n
Similarly, you can override the environment variables specified in a work pool through the job_variables
parameter:
if __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-deployment-never-pull\", \n work_pool_name=\"my-docker-pool\", \n job_variables={\"env\": {\"EXTRA_PIP_PACKAGES\": \"boto3\"} },\n image=\"my-image:my-tag\"\",\n push=False\n )\n
The dictionary key \"EXTRA_PIP_PACKAGES\" denotes a special environment variable that Prefect will use to install additional Python packages at runtime. This approach is an alternative to building an image with a custom requirements.txt
copied into it.
For more information on overriding job variables see this guide.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-deploy","title":"Working with multiple deployments withdeploy
","text":"You can create multiple deployments from one or more Python files that use .deploy
. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.
To create multiple work pool-based deployments at once you can use the deploy
function, which is analogous to the serve
function.
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n deploy(\n buy.to_deployment(name=\"dev-deploy\", work_pool_name=\"my-dev-work-pool\"),\n buy.to_deployment(name=\"prod-deploy\", work_pool_name=\"my-prod-work-pool\"),\n image=\"my-registry/my-image:dev\",\n push=False,\n )\n
Note that in the example above we created two deployments from the same flow, but with different work pools. Alternatively, we could have created two deployments from different flows.
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities.\")\n\n@flow(log_prints=True)\ndef sell():\n print(\"Selling securities.\")\n\n\nif __name__ == \"__main__\":\n deploy(\n buy.to_deployment(name=\"buy-deploy\"),\n sell.to_deployment(name=\"sell-deploy\"),\n work_pool_name=\"my-dev-work-pool\"\n image=\"my-registry/my-image:dev\",\n push=False,\n )\n
In the example above the code for both flows gets baked into the same image.
We can specify that one or more flows should be pulled from a remote location at runtime by using the from_source
method. Here's an example of deploying two flows, one defined locally and one defined in a remote repository:
from prefect import deploy, flow\n\n\n@flow(log_prints=True)\ndef local_flow():\n print(\"I'm a flow!\")\n\nif __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n
You could pass any number of flows to the deploy
function. This behavior is useful if using a monorepo approach to your workflows.
The prefect.yaml
file is a YAML file describing base settings for your deployments, procedural steps for preparing deployments, and instructions for preparing the execution environment for a deployment run.
You can initialize your deployment configuration, which creates the prefect.yaml
file, by running the CLI command prefect init
in any directory or repository that stores your flow code.
Deployment configuration recipes
Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml
file; run prefect init
to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe
flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git
repository, Prefect will use the git
recipe).
The prefect.yaml
file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).
Any deployment configuration can be overridden via options available on the prefect deploy
CLI command when creating a deployment.
prefect.yaml
file flexibility
In older versions of Prefect, this file had to be in the root of your repository or project directory and named prefect.yaml
. Now this file can be located in a directory outside the project or a subdirectory inside the project. It can be named differently, provided the filename ends in .yaml
. You can even have multiple prefect.yaml
files with the same name in different directories. By default, prefect deploy
will use a prefect.yaml
file in the project's root directory. To use a custom deployment configuration file, supply the new --prefect-file
CLI argument when running the deploy
command from the root of your project directory:
prefect deploy --prefect-file path/to/my_file.yaml
The base structure for prefect.yaml
is as follows:
# generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n name: null\n version: null\n tags: []\n description: null\n schedule: null\n\n # flow-specific fields\n entrypoint: null\n parameters: {}\n\n # infra-specific fields\n work_pool:\n name: null\n work_queue_name: null\n job_variables: {}\n
The metadata fields are always pre-populated for you. These fields are for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.
You can create deployments via the CLI command prefect deploy
without ever needing to alter the deployments
section of your prefect.yaml
file \u2014 the prefect deploy
command will help in deployment creation via interactive prompts. The prefect.yaml
file facilitates version-controlling your deployment configuration and managing multiple deployments.
Deployment actions defined in your prefect.yaml
file control the lifecycle of the creation and execution of your deployments. The three actions available are build
, push
, and pull
. pull
is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.
Each action is defined as a list of steps that are executing in sequence.
Each step has the following format:
section:\n- prefect_package.path.to.importable.step:\n id: \"step-id\" # optional\n requires: \"pip-installable-package-spec\" # optional\n kwarg1: value\n kwarg2: more-values\n
Every step can optionally provide a requires
field that Prefect will use to auto-install in the event that the step cannot be found in the current environment. Each step can also specify an id
for the step which is used when referencing step outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order that they are provided within the prefect.yaml
file.
Deployment Instruction Overrides
build
, push
, and pull
sections can all be overridden on a per-deployment basis by defining build
, push
, and pull
fields within a deployment definition in the prefect.yaml
file.
The prefect deploy
command will use any build
, push
, or pull
instructions provided in a deployment's definition in the prefect.yaml
file.
This capability is useful with multiple deployments that require different deployment instructions.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-build-action","title":"The build action","text":"The build section of prefect.yaml
is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:
prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n
Use --field
to avoid the interactive experience
We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control. However, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field
flag:
prefect init --recipe docker \\\n --field image_name=my-repo/my-image \\\n --field tag=my-tag\n
build:\n- prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker>=0.3.0\n image_name: my-repo/my-image\n tag: my-tag\n dockerfile: auto\n push: true\n
Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the prefect-docker
package documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml
as template values. It is best practice to use {{ image }}
within prefect.yaml
(specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values.
Note
Note that in the build step example above, we relied on the prefect-docker
package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.
Pass output to downstream steps
Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script
step and feed the output into the build_docker_image
step:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Note that the id
field is used in the run_shell_script
step so that its output can be referenced in the next step.
The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed to and pulled from a cloud storage bucket (e.g., S3, GCS, Azure Blobs). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.
For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3
recipe:
prefect init --recipe s3\n>> bucket: < insert bucket name here >\n
Inspecting our newly created prefect.yaml
file we find that the push
and pull
sections have been templated out for us as follows:
push:\n- prefect_aws.deployments.steps.push_to_s3:\n id: push-code\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: project-name\n credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: \"{{ push-code.folder }}\"\n credentials: null\n
The bucket has been populated with our provided value (which also could have been provided with the --field
flag); note that the folder
property of the pull
step is templated - the push_to_s3
step outputs both a bucket
value as well as a folder
value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.
As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:
push:\n- prefect_aws.deployments.steps.push_to_s3:\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: project-name\n credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n
Anytime you run prefect deploy
, this push
section will be executed upon successful completion of your build
section. For more information on the mechanics of steps, see below.
The pull section is the most important section within the prefect.yaml
file. It contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.
There are three main types of steps that typically show up in a pull
section:
set_working_directory
: this step simply sets the working directory for the process prior to importing your flow
git_clone: this step clones the provided repository on the provided branch
pull_from_{cloud}: this step pulls the working directory from a cloud storage location (e.g., S3)
Use block and variable references
All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.
Below is an example of how to use an existing GitHubCredentials
block to clone a private GitHub repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n
Alternatively, you can specify a BitBucketCredentials
or GitLabCredentials
block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/repo.git\n access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#utility-steps","title":"Utility steps","text":"Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:
run_shell_script
allows for the execution of one or more shell commands in a subprocess, and returns the standard output and standard error of the script. This step is useful for scripts that require execution in a specific environment, or those which have specific input and output requirements.Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker>=0.3.0\n image_name: my-image\n tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Provided environment variables are not expanded by default
To expand environment variables in your shell script, set expand_env_vars: true
in your run_shell_script
step. For example:
- prefect.deployments.steps.run_shell_script:\n id: get-user\n script: echo $USER\n stream_output: true\n expand_env_vars: true\n
Without expand_env_vars: true
, the above step would return a literal string $USER
instead of the current user.
pip_install_requirements
installs dependencies from a requirements.txt
file within a specified directory.Below is an example of installing dependencies from a requirements.txt
file after cloning:
pull:\n - prefect.deployments.steps.git_clone:\n id: clone-step # needed in order to be referenced in subsequent steps\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: "{{ clone-step.directory }}" # `clone-step` is a user-provided `id` field\n requirements_file: requirements.txt\n
Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:
pull:\n- prefect.deployments.steps.run_shell_script:\n id: get-access-token\n script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n stream_output: false\n- prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/samples/deployments.git\n branch: master\n access_token: \"{{ get-access-token.stdout }}\"\n
You can also run custom steps by packaging them. In the example below, retrieve_secrets
is a custom Python module that has been packaged into the default working directory of a Docker image (/opt/prefect by default). main
is the function entry point, which returns an access token (e.g. return {\"access_token\": access_token}
) like the preceding example, but utilizing the Azure Python SDK for retrieval.
- retrieve_secrets.main:\n id: get-access-token\n- prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/samples/deployments.git\n branch: master\n access_token: '{{ get-access-token.access_token }}'\n
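For reference, below is a minimal sketch of what such a retrieve_secrets module might contain; the vault URL, secret name, and use of the Azure Key Vault SDK are assumptions about your retrieval logic, not a prescribed implementation:
# retrieve_secrets.py - packaged into the image's working directory\nfrom azure.identity import DefaultAzureCredential\nfrom azure.keyvault.secrets import SecretClient\n\ndef main(vault_url: str = "https://my-vault.vault.azure.net", secret_name: str = "bitbucket-token") -> dict:\n    # authenticate with ambient Azure credentials and read the secret\n    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())\n    token = client.get_secret(secret_name).value\n    # keys of the returned dict become step outputs, e.g. {{ get-access-token.access_token }}\n    return {"access_token": token}\n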
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#templating-options","title":"Templating options","text":"Values that you place within your prefect.yaml
file can reference dynamic values in several different ways:
build
and push
produce named fields such as image_name
; you can reference these fields within prefect.yaml
and prefect deploy
will populate them with each call. References must be enclosed in double curly brackets and be of the form "{{ field_name }}"
{{ prefect.blocks.block_type.block_slug }}
. It is highly recommended that you use block references for any sensitive information (such as a GitHub access token or any credentials) to avoid hardcoding these values in plaintext{{ prefect.variables.variable_name }}
. Variables can be used to reference non-sensitive, reusable pieces of information such as a default image name or a default work pool name.{{ $MY_ENV_VAR }}
. This is especially useful for referencing environment variables that are set at runtime.As an example, consider the following prefect.yaml
file:
build:\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.3.0\n image_name: my-repo/my-image\n tag: my-tag\n dockerfile: auto\n push: true\n\ndeployments:\n- # base metadata\n name: null\n version: \"{{ build-image.tag }}\"\n tags:\n - \"{{ $my_deployment_tag }}\"\n - \"{{ prefect.variables.some_common_tag }}\"\n description: null\n schedule: null\n\n # flow-specific fields\n entrypoint: null\n parameters: {}\n\n # infra-specific fields\n work_pool:\n name: \"my-k8s-work-pool\"\n work_queue_name: null\n job_variables:\n image: \"{{ build-image.image }}\"\n cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n
So long as our build
steps produce fields called image_name
and tag
, every time we deploy a new version of our deployment, the {{ build-image.image }}
variable will be dynamically populated with the relevant values.
Docker step
The most commonly used build step is prefect_docker.deployments.steps.build_docker_image
which produces both the image_name
and tag
fields.
For an example, check out the deployments tutorial.
A prefect.yaml
file can have multiple deployment configurations that control the behavior of several deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.
Prefect supports multiple deployment declarations within the prefect.yaml
file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.
New deployment declarations can be added to the prefect.yaml
file by adding a new entry to the deployments
list. Each deployment declaration must have a unique name
field which is used to select deployment declarations when using the prefect deploy
command.
Warning
When using a prefect.yaml
file that is in another directory or differently named, remember that the value for the deployment entrypoint
must be relative to the root directory of the project.
For example, consider the following prefect.yaml
file:
build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n entrypoint: flows/hello.py:my_flow\n parameters:\n number: 42\n message: Don't panic!\n work_pool:\n name: my-process-work-pool\n work_queue_name: primary-queue\n\n- name: deployment-2\n entrypoint: flows/goodbye.py:my_other_flow\n work_pool:\n name: my-process-work-pool\n work_queue_name: secondary-queue\n\n- name: deployment-3\n entrypoint: flows/hello.py:yet_another_flow\n work_pool:\n name: my-docker-work-pool\n work_queue_name: tertiary-queue\n
This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name
field and can be deployed individually by using the --name
flag when deploying.
For example, to deploy deployment-1
you would run:
prefect deploy --name deployment-1\n
To deploy multiple deployments you can provide multiple --name
flags:
prefect deploy --name deployment-1 --name deployment-2\n
To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:
prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n
To deploy all deployments you can use the --all
flag:
prefect deploy --all\n
To deploy deployments that match a pattern you can run:
prefect deploy -n my-flow/* -n *dev/my-deployment -n dep*prod\n
The above command will deploy all deployments from the flow my-flow
, all flows ending in dev
with a deployment named my-deployment
, and all deployments starting with dep
and ending in prod
.
CLI Options When Deploying Multiple Deployments
When deploying more than one deployment with a single prefect deploy
command, any additional attributes provided via the CLI will be ignored.
To provide overrides to a deployment via the CLI, you must deploy that deployment individually.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#reusing-configuration-across-deployments","title":"Reusing configuration across deployments","text":"Because a prefect.yaml
file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.
This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.
You can declare a YAML alias by using the &{alias_name}
syntax and insert that alias elsewhere in the file with the *{alias_name}
syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name}
syntax and adding additional fields below.
We recommend adding a definitions
section to your prefect.yaml
file at the same level as the deployments
section to store your aliases.
For example, consider the following prefect.yaml
file:
build: ...\npush: ...\npull: ...\n\ndefinitions:\n work_pools:\n my_docker_work_pool: &my_docker_work_pool\n name: my-docker-work-pool\n work_queue_name: default\n job_variables:\n image: "{{ build-image.image }}"\n schedules:\n every_ten_minutes: &every_10_minutes\n interval: 600\n actions:\n docker_build: &docker_build\n - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n id: build-image\n requires: prefect-docker>=0.3.0\n image_name: my-example-image\n tag: dev\n dockerfile: auto\n push: true\n\ndeployments:\n- name: deployment-1\n entrypoint: flows/hello.py:my_flow\n schedule: *every_10_minutes\n parameters:\n number: 42\n message: Don't panic!\n work_pool: *my_docker_work_pool\n build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n entrypoint: flows/goodbye.py:my_other_flow\n work_pool: *my_docker_work_pool\n build:\n - prefect_docker.deployments.steps.build_docker_image:\n <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n dockerfile: Dockerfile.custom\n\n- name: deployment-3\n entrypoint: flows/hello.py:yet_another_flow\n schedule: *every_10_minutes\n work_pool:\n name: my-process-work-pool\n work_queue_name: primary-queue\n
In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments:
deployment-1 and deployment-2 are using the same work pool configuration
deployment-1 and deployment-3 are using the same schedule
deployment-1 and deployment-2 are using the same build deployment action, but deployment-2 is overriding the dockerfile field to use a custom Dockerfile
Below are fields that can be added to each deployment declaration.
name: The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments.
version: An optional version for the deployment.
tags: A list of strings to assign to the deployment as tags.
description: An optional description for the deployment.
schedule: An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section.
triggers: An optional array of triggers to assign to the deployment.
entrypoint: Required path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name.
parameters: Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs.
enforce_parameter_schema: Boolean flag that determines whether the API should validate the parameters passed to a flow run against the parameter schema generated for the deployed flow.
work_pool:
Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#schedule-fields","title":"Schedule fields","text":"Below are fields that can be added to a deployment declaration's schedule
section.
interval: Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule.
anchor_date: Datetime string indicating the starting or "anchor" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval.
timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones.
cron: A valid cron string. Cannot be used in conjunction with interval or rrule.
day_or: Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True.
rrule: String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.
For more information about schedules, see the Schedules concept doc.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-fields","title":"Work pool fields","text":"Below are fields that can be added to a deployment declaration's work_pool
section.
name: The name of the work pool to schedule flow runs in for the deployment.
work_queue_name: The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used.
job_variables: Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployment's infra_overrides
attribute.","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-mechanics","title":"Deployment mechanics","text":"Anytime you run prefect deploy
in a directory that contains a prefect.yaml
file, the following actions are taken in order:
The prefect.yaml file is loaded. First, the build section is loaded and all variable and block references are resolved. The steps are then run in the order provided.
Next, the push section is loaded and all variable and block references are resolved; the steps within this section are then run in the order provided.
Next, the pull section is templated with any step outputs but is not run. Note that block references are not hydrated for security purposes - block references are always resolved at runtime.
Finally, any flags provided via the prefect deploy CLI are overlaid on the values loaded from the file.
Deployment Instruction Overrides
The build
, push
, and pull
sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml
.
Each time a step is run, Prefect imports the step's function (auto-installing the package given by its requires
keyword if the step cannot be found in the current environment), resolves the provided keyword arguments, and makes the step's outputs available to subsequent steps.
Now that you are familiar with creating deployments, you may want to explore infrastructure options for running your deployments.
Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, such as which flow your task was called from.
The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can be accessed only when a run is happening.
Mock values via environment variable
Oftentimes, you may want to mock certain values for testing purposes - for example, manually setting an ID or a scheduled start time to ensure your code functions properly. Starting in version 2.10.3
, you can mock values at runtime via environment variables using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value
:
$ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n
If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool
, int
, float
and str
) and pendulum.DateTime
. For complex types like list
or dict
, we suggest mocking them using monkeypatch or a similar tool.
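For example, a minimal pytest sketch; shadowing the submodule attribute is one possible approach and assumes your code reads values through the prefect.runtime submodules:
from prefect.runtime import flow_run\n\ndef test_with_mocked_parameters(monkeypatch):\n    # setting the attribute directly on the submodule shadows the real runtime lookup\n    monkeypatch.setattr(flow_run, "parameters", {"x": [1, 2, 3]}, raising=False)\n    assert flow_run.parameters == {"x": [1, 2, 3]}\n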
The prefect.runtime
module is the home for all runtime context access. Each major runtime concept has its own submodule:
deployment
: Access information about the deployment for the current runflow_run
: Access information about the current flow runtask_run
: Access information about the current task runFor example:
my_runtime_info.pyfrom prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n print(\"My name is\", runtime.flow_run.name)\n print(\"I belong to deployment\", runtime.deployment.name)\n my_task(2)\n\n@task\ndef my_task(y):\n print(\"My name is\", runtime.task_run.name)\n print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n
Running this file will produce output similar to the following:
10:08:02.948 | INFO | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n
Above, we demonstrated access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py
), you should see \"I belong to deployment None\"
logged. When information is not available, the runtime will always return an empty value. Because this flow was run outside of a deployment, there is no deployment data. If this flow was run as part of a deployment, we'd see the name of the deployment instead.
See the runtime API reference for a full list of available attributes.
","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"The current run context can be accessed with prefect.context.get_run_context()
. This function will raise an exception if no run context is available, meaning you are not in a flow or task run. If both a task run context and a flow run context are available, the task run context takes precedence and is returned.
Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run.
Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.
from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n
Unlike get_run_context
, these method calls will not raise an error if the context is not available. Instead, they will return None
.
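This makes it easy to write helpers that degrade gracefully outside of a run. A minimal sketch (the helper name is illustrative):
from prefect import flow\nfrom prefect.context import FlowRunContext\n\ndef current_flow_run_name():\n    # FlowRunContext.get() returns None when called outside of a flow run\n    ctx = FlowRunContext.get()\n    return ctx.flow_run.name if ctx else None\n\n@flow\ndef my_flow():\n    print(current_flow_run_name())  # prints the generated flow run name\n\nmy_flow()\nprint(current_flow_run_name())  # prints None outside of a run\n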
Prefect's local settings are documented and type-validated.
By modifying the default settings, you can customize various aspects of the system. You can override a setting with an environment variable or by updating the setting in a Prefect profile.
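Settings can also be overridden for just a block of Python code. A minimal sketch, assuming your Prefect version exposes prefect.settings.temporary_settings:
from prefect.settings import PREFECT_LOGGING_LEVEL, temporary_settings\n\n# the override applies only inside the context manager\nwith temporary_settings(updates={PREFECT_LOGGING_LEVEL: "DEBUG"}):\n    assert PREFECT_LOGGING_LEVEL.value() == "DEBUG"\n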
Prefect profiles are persisted groups of settings on your local machine. A single profile is always active.
Initially, a default profile named default
is active and contains no settings overrides.
All currently active settings can be viewed from the command line by running the following command:
prefect config view --show-defaults\n
When you switch to a different profile, all of the settings configured in the newly activated profile are applied.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"This section describes some commonly configured settings. See Configuring settings for details on setting and unsetting configuration values.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"The PREFECT_API_KEY
value specifies the API key used to authenticate with Prefect Cloud.
PREFECT_API_KEY=\"[API-KEY]\"\n
Generally, you will set the PREFECT_API_URL
and PREFECT_API_KEY
for your active profile by running prefect cloud login
. If you're curious, read more about managing API keys.
The PREFECT_API_URL
value specifies the API endpoint of your Prefect Cloud workspace or a self-hosted Prefect server instance.
For example, if using Prefect Cloud:
PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n
You can view your Account ID and Workspace ID in your browser URL when at a Prefect Cloud workspace page. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
If using a local Prefect server instance, set your API URL like this:
PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n
PREFECT_API_URL
setting for workers
If using a worker (agent and block-based deployments are legacy) that can create flow runs for deployments in remote environments, PREFECT_API_URL
must be set for the environment in which your worker is running.
If you want the worker to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
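To confirm which API endpoint a given environment will use, and whether it is reachable, one option is a quick Python check; a sketch, assuming the client's api_healthcheck method is available in your version:
import asyncio\nfrom prefect import get_client\nfrom prefect.settings import PREFECT_API_URL\n\nasync def check_api():\n    print("Using API at:", PREFECT_API_URL.value())\n    async with get_client() as client:\n        # returns None on success, or the exception that was encountered\n        print("Healthcheck:", await client.api_healthcheck())\n\nasyncio.run(check_api())\n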
Running the Prefect UI behind a reverse proxy
When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server instance also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL
should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api
for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL
to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL
is not set.
The PREFECT_HOME
value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.
PREFECT_HOME='~/.prefect'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"The PREFECT_LOCAL_STORAGE_PATH
value specifies the default location of local storage for flow runs.
PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#database-settings","title":"Database settings","text":"If running a self-hosted Prefect server instance, there are several database configuration settings you can read about here.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#logging-settings","title":"Logging settings","text":"Prefect provides several logging configuration settings that you can read about in the logging docs.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuring-settings","title":"Configuring settings","text":"The prefect config
CLI commands enable you to view, set, and unset settings.
The prefect config view
command will display settings that override default values.
$ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n
You can show the sources of values with --show-sources
:
$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n
You can also include default values with --show-defaults
:
$ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"The prefect config set
command lets you change the value of a default setting.
A commonly used example is setting the PREFECT_API_URL
, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.
# use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n
If you want to configure a setting to use its default value, use the prefect config unset
command.
prefect config unset PREFECT_API_URL\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"All settings have keys that match the environment variable that can be used to override them.
For example, configuring the home directory:
# environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
# python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value() # PosixPath('/path/to/home')\n
Configuring a server instance's port:
# environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
# python\nprefect.settings.PREFECT_SERVER_API_PORT.value() # 4242\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuration-profiles","title":"Configuration profiles","text":"Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.
The prefect profile
CLI commands enable you to create, review, and manage profiles.
If you configured settings for a profile, prefect profile inspect
displays those settings:
$ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n
You can pass the name of a profile to view its settings:
$ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"Create a new profile with no settings:
$ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n
Create a new profile foo
with settings cloned from an existing default
profile:
$ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n
Rename a profile:
$ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n
Remove a profile:
$ prefect profile delete test\nRemoved profile 'test'.\n
Removing the default profile resets it:
$ prefect profile delete default\nReset profile 'default'.\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#change-values-in-profiles","title":"Change values in profiles","text":"Set a value in the current profile:
$ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n
Set multiple values in the current profile:
$ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n
You can set a value in another profile by passing the --profile NAME
option to a CLI command:
$ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n
Unset values in the current profile to restore the defaults:
$ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#inspecting-profiles","title":"Inspecting profiles","text":"See a list of available profiles:
$ prefect profile ls\n* default\ncloud\ntest\nlocal\n
View all settings for a profile:
$ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\nx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' \n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#using-profiles","title":"Using profiles","text":"The profile named default
is used by default. There are several methods to switch to another profile.
The recommended method is to use the prefect profile use
command with the name of the profile:
$ prefect profile use foo\nProfile 'foo' now active.\n
Alternatively, you can set the environment variable PREFECT_PROFILE
to the name of the profile:
export PREFECT_PROFILE=foo\n
Or, specify the profile in the CLI command for one-time usage:
prefect --profile \"foo\" ...\n
Note that this option must come before the subcommand. For example, to list flow runs using the profile foo
:
prefect --profile \"foo\" flow-run ls\n
You may use the -p
flag as well:
prefect -p \"foo\" flow-run ls\n
You may also create an 'alias' to automatically use your profile:
$ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view \n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"If setting the profile from the CLI with --profile
, environment variables that conflict with settings in the profile will be ignored.
In all other cases, environment variables will take precedence over the value in the profile.
For example, a value set in a profile will be used by default:
$ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
But, setting an environment variable will override the profile setting:
$ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n
Unless the profile is explicitly requested when using the CLI:
$ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#profile-files","title":"Profile files","text":"Profiles are persisted to the file location specified by PREFECT_PROFILES_PATH
. The default location is a profiles.toml
file in the PREFECT_HOME
directory:
$ prefect config view --show-defaults\n...\nPREFECT_PROFILES_PATH='${PREFECT_HOME}/profiles.toml'\n...\n
The TOML format is used to store profile data.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. This guide provides examples of real-world use cases.
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed
state. Let's run a client-side hook upon a flow run entering a Failed
state.
from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n slack_webhook_block = Block.load(\n \"slack-webhook/my-slack-webhook\"\n )\n\n slack_webhook_block.notify(\n (\n f\"Your job {flow_run.name} entered {state.name} \"\n f\"with message:\\n\\n\"\n f\"See <https://{PREFECT_API_URL.value()}/flow-runs/\"\n f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n f\"Tags: {flow_run.tags}\\n\\n\"\n f\"Scheduled start: {flow_run.expected_start_time}\"\n )\n )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n failing_flow()\n
Note that because we've configured retries in this example, the on_failure
hook will not run until all retries
have completed, when the flow run enters a Failed
state.
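Tasks accept state change hooks as well, with the task and task run passed to the hook in place of the flow and flow run. A minimal sketch (the names are illustrative):
from prefect import flow, task\n\ndef log_task_failure(task, task_run, state):\n    # task-level hooks receive the task, the task run, and the new state\n    print(f"Task run {task_run.name} entered {state.name}")\n\n@task(on_failure=[log_task_failure])\ndef failing_task():\n    raise ValueError("oops!")\n\n@flow\ndef my_flow():\n    failing_task()\n\nif __name__ == "__main__":\n    my_flow()\n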
State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly, resulting in the potential omission of cleanup logic within the tasks. State change hooks can be used to ensure infrastructure is properly cleaned up even when a flow run enters a Crashed
state!
Let's create a hook that deletes a Cloud Run job if the flow run crashes.
import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n    """Flow run state change hook that deletes a Cloud Run Job if\n    the flow run crashes."""\n\n    # retrieve Cloud Run job name\n    cloud_run_job_name = await String.load(\n        name="crashing-flow-cloud-run-job"\n    )\n\n    # delete Cloud Run job\n    delete_job_command = f"yes | gcloud beta run jobs delete {cloud_run_job_name.value} --region us-central1"\n    os.system(delete_job_command)\n\n    # clean up the Cloud Run job string block as well\n    async with get_client() as client:\n        block_document = await client.read_block_document_by_name(\n            "crashing-flow-cloud-run-job", block_type_slug="string"\n        )\n        await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n    raise SystemExit("Crashing on purpose!")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n    """Save the flow run name (i.e. Cloud Run job name) as a\n    String block. It then executes a task that ends up crashing."""\n    flow_run_name = prefect.runtime.flow_run.name\n    cloud_run_job_name = String(value=flow_run_name)\n    cloud_run_job_name.save(\n        name="crashing-flow-cloud-run-job", overwrite=True\n    )\n\n    my_task_that_crashes()\n\nif __name__ == "__main__":\n    crashing_flow()\n
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"Once you have some awesome flows, you probably want to test them!
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.
from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n return 42\n\ndef test_my_favorite_flow():\n with prefect_test_harness():\n # run the flow against a temporary testing database\n assert my_favorite_flow() == 42\n
For more extensive testing, you can leverage prefect_test_harness
as a fixture in your unit testing framework. For example, when using pytest
:
from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n with prefect_test_harness():\n yield\n\n@flow\ndef my_favorite_flow():\n return 42\n\ndef test_my_favorite_flow():\n assert my_favorite_flow() == 42\n
Note
In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.
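For instance, a minimal sketch of an opt-in, function-scoped fixture for the tests that do need a fresh database (the fixture name is illustrative):
import pytest\nfrom prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture\ndef isolated_prefect_db():\n    # creates a fresh temporary database for each test that requests it\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_in_isolation(isolated_prefect_db):\n    assert my_favorite_flow() == 42\n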
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"To test an individual task, you can access the original function using .fn
:
from prefect import flow, task\n\n@task\ndef my_favorite_task():\n return 42\n\n@flow\ndef my_favorite_flow():\n val = my_favorite_task()\n return val\n\ndef test_my_favorite_task():\n assert my_favorite_task.fn() == 42\n
Disable logger
If your task uses a logger, you can disable the logger in order to avoid the RuntimeError
raised from a missing flow context.
from prefect.logging import disable_run_logger\n\ndef test_my_favorite_task():\n with disable_run_logger():\n assert my_favorite_task.fn() == 42\n
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:
Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:
pip install --upgrade prefect\n
Different components may use different versions of Prefect:
Integration Versions
Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.
","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.
There are two types of logs:
If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your worker logs for more details.
If there is no clear indication of what went wrong, try updating the logging level from the default INFO
level to the DEBUG
level. Settings such as the logging level are propagated from the worker environment to the flow run environment and can be set via environment variables or the prefect config set
CLI:
# Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n
The DEBUG
logging level produces a high volume of logs so consider setting it back to INFO
once any issues are resolved.
When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials
. Use the following command to check your authentication, replacing $PREFECT_API_KEY
with your API key:
curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n
Users vs Service Accounts
Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as workers and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_
and service account API keys start with pnb_
.
Supposing the response succeeds, let's check our authorization. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will result in a 404 response. Use the following command to check your actor's workspace memberships:
curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n
Formatting JSON
Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool
Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role which grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role, will result in errors.
","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"Prefect flows can be executed locally by the user, or remotely by a worker or agent. Local execution generally means that you - the user - run your flow directly with a command like python flow.py
. Remote execution generally means that a worker runs your flow via a deployment, optionally on different infrastructure.
With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. For flow runs to execute, a worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled
to Late
. Ensure that your work pool and work queue have a subscribed worker.
Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m
flag (i.e. python -m flow
instead of python flow.py
). Read more about -m
here.
Summary: Requests require a trailing /
in the request URL.
If you write a test that does not include a trailing /
when making a request to a specific endpoint:
async def test_example(client):\n response = await client.post(\"/my_route\")\n assert response.status_code == 201\n
You'll see a failure like:
E assert 307 == 201\nE + where 307 = <Response [307 Temporary Redirect]>.status_code\n
To resolve this, include the trailing /
:
async def test_example(client):\n response = await client.post(\"/my_route/\")\n assert response.status_code == 201\n
Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:
async def test_nested_example(client):\n response = await client.post(\"/my_route/filter/\")\n assert response.status_code == 307\n\n response = await client.post(\"/my_route/filter\")\n assert response.status_code == 200\n
Reference: \"HTTPX disabled redirect following by default\" in 0.22.0
.
pytest.PytestUnraisableExceptionWarning
or ResourceWarning
","text":"As you're working with one of the FlowRunner
implementations, you may get an error like this one:
E pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE Traceback (most recent call last):\nE File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n
This error is saying that your test suite (or the prefect
library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.
It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ...
so that Python can display more of the stack trace where the connection was opened.
Upgrading from agents to workers significantly enhances the experience of deploying flows. It simplifies the specification of each flow's infrastructure and runtime environment.
A worker is the fusion of an agent with an infrastructure block. Like agents, workers poll a work pool for flow runs that are scheduled to start. Like infrastructure blocks, workers are typed - they work with only one kind of infrastructure, and they specify the default configuration for jobs submitted to that infrastructure.
Accordingly, workers are not a drop-in replacement for agents. Using workers requires deploying flows differently. In particular, deploying a flow with a worker does not involve specifying an infrastructure block. Instead, infrastructure configuration is specified on the work pool and passed to each worker that polls work from that pool.
This guide provides an overview of the differences between agents and workers. It also describes how to upgrade from agents to workers in just a few quick steps.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#enhancements","title":"Enhancements","text":"","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#workers","title":"Workers","text":".deploy()
or the alternative deployment experience with prefect.yaml
are more flexible and easier to use than block and agent-based deployments.Deployment CLI and Python SDK:
prefect deployment build <entrypoint>
/prefect deployment apply
--> prefect deploy
Prefect will now automatically detect flows in your repo and provide a wizard \ud83e\uddd9 to guide you through setting required attributes for your deployments.
Deployment.build_from_flow
--> flow.deploy
Configuring remote flow code storage:
storage blocks --> pull action
When using the YAML-based deployment API, you can configure a pull action in your prefect.yaml
file to specify how to retrieve flow code for your deployments. You can use configuration from your existing storage blocks to define your pull action via templating.
When using the Python deployment API, you can pass any storage block to the flow.deploy
method to specify how to retrieve flow code for your deployment.
Configuring flow run infrastructure:
infrastructure blocks --> typed work pool
Default infrastructure config is now set on the typed work pool, and can be overwritten by individual deployments.
Managing multiple deployments:
Create and/or update many deployments at once through a prefect.yaml
file or use the deploy
function.
prefect.yaml
file.Deployment-level infrastructure overrides operate in much the same way.
infra_override
-> job_variable
The process for starting an agent and starting a worker in your environment are virtually identical.
prefect agent start --pool <work pool name>
--> prefect worker start --pool <work pool name>
Worker Helm chart
If you host your agents in a Kubernetes cluster, you can use the Prefect worker Helm chart to host workers in your cluster.
If you have existing deployments that use infrastructure blocks, you can quickly upgrade them to be compatible with workers by following these steps:
This new work pool will replace your infrastructure block.
You can use the .publish_as_work_pool
method on any infrastructure block to create a work pool with the same configuration.
For example, if you have a KubernetesJob
infrastructure block named 'my-k8s-job', you can create a work pool with the same configuration with this script:
from prefect.infrastructure import KubernetesJob\n\nKubernetesJob.load(\"my-k8s-job\").publish_as_work_pool()\n
Running this script will create a work pool named 'my-k8s-job' with the same configuration as your infrastructure block.\n
Serving flows
If you are using a Process
infrastructure block and a LocalFilesystem
storage block (or aren't using an infrastructure and storage block at all), you can use flow.serve
to create a deployment without needing to specify a work pool name or start a worker.
This is a quick way to create a deployment for a flow and is a great way to manage your deployments if you don't need the dynamic infrastructure creation or configuration offered by workers.
Check out our Docker guide for how to build a served flow into a Docker image and host it in your environment.
This worker will replace your agent and poll your new work pool for flow runs to execute.
prefect worker start -p <work pool name>\n
To deploy your flows to the new work pool, you can use flow.deploy
for a Pythonic deployment experience or prefect deploy
for a YAML-based deployment experience.
If you currently use Deployment.build_from_flow
, we recommend using flow.deploy
.
If you currently use prefect deployment build
and prefect deployment apply
, we recommend using prefect deploy
.
flow.deploy
","text":"If you have a Python script that uses Deployment.build_from_flow
, you can replace it with flow.deploy
.
Most arguments to Deployment.build_from_flow
can be translated directly to flow.deploy
, but here are some changes that you may need to make:
Replace infrastructure with work_pool_name. If you've used the .publish_as_work_pool method on your infrastructure block, use the name of the created work pool. Replace infra_overrides with job_variables. Replace storage with a call to flow.from_source. flow.from_source will load your flow from a remote storage location and make it deployable; your existing storage block can be passed to the source argument of flow.from_source. Below are some examples of how to translate Deployment.build_from_flow
to flow.deploy
.
If you aren't using any blocks:
from prefect import flow\nfrom prefect.deployments import Deployment\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
You can replace Deployment.build_from_flow
with flow.serve
:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n my_flow.serve(\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
This will start a process that will serve your flow and execute any flow runs that are scheduled to start.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-a-storage-block","title":"Deploying using a storage block","text":"If you currently use a storage block to load your flow code but no infrastructure block:
from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n storage=GitHub.load(\"demo-repo\"),\n parameters=dict(name=\"Marvin\"),\n )\n
you can use flow.from_source
to load your flow from the same location and flow.serve
to create a deployment:
from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\"\n ).serve(\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
This will allow you to execute scheduled flow runs without starting a worker. Additionally, the process serving your flow will regularly check for updates to your flow code and automatically update the flow if it detects any changes to the code.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-an-infrastructure-and-storage-block","title":"Deploying using an infrastructure and storage block","text":"For the code below, we'll need to create a work pool from our infrastructure block and pass it to flow.deploy
as the work_pool_name
argument. We'll also need to pass our storage block to flow.from_source
as the source
argument.
from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\nfrom prefect.infrastructure.kubernetes import KubernetesJob\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n storage=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\",\n infrastructure=KubernetesJob.load(\"my-k8s-job\"),\n infra_overrides=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
The equivalent deployment code using flow.deploy
would look like this:
from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\"\n ).deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-k8s-job\",\n job_variables=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
Note that when using flow.from_source(...).deploy(...)
, the flow you're deploying does not need to be available locally before running your script.
If you currently bake your flow code into a Docker image before deploying, you can use the image
argument of flow.deploy
to build a Docker image as part of your deployment process:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Docker image!\")\n\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n name=\"my-deployment\",\n image=\"my-repo/my-image:latest\",\n work_pool_name=\"my-k8s-job\",\n job_variables=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
You can skip a flow.from_source
call when building an image with flow.deploy
. Prefect will keep track of the flow's source code location in the image and load it from that location when the flow is executed.
prefect deploy
","text":"Always run prefect deploy
commands from the root level of your repo!
With agents, you might have had multiple deployment.yaml
files, but under worker deployment patterns, each repo will have a single prefect.yaml
file located at the root of the repo that contains deployment configuration for all flows in that repo.
To set up a new prefect.yaml
file for your deployments, run the following command from the root level of your repo:
prefect deploy\n
This will start a wizard that will guide you through setting up your deployment.
For step 4 of the wizard, select y
on the last prompt to save the configuration for the deployment.
Saving the configuration for your deployment will result in a prefect.yaml
file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.
You can add more deployments to the deployments
list in your prefect.yaml
file and/or by continuing to use the deployment creation wizard.
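As a sketch, a prefect.yaml with two deployment definitions might look like this; the names, entrypoints, work pool, and schedule are illustrative:
deployments:\n- name: my-first-deployment\n entrypoint: flows/hello.py:my_flow\n work_pool:\n name: my-work-pool\n- name: my-second-deployment\n entrypoint: flows/goodbye.py:my_other_flow\n work_pool:\n name: my-work-pool\n schedule:\n cron: \"0 9 * * *\"\n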
For more information on deployments, check out our in-depth guide for deploying flows to work pools.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/using-the-client/","title":"Using the Prefect Orchestration Client","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#overview","title":"Overview","text":"In the API reference for the PrefectClient
, you can find many useful client methods that make it simpler to do things such as rescheduling late flow runs or getting the last N completed flow runs from your workspace. The PrefectClient is an async context manager, so you can use it like this:
from prefect import get_client\n\nasync with get_client() as client:\n response = await client.hello()\n print(response.json()) # \ud83d\udc4b\n
","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#examples","title":"Examples","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#rescheduling-late-flow-runs","title":"Rescheduling late flow runs","text":"Sometimes, you may need to bulk reschedule flow runs that are late - for example, if you've accidentally scheduled many flow runs of a deployment to an inactive work pool.
To do this, we can delete late flow runs and create new ones in a Scheduled
state with a delay.
This example reschedules the last 3 late flow runs of a deployment named healthcheck-storage-test
to run 6 hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment.
import asyncio\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n DeploymentFilter, FlowRunFilter\n)\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\nfrom prefect.states import Scheduled\n\nasync def reschedule_late_flow_runs(\n deployment_name: str,\n delay: timedelta,\n most_recent_n: int,\n delete_remaining: bool = True,\n states: Optional[list[str]] = None\n) -> list[FlowRun]:\n if not states:\n states = [\"Late\"]\n\n async with get_client() as client:\n flow_runs = await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=dict(name=dict(any_=states)),\n expected_start_time=dict(\n before_=datetime.now(timezone.utc)\n ),\n ),\n deployment_filter=DeploymentFilter(\n name={'like_': deployment_name}\n ),\n sort=FlowRunSort.START_TIME_DESC,\n limit=most_recent_n if not delete_remaining else None\n )\n\n if not flow_runs:\n print(f\"No flow runs found in states: {states!r}\")\n return []\n\n rescheduled_flow_runs = []\n for i, run in enumerate(flow_runs):\n await client.delete_flow_run(flow_run_id=run.id)\n if i < most_recent_n:\n new_run = await client.create_flow_run_from_deployment(\n deployment_id=run.deployment_id,\n state=Scheduled(\n scheduled_time=run.expected_start_time + delay\n ),\n )\n rescheduled_flow_runs.append(new_run)\n\n return rescheduled_flow_runs\n\nif __name__ == \"__main__\":\n rescheduled_flow_runs = asyncio.run(\n reschedule_late_flow_runs(\n deployment_name=\"healthcheck-storage-test\",\n delay=timedelta(hours=6),\n most_recent_n=3,\n )\n )\n\n print(f\"Rescheduled {len(rescheduled_flow_runs)} flow runs\")\n\n assert all(\n run.state.is_scheduled() for run in rescheduled_flow_runs\n )\n assert all(\n run.expected_start_time > datetime.now(timezone.utc)\n for run in rescheduled_flow_runs\n )\n
","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#get-the-last-n-completed-flow-runs-from-my-workspace","title":"Get the last N
completed flow runs from my workspace","text":"To get the last N
completed flow runs from our workspace, we can make use of read_flow_runs
and prefect.client.schemas
.
This example gets the last three completed flow runs from our workspace:
import asyncio\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_most_recent_flow_runs(\n n: int = 3,\n states: Optional[list[str]] = None\n) -> list[FlowRun]:\n if not states:\n states = [\"COMPLETED\"]\n\n async with get_client() as client:\n return await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state={'type': {'any_': states}}\n ),\n sort=FlowRunSort.END_TIME_DESC,\n limit=n,\n )\n\nif __name__ == \"__main__\":\n last_3_flow_runs: list[FlowRun] = asyncio.run(\n get_most_recent_flow_runs()\n )\n print(last_3_flow_runs)\n\n assert all(\n run.state.is_completed() for run in last_3_flow_runs\n )\n assert (\n end_times := [run.end_time for run in last_3_flow_runs]\n ) == sorted(end_times, reverse=True)\n
Instead of the last three from the whole workspace, you could also use the DeploymentFilter
like the previous example to get the last three completed flow runs of a specific deployment.
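A sketch of that variation, reusing the deployment name from the rescheduling example above:
from prefect.client.schemas.filters import DeploymentFilter\n\n# pass this as deployment_filter=... to client.read_flow_runs in get_most_recent_flow_runs\ndeployment_filter = DeploymentFilter(name={'like_': 'healthcheck-storage-test'})\n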
There are other ways to filter objects like flow runs
See the filters API reference
for more ways to filter flow runs and other objects in your Prefect ecosystem.
Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.
Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.
While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts at any time, so they can be used to pass configuration information to Prefect configuration files, such as deployment steps.
Variables are not encrypted
Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#managing-variables","title":"Managing variables","text":"You can create, read, edit and delete variables via the Prefect UI, API, and CLI. Names must adhere to traditional variable naming conventions:
Values must:
Optionally, you can add tags to the variable.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.
To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-rest-api","title":"Via the REST API","text":"Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-cli","title":"Via the CLI","text":"You can list, inspect, and delete variables via the command line interface with the prefect variable ls
, prefect variable inspect <name>
, and prefect variable delete <name>
commands, respectively.
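For example (the variable name is illustrative):
prefect variable ls\nprefect variable inspect the_answer\nprefect variable delete the_answer\n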
In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-python-code","title":"In Python code","text":"You can access any variable via the Python SDK via the .get()
method. If you attempt to reference a variable that does not exist, the method will return None
.
from prefect import variables\n\n# from a synchronous context\nanswer = variables.get('the_answer')\nprint(answer)\n# 42\n\n# from an asynchronous context\nanswer = await variables.get('the_answer')\nprint(answer)\n# 42\n\n# without a default value\nanswer = variables.get('not_the_answer')\nprint(answer)\n# None\n\n# with a default value\nanswer = variables.get('not_the_answer', default='42')\nprint(answer)\n# 42\n
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-prefectyaml-deployment-steps","title":"In prefect.yaml
deployment steps","text":"In .yaml
files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\"
. You can use variables to templatize deployment steps by referencing them in the prefect.yaml
file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull
step:
pull:\n- prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/hello-projects.git\n branch: \"{{ prefect.variables.deployment_branch }}\"\n
The deployment_branch
variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.
Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.
Webhooks are defined by two essential components: a unique URL and a template that translates incoming web requests to a Prefect event.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"Webhooks can be created and managed from the Prefect Cloud UI.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"Webhooks can be managed and interacted with via the prefect cloud webhook
command group.
prefect cloud webhook --help\n
You can create your first webhook by invoking create
:
prefect cloud webhook create your-webhook-name \\\n --description \"Receives webhooks from your system\" \\\n --template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n
Note the template string, which is discussed in greater detail below.
You can retrieve details for a specific webhook by ID using get
, or optionally query all webhooks in your workspace via ls
:
# get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\n\nprefect cloud webhook ls\n
If you need to disable an existing webhook without deleting it, use toggle
:
prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n
If you are concerned that your webhook endpoint may have been compromised, use rotate
to generate a new, random endpoint:
prefect cloud webhook rotate <webhook-url-slug>\n
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/
. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ
. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.
All webhooks may accept requests via the most common HTTP methods:
GET
, HEAD
, and DELETE
may be used for webhooks that define a static event template, or a template that does not depend on the body of the HTTP request. The headers of the request will be available for templates.POST
, PUT
, and PATCH
may be used when the webhook request will include a body. See How HTTP request components are handled for more details on how the body is parsed.Prefect Cloud webhooks are deliberately quiet to the outside world, and will only return a 204 No Content
response when they are successful, and a 400 Bad Request
error when there is any error interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.
The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.
As with the templates available in Prefect Cloud Automation for defining notifications and other parameters, you will write templates in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions
package.
Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event
name and the resource[\"prefect.resource.id\"]
, which are required of all events. The simplest template is one in which these are statically defined.
Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations
machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron
for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl
to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:
{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.recommendations\",\n \"prefect.resource.name\": \"Recommendations [Products]\",\n \"producing-team\": \"Data Science\"\n }\n}\n
Make sure to produce valid JSON
The output of your template, when rendered, should be a valid string that can be parsed, for example, with json.loads
.
A webhook with this template may be invoked via any of the HTTP methods, including a GET
request with no body, so the team you are integrating with can include this line at the end of their daily script:
curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n
Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"You may notice that you only had to provide the event
and resource
definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred
and id
, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.
If your template does not produce a payload
field, the payload
will default to a standard set of debugging information, including the HTTP method, headers, and body.
Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.
Your colleagues on the team have adjusted their daily cron
scripts to POST
a small body that includes the ID and name of the model that was updated:
curl \\\n -d \"model=recommendations\" \\\n -d \"friendly_name=Recommendations%20[Products]\" \\\n -X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n
This script will send a POST
request and the body will include a traditional URL-encoded form with two fields describing the model that was updated: model
and friendly_name
. Here's a webhook template that uses Jinja to receive these values and produce different events for the different models:
{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n \"producing-team\": \"Data Science\"\n }\n}\n
All subsequent POST
requests will produce events with those variable resource IDs and names. The other statically-defined parts, such as event
or the producing-team
label you included earlier will still be used.
Use Jinja2's default
filter to handle missing values
Jinja2 has a helpful default
filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}
.
The Jinja2 template context includes the three parts of the incoming HTTP request:
method
is the uppercased string of the HTTP method, like GET
or POST
.headers
is a case-insensitive dictionary of the HTTP headers included with the request. To prevent accidental disclosures, the Authorization
header is removed.body
represents the body that was posted to the webhook, with a best-effort approach to parse it into an object you can access.HTTP headers are available without any alteration as a dict
-like object, but you may access them with header names in any case. For example, these template expressions all return the value of the Content-Length
header:
{{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n
The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type
of the request is application/json
, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type
is application/x-www-form-urlencoded
(as in our example above), the body is parsed into a flat dict
-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:
{{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n
Only for Python identifiers
Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name
will not work. Use body['friendly-name']
in those cases.
You may not have much control over the client invoking your webhook, but would still like for bodies that look like JSON to be parsed as such. Prefect Cloud will attempt to parse any other content type (like text/plain
) as if it were JSON first. In any case where the body cannot be transformed into JSON, it will be made available to your templates as a Python str
.
In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:
{{ body|tojson }}\n
This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST
of a partial Prefect event as in this example:
POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.recommendations\",\n \"prefect.resource.name\": \"Recommendations [Products]\",\n \"producing-team\": \"Data Science\"\n }\n}\n
The resulting event will be filled out with the default values for occurred
, id
, and other fields as described above.
The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:
{{ body|from_cloud_event(headers) }}\n
The resulting event will use the CloudEvent's subject
as the resource (or the source
if no subject
is available). The CloudEvent's data
attribute will become the Prefect event's payload['data']
, and the other CloudEvent metadata will be at payload['cloudevents']
. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body
.
The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.
When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed
event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.
Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"To follow this quickstart, you'll need the following:
Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create a new one by running the az group create
command. For example, this example creates a resource group called prefect-agents
in the eastus
region:
az group create --name prefect-agents --location eastus\n
Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table
to see all available resource group locations for your account.
Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10
includes the latest release version of Prefect and Python 3.10.
To create the container instance, use the az container create
command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID]
,[WORKSPACE-ID]
, [API-KEY]
, and any dependencies you need to pip install
on the instance. These options are discussed below.
az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n
When the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see work queues configured on this work pool. When the container instance is running and the agent has started, the test
work queue displays \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test
queue.
Agents and queues
The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test
queue on the default-agent-pool
work pool.
Let's break down the details of the az container create
command used here.
The az container create command
creates a new ACI container.
--resource-group prefect-agents
tells Azure which resource group the new container is created in. Here, the examples uses the prefect-agents
resource group created earlier.
--name prefect-agent-example
determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.
--image prefecthq/prefect:2-python3.10
tells ACI which Docker images to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.
--secure-environment-variables
sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:
PREFECT_API_KEY
]/concepts/settings/#prefect_api_key) value specifying the API key used to authenticate with your Prefect Cloud workspace. (Pro and Enterprise tier accounts can use a service account API key.)PREFECT_API_URL
value specifying the API endpoint of your Prefect Cloud workspace.--command-line
lets you override the container\u2019s normal entry point and run a command instead. The script above uses this section to install the adlfs
pip package so it can read flow code from Azure Blob Storage, along with s3fs
, pandas
, and requests
. It then runs the Prefect agent, in this case using the default work pool and a test
work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.
Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.
In an environment where you have installed Prefect, create a new folder called health_test
, and within it create a new file called health_flow.py
containing the following code.
import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n logger = get_run_logger()\n logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n import platform\n import sys\n from prefect.server.api.server import SERVER_API_VERSION\n\n logger = get_run_logger()\n logger.info(\"Host's network name = %s\", platform.node())\n logger.info(\"Python version = %s\", platform.python_version())\n logger.info(\"Platform information (instance type) = %s \", platform.platform())\n logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n hi = say_hi()\n log_platform_info(wait_for=[hi])\n
Now create a deployment for this flow script, making sure that it's configured to use the test
queue on the default-agent-pool
work pool.
prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n
Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.
Infrastructure and storage
This Prefect deployment example was built using the Process
infrastructure type and Azure Blob Storage.
You might wonder why your deployment needs process infrastructure rather than DockerContainer
infrastructure when you are deploying a Docker image to ACI.
A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer
infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want to agent to run flows by starting a process in your ACI container.
You can use any storage type as long as you've configured a block for it before creating the deployment.
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.
To stop a container, use the az container stop
command:
az container stop --resource-group prefect-agents --name prefect-agent-example\n
To delete a container, use the az container delete
command:
az container delete --resource-group prefect-agents --name prefect-agent-example\n
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/daemonize/","title":"Daemonize Processes for Prefect Deployments","text":"When running workflow applications, it can be helpful to create long-running processes that run at startup and are robust to failure. In this guide you'll learn how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs.
A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server. We will leverage systemd and see how to automatically start a Prefect worker or long-lived serve
process when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.
In this guide we will:
.serve
processsudo
commands).If using an AWS t2-micro EC2 instance with an AWS Linux image, you can install Python and pip with sudo yum install -y python3 python3-pip
.
Create a user account on your linux system for the Prefect process. While you can run a worker or serve process as root, it's good security practice to avoid doing so unless you are sure you need to.
In a terminal, run:
sudo useradd -m prefect\nsudo passwd prefect\n
When prompted, enter a password for the prefect
account.
Next, log in to the prefect
account by running:
sudo su prefect\n
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-2-install-prefect","title":"Step 2: Install Prefect","text":"Run:
pip3 install prefect\n
This guide assumes you are installing Prefect globally, not in a virtual environment. If running a systemd service in a virtual environment, you'll just need to change the ExecPath. For example, if using venv, change the ExecPath to target the prefect
application in the bin
subdirectory of your virtual environment.
Next, set up your environment so that the Prefect client will know which server to connect to.
If connecting to Prefect Cloud, follow the instructions to obtain an API key and then run the following:
prefect cloud login -k YOUR_API_KEY\n
When prompted, choose the Prefect workspace you'd like to log in to.
If connecting to a self-hosted Prefect server instance instead of Prefect Cloud, run the following and substitute the IP address of your server:
prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200\n
Finally, run the exit
command to sign out of the prefect
Linux account. This command switches you back to your sudo-enabled account so you will can run the commands in the next section.
See the section below if you are setting up a Prefect worker. Skip to the next section if you are setting up a Prefect .serve
process.
Move into the /etc/systemd/system
folder and open a file for editing. We use the Vim text editor below.
cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
my-prefect-service.service[Unit]\nDescription=Prefect worker\n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
Make sure you substitute your own work pool name.
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-serve","title":"Setting up a systemd service for.serve
","text":"Copy your flow entrypoint Python file and any other files needed for your flow to run into the /home
directory (or the directory of your choice).
Here's a basic example flow:
my_file.pyfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef say_hi():\n print(\"Hello!\")\n\nif __name__==\"__main__\":\n say_hi.serve(name=\"Greeting from daemonized .serve\")\n
If you want to make changes to your flow code without restarting your process, you can push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use flow.from_source().serve()
, as in the example below.
if __name__ == \"__main__\":\nflow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"path/to/my_remote_flow_code_file.py:say_hi\",\n).serve(name=\"deployment-with-github-storage\")\n
Make sure you substitute your own flow code entrypoint path.
Note that if you change the flow entrypoint parameters, you will need to restart the process.
Move into the /etc/systemd/system
folder and open a file for editing. We use the Vim text editor below.
cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
my-prefect-service.service[Unit]\nDescription=Prefect serve \n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=python3 my_file.py\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#save-enable-and-start-the-service","title":"Save, enable, and start the service","text":"To save the file and exit Vim hit the escape key, type :wq!
, then press the return key.
Next, run sudo systemctl daemon-reload
to make systemd aware of your new service.
Then, run sudo systemctl enable my-prefect-service
to enable the service. This command will ensure it runs when your system boots.
Next, run sudo systemctl start my-prefect-service
to start the service.
Run your deployment from UI and check out the logs on the Flow Runs page.
You can see if your daemonized Prefect worker or serve process is running and see the Prefect logs with systemctl status my-prefect-service
.
That's it! You now have a systemd service that starts when your system boots, and will restart if it ever crashes.
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#next-steps","title":"Next steps","text":"If you want to set up a long-lived process on a Windows machine the pattern is similar. Instead of systemd, you can use NSSM.
Check out other Prefect guides to see what else you can do with Prefect!
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"Advanced Topic
This tutorial is for users who want to extend the Prefect framework and completing this successfully will require deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.
Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.
A list of available workers can be found here. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker configuration","text":"When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.
How are the configuration values populated?
The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values populate for each flow run.
The keys in the job_configuration
section of this base job template match the worker's configuration class attributes. The values in the job_configuration
section of the base job template are used to populate the attributes of the worker's configuration class.
The work pool creator gets to decide how they want to populate the values in the job_configuration
section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.
BaseJobConfiguration
subclass","text":"A worker developer defines their worker's configuration to function with a class extending BaseJobConfiguration
.
BaseJobConfiguration
has attributes that are common to all workers:
name
The name to assign to the created execution environment. env
Environment variables to set in the created execution environment. labels
The labels assigned to the created execution environment for metadata purposes. command
The command to use when starting a flow run. Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run
method.
Here's an example prepare_for_flow_run
method that adds a label to the execution environment:
def prepare_for_flow_run(\n self, flow_run, deployment = None, flow = None,\n): \n super().prepare_for_flow_run(flow_run, deployment, flow) \n self.labels.append(\"my-custom-label\")\n
A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n memory: int = Field(\n default=1024,\n description=\"Memory allocation for the execution environment.\"\n )\n cpu: int = Field(\n default=500, \n description=\"CPU allocation for the execution environment.\"\n )\n
This configuration class will populate the job_configuration
section of the resulting base job template.
For this example, the base job template would look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory }}\"\n cpu: \"{{ cpu }}\"\nvariables:\n type: object\n properties:\n name:\n title: Name\n description: Name given to infrastructure created by a worker.\n type: string\n env:\n title: Environment Variables\n description: Environment variables to set when starting a flow run.\n type: object\n additionalProperties:\n type: string\n labels:\n title: Labels\n description: Labels applied to infrastructure created by a worker.\n type: object\n additionalProperties:\n type: string\n command:\n title: Command\n description: The command to use when starting a flow run. In most cases,\n this should be left blank and the command will be automatically generated\n by the worker.\n type: string\n memory:\n title: Memory\n description: Memory allocation for the execution environment.\n type: integer\n default: 1024\n cpu:\n title: CPU\n description: CPU allocation for the execution environment.\n type: integer\n default: 500\n
This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.
Notice that each attribute for the class was added in the job_configuration
section with placeholders whose name matches the attribute name. The variables
section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.
You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory
attribute, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n memory: str = Field(\n default=\"1024Mi\",\n description=\"Memory allocation for the execution environment.\"\n template=\"{{ memory_request }}Mi\"\n )\n cpu: str = Field(\n default=\"500m\", \n description=\"CPU allocation for the execution environment.\"\n template=\"{{ cpu_request }}m\"\n )\n
Notice that we changed the type of each attribute to str
to accommodate the units, and we added a new template
attribute to each attribute. The template
attribute is used to populate the job_configuration
section of the resulting base job template.
For this example, the job_configuration
section of the resulting base job template would look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory_request }}Mi\"\n cpu: \"{{ cpu_request }}m\"\n
Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for template variable interpolation","text":"When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:
If a template variable is the only value for a key in the job_configuration section, the key will be replaced with the value of the template variable. If a template variable is referenced in the job_configuration section and no value is provided for the template variable, the key will be removed from the job_configuration section. These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
There are two patterns that are represented in current worker implementations:
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-through","text":"In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.
This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.
The Docker worker is an example of a worker that uses this pattern.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as code templating","text":"Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).
In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form.
This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.
The Kubernetes worker is an example of a worker that uses this pattern.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring credentials","text":"When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.
For example, if you want to allow the user to configure AWS credentials, you can do so like this:
from prefect_aws import AwsCredentials\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n aws_credentials: AwsCredentials = Field(\n default=None,\n description=\"AWS credentials to use when creating AWS resources.\"\n )\n
Users can create and assign a block to the aws_credentials
attribute in the UI and the worker will use these credentials when interacting with AWS resources.
Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.
Default template variables for a worker are defined by implementing the BaseVariables
class. Like the BaseJobConfiguration
class, the BaseVariables
class has attributes that are common to all workers:
name
The name to assign to the created execution environment. env
Environment variables to set in the created execution environment. labels
The labels assigned the created execution environment for metadata purposes. command
The command to use when starting a flow run. Additional attributes can be added to the BaseVariables
class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n memory_request: int = Field(\n default=1024,\n description=\"Memory allocation for the execution environment.\"\n )\n cpu_request: int = Field(\n default=500, \n description=\"CPU allocation for the execution environment.\"\n )\n
When MyWorkerTemplateVariables
is used in conjunction with MyWorkerConfiguration
from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory_request }}Mi\"\n cpu: \"{{ cpu_request }}m\"\nvariables:\n type: object\n properties:\n name:\n title: Name\n description: Name given to infrastructure created by a worker.\n type: string\n env:\n title: Environment Variables\n description: Environment variables to set when starting a flow run.\n type: object\n additionalProperties:\n type: string\n labels:\n title: Labels\n description: Labels applied to infrastructure created by a worker.\n type: object\n additionalProperties:\n type: string\n command:\n title: Command\n description: The command to use when starting a flow run. In most cases,\n this should be left blank and the command will be automatically generated\n by the worker.\n type: string\n memory_request:\n title: Memory Request\n description: Memory allocation for the execution environment.\n type: integer\n default: 1024\n cpu_request:\n title: CPU Request\n description: CPU allocation for the execution environment.\n type: integer\n default: 500\n
Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables
section of a base job template and validate the template variables provided by the user.
We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker implementation","text":"Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"To implement a worker, you must implement the BaseWorker
class and provide it with the following attributes:
type (required): The type of the worker.
job_configuration (required): The configuration class for the worker.
job_configuration_variables (optional): The template variables class for the worker.
_documentation_url (optional): Link to documentation for the worker.
_logo_url (optional): Link to a logo for the worker.
_description (optional): A description of the worker.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run
","text":"In addition to the attributes above, you must also implement a run
method. The run
method is called for each flow run the worker receives for execution from the work pool.
The run
method has the following signature:
async def run(\n self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> BaseWorkerResult:\n ...\n
The run
method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully.
The run
method must also return a BaseWorkerResult
object. The BaseWorkerResult
object returned contains information about the flow run execution. For the most part, you can subclass BaseWorkerResult
with no modifications, like so:
from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n \"\"\"Result returned by the MyWorker.\"\"\"\n
If you would like to return more information about a flow run, then additional attributes can be added to the BaseWorkerResult
class.
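For example, a result class that also carries a link to the execution's logs might look like the following sketch; the extra log_url attribute is purely illustrative and not part of the BaseWorkerResult interface.
from typing import Optional\n\nfrom prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by MyWorker, extended with an illustrative attribute.\"\"\"\n    # Hypothetical extra field -- not part of BaseWorkerResult itself.\n    log_url: Optional[str] = None\n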
kill_infrastructure
","text":"Workers must implement a kill_infrastructure
method to support flow run cancellation. The kill_infrastructure
method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.
The infrastructure_pid
passed to the kill_infrastructure
method is the same identifier used to mark a flow run execution as started in the run
method. The infrastructure_pid
must be a string, but it can take on any format you choose.
The infrastructure_pid
should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration
passed to the kill_infrastructure
method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
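As a concrete illustration, a worker that runs flow runs as containers on named hosts might compose and parse its identifier like this; the delimiter and fields here are an assumption of this sketch, not a Prefect convention.
from typing import Tuple\n\n# A hypothetical identifier scheme: combine the hostname and container ID into a\n# single string so that kill_infrastructure can locate the container later.\ndef build_infrastructure_pid(hostname: str, container_id: str) -> str:\n    return f\"{hostname}:{container_id}\"\n\ndef parse_infrastructure_pid(infrastructure_pid: str) -> Tuple[str, str]:\n    hostname, container_id = infrastructure_pid.split(\":\", 1)\n    return hostname, container_id\n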
If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure
method should raise an InfrastructureNotFound
or InfrastructureNotAvailable
exception.
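Putting that together, a minimal kill_infrastructure sketch might look like this; the _find_job and _kill_job helpers are hypothetical stand-ins for your worker's own teardown logic.
from prefect.exceptions import InfrastructureNotFound\nfrom prefect.workers.base import BaseJobConfiguration, BaseWorker\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    # job_configuration, run, etc. omitted -- see the full example below\n\n    async def kill_infrastructure(\n        self, infrastructure_pid: str, configuration: BaseJobConfiguration\n    ) -> None:\n        # Hypothetical lookup helper -- your worker would query its own backend.\n        job = await self._find_job(infrastructure_pid)\n        if job is None:\n            raise InfrastructureNotFound(\n                f\"No infrastructure found for identifier {infrastructure_pid!r}.\"\n            )\n        # Hypothetical teardown helper.\n        await self._kill_job(infrastructure_pid, configuration)\n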
Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.
import anyio.abc\nfrom typing import Optional\n\nfrom pydantic import Field\n\nfrom prefect.client.schemas import FlowRun\nfrom prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n        default=\"1024Mi\",\n        description=\"Memory allocation for the execution environment.\",\n        template=\"{{ memory_request }}Mi\"\n    )\n    cpu: str = Field(\n        default=\"500m\",\n        description=\"CPU allocation for the execution environment.\",\n        template=\"{{ cpu_request }}m\"\n    )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n        default=1024,\n        description=\"Memory allocation for the execution environment.\"\n    )\n    cpu_request: int = Field(\n        default=500,\n        description=\"CPU allocation for the execution environment.\"\n    )\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    job_configuration = MyWorkerConfiguration\n    job_configuration_variables = MyWorkerTemplateVariables\n    _documentation_url = \"https://example.com/docs\"\n    _logo_url = \"https://example.com/logo\"\n    _description = \"My worker description.\"\n\n    async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        # Create the execution environment and start execution\n        job = await self._create_and_start_job(configuration)\n\n        if task_status:\n            # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n            # if the flow run is cancelled.\n            task_status.started(job.id)\n\n        # Monitor the execution\n        job_status = await self._watch_job(job, configuration)\n\n        exit_code = job_status.exit_code if job_status else -1  # Get result of execution for reporting\n        return MyWorkerResult(\n            status_code=exit_code,\n            identifier=job.id,\n        )\n\n    async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n        # Tear down the execution environment\n        await self._kill_job(infrastructure_pid, configuration)\n
Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run
method is:
1. Create the execution environment and start the flow run execution.
2. Mark the flow run as started via the passed task_status
object.
3. Monitor the execution.
4. Get the execution's final status from the infrastructure and return a BaseWorkerResult
object.
To see other examples of worker implementations, see the ProcessWorker
and KubernetesWorker
implementations.
Workers can be started via the Prefect CLI by providing the --type
option to the prefect worker start
CLI command. To make your worker type available via the CLI, it must be available at import time.
If your worker is in a package, you can add an entry point to your setup file in the following format:
entry_points={\n \"prefect.collections\": [\n \"my_package_name = my_worker_module\",\n ]\n},\n
Prefect will discover this entry point and load your worker module from the specified package, making your worker type available via the CLI.
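For instance, in a setuptools-based setup.py, the entry point from above fits in as follows; the package and module names are illustrative.
from setuptools import find_packages, setup\n\nsetup(\n    name=\"my_package_name\",  # illustrative package name\n    version=\"0.1.0\",\n    packages=find_packages(),\n    install_requires=[\"prefect>=2.0.0\"],\n    # Registering the module under prefect.collections lets Prefect discover\n    # and import the worker at CLI startup.\n    entry_points={\n        \"prefect.collections\": [\n            \"my_package_name = my_worker_module\",\n        ]\n    },\n)\n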
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/kubernetes/","title":"Running Flows with Kubernetes","text":"This guide will walk you through running your flows on Kubernetes. Though much of the guide is general to any Kubernetes cluster, there are differences between the managed Kubernetes offerings between cloud providers, especially when it comes to container registries and access management. We'll focus on Amazon Elastic Kubernetes Service (EKS).
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#prerequisites","title":"Prerequisites","text":"Before we begin, there are a few pre-requisites:
Prefect is tested against Kubernetes 1.26.0 and newer minor versions.
Administrator Access
Though not strictly necessary, you may want to ensure you have admin access, both in Prefect Cloud and in your cloud provider. Admin access is only necessary during the initial setup and can be downgraded after.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-cluster","title":"Create a cluster","text":"Let's start by creating a new cluster. If you already have one, skip ahead to the next section.
AWS | GCP | Azure
One easy way to get set up with a cluster in EKS is with eksctl
. Node pools can be backed by either EC2 instances or Fargate. Let's choose Fargate so there's less to manage. The following command takes around 15 minutes and must not be interrupted:
# Replace the cluster name with your own value\neksctl create cluster --fargate --name <CLUSTER-NAME>\n\n# Authenticate to the cluster.\naws eks update-kubeconfig --name <CLUSTER-NAME>\n
You can get a GKE cluster up and running with a few commands using the gcloud
CLI. We'll build a bare-bones cluster that is accessible over the open internet - this should not be used in a production environment. To deploy the cluster, your project must have a VPC network configured.
First, authenticate to GCP by setting the following configuration options.
# Authenticate to gcloud\ngcloud auth login\n\n# Specify the project & zone to deploy the cluster to\n# Replace the project name with your GCP project name\ngcloud config set project <GCP-PROJECT-NAME>\ngcloud config set compute/zone <AVAILABILITY-ZONE>\n
Next, deploy the cluster - this command will take ~15 minutes to complete. Once the cluster has been created, authenticate to the cluster.
# Create cluster\n# Replace the cluster name with your own value\ngcloud container clusters create <CLUSTER-NAME> --num-nodes=1 \\\n--machine-type=n1-standard-2\n\n# Authenticate to the cluster\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <AVAILABILITY-ZONE>\n
GCP Gotchas
If the default compute service account for your project has been disabled, cluster creation will fail with an error like the following, and you will need to re-enable the service account:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=Service account \"000000000000-compute@developer.gserviceaccount.com\" is disabled.\n
If an organizational policy restricts external IPs for VM instances, node creation will fail with an error like the one below; the constraints/compute.vmExternalIpAccess constraint can be adjusted on the Organizational Policy
page within IAM.
creation failed: Constraint constraints/compute.vmExternalIpAccess violated for project 000000000000. Add instance projects/<GCP-PROJECT-NAME>/zones/us-east1-b/instances/gke-gke-guide-1-default-pool-c369c84d-wcfl to the constraint to use external IP with it.\n
You can quickly create an AKS cluster using the Azure CLI, or use the Cloud Shell directly from the Azure portal shell.azure.com.
First, authenticate to Azure if not already done.
az login\n
Next, deploy the cluster - this command will take ~4 minutes to complete. Once the cluster has been created, authenticate to the cluster.
# Create a Resource Group at the desired location, e.g. westus\n az group create --name <RESOURCE-GROUP-NAME> --location <LOCATION>\n\n # Create a kubernetes cluster with default kubernetes version, default SKU load balancer (Standard) and default vm set type (VirtualMachineScaleSets)\n az aks create --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n # Configure kubectl to connect to your Kubernetes cluster\n az aks get-credentials --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n # Verify the connection by listing the cluster nodes\n kubectl get nodes\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-container-registry","title":"Create a container registry","text":"Besides a cluster, the other critical resource we'll need is a container registry. A registry is not strictly required, but in most cases you'll want to use custom images and/or have more control over where images are stored. If you already have a registry, skip ahead to the next section.
AWS | GCP | Azure
Let's create a registry using the AWS CLI and authenticate the docker daemon to said registry:
# Replace the image name with your own value\naws ecr create-repository --repository-name <IMAGE-NAME>\n\n# Login to ECR\n# Replace the region and account ID with your own values\naws ecr get-login-password --region <REGION> | docker login \\\n --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com\n
Let's create a registry using the gcloud CLI and authenticate the docker daemon to said registry:
# Create artifact registry repository to host your custom image\n# Replace the repository name with your own value; it can be the \n# same name as your image\ngcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n\n# Authenticate to artifact registry\ngcloud auth configure-docker us-docker.pkg.dev\n
Let's create a registry using the Azure CLI and authenticate the docker daemon to said registry:
# Name must be a lower-case alphanumeric\n# Tier SKU can easily be updated later, e.g. az acr update --name <REPOSITORY-NAME> --sku Standard\naz acr create --resource-group <RESOURCE-GROUP-NAME> \\\n  --name <REPOSITORY-NAME> \\\n  --sku Basic\n\n# Attach ACR to AKS cluster\n# You need Owner, Account Administrator, or Co-Administrator role on your Azure subscription as per Azure docs\naz aks update --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --attach-acr <REPOSITORY-NAME>\n\n# You can verify AKS can now reach ACR\naz aks check-acr --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --acr <REPOSITORY-NAME>.azurecr.io\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-work-pool","title":"Create a Kubernetes work pool","text":"Work pools allow you to manage deployment infrastructure. We'll configure the default values for our Kubernetes base job template. Note that these values can be overridden by individual deployments.
Let's switch to the Prefect Cloud UI, where we'll create a new Kubernetes work pool (alternatively, you could use the Prefect CLI to create a work pool).
Let's look at a few popular configuration options.
Environment Variables
Add environment variables to set when starting a flow run. So long as you are using a Prefect-maintained image and haven't overwritten the image's entrypoint, you can specify Python packages to install at runtime with {\"EXTRA_PIP_PACKAGES\":\"my_package\"}
. For example {\"EXTRA_PIP_PACKAGES\":\"pandas==1.2.3\"}
will install pandas version 1.2.3. Alternatively, you can specify package installation in a custom Dockerfile, which can allow you to take advantage of image caching. As we'll see below, Prefect can help us create a Dockerfile with our flow code and the packages specified in a requirements.txt
file baked in.
Namespace
Set the Kubernetes namespace to create jobs within, such as prefect
. By default, jobs are created in the default namespace.
Image
Specify the Docker container image for created jobs. If not set, the latest Prefect 2 image will be used (i.e. prefecthq/prefect:2-latest
). Note that you can override this on each deployment through job_variables
.
Image Pull Policy
Select from the dropdown options to specify when to pull the image. When using the IfNotPresent
policy, make sure to use unique image tags, as otherwise old images could get cached on your nodes.
Finished Job TTL
Number of seconds before finished jobs are automatically cleaned up by Kubernetes' controller. You may want to set to 60 so that completed flow runs are cleaned up after a minute.
Pod Watch Timeout Seconds
Number of seconds for pod creation to complete before timing out. Consider setting to 300, especially if using a serverless type node pool, as these tend to have longer startup times.
Kubernetes Cluster Config
You can configure the Kubernetes cluster to use for job creation by specifying a KubernetesClusterConfig
block. Generally you should leave the cluster config blank as the worker should be provisioned with appropriate access and permissions. Typically this setting is used when a worker is deployed to a cluster that is different from the cluster where flow runs are executed.
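If you do need one, a KubernetesClusterConfig block can be created from a kubeconfig file. A minimal sketch, assuming the default kubeconfig location; the context name and block name are illustrative.
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Load a cluster config from a kubeconfig file and save it as a block;\n# the context name and block name here are illustrative.\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-eks-context\"\n)\ncluster_config.save(\"my-cluster-config\")\n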
Advanced Settings
Want to modify the default base job template to add other fields or delete existing fields?
Select the Advanced tab and edit the JSON representation of the base job template.
For example, to set a CPU request, add the following section under variables:
\"cpu_request\": {\n \"title\": \"CPU Request\",\n \"description\": \"The CPU allocation to request for this pod.\",\n \"default\": \"default\",\n \"type\": \"string\"\n},\n
Next add the following to the first containers
item under job_configuration
:
...\n\"containers\": [\n {\n ...,\n \"resources\": {\n \"requests\": {\n \"cpu\": \"{{ cpu_request }}\"\n }\n }\n }\n],\n...\n
Running deployments with this work pool will now request the specified CPU.
After configuring the work pool settings, move to the next screen.
Give the work pool a name and save.
Our new Kubernetes work pool should now appear in the list of work pools.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-prefect-cloud-api-key","title":"Create a Prefect Cloud API key","text":"While in the Prefect Cloud UI, create a Prefect Cloud API key if you don't already have one. Click on your profile avatar picture, then click your name to go to your profile settings, click API Keys and hit the plus button to create a new API key here. Make sure to store it safely along with your other passwords, ideally via a password manager.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-a-worker-using-helm","title":"Deploy a worker using Helm","text":"With our cluster and work pool created, it's time to deploy a worker, which will set up Kubernetes infrastructure to run our flows. The best way to deploy a worker is using the Prefect Helm Chart.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#add-the-prefect-helm-repository","title":"Add the Prefect Helm repository","text":"Add the Prefect Helm repository to your Helm client:
helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-namespace","title":"Create a namespace","text":"Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:
kubectl create namespace prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-secret-for-the-prefect-api-key","title":"Create a Kubernetes secret for the Prefect API key","text":"kubectl create secret generic prefect-api-key \\\n--namespace=prefect --from-literal=key=your-prefect-cloud-api-key\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#configure-helm-chart-values","title":"Configure Helm chart values","text":"Create a values.yaml
file to customize the Prefect worker configuration. Add the following contents to the file:
worker:\n cloudApiConfig:\n accountId: <target account ID>\n workspaceId: <target workspace ID>\n config:\n workPool: <target work pool name>\n
These settings will ensure that the worker connects to the proper account, workspace, and work pool.
View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-helm-release","title":"Create a Helm release","text":"Let's install the Prefect worker using the Helm chart with your custom values.yaml
file:
helm install prefect-worker prefect/prefect-worker \\\n --namespace=prefect \\\n -f values.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#verify-deployment","title":"Verify deployment","text":"Check the status of your Prefect worker deployment:
kubectl get pods -n prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-flow","title":"Define a flow","text":"Let's start simple with a flow that just logs a message. In a directory named flows
, create a file named hello.py
with the following contents:
from prefect import flow, get_run_logger, tags\n\n@flow\ndef hello(name: str = \"Marvin\"):\n logger = get_run_logger()\n logger.info(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n with tags(\"local\"):\n hello()\n
Run the flow locally with python hello.py
to verify that it works. Note that we use the tags
context manager to tag the flow run as local
. This step is not required, but does add some helpful metadata.
Prefect has two recommended options for creating a deployment with dynamic infrastructure. You can define a deployment in a Python script using the flow.deploy
mechanics or in a prefect.yaml
definition file. The prefect.yaml
file currently allows for more customization in terms of push and pull steps. Kubernetes objects are defined in YAML, so we expect many teams using Kubernetes work pools to create their deployments with YAML as well. To learn about the Python deployment creation method with flow.deploy
refer to the Workers & Work Pools tutorial page.
The prefect.yaml
file is used by the prefect deploy
command to deploy our flows. As a part of that process it will also build and push our image. Create a new file named prefect.yaml
with the following contents:
# Generic metadata about this project\nname: flows\nprefect-version: 2.13.8\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.4.0\n image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n tag: latest\n dockerfile: auto\n platform: \"linux/amd64\"\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n requires: prefect-docker>=0.4.0\n image_name: \"{{ build-image.image_name }}\"\n tag: \"{{ build-image.tag }}\"\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n directory: /opt/prefect/flows\n\n# the definitions section allows you to define reusable components for your deployments\ndefinitions:\n tags: &common_tags\n - \"eks\"\n work_pool: &common_work_pool\n name: \"kubernetes\"\n job_variables:\n image: \"{{ build-image.image }}\"\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: \"default\"\n tags: *common_tags\n schedule: null\n entrypoint: \"flows/hello.py:hello\"\n work_pool: *common_work_pool\n\n- name: \"arthur\"\n tags: *common_tags\n schedule: null\n entrypoint: \"flows/hello.py:hello\"\n parameters:\n name: \"Arthur\"\n work_pool: *common_work_pool\n
We define two deployments of the hello
flow: default
and arthur
. Note that by specifying dockerfile: auto
, Prefect will automatically create a dockerfile that installs any requirements.txt
and copies over the current directory. You can pass a custom Dockerfile instead with dockerfile: Dockerfile
or dockerfile: path/to/Dockerfile
. Also note that we are specifically building for the linux/amd64
platform. This specification is often necessary when images are built on Macs with M series chips but run on cloud provider instances.
Deployment specific build, push, and pull
The build, push, and pull steps can be overridden for each deployment. This allows for more custom behavior, such as specifying a different image for each deployment.
Let's make sure we define our requirements in a requirements.txt
file:
prefect>=2.13.8\nprefect-docker>=0.4.0\nprefect-kubernetes>=0.3.1\n
The directory should now look something like this:
.\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 flows\n \u251c\u2500\u2500 requirements.txt\n \u2514\u2500\u2500 hello.py\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#tag-images-with-a-git-sha","title":"Tag images with a Git SHA","text":"If your code is stored in a GitHub repository, it's good practice to tag your images with the Git SHA of the code used to build it. This can be done in the prefect.yaml
file with a few minor modifications, and isn't yet an option with the Python deployment creation method. Let's use the run_shell_script
command to grab the SHA and pass it to the tag
parameter of build_docker_image
:
build:\n- prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.4.0\n image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n platform: \"linux/amd64\"\n
Let's also set the SHA as a tag for easy identification in the UI:
definitions:\n tags: &common_tags\n - \"eks\"\n - \"{{ get-commit-hash.stdout }}\"\n work_pool: &common_work_pool\n name: \"kubernetes\"\n job_variables:\n image: \"{{ build-image.image }}\"\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#authenticate-to-prefect","title":"Authenticate to Prefect","text":"Before we deploy the flows to Prefect, we will need to authenticate via the Prefect CLI. We will also need to ensure that all of our flow's dependencies are present at deploy
time.
This example uses a virtual environment to ensure consistency across environments.
# Create a virtualenv & activate it\nvirtualenv prefect-demo\nsource prefect-demo/bin/activate\n\n# Install dependencies of your flow\nprefect-demo/bin/pip install -r requirements.txt\n\n# Authenticate to Prefect & select the appropriate \n# workspace to deploy your flows to\nprefect-demo/bin/prefect cloud login\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-the-flows","title":"Deploy the flows","text":"Now we're ready to deploy our flows which will build our images. The image name determines which registry it will end up in. We have configured our prefect.yaml
file to get the image name from the PREFECT_IMAGE_NAME
environment variable, so let's set that first:
export PREFECT_IMAGE_NAME=<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-NAME>\n
export PREFECT_IMAGE_NAME=us-docker.pkg.dev/<GCP-PROJECT-NAME>/<REPOSITORY-NAME>/<IMAGE-NAME>\n
export PREFECT_IMAGE_NAME=<REPOSITORY-NAME>.azurecr.io/<IMAGE-NAME>\n
To deploy your flows, ensure your Docker daemon is running first. Deploy all the flows with prefect deploy --all
or deploy them individually by name: prefect deploy -n hello/default
or prefect deploy -n hello/arthur
.
Once the deployments are successfully created, we can run them from the UI or the CLI:
prefect deployment run hello/default\nprefect deployment run hello/arthur\n
Congratulations! You just ran two deployments in Kubernetes. Head over to the UI to check their status!
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/overriding-job-variables/","title":"Deeper Dive: Overriding Work Pool Job Variables","text":"As described in the Deploying Flows to Work Pools and Workers guide, there are two ways to deploy flows to work pools: using a prefect.yaml
file or using the .deploy()
method.
In both cases, you can override job variables on a work pool for a given deployment.
While exactly which job variables are available to be overridden depends on the type of work pool you're using at a given time, this guide will explore some common patterns for overriding job variables in both deployment methods.
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#background","title":"Background","text":"First of all, what are \"job variables\"?
Job variables are infrastructure-related values that are configurable on a work pool, which may be relevant to how your flow run executes on your infrastructure.
Let's use env
- the only job variable that is configurable for all work pool types - as an example.
When you create or edit a work pool, you can specify a set of environment variables that will be set in the runtime environment of the flow run.
For example, you might want a certain deployment to have the following environment variables available:
{\n    \"EXECUTION_ENV\": \"staging\",\n    \"MY_NOT_SO_SECRET_CONFIG\": \"plumbus\"\n}\n
Rather than hardcoding these values into your work pool in the UI and making them available to all deployments associated with that work pool, you can override these values on a per-deployment basis.
Let's look at how to do that.
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables","title":"How to override job variables","text":"Say we have the following repo structure:
\u00bb tree\n.\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements.txt\n\u251c\u2500\u2500 demo_project\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 demo_flow.py\n
... and we have a demo_project/demo_flow.py
file like this:
import os\nfrom prefect import flow, task\n\n@task\ndef do_something_important(not_so_secret_value: str) -> None:\n print(f\"Doing something important with {not_so_secret_value}!\")\n\n@flow(log_prints=True)\ndef some_work():\n environment = os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\")\n\n print(f\"Coming to you live from {environment}!\")\n\n not_so_secret_value = os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n\n if not_so_secret_value is None:\n raise ValueError(\"You forgot to set MY_NOT_SO_SECRET_CONFIG!\")\n\n do_something_important(not_so_secret_value)\n
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-a-prefectyaml-file","title":"Using a prefect.yaml
file","text":"In this case, let's also say we have the following deployment definition in a prefect.yaml
file at the root of our repository:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n schedule: null\n
Note
While not the focus of this guide, note that this deployment definition uses a default \"global\" pull
step, because one is not explicitly defined on the deployment. For reference, here's what that would look like at the top of the prefect.yaml
file:
pull:\n- prefect.deployments.steps.git_clone: &clone_repo\n repository: https://github.com/some-user/prefect-monorepo\n branch: main\n
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#hard-coded-job-variables","title":"Hard-coded job variables","text":"To provide the EXECUTION_ENVIRONMENT
and MY_NOT_SO_SECRET_CONFIG
environment variables to this deployment, we can add a job_variables
section to our deployment definition in the prefect.yaml
file:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n job_variables:\n env:\n EXECUTION_ENVIRONMENT: staging\n MY_NOT_SO_SECRET_CONFIG: plumbus\n schedule: null\n
... and then run prefect deploy -n demo-deployment
to deploy the flow with these job variables.
We should then be able to see the job variables in the Configuration
tab of the deployment in the UI:
If you want to use environment variables that are already set in your local environment, you can template these in the prefect.yaml
file using the {{ $ENV_VAR_NAME }}
syntax:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n job_variables:\n env:\n EXECUTION_ENVIRONMENT: \"{{ $EXECUTION_ENVIRONMENT }}\"\n MY_NOT_SO_SECRET_CONFIG: \"{{ $MY_NOT_SO_SECRET_CONFIG }}\"\n schedule: null\n
Note
This assumes that the machine where prefect deploy
is run would have these environment variables set.
export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n
As before, run prefect deploy -n demo-deployment
to deploy the flow with these job variables, and you should see them in the UI under the Configuration
tab.
.deploy()
method","text":"If you're using the .deploy()
method to deploy your flow, the process is similar, but instead of having your prefect.yaml
file define the job variables, you can pass them as a dictionary to the job_variables
argument of the .deploy()
method.
We could add the following block to our demo_project/demo_flow.py
file from the setup section:
if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/zzstoatzz/prefect-monorepo.git\",\n entrypoint=\"src/demo_project/demo_flow.py:some_work\"\n ).deploy(\n name=\"demo-deployment\",\n work_pool_name=\"local\", # can only .deploy() to a local work pool in prefect>=2.15.1\n job_variables={\n \"env\": {\n \"EXECUTION_ENVIRONMENT\": os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\"),\n \"MY_NOT_SO_SECRET_CONFIG\": os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n }\n }\n )\n
Note
The above example works assuming a couple of things:
- the machine where this script is run has these environment variables set:
export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n
- the file src/demo_project/demo_flow.py
already exists in the repository at the specified entrypoint path.
Running this script with something like:
python demo_project/demo_flow.py\n
... will deploy the flow with the specified job variables, which should then be visible in the UI under the Configuration
tab.
Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in AWS ECS tasks, Azure Container Instances, Google Cloud Run jobs, and Modal.
In this guide you will create a push work pool, provision the infrastructure it needs (either automatically or manually), and run flows on serverless infrastructure without running a worker.
You can automatically provision infrastructure and create your push work pool using the prefect work-pool create
CLI command with the --provision-infra
flag. This approach greatly simplifies the setup process.
Let's explore automatic infrastructure provisioning for push work pools first, and then we'll cover how to manually set up your push work pool.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatic-infrastructure-provisioning","title":"Automatic infrastructure provisioning","text":"With Perfect Cloud you can provision infrastructure for use with an AWS ECS, Google Cloud Run, ACI push work pool. Push work pools in Prefect Cloud simplify the setup and management of the infrastructure necessary to run your flows. However, setting up infrastructure on your cloud provider can still be a time-consuming process. Prefect can dramatically simplify this process by automatically provisioning the necessary infrastructure for you.
We'll use the prefect work-pool create
CLI command with the --provision-infra
flag to automatically provision your serverless cloud resources and set up your Prefect workspace to use a new push pool.
To use automatic infrastructure provisioning, you'll need to have the relevant cloud CLI library installed and to have authenticated with your cloud provider.
AWS ECS | Azure Container Instances | Google Cloud Run | Modal
Install the AWS CLI, authenticate with your AWS account, and set a default region.
If you already have the AWS CLI installed, be sure to update to the latest version.
You will need the following permissions in your authenticated AWS account:
IAM Permissions:
Amazon ECS Permissions:
Amazon EC2 Permissions:
Amazon ECR Permissions:
If you want to use AWS managed policies, you can use the following:
Note that the above policies will give you all the permissions needed, but are more permissive than necessary.
Docker is also required to build and push images to your registry. You can install Docker here.
Install the Azure CLI and authenticate with your Azure account.
If you already have the Azure CLI installed, be sure to update to the latest version with az upgrade
.
You will also need the following roles in your Azure subscription:
Docker is also required to build and push images to your registry. You can install Docker here.
Install the gcloud CLI and authenticate with your GCP project.
If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update
.
You will also need the following permissions in your GCP project:
Docker is also required to build and push images to your registry. You can install Docker here.
Install modal
by running:
pip install modal\n
Create a Modal API token by running:
modal token new\n
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatically-creating-a-new-push-work-pool-and-provisioning-infrastructure","title":"Automatically creating a new push work pool and provisioning infrastructure","text":"Here's the command to create a new push work pool and configure the necessary infrastructure.
AWS ECS | Azure Container Instances | Google Cloud Run | Modal
prefect work-pool create --type ecs:push --provision-infra my-ecs-pool\n
Using the --provision-infra
flag will automatically set up your default AWS account to be ready to execute flows via ECS tasks. In your AWS account, this command will create a new IAM user, IAM policy, ECS cluster that uses AWS Fargate, VPC, and ECR repository if they don't already exist. In your Prefect workspace, this command will create an AWSCredentials
block for storing the generated credentials.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-ecs-pool will require: \u2502\n\u2502 \u2502\n\u2502 - Creating an IAM user for managing ECS tasks: prefect-ecs-user \u2502\n\u2502 - Creating and attaching an IAM policy for managing ECS tasks: prefect-ecs-policy \u2502\n\u2502 - Storing generated AWS credentials in a block \u2502\n\u2502 - Creating an ECS cluster for running Prefect flows: prefect-ecs-cluster \u2502\n\u2502 - Creating a VPC with CIDR 172.31.0.0/16 for running ECS tasks: prefect-ecs-vpc \u2502\n\u2502 - Creating an ECR repository for storing Prefect images: prefect-flows \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nProvisioning IAM user\nCreating IAM policy\nGenerating AWS credentials\nCreating AWS credentials block\nProvisioning ECS cluster\nProvisioning VPC\nCreating internet gateway\nSetting up subnets\nSetting up security group\nProvisioning ECR repository\nAuthenticating with ECR\nSetting default Docker build namespace\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-ecs-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new ECR repository and the default Docker build namespace will be set to the URL of the registry.
While the default namespace is set, you will not need to provide the registry URL when building images as part of your deployment process.
To take advantage of this, you can write your deploy scripts like this:
example_deploy_script.py
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running in an ECS task!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=DeploymentImage(\n            name=\"my-repository:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n
This will build an image with the tag <ecr-registry-url>/my-image:latest
and push it to the registry.
Your image name will need to match the name of the repository created with your work pool. You can create new repositories in the ECR console.
prefect work-pool create --type azure-container-instance:push --provision-infra my-aci-pool\n
Using the --provision-infra
flag will automatically set up your default Azure account to be ready to execute flows via Azure Container Instances. In your Azure account, this command will create a resource group, an app registration, and a service account with the necessary permissions, generate a secret for the app registration, and create an Azure Container Registry, if they don't already exist. In your Prefect workspace, this command will create an AzureContainerInstanceCredentials
block for storing the client secret value from the generated secret.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-aci-work-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in subscription Azure subscription 1 \u2502\n\u2502 \u2502\n\u2502 - Create a resource group in location eastus \u2502\n\u2502 - Create an app registration in Azure AD prefect-aci-push-pool-app \u2502\n\u2502 - Create/use a service principal for app registration \u2502\n\u2502 - Generate a secret for app registration \u2502\n\u2502 - Create an Azure Container Registry with prefix prefect \u2502\n\u2502 - Create an identity prefect-acr-identity to allow access to the created registry \u2502\n\u2502 - Assign Contributor role to service account \u2502\n\u2502 - Create an ACR registry for image hosting \u2502\n\u2502 - Create an identity for Azure Container Instance to allow access to the registry \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create Azure Container Instance credentials block aci-push-pool-credentials \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: \nCreating resource group\nCreating app registration\nGenerating secret for app registration\nCreating ACI credentials block\nACI credentials block 'aci-push-pool-credentials' created in Prefect Cloud\nAssigning Contributor role to service account\nCreating Azure Container Registry\nCreating identity\nProvisioning infrastructure... \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned for 'my-aci-work-pool' work pool!\nCreated work pool 'my-aci-work-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new Azure Container Registry and the default Docker build namespace will be set to the URL of the registry.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the registry.
To take advantage of this functionality, you can write your deploy scripts like this:
example_deploy_script.py
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running on an Azure Container Instance!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=DeploymentImage(\n            name=\"my-image:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n
This will build an image with the tag <acr-registry-url>/my-image:latest
and push it to the registry.
prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n
Using the --provision-infra
flag will allow you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials
block for storing the service account key.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in GCP project central-kit-405415 in region us-central1 \u2502\n\u2502 \u2502\n\u2502 - Activate the Cloud Run API for your project \u2502\n\u2502 - Activate the Artifact Registry API for your project \u2502\n\u2502 - Create an Artifact Registry repository named prefect-images \u2502\n\u2502 - Create a service account for managing Cloud Run jobs: prefect-cloud-run \u2502\n\u2502 - Service account will be granted the following roles: \u2502\n\u2502 - Service Account User \u2502\n\u2502 - Cloud Run Developer \u2502\n\u2502 - Create a key for service account prefect-cloud-run \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create GCP credentials block my--pool-push-pool-credentials to store the service account key \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.
To take advantage of this functionality, you can write your deploy scripts like this:
example_deploy_script.py
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-cloud-run-pool\",\n        image=DeploymentImage(\n            name=\"my-image:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n
This will build an image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest
and push it to the repository.
prefect work-pool create --type modal:push --provision-infra my-modal-pool \n
Using the --provision-infra
flag will trigger the creation of a ModalCredentials
block in your Prefect Cloud workspace. This block will store your Modal API token, which is used to authenticate with Modal's API. By default, the token for your current Modal profile will be used for the new ModalCredentials
block. If Prefect is unable to discover a Modal API token for your current profile, you will be prompted to create a new one.
That's it! You're ready to create and schedule deployments that use your new push work pool. Reminder that no worker is needed to run flows with a push work pool.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#using-existing-resources-with-automatic-infrastructure-provisioning","title":"Using existing resources with automatic infrastructure provisioning","text":"If you already have the necessary infrastructure set up, Prefect will detect that upon work pool creation and the infrastructure provisioning for that resource will be skipped.
For example, here's how prefect work-pool create my-work-pool --provision-infra
looks when existing Azure resources are detected:
Proceed with infrastructure provisioning? [y/n]: y\nCreating resource group\nResource group 'prefect-aci-push-pool-rg' already exists in location 'eastus'.\nCreating app registration\nApp registration 'prefect-aci-push-pool-app' already exists.\nGenerating secret for app registration\nProvisioning infrastructure\nACI credentials block 'bb-push-pool-credentials' created\nAssigning Contributor role to service account...\nService principal with object ID '4be6fed7-...' already has the 'Contributor' role assigned in \n'/subscriptions/.../'\nCreating Azure Container Instance\nContainer instance 'prefect-aci-push-pool-container' already exists.\nCreating Azure Container Instance credentials block\nProvisioning infrastructure... \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-work-pool'!\n
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#provisioning-infrastructure-for-an-existing-push-work-pool","title":"Provisioning infrastructure for an existing push work pool","text":"If you already have a push work pool set up, but haven't configured the necessary infrastructure, you can use the provision-infra
sub-command to provision the infrastructure for that work pool. For example, you can run the following command if you have a work pool named \"my-work-pool\".
prefect work-pool provision-infra my-work-pool\n
Prefect will create the necessary infrastructure for the my-work-pool
work pool and provide you with a summary of the changes to be made:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-work-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in subscription Azure subscription 1 \u2502\n\u2502 \u2502\n\u2502 - Create a resource group in location eastus \u2502\n\u2502 - Create an app registration in Azure AD prefect-aci-push-pool-app \u2502\n\u2502 - Create/use a service principal for app registration \u2502\n\u2502 - Generate a secret for app registration \u2502\n\u2502 - Assign Contributor role to service account \u2502\n\u2502 - Create Azure Container Instance 'aci-push-pool-container' in resource group prefect-aci-push-pool-rg \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create Azure Container Instance credentials block aci-push-pool-credentials \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\n
This command can speed up your infrastructure setup process.
As with the examples above, you will need to have the related cloud CLI library installed and be authenticated with your cloud provider.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#manual-infrastructure-provisioning","title":"Manual infrastructure provisioning","text":"If you prefer to set up your infrastructure manually, don't include the --provision-infra
flag in the CLI command. In the examples below, we'll create a push work pool via the Prefect Cloud UI.
To push work to ECS, AWS credentials are required.
Create a user and attach the AmazonECS_FullAccess permissions.
From that user's page, create credentials and store them somewhere safe for use in the next section.
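If you prefer the AWS CLI, a rough equivalent sketch (the user name prefect-ecs-user is a hypothetical placeholder):
aws iam create-user --user-name prefect-ecs-user\naws iam attach-user-policy --user-name prefect-ecs-user --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess\naws iam create-access-key --user-name prefect-ecs-user\n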
To push work to Azure, an Azure subscription, resource group, and tenant secret are required.
Create Subscription and Resource Group
Create App Registration
Add App Registration to Resource Group
A GCP service account and an API key are required to push work to Cloud Run.
Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.
The service account must have two roles at a minimum, Cloud Run Developer, and Service Account User.
Once the service account is created, navigate to its Keys page to add an API key. Create a JSON type key, download it, and store it somewhere safe for use in the next section.
A Modal API token is required to push work to Modal.
Create a Modal API token by navigating to Settings in the Modal UI. In the API Tokens section of the Settings page, click New Token.
Copy the token ID and token secret and store them somewhere safe for use in the next section.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#work-pool-configuration","title":"Work pool configuration","text":"Our push work pool will store information about what type of infrastructure our flow will run on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-credentials-block","title":"Creating a Credentials block","text":"AWS ECSAzure Container InstancesGoogle Cloud RunModalNavigate to the blocks page, click create new block, and select AWS Credentials for the type.
For use in a push work pool, this block must have the region and cluster name filled out, in addition to access key and access key secret.
Provide any other optional information and create your block.
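Alternatively, a minimal sketch of creating a similar block in Python, assuming prefect-aws is installed and using placeholder credential values:
from prefect_aws import AwsCredentials\n\n# Placeholder values; use the access key created in the previous section\nAwsCredentials(\n    aws_access_key_id=\"AKIA...\",\n    aws_secret_access_key=\"...\",\n    region_name=\"us-east-1\",\n).save(\"my-aws-creds-block\")\n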
Navigate to the blocks page and click the \"+\" at the top to create a new block. Find the Azure Container Instance Credentials block and click \"Add +\".
Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier. Be sure to use the value of the secret, not the secret ID!
Provide any other optional information and click \"Create\".
Navigate to the blocks page, click create new block, and select GCP Credentials for the type.
For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field.
Provide any other optional information and create your block.
Navigate to the blocks page, click create new block, and select Modal Credentials for the type.
For use in a push work pool, this block must have the token ID and token secret stored in the Token ID and Token Secret fields, respectively.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-push-work-pool","title":"Creating a push work pool","text":"Now navigate to the work pools page. Click Create to start configuring your push work pool by selecting a push option in the infrastructure type step.
AWS ECSAzure Container InstancesGoogle Cloud RunModalEach step has several optional fields that are detailed in the work pools documentation. Select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.
Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.
Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.
Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the Modal Credentials field. This will allow Prefect Cloud to securely interact with your Modal account.
Create your pool and you are ready to deploy flows to your push work pool.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#deployment","title":"Deployment","text":"Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml
file, the deployment would contain:
work_pool:\n name: my-push-pool\n
Deploying your flow to the my-push-pool
work pool will ensure that runs that are ready for execution will be submitted immediately, without the need for a worker to poll for them.
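For example, a sketch of deploying from the CLI under the assumption of a hypothetical entrypoint flow.py:my_flow and deployment name my-deployment:
prefect deploy flow.py:my_flow -n my-deployment -p my-push-pool\n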
Serverless infrastructure may require a certain image architecture
Note that serverless infrastructure may assume a certain Docker image architecture; for example, Google Cloud Run will fail to run images built with linux/arm64
architecture. If using Prefect to build your image, you can change the image architecture through the platform
keyword (e.g., platform=\"linux/amd64\"
).
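For example, a minimal sketch using flow.deploy, where the deployment, image, and work pool names are placeholders:
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from a serverless flow run!\")\n\n\nif __name__ == \"__main__\":\n    # platform is forwarded to the Docker build so the image targets amd64,\n    # even when building on an arm64 machine such as an Apple Silicon Mac\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-push-pool\",\n        image=DeploymentImage(\n            name=\"my-registry/my-image\",\n            tag=\"latest\",\n            platform=\"linux/amd64\",\n        ),\n    )\n
Cloud Run can then pull and run the amd64 image regardless of the architecture of the machine that built it.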
With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/serverless-workers/","title":"Run Deployments on Serverless Infrastructure with Prefect Workers","text":"Prefect provides hybrid work pools for workers to run flows on the serverless platforms of major cloud providers. The following options are available:
Push work pools don't require a worker
Options for push work pool versions of AWS ECS, Azure Container Instances, and Google Cloud Run that do not require a worker are available with Prefect Cloud. These push work pool options require connection configuration information to be stored on Prefect Cloud. Read more in the Serverless Push Work Pool Guide.
This is a brief overview of the options to run workflows on serverless infrastructure. For in-depth guides, see the Prefect integration libraries:
prefect-aws
docsprefect-gcp
docs.Choosing between Google Cloud Run and Google Vertex AI
Google Vertex AI is well-suited for machine learning model training applications in which GPUs or TPUs and high resource levels are desired.
","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/serverless-workers/#steps","title":"Steps","text":"Options for push versions on AWS ECS, Azure Container Instances, and Google Cloud Run work pools that do not require a worker are available with Prefect Cloud. Read more in the Serverless Push Work Pool Guide.
Learn more about workers and work pools in the Prefect concept documentation.
","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/storage-guide/","title":"Where to Store Your Flow Code","text":"When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.
This guide discusses storage options with a focus on deployments created with the interactive CLI experience or a prefect.yaml
file. If you'd like to create your deployments using Python code, see the discussion of flow code storage on the .deploy
tab of Deploying Flows to Work pools and Workers guide.
Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.
To create a deployment with local storage and a Local Subprocess work pool, do the following:
prefect deploy
from the root of the directory containing your flow code.You are then shown the location that your flow code will be fetched from when a flow is run. For example:
Your Prefect workers will attempt to load your flow from: \n/my-path/my-flow-file.py. To see more options for managing your flow's code, run:\n\n $ prefect init\n
When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-2-git-based-storage","title":"Option 2: Git-based storage","text":"Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.
GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#creating-a-deployment-with-git-based-storage","title":"Creating a deployment with git-based storage","text":"Run prefect deploy
from the root directory of the git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:
? Your Prefect workers will need access to this flow's code in order to run it. \nWould you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y): \n? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y): \n? Is main the correct branch to pull your flow code from? [y/n] (y): \n? Is this a private repository? [y/n]: y\n
In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.
If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.
? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: \"123_abc_this_is_my_token\"\n
Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".
Creating access tokens differs for each provider.
GitHubBitbucketGitLabWe recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).
Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.
We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.
You can create a Repository Access Token with Scopes->Repositories->Read.
Bitbucket requires you prepend the token string with x-token-auth:
So the full string looks like x-token-auth:abc_123_this_is_my_token
.
We recommend using HTTPS with Project Access Tokens.
In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.
If you want to configure a Secret block ahead of time, create the block via code or the Prefect UI and reference it in your prefect.yaml
file.
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/my-private-repo.git\n access_token: \"{{ prefect.blocks.secret.my-block-name }}\"\n
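For example, a minimal sketch of creating that Secret block via code, where my-block-name and the token value are placeholders:
from prefect.blocks.system import Secret\n\nSecret(value=\"123_abc_this_is_my_token\").save(\"my-block-name\")\n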
Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml
pull step.
pip install -U prefect-github
prefect block register -m prefect_github
.pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/discdiver/my-private-repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-block-name }}\"\n
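A minimal sketch of creating such a credentials block in Python, assuming prefect-github is installed and using a placeholder token:
from prefect_github import GitHubCredentials\n\nGitHubCredentials(token=\"my_token\").save(\"my-block-name\")\n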
pip install -U prefect-bitbucket
prefect block register -m prefect_bitbucket
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/my-private-repo.git\n credentials: \"{{ prefect.blocks.bitbucket-credentials.my-block-name }}\"\n
pip install -U prefect-gitlab
prefect block register -m prefect_gitlab
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://gitlab.com/org/my-private-repo.git\n credentials: \"{{ prefect.blocks.gitlab-credentials.my-block-name }}\"\n
Push your code
When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is an intentional one to avoid confusion about the git history and push process.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-3-docker-based-storage","title":"Option 3: Docker-based storage","text":"Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:
Push-based serverless cloud-based options (no worker required)
Run prefect init
in the root of your repository and choose docker
for the project name and answer the prompts to create a prefect.yaml
file with a build step that will create a Docker image with the flow code built in. See the Workers and Work Pools page of the tutorial for more info.
prefect deploy
from the root of your repository to create a deployment. CI/CD may not require push or pull steps
You don't need push or pull steps in the prefect.yaml
file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.
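For example, a rough prefect.yaml sketch under that approach, where the names and image reference are placeholders:
deployments:\n- name: my-deployment\n  entrypoint: flow.py:my_flow\n  work_pool:\n    name: my-docker-pool\n    job_variables:\n      image: my-registry/my-image:my-tag\n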
You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push
and pull
steps of your prefect.yaml
file.
To create a templated prefect.yaml
file run prefect init
and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml
file.
Choose s3
as the recipe and enter the bucket name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_aws.deployments.steps.push_to_s3:\n id: push_code\n requires: prefect-aws>=0.3.4\n bucket: my-bucket\n folder: my-folder\n credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n id: pull_code\n requires: prefect-aws>=0.3.4\n bucket: '{{ push_code.bucket }}'\n folder: '{{ push_code.folder }}'\n credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private \n
If the bucket requires authentication to access it, you can do the following:
pip install -U prefect-aws
prefect block register -m prefect_aws
Choose azure
as the recipe and enter the container name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_azure.deployments.steps.push_to_azure_blob_storage:\n id: push_code\n requires: prefect-azure>=0.2.8\n container: my-prefect-azure-container\n folder: my-folder\n credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n id: pull_code\n requires: prefect-azure>=0.2.8\n container: '{{ push_code.container }}'\n folder: '{{ push_code.folder }}'\n credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n
If the blob requires authentication to access it, you can do the following:
pip install -U prefect-azure
prefect block register -m prefect_azure
Choose gcs as the recipe and enter the bucket name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_gcp.deployment.steps.push_to_gcs:\n id: push_code\n requires: prefect-gcp>=0.4.3\n bucket: my-bucket\n folder: my-folder\n credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_gcp.deployment.steps.pull_from_gcs:\n id: pull_code\n requires: prefect-gcp>=0.4.3\n bucket: '{{ push_code.bucket }}'\n folder: '{{ push_code.folder }}'\n credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n
If the bucket requires authentication to access it, you can do the following:
pip install -U prefect-gcp
prefect block register -m prefect_gcp
Another option for authentication is for the worker to have access to the storage location at runtime via SSH keys.
Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER
:
push:\n - prefect_gcp.deployment.steps.push_to_gcs:\n id: push_code\n requires: prefect-gcp>=0.4.3\n bucket: my-bucket\n folder: '{{ $CUSTOM_FOLDER }}'\n
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#including-and-excluding-files-from-storage","title":"Including and excluding files from storage","text":"By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.
When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.
.gitignore
file. .dockerignore
file. .prefectignore
file serves the same purpose and follows a similar syntax as those files. So an entry of *.pyc
will exclude all .pyc
files from upload.In earlier versions of Prefect storage blocks were the recommended way to store flow code. Storage blocks are still supported, but not recommended.
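For example, a short hypothetical .prefectignore:
__pycache__/\n*.pyc\n.env\ndata/\n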
As shown above, repositories can be referenced directly through interactive prompts with prefect deploy
or in a prefect.yaml
. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.
You've seen options for where to store your flow code.
We recommend using Docker-based storage or git-based storage for your production deployments.
Check out more guides to reach your goals with Prefect.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"integrations/","title":"Integrations","text":"Prefect integrations are organized into collections of pre-built tasks, flows, blocks and more that are installable as PyPI packages.
Airbyte
Maintained by Prefect
Alert
Maintained by Khuyen Tran
AWS
Maintained by Prefect
Azure
Maintained by Prefect
Bitbucket
Maintained by Prefect
Census
Maintained by Prefect
Coiled
Maintained by Coiled
CubeJS
Maintained by Alessandro Lollo
Dask
Maintained by Prefect
Databricks
Maintained by Prefect
dbt
Maintained by Prefect
Docker
Maintained by Prefect
Earthdata
Maintained by Giorgio Basile
Email
Maintained by Prefect
Firebolt
Maintained by Prefect
Fivetran
Maintained by Fivetran
Fugue
Maintained by The Fugue Development Team
GCP
Maintained by Prefect
GitHub
Maintained by Prefect
GitLab
Maintained by Prefect
Google Sheets
Maintained by Stefano Cascavilla
Great Expectations
Maintained by Prefect
HashiCorp Vault
Maintained by Pavel Chekin
Hex
Maintained by Prefect
Hightouch
Maintained by Prefect
Jupyter
Maintained by Prefect
Kubernetes
Maintained by Prefect
KV
Maintained by Zanie Blue
MetricFlow
Maintained by Alessandro Lollo
Monday
Maintained by Prefect
MonteCarlo
Maintained by Prefect
OpenAI
Maintained by Prefect
OpenMetadata
Maintained by Prefect
Planetary Computer
Maintained by Giorgio Basile
Ray
Maintained by Prefect
Shell
Maintained by Prefect
Sifflet
Maintained by Sifflet and Alessandro Lollo
Slack
Maintained by Prefect
Snowflake
Maintained by Prefect
Soda Cloud
Maintained by Alessandro Lollo
Soda Core
Maintained by Soda and Alessandro Lollo
Spark on Kubernetes
Maintained by Manoj Babu Katragadda
SQLAlchemy
Maintained by Prefect
Stitch
Maintained by Alessandro Lollo
Transform
Maintained by Alessandro Lollo
Twitter
Maintained by Prefect
","tags":["tasks","flows","blocks","collections","task library","integrations","Airbyte","Alert","AWS","Azure","Bitbucket","Census","Coiled","CubeJS","Dask","Databricks","dbt","Docker","Earthdata","Email","Firebolt","Fivetran","Fugue","GCP","GitHub","GitLab","Google Sheets","Great Expectations","HashiCorp Vault","Hex","Hightouch","Jupyter","Kubernetes","KV","MetricFlow","Monday","MonteCarlo","OpenAI","OpenMetadata","Planetary Computer","Ray","Shell","Sifflet","Slack","Snowflake","Soda Cloud","Soda Core","Spark on Kubernetes","SQLAlchemy","Stitch","Transform","Twitter"],"boost":2},{"location":"integrations/contribute/","title":"Contribute","text":"We welcome contributors! You can help contribute blocks and integrations by following these steps.
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-blocks","title":"Contributing Blocks","text":"Building your own custom block is simple!
Block
.Attributes
and Example
section in the docstring._logo_url
to point to a relevant image.pydantic.Field
s of the block with a type annotation, default
or default_factory
, and a short description about the field.For example, this is how the Secret block is implemented:
from pydantic import Field, SecretStr\nfrom prefect.blocks.core import Block\n\nclass Secret(Block):\n \"\"\"\n A block that represents a secret value. The value stored in this block will be obfuscated when\n this block is logged or shown in the UI.\n\n Attributes:\n value: A string value that should be kept secret.\n\n Example:\n ```python\n from prefect.blocks.system import Secret\n secret_block = Secret.load(\"BLOCK_NAME\")\n\n # Access the stored secret\n secret_block.get()\n ```\n \"\"\"\n\n _logo_url = \"https://example.com/logo.png\"\n\n value: SecretStr = Field(\n default=..., description=\"A string value that should be kept secret.\"\n ) # ... indicates it's a required field\n\n def get(self):\n return self.value.get_secret_value()\n
To view in Prefect Cloud or the Prefect server UI, register the block.
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-integrations","title":"Contributing Integrations","text":"Anyone can create and share a Prefect Integration and we encourage anyone interested in creating an integration to do so!
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#generate-a-project","title":"Generate a project","text":"To help you get started with your integration, we've created a template that gives the tools you need to create and publish your integration.
Use the Prefect Integration template to get started creating an integration with a bootstrapped project!
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#list-a-project-in-the-integrations-catalog","title":"List a project in the Integrations Catalog","text":"To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog
directory with details about your integration. Please use TEMPLATE.yaml
in that folder as a guide.
If you'd like to help contribute to fix an issue or add a feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.
pip install -e \".[dev]\"\n
pre-commit
to perform quality checks prior to commit: pre-commit install\n
git commit
, git push
, and create a pull requestInstall the Integration via pip
.
For example, to use prefect-aws
:
pip install prefect-aws\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#registering-blocks-from-an-integration","title":"Registering Blocks from an Integration","text":"Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:
For example, to register the blocks available in prefect-aws
:
prefect block register -m prefect_aws\n
Updating blocks from an integrations
If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.
Loading a block in code
To use the load
method on a Block, you must already have a block document saved either through code or through the Prefect UI.
Learn more about Blocks here!
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#using-tasks-and-flows-from-an-integration","title":"Using Tasks and Flows from an Integration","text":"Integrations also contain pre-built tasks and flows that can be imported and called within your code.
As an example, to read a secret from AWS Secrets Manager with the read_secret
task:
from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef connect_to_database():\n aws_credentials = AwsCredentials.load(\"MY_BLOCK_NAME\")\n secret_value = read_secret(\n secret_name=\"db_password\",\n aws_credentials=aws_credentials\n )\n\n # Use secret_value to connect to a database\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#customizing-tasks-and-flows-from-an-integration","title":"Customizing Tasks and Flows from an Integration","text":"To customize the settings of a task or flow pre-configured in a collection, use with_options
:
from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\ncustom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n name=\"Run My DBT Cloud Job\",\n retries=2,\n retry_delay_seconds=10\n)\n\n@flow\ndef run_dbt_job_flow():\n run_result = custom_run_dbt_cloud_job(\n dbt_cloud_credentials=DbtCloudCredentials.load(\"my-dbt-cloud-credentials\"),\n job_id=1\n )\n\nrun_dbt_job_flow()\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#recipes-and-tutorials","title":"Recipes and Tutorials","text":"To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.
Recipes are useful when you are looking for tutorials on how to deploy a worker, use event-driven flows, set up unit testing, and more.
The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with KubernetesConfigure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.
Maintained by Prefect
This recipe uses:
Agent on ECS Fargate with AWS CLI
Run a Prefect 2 agent on ECS Fargate using the AWS CLI.
Maintained by Prefect
This recipe uses:
Agent on ECS Fargate with Terraform
Run a Prefect 2 agent on ECS Fargate using Terraform.
Maintained by Prefect
This recipe uses:
Agent on an Azure VM
Set up an Azure VM and run a Prefect agent.
Maintained by Prefect
This recipe uses:
Flow Deployment with GitHub Actions
Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.
Maintained by Prefect
This recipe uses:
Flow Deployment with GitHub Storage and Docker Infrastructure
Create a deployment with GitHub as a storage and Docker Container as an infrastructure
Maintained by Prefect
This recipe uses:
Prefect server on an AKS Cluster
Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.
Maintained by Prefect
This recipe uses:
Serverless Prefect with AWS Chalice
Execute Prefect flows in an AWS Lambda function managed by Chalice.
Maintained by Prefect
This recipe uses:
Serverless Workflows with ECSTask Blocks
Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.
Maintained by Prefect
This recipe uses:
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"
We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.
Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.
We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).
Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.
A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.
We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.
To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"Here\u2019s our guide to creating a recipe:
# Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\n\ngit checkout -b new_recipe_branch_name\n
flows-advanced/
folder. A Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it be a repo or a blogpost? Simply link to it in the Prefect Recipes README!That\u2019s it!
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:
We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!
Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!
Happy engineering!
","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"This tutorial provides a guided walk-through of Prefect core concepts and instructions on how to use them.
By the end of this tutorial you will have:
These four topics will get most users to their first production deployment.
Advanced users that need more governance and control of their workflow infrastructure can go one step further by:
If you're looking for examples of more advanced operations (like deploying on Kubernetes), check out Prefect's guides.
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"pip install -U prefect
See the install guide for more detailed instructions, if needed.
To get the most out of this tutorial, we recommend using Prefect Cloud. Sign up for a forever free Prefect Cloud account or accept your organization's invite to join their Prefect Cloud account.
prefect cloud login
CLI command to authenticate to Prefect Cloud from your environment.prefect cloud login\n
Choose Log in with a web browser and click the Authorize button in the browser window that opens.
As an alternative to using Prefect Cloud, you can self-host a Prefect server instance. If you choose this option, run prefect server start
to start a local Prefect server instance.
Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. With Prefect, you define workflows as Python code and let it handle the rest.
Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.
Just bring your Python code, sprinkle in a few decorators, and go!
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#first-steps-flows","title":"First steps: Flows","text":"Let's begin by learning how to create your first Prefect flow - click here to get started.
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"Reminder to connect to Prefect Cloud or a self-hosted Prefect server instance
Some features in this tutorial, such as scheduling, require you to be connected to a Prefect server. If using a self-hosted setup, run prefect server start
to run both the webserver and UI. If using Prefect Cloud, make sure you have successfully authenticated your local environment.
Some of the most common reasons to use an orchestration tool such as Prefect are for scheduling and event-based triggering. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, but this means you have been the one triggering and managing flow runs. You can certainly continue to trigger your workflows in this way and use Prefect as a monitoring layer for other schedulers or systems, but you will miss out on many of the other benefits and features that Prefect offers.
Deploying a flow exposes an API and UI so that you can:
Deploying a flow is the act of specifying where and how it will run. This information is encapsulated and sent to Prefect as a deployment that contains the crucial metadata needed for remote orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.
Attributes of a deployment include (but are not limited to):
Using our get_repo_info
flow from the previous sections, we can easily create a deployment for it by calling a single method on the flow object: flow.serve
.
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.serve(name=\"my-first-deployment\")\n
Running this script will do two things:
Deployments must be defined in static files
Flows can be defined and run interactively, that is, within REPLs or Notebooks. Deployments, on the other hand, require that your flow definition be in a known file (which can be located on a remote filesystem in certain setups, as we'll see in the next section of the tutorial).
Because this deployment has no schedule or triggering automation, you will need to use the UI or API to create runs for it. Let's use the CLI (in a separate terminal window) to create a run for this deployment:
prefect deployment run 'get-repo-info/my-first-deployment'\n
If you are watching either your terminal or your UI, you should see the newly created run execute successfully! Let's take this example further by adding a schedule and additional metadata.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#additional-options","title":"Additional options","text":"The serve
method on flows exposes many options for the deployment. Let's use a few of these options now:
cron
: a keyword that allows us to set a cron string schedule for the deployment; see schedules for more advanced scheduling optionstags
: a keyword that allows us to tag this deployment and its runs for bookkeeping and filtering purposesdescription
: a keyword that allows us to document what this deployment does; by default the description is set from the docstring of the flow function, but we did not document our flow functionversion
: a keyword that allows us to track changes to our deployment; by default a hash of the file containing the flow is used; popular options include semver tags or git commit hashesLet's add these options to our deployment:
if __name__ == \"__main__\":\n get_repo_info.serve(\n name=\"my-first-deployment\",\n cron=\"* * * * *\",\n tags=[\"testing\", \"tutorial\"],\n description=\"Given a GitHub repository, logs repository statistics for that repo.\",\n version=\"tutorial/deployments\",\n )\n
When you rerun this script, you will find an updated deployment in the UI that is actively scheduling work! Stop the script in the CLI using CTRL+C
and your schedule will be automatically paused.
.serve
is a long-running process
For remotely triggered or scheduled runs to be executed, your script with flow.serve
must be actively running.
This method is useful for creating deployments for single flows, but what if we have two or more flows? This situation only requires a few additional method calls and imports to get up and running:
multi_flow_deployment.pyimport time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n serve(slow_deploy, fast_deploy)\n
A few observations are in order:
flow.to_deployment
interface exposes the exact same options as flow.serve
; this method produces a deployment objectserve(...)
is calledSpend some time experimenting with this setup. A few potential next steps for exploration include:
sleep
Hybrid execution option
Another implication of Prefect's deployment interface is that you can choose to use our hybrid execution model. Whether you use Prefect Cloud or host a Prefect server instance yourself, you can run work flows in the environments best suited to their execution. This model allows you efficient use of your infrastructure resources while maintaining the privacy of your code and data. There is no ingress required. For more information read more about our hybrid model.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next steps","text":"Congratulations! You now have your first working deployment.
Deploying flows through the serve
method is a fast way to start scheduling flows with Prefect. However, if your team has more complex infrastructure requirements or you'd like to have Prefect manage flow execution, you can deploy flows to a work pool.
Learn about work pools and how Prefect Cloud can handle infrastructure configuration for you in the next step of the tutorial.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"Prerequisites
This tutorial assumes you have already installed Prefect and connected to Prefect Cloud or a self-hosted server instance. See the prerequisites section of the tutorial for more details.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a flow?","text":"Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow
decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:
The simplest way to get started with Prefect is to annotate a Python function with the\u00a0@flow
\u00a0decorator. The script below fetches statistics about the main Prefect repository. Note that httpx is an HTTP client library and a dependency of Prefect. Let's turn this function into a Prefect flow and run the script:
import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Running this file will result in some interesting output:
12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
Flows can contain arbitrary Python
As we can see above, flow definitions can contain arbitrary Python logic.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion using any provided type hints. Let's make the repository a string parameter with a default value:
repo_info.pyimport httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info(repo_name=\"PrefectHQ/marvin\")\n
We can call our flow with varying values for the repo_name
parameter (including \"bad\" values):
python repo_info.py\n
Try passing repo_name=\"missing-org/missing-repo\"
.
You should see
HTTPStatusError: Client error '404 Not Found' for url '<https://api.github.com/repos/missing-org/missing-repo>'\n
Now navigate to your Prefect dashboard and compare the displays for these two runs.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. If we navigate to our dashboard and explore the runs we created above, we will notice that the repository statistics are not captured in the flow run logs. Let's fix that by adding some logging to our flow:
repo_info.pyimport httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n logger = get_run_logger()\n logger.info(\"%s repository statistics \ud83e\udd13:\", repo_name)\n logger.info(f\"Stars \ud83c\udf20 : %d\", repo[\"stargazers_count\"])\n logger.info(f\"Forks \ud83c\udf74 : %d\", repo[\"forks_count\"])\n
Now the output looks more consistent and, more importantly, our statistics are stored in the Prefect backend and displayed in the UI for this flow run:
12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\n12:47:43.016 | INFO | Flow run 'ludicrous-warthog' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
log_prints=True
We could have achieved the exact same outcome by using Prefect's convenient log_prints
keyword argument in the flow
decorator:
@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n ...\n
Logging vs Artifacts
The example above is for educational purposes. In general, it is better to use Prefect artifacts for storing metrics and output. Logs are best for tracking progress and debugging errors.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"So far our script works, but in the future unexpected errors may occur. For example the GitHub API may be temporarily unavailable or rate limited. Retries help make our flow more resilient. Let's add retry functionality to our example above:
repo_info.pyimport httpx\nfrom prefect import flow\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"As you have seen, adding a flow decorator converts our Python function to a resilient and observable workflow. In the next section, you'll supercharge this flow by using tasks to break down the workflow's complexity and make it more performant and observable - click here to continue.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"A task is any Python function decorated with a @task
decorator called within a flow. You can think of a flow as a recipe for connecting a known sequence of tasks together. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe, understand and control at a more granular level. When a function becomes a task, it can be executed concurrently and its return value can be cached.
Flows and tasks share some common features:
name
, description
and tags
for organization and bookkeeping.Network calls (such as our GET
requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.
Tasks must be called from flows
All tasks must be called from within a flow. Tasks may not call other tasks directly.
When to use tasks
Not all functions in a flow need be tasks. Use them only when their features are useful.
Let's take our flow from before and move the request into a task:
repo_info.pyimport httpx\nfrom prefect import flow, task\n\n\n@task\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n repo_stats = get_url(url)\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Running the flow in your terminal will result in something like this:
09:55:55.412 | INFO | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n
And you should now see this task run tracked in the UI as well.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"Tasks support the ability to cache their return value. Caching allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.
To enable caching, specify a cache_key_fn
\u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration
timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash
. Let's add caching to our get_url
task:
import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, \n cache_expiration=timedelta(hours=1),\n )\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n
You can test this caching behavior by using a personal repository as your workflow parameter - give it a star, or remove a star and see how the output of this task changes (or doesn't) by running your flow multiple times.
Task results and caching
Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:
repo_info.pyimport httpx\nfrom datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n repo_stats = get_url(f\"https://api.github.com/repos/{repo_name}\")\n issues = get_open_issues(repo_name, repo_stats[\"open_issues_count\"])\n issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n print(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit
method that changes the execution from sequential to concurrent. In our specific example, we also need to use the result
method because we are unpacking a list of return values:
def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url.submit(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p.result()]\n
The logs show that each task is running concurrently:
12:45:28.241 | INFO | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
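As an aside (a sketch, not part of the tutorial): the same fan-out can be written with Prefect's .map() method, which submits one task run per element of the mapped iterable and returns a list of futures. The unmapped() annotation keeps the URL constant across runs:
from prefect import unmapped\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n pages = range(1, -(open_issues_count // -per_page) + 1)\n # one task run per page; each dict in the list becomes that run's params argument\n futures = get_url.map(\n unmapped(f\"https://api.github.com/repos/{repo_name}/issues\"),\n params=[{\"page\": page, \"per_page\": per_page, \"state\": \"open\"} for page in pages],\n )\n return [i for f in futures for i in f.result()]\n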
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"Not only can you call tasks within a flow, but you can also call other flows! Child flows are called\u00a0subflows\u00a0and allow you to efficiently manage, track, and version common multi-task logic.
Subflows are a great way to organize your workflows and offer more visibility within the UI.
Let's add a flow
decorator to our get_open_issues
function:
@flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url.submit(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p.result()]\n
Whenever we run the parent flow, the subflow generates its own flow run as well. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll see how we can deploy this flow in order to run it on a schedule and/or external infrastructure - click here to learn how to create your first deployment.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/work-pools/","title":"Work Pools","text":"","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#why-work-pools","title":"Why work pools?","text":"Work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. To transition from persistent infrastructure to dynamic infrastructure, use flow.deploy
instead of flow.serve
.
Choosing Between flow.deploy()
and flow.serve()
Earlier in the tutorial you used serve
to deploy your flows. For many use cases, serve
is sufficient to meet scheduling and orchestration needs. Work pools are optional. If infrastructure needs escalate, work pools can become a handy tool. The best part? You're not locked into one method. You can seamlessly combine approaches as needed.
Deployment definition methods differ slightly for work pools
When you use work-pool-based execution, you define deployments differently. Deployments for workers are configured with deploy
, which requires additional configuration. A deployment created with serve
cannot be used with a work pool.
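To make the contrast concrete, here is a sketch of the two entrypoints side by side (the deploy call is abbreviated; it also needs a code source or image, as shown later in this section):
if __name__ == \"__main__\":\n # serve: a long-lived process on infrastructure you keep running yourself\n get_repo_info.serve(name=\"my-first-deployment\")\n\n # deploy: hand runs to a work pool with dynamically provisioned infrastructure\n # get_repo_info.deploy(name=\"my-first-deployment\", work_pool_name=\"my-managed-pool\", ...)\n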
The primary reason to use work pools is for dynamic infrastructure provisioning and configuration. For example, you might have a workflow that has expensive infrastructure requirements and is run infrequently. In this case, you don't want an idle process running within that infrastructure.
Other advantages to using work pools include: configuring default infrastructure that all deployments in the pool inherit and can override, giving platform teams an opinionated interface to the infrastructure they oversee, and prioritizing or limiting flow runs through work queues.
Prefect provides several types of work pools. Prefect Cloud offers a Prefect Managed work pool option, which is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#set-up-a-work-pool","title":"Set up a work pool","text":"Prefect Cloud
This tutorial uses Prefect Cloud to deploy flows to work pools. Managed execution and push work pools are available in Prefect Cloud only. If you are not using Prefect Cloud, please learn about work pools below and then proceed to the next tutorial that uses worker-based work pools.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-prefect-managed-work-pool","title":"Create a Prefect Managed work pool","text":"In your terminal, run the following command to set up a work pool named my-managed-pool
of type prefect:managed
.
prefect work-pool create my-managed-pool --type prefect:managed \n
Let\u2019s confirm that the work pool was successfully created by running the following command.
prefect work-pool ls\n
You should see your new my-managed-pool
in the output list.
Finally, let\u2019s double check that you can see this work pool in the UI.
Navigate to the Work Pools tab and verify that you see my-managed-pool
listed.
Feel free to select Edit from the three-dot menu on the right of the work pool card to view the details of your work pool.
Work pools contain configuration that is used to provision infrastructure for flow runs. For example, you can specify additional Python packages or environment variables that should be set for all deployments that use this work pool. Note that individual deployments can override the work pool configuration.
Now that you\u2019ve set up your work pool, we can deploy a flow to this work pool. Let's deploy your tutorial flow to my-managed-pool
.
From our previous steps, we now have a flow and a work pool.
Let's update our repo_info.py
file to create a deployment in Prefect Cloud.
The updates that we need to make to repo_info.py
are:
Change flow.serve to flow.deploy. Tell flow.deploy which work pool to deploy to. Here's what the updated repo_info.py
looks like:
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.from_source(\n source=\"https://github.com/discdiver/demos.git\", \n entrypoint=\"repo_info.py:get_repo_info\"\n ).deploy(\n name=\"my-first-deployment\", \n work_pool_name=\"my-managed-pool\", \n )\n
In the from_source
method, we specify the source of our flow code.
In the deploy
method, we specify the name of our deployment and the name of the work pool that we created earlier.
You can store your flow code in any of several types of remote storage. In this example, we use a GitHub repository, but you could use a Docker image, as you'll see in an upcoming section of the tutorial. Alternatively, you could store your flow code in cloud provider storage such as AWS S3, or within a different git-based cloud provider such as GitLab or Bitbucket.
Note
In the example above, we store our code in a GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source
argument of from_source
to point to your repository.
Run the script again and you should see a message in the CLI that your deployment was created with instructions for how to run it.
Successfully created/updated all deployments!\n\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 get-repo-info/my-first-deployment | applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n $ prefect deployment run 'get-repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: https://app.prefect.cloud/account/\nabc/workspace/123/deployments/deployment/xyz\n
Navigate to your Prefect Cloud UI and view your new deployment. Click the Run button to trigger a run of your deployment.
Because this deployment was configured with a Prefect Managed work pool, Prefect Cloud will run your flow on your behalf.
View the logs in the UI.
Remember, any time you change this script, you can run it again to re-register the updated deployment with Prefect Cloud:
python repo_info.py\n
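As noted earlier, individual deployments can override their work pool's configuration via job variables. A sketch of adding an extra Python package for just this deployment (the pip_packages job variable follows the Managed Execution guide; treat the specifics as an assumption):
if __name__ == \"__main__\":\n get_repo_info.from_source(\n source=\"https://github.com/discdiver/demos.git\",\n entrypoint=\"repo_info.py:get_repo_info\",\n ).deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"my-managed-pool\",\n # deployment-level override of the work pool's configuration\n job_variables={\"pip_packages\": [\"pandas\"]},\n )\n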
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#schedule-a-deployment-run","title":"Schedule a deployment run","text":"Now everything is set up for us to submit a flow-run to the work pool. Go ahead and run the deployment from the CLI or the UI.
prefect deployment run 'get-repo-info/my-first-deployment'\n
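If you would rather have Prefect Cloud create runs automatically, .deploy also accepts schedule arguments such as cron. A minimal sketch reusing the deployment from above (the noon schedule is just an illustration):
if __name__ == \"__main__\":\n get_repo_info.from_source(\n source=\"https://github.com/discdiver/demos.git\",\n entrypoint=\"repo_info.py:get_repo_info\",\n ).deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"my-managed-pool\",\n cron=\"0 12 * * *\", # create a scheduled run every day at noon\n )\n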
Prefect Managed work pools are a great way to get started with Prefect. See the Managed Execution guide for more details.
Many users will find that they need more control over the infrastructure that their flows run on. Prefect Cloud's push work pools are a popular option in those cases.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#push-work-pools-with-automatic-infrastructure-provisioning","title":"Push work pools with automatic infrastructure provisioning","text":"Serverless push work pools scale infinitely and provide more configuration options than Prefect Managed work pools.
Prefect provides push work pools for AWS ECS on Fargate, Azure Container Instances, Google Cloud Run, and Modal. To use a push work pool, you will need an account with sufficient permissions on the cloud provider that you want to use. We'll use GCP for this example.
Setting up the cloud provider pieces for infrastructure can be tricky and time-consuming. Fortunately, Prefect can automatically provision infrastructure for you and wire it all together to work with your push work pool.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-push-work-pool-with-automatic-infrastructure-provisioning","title":"Create a push work pool with automatic infrastructure provisioning","text":"In your terminal, run the following command to set up a push work pool.
Install the gcloud CLI and authenticate with your GCP project.
If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update
.
You will need the following permissions in your GCP project:
Docker is also required to build and push images to your registry. You can install Docker here.
Run the following command to set up a work pool named my-cloud-run-pool
of type cloud-run:push
.
prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n
Using the --provision-infra
flag allows you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials
block for storing the service account key.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in GCP project central-kit-405415 in region us-central1 \u2502\n\u2502 \u2502\n\u2502 - Activate the Cloud Run API for your project \u2502\n\u2502 - Activate the Artifact Registry API for your project \u2502\n\u2502 - Create an Artifact Registry repository named prefect-images \u2502\n\u2502 - Create a service account for managing Cloud Run jobs: prefect-cloud-run \u2502\n\u2502 - Service account will be granted the following roles: \u2502\n\u2502 - Service Account User \u2502\n\u2502 - Cloud Run Developer \u2502\n\u2502 - Create a key for service account prefect-cloud-run \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create GCP credentials block my--pool-push-pool-credentials to store the service account key \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n
After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.
To take advantage of this functionality, you can write your deploy script like this:
example_deploy_script.pyfrom prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-cloud-run-pool\",\n image=DeploymentImage(\n name=\"my-image:latest\",\n platform=\"linux/amd64\",\n )\n )\n
Running this script will build a Docker image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest
and push it to your repository.
Tip
Make sure you have Docker running locally before running this script.
Note that you only need to include an object of the DeploymentImage
 class with the argument platform=\"linux/amd64\"
if you're building your image on a machine with an ARM-based processor. Otherwise, you could just pass image=\"my-image:latest\"
to deploy
.
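For instance, on an x86-64 machine the deployment above could be written without DeploymentImage at all; a sketch under the same assumptions as the script above:
if __name__ == \"__main__\":\n my_flow.deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-cloud-run-pool\",\n # a plain tag; the image is still built and pushed to the default namespace\n image=\"my-image:latest\",\n )\n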
See the Push Work Pool guide for more details and example commands for each cloud provider.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#next-step","title":"Next step","text":"Congratulations! You've learned how to deploy flows to work pools. If these work pool options meet all of your needs, we encourage you to go deeper with the concepts docs or explore our how-to guides to see examples of particular Prefect use cases.
However, if you need more control over your infrastructure, want to run your workflows in Kubernetes, or are running a self-hosted Prefect server instance, we encourage you to see the next section of the tutorial. There you'll learn how to use work pools that rely on a worker and see how to customize Docker images for container-based infrastructure.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/workers/","title":"Workers","text":"","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#prerequisites","title":"Prerequisites","text":"Docker installed and running on your machine.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#why-workers","title":"Why workers","text":"In the previous section of the tutorial, you learned how work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. You saw how you can transition from persistent infrastructure to dynamic infrastructure by using flow.deploy
instead of flow.serve
.
Work pools that rely on client-side workers take this a step further by enabling you to run workflows in your own Docker containers, Kubernetes clusters, and serverless environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.
The architecture of a worker-based work pool deployment can be summarized with the following diagram:
graph TD\n subgraph your_infra[\"Your Execution Environment\"]\n worker[\"Worker\"]\n subgraph flow_run_infra[Flow Run Infra]\n flow_run_a((\"Flow Run A\"))\n end\n subgraph flow_run_infra_2[Flow Run Infra]\n flow_run_b((\"Flow Run B\"))\n end \n end\n\n subgraph api[\"Prefect API\"]\n Deployment --> |assigned to| work_pool\n work_pool([\"Work Pool\"])\n end\n\n worker --> |polls| work_pool\n worker --> |creates| flow_run_infra\n worker --> |creates| flow_run_infra_2
Notice above that the worker is in charge of provisioning the flow run infrastructure. In the context of this tutorial, that flow run infrastructure is an ephemeral Docker container that hosts each flow run. Different worker types create different types of flow run infrastructure.
Now that we\u2019ve reviewed the concepts of a work pool and worker, let\u2019s create them so that you can deploy your tutorial flow, and execute it later using the Prefect API.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#set-up-a-work-pool-and-worker","title":"Set up a work pool and worker","text":"For this tutorial you will create a Docker type work pool via the CLI.
Using the Docker work pool type means that all work sent to this work pool will run within a dedicated Docker container using a Docker client available to the worker.
Other work pool types
There are work pool types for serverless computing environments such as AWS ECS, Azure Container Instances, Google Cloud Run, and Vertex AI. Kubernetes is also a popular work pool type.
These options are expanded upon in various How-to Guides.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-a-work-pool","title":"Create a work pool","text":"In your terminal, run the following command to set up a Docker type work pool.
prefect work-pool create --type docker my-docker-pool\n
Let\u2019s confirm that the work pool was successfully created by running the following command in the same terminal.
prefect work-pool ls\n
You should see your new my-docker-pool
listed in the output.
Finally, let\u2019s double check that you can see this work pool in your Prefect UI.
Navigate to the Work Pools tab and verify that you see my-docker-pool
listed.
When you click into my-docker-pool
you should see a red status icon signifying that this work pool is not ready.
To make the work pool ready, you need to start a worker.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#start-a-worker","title":"Start a worker","text":"Workers are a lightweight polling process that kick off scheduled flow runs on a specific type of infrastructure (such as Docker). To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect
installed.
Run the following command in this new terminal to start the worker:
prefect worker start --pool my-docker-pool\n
You should see the worker start. It's now polling the Prefect API to check for any scheduled flow runs it should pick up and then submit for execution. You\u2019ll see your new worker listed in the UI under the Workers tab of the Work Pools page with a recent last polled date.
You should also be able to see a Ready
status indicator on your work pool - progress!
You will need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process.
Now that you\u2019ve set up your work pool and worker, we have what we need to kick off and execute flow runs of flows deployed to this work pool. Let's deploy your tutorial flow to my-docker-pool
.
From our previous steps, we now have a flow, a work pool, and a running worker.
Now it\u2019s time to put it all together. We're going to update our repo_info.py
file to build a Docker image and update our deployment so our worker can execute it.
The updates that you need to make to repo_info.py
are:
Change flow.serve to flow.deploy. Tell flow.deploy which work pool to deploy to. Give flow.deploy the name to use for the Docker image that will be built. Here's what the updated repo_info.py
looks like:
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-first-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my-first-deployment-image:tutorial\",\n push=False\n )\n
Why the push=False
?
For this tutorial, your Docker worker is running on your machine, so we don't need to push the image built by flow.deploy
to a registry. When your worker is running on a remote machine, you will need to push the image to a registry that the worker can access.
Remove the push=False
argument, include your registry name, and ensure you've authenticated with the Docker CLI to push the image to a registry.
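For example, a sketch of what that might look like with a Docker Hub account (your-dockerhub-username is a placeholder):
if __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"my-docker-pool\",\n # a registry-qualified image name; push defaults to True\n image=\"your-dockerhub-username/my-first-deployment-image:tutorial\",\n )\n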
Now that you've updated your script, you can run it to deploy your flow to the work pool:
python repo_info.py\n
Prefect will build a custom Docker image containing your workflow code that the worker can use to dynamically spawn Docker containers whenever this workflow needs to run.
What Dockerfile?
In this example, Prefect generates a Dockerfile for you that will build an image based off of one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt
file.
If you want to use a custom Dockerfile, you can specify the path to the Dockerfile using the DeploymentImage
class:
import httpx\nfrom prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-first-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=DeploymentImage(\n name=\"my-first-deployment-image\",\n tag=\"tutorial\",\n dockerfile=\"Dockerfile\"\n ),\n push=False\n )\n
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#modify-the-deployment","title":"Modify the deployment","text":"If you need to make updates to your deployment, you can do so by modifying your script and rerunning it. You'll need to make one update to specify a value for job_variables
to ensure your Docker worker can successfully execute scheduled runs for this flow. See the example below.
The job_variables
section allows you to fine-tune the infrastructure settings for a specific deployment. These values override default values in the specified work pool's base job template.
When testing images locally without pushing them to a registry (to avoid potential errors like docker.errors.NotFound), it's recommended to include an image_pull_policy
job_variable set to Never
. However, for production workflows, always consider pushing images to a remote registry for more reliability and accessibility.
Here's how you can quickly set the image_pull_policy
to be Never
for this tutorial deployment without affecting the default value set on your work pool:
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-first-deployment\", \n work_pool_name=\"my-docker-pool\", \n job_variables={\"image_pull_policy\": \"Never\"},\n image=\"my-first-deployment-image:tutorial\",\n push=False\n )\n
To register this update to your deployment's parameters with Prefect's API, run:
python repo_info.py\n
Now everything is set for us to submit a flow run to the work pool:
prefect deployment run 'get-repo-info/my-first-deployment'\n
Common Pitfall
Did you know?
A Prefect flow can have more than one deployment. This pattern can be useful if you want your flow to run in different execution environments.
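A minimal sketch of that pattern, reusing names from this tutorial (the second deployment's image would need to be pushed to a registry its pool can reach):
if __name__ == \"__main__\":\n # same flow, two deployments targeting different execution environments\n get_repo_info.deploy(\n name=\"docker-deployment\",\n work_pool_name=\"my-docker-pool\",\n image=\"my-first-deployment-image:tutorial\",\n push=False,\n )\n get_repo_info.deploy(\n name=\"cloud-run-deployment\",\n work_pool_name=\"my-cloud-run-pool\",\n image=\"my-first-deployment-image:tutorial\",\n )\n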
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#next-steps","title":"Next steps","text":"prefect.yaml
.Happy building!
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Welcome to Prefect","text":"Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.
It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!
With Prefect you gain:
Get up and running quickly with the quickstart guide.
Want more hands on practice to productionize your workflows? Follow our tutorial.
For deeper dives on common use cases, explore our guides.
Take your understanding even further with Prefect's concepts and API reference.
Join Prefect's vibrant community of over 26,000 engineers to learn with others and share your knowledge!
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"Yes. Prefect Cloud for v2 is at app.prefect.cloud/ while Prefect Cloud for v1 is at cloud.prefect.io.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"The Prefect orchestration engine has three major objectives:
As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don\u2019t deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.
This means that the current generation of workflow managers are built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.
Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.
And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.
The Prefect orchestration engine represents a unified solution to these three problems.
The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe \"workflows as code\" or \"orchestration as code,\" but the Prefect engine represents \"code as workflows\": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.
To achieve this, we've leveraged the familiar tools of native Python: first class functions, type annotations, and async
support. Users are free to implement as much \u2014 or as little \u2014 of the Prefect engine as is useful for their objectives.
No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"Yes! For more information, see the Task.map
API reference
@flow\ndef my_flow():\n\n # map over a constant\n for i in range(10):\n my_mapped_task(i)\n\n # map over a task's output\n l = list_task()\n for i in l.wait().result():\n my_mapped_task_2(i)\n
Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, my_mapped_task_2
does not know that it is downstream from list_task()
. Prefect will have convenience functions for detecting these associations, and Prefect's .map()
operator will automatically track them.
Yes! For more information, see the Tasks
section.
Yes!
Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"Yes!
See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"Yes!
See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"Prefect does not have any additional requirements besides those installed by pip install --pre prefect
. The entire system, including the UI and services, can be run in a single process via prefect server start
and does not require Docker.
Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the PREFECT_API_DATABASE_CONNECTION_URL
environment variable.
A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at ~/.prefect/prefect.db
on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.
SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes. See our migration guide and our Upgrade to Prefect 2 post for more information.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can a use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"Yes. Just use different virtual environments.
","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API Reference","text":"Prefect auto-generates reference documentation for the following components:
Self-hosted docs
When self-hosting, you can access REST API documentation at the /docs
endpoint of your PREFECT_API_URL
- for example, if you ran prefect server start
with no additional configuration you can find this reference at http://localhost:4200/docs.
prefect.agent
","text":"DEPRECATION WARNING:
This module is deprecated as of March 2024 and will not be available after September 2024. Agents have been replaced by workers, which offer enhanced functionality and better performance.
For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent
","text":"Source code in prefect/agent.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use a worker instead. Refer to the upgrade guide for more information: https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass PrefectAgent:\n def __init__(\n self,\n work_queues: List[str] = None,\n work_queue_prefix: Union[str, List[str]] = None,\n work_pool_name: str = None,\n prefetch_seconds: int = None,\n default_infrastructure: Infrastructure = None,\n default_infrastructure_document_id: UUID = None,\n limit: Optional[int] = None,\n ) -> None:\n if default_infrastructure and default_infrastructure_document_id:\n raise ValueError(\n \"Provide only one of 'default_infrastructure' and\"\n \" 'default_infrastructure_document_id'.\"\n )\n\n self.work_queues: Set[str] = set(work_queues) if work_queues else set()\n self.work_pool_name = work_pool_name\n self.prefetch_seconds = prefetch_seconds\n self.submitting_flow_run_ids = set()\n self.cancelling_flow_run_ids = set()\n self.scheduled_task_scopes = set()\n self.started = False\n self.logger = get_logger(\"agent\")\n self.task_group: Optional[anyio.abc.TaskGroup] = None\n self.limit: Optional[int] = limit\n self.limiter: Optional[anyio.CapacityLimiter] = None\n self.client: Optional[PrefectClient] = None\n\n if isinstance(work_queue_prefix, str):\n work_queue_prefix = [work_queue_prefix]\n self.work_queue_prefix = work_queue_prefix\n\n self._work_queue_cache_expiration: pendulum.DateTime = None\n self._work_queue_cache: List[WorkQueue] = []\n\n if default_infrastructure:\n self.default_infrastructure_document_id = (\n default_infrastructure._block_document_id\n )\n self.default_infrastructure = default_infrastructure\n elif default_infrastructure_document_id:\n self.default_infrastructure_document_id = default_infrastructure_document_id\n self.default_infrastructure = None\n else:\n self.default_infrastructure = Process()\n self.default_infrastructure_document_id = None\n\n async def update_matched_agent_work_queues(self):\n if self.work_queue_prefix:\n if self.work_pool_name:\n matched_queues = await self.client.read_work_queues(\n work_pool_name=self.work_pool_name,\n work_queue_filter=WorkQueueFilter(\n name=WorkQueueFilterName(startswith_=self.work_queue_prefix)\n ),\n )\n else:\n matched_queues = await self.client.match_work_queues(\n self.work_queue_prefix, work_pool_name=DEFAULT_AGENT_WORK_POOL_NAME\n )\n\n matched_queues = set(q.name for q in matched_queues)\n if matched_queues != self.work_queues:\n new_queues = matched_queues - self.work_queues\n removed_queues = self.work_queues - matched_queues\n if new_queues:\n self.logger.info(\n f\"Matched new work queues: {', '.join(new_queues)}\"\n )\n if removed_queues:\n self.logger.info(\n f\"Work queues no longer matched: {', '.join(removed_queues)}\"\n )\n self.work_queues = matched_queues\n\n async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n \"\"\"\n Loads the work queue objects corresponding to the agent's target work\n queues. 
If any of them don't exist, they are created.\n \"\"\"\n\n # if the queue cache has not expired, yield queues from the cache\n now = pendulum.now(\"UTC\")\n if (self._work_queue_cache_expiration or now) > now:\n for queue in self._work_queue_cache:\n yield queue\n return\n\n # otherwise clear the cache, set the expiration for 30 seconds, and\n # reload the work queues\n self._work_queue_cache.clear()\n self._work_queue_cache_expiration = now.add(seconds=30)\n\n await self.update_matched_agent_work_queues()\n\n for name in self.work_queues:\n try:\n work_queue = await self.client.read_work_queue_by_name(\n work_pool_name=self.work_pool_name, name=name\n )\n except (ObjectNotFound, Exception):\n work_queue = None\n\n # if the work queue wasn't found and the agent is NOT polling\n # for queues using a regex, try to create it\n if work_queue is None and not self.work_queue_prefix:\n try:\n work_queue = await self.client.create_work_queue(\n work_pool_name=self.work_pool_name, name=name\n )\n except Exception:\n # if creating it raises an exception, it was probably just\n # created by some other agent; rather than entering a re-read\n # loop with new error handling, we log the exception and\n # continue.\n self.logger.exception(f\"Failed to create work queue {name!r}.\")\n continue\n else:\n log_str = f\"Created work queue {name!r}\"\n if self.work_pool_name:\n log_str = (\n f\"Created work queue {name!r} in work pool\"\n f\" {self.work_pool_name!r}.\"\n )\n else:\n log_str = f\"Created work queue '{name}'.\"\n self.logger.info(log_str)\n\n if work_queue is None:\n self.logger.error(\n f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n \" found\"\n )\n else:\n self._work_queue_cache.append(work_queue)\n yield work_queue\n\n async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n \"\"\"\n The principle method on agents. Queries for scheduled flow runs and submits\n them for execution in parallel.\n \"\"\"\n if not self.started:\n raise RuntimeError(\n \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n )\n\n self.logger.debug(\"Checking for scheduled flow runs...\")\n\n before = pendulum.now(\"utc\").add(\n seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n )\n\n submittable_runs: List[FlowRun] = []\n\n if self.work_pool_name:\n responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=self.work_pool_name,\n work_queue_names=[wq.name async for wq in self.get_work_queues()],\n scheduled_before=before,\n )\n submittable_runs.extend([response.flow_run for response in responses])\n\n else:\n # load runs from each work queue\n async for work_queue in self.get_work_queues():\n # print a nice message if the work queue is paused\n if work_queue.is_paused:\n self.logger.info(\n f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n )\n\n else:\n try:\n queue_runs = await self.client.get_runs_in_work_queue(\n id=work_queue.id, limit=10, scheduled_before=before\n )\n submittable_runs.extend(queue_runs)\n except ObjectNotFound:\n self.logger.error(\n f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n \" found.\"\n )\n except Exception as exc:\n self.logger.exception(exc)\n\n submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n for flow_run in submittable_runs:\n # don't resubmit a run\n if flow_run.id in self.submitting_flow_run_ids:\n continue\n\n try:\n if self.limiter:\n self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n except anyio.WouldBlock:\n self.logger.info(\n f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n \" in progress.\"\n )\n break\n else:\n self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n self.submitting_flow_run_ids.add(flow_run.id)\n self.task_group.start_soon(\n self.submit_run,\n flow_run,\n )\n\n return list(\n filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n )\n\n async def check_for_cancelled_flow_runs(self):\n if not self.started:\n raise RuntimeError(\n \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n )\n\n self.logger.debug(\"Checking for cancelled flow runs...\")\n\n work_queue_filter = (\n WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))\n if self.work_queues\n else None\n )\n\n work_pool_filter = (\n WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))\n if self.work_pool_name\n else WorkPoolFilter(name=WorkPoolFilterName(any_=[\"default-agent-pool\"]))\n )\n named_cancelling_flow_runs = await self.client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n ),\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n )\n\n typed_cancelling_flow_runs = await self.client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n ),\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n )\n\n cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n if cancelling_flow_runs:\n self.logger.info(\n f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n )\n\n for flow_run in cancelling_flow_runs:\n self.cancelling_flow_run_ids.add(flow_run.id)\n self.task_group.start_soon(self.cancel_run, flow_run)\n\n return cancelling_flow_runs\n\n async def cancel_run(self, flow_run: FlowRun) -> None:\n \"\"\"\n Cancel a flow run by killing its infrastructure\n \"\"\"\n if not flow_run.infrastructure_pid:\n self.logger.error(\n f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n \" attached. Cancellation cannot be guaranteed.\"\n )\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"This flow run is missing infrastructure tracking information\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n infrastructure = await self.get_infrastructure(flow_run)\n if infrastructure.is_using_a_runner:\n self.logger.info(\n f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n \" using enhanced cancellation. A dedicated runner will handle\"\n \" cancellation.\"\n )\n return\n except Exception:\n self.logger.exception(\n f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n \"Flow run cannot be cancelled.\"\n )\n # Note: We leave this flow run in the cancelling set because it cannot be\n # cancelled and this will prevent additional attempts.\n return\n\n if not hasattr(infrastructure, \"kill\"):\n self.logger.error(\n f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n \"does not support killing created infrastructure. 
\"\n \"Cancellation cannot be guaranteed.\"\n )\n return\n\n self.logger.info(\n f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n f\"'{flow_run.id}'...\"\n )\n try:\n await infrastructure.kill(flow_run.infrastructure_pid)\n except InfrastructureNotFound as exc:\n self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._mark_flow_run_as_cancelled(flow_run)\n except InfrastructureNotAvailable as exc:\n self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n except Exception:\n self.logger.exception(\n \"Encountered exception while killing infrastructure for flow run \"\n f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self.cancelling_flow_run_ids.remove(flow_run.id)\n return\n else:\n await self._mark_flow_run_as_cancelled(flow_run)\n self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n async def _mark_flow_run_as_cancelled(\n self, flow_run: FlowRun, state_updates: Optional[dict] = None\n ) -> None:\n state_updates = state_updates or {}\n state_updates.setdefault(\"name\", \"Cancelled\")\n state_updates.setdefault(\"type\", StateType.CANCELLED)\n state = flow_run.state.copy(update=state_updates)\n\n await self.client.set_flow_run_state(flow_run.id, state, force=True)\n\n # Do not remove the flow run from the cancelling set immediately because\n # the API caches responses for the `read_flow_runs` and we do not want to\n # duplicate cancellations.\n await self._schedule_task(\n 60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id\n )\n\n async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:\n deployment = await self.client.read_deployment(flow_run.deployment_id)\n\n flow = await self.client.read_flow(deployment.flow_id)\n\n # overrides only apply when configuring known infra blocks\n if not deployment.infrastructure_document_id:\n if self.default_infrastructure:\n infra_block = self.default_infrastructure\n else:\n infra_document = await self.client.read_block_document(\n self.default_infrastructure_document_id\n )\n infra_block = Block._from_block_document(infra_document)\n\n # Add flow run metadata to the infrastructure\n prepared_infrastructure = infra_block.prepare_for_flow_run(\n flow_run, deployment=deployment, flow=flow\n )\n return prepared_infrastructure\n\n ## get infra\n infra_document = await self.client.read_block_document(\n deployment.infrastructure_document_id\n )\n\n # this piece of logic applies any overrides that may have been set on the\n # deployment; overrides are defined as dot.delimited paths on possibly nested\n # attributes of the infrastructure block\n doc_dict = infra_document.dict()\n infra_dict = doc_dict.get(\"data\", {})\n for override, value in (deployment.infra_overrides or {}).items():\n nested_fields = override.split(\".\")\n data = infra_dict\n for field in nested_fields[:-1]:\n data = data[field]\n\n # once we reach the end, set the value\n data[nested_fields[-1]] = value\n\n # reconstruct the infra block\n doc_dict[\"data\"] = infra_dict\n infra_document = BlockDocument(**doc_dict)\n infrastructure_block = Block._from_block_document(infra_document)\n\n # TODO: Here the agent may update the infrastructure with agent-level settings\n\n # Add flow run metadata to the infrastructure\n prepared_infrastructure = infrastructure_block.prepare_for_flow_run(\n flow_run, deployment=deployment, flow=flow\n )\n\n return prepared_infrastructure\n\n async def submit_run(self, flow_run: FlowRun) -> None:\n 
\"\"\"\n Submit a flow run to the infrastructure\n \"\"\"\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n try:\n infrastructure = await self.get_infrastructure(flow_run)\n except Exception as exc:\n self.logger.exception(\n f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n )\n await self._propose_failed_state(flow_run, exc)\n if self.limiter:\n self.limiter.release_on_behalf_of(flow_run.id)\n else:\n # Wait for submission to be completed. Note that the submission function\n # may continue to run in the background after this exits.\n readiness_result = await self.task_group.start(\n self._submit_run_and_capture_errors, flow_run, infrastructure\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n try:\n await self.client.update_flow_run(\n flow_run_id=flow_run.id,\n infrastructure_pid=str(readiness_result),\n )\n except Exception:\n self.logger.exception(\n \"An error occurred while setting the `infrastructure_pid`\"\n f\" on flow run {flow_run.id!r}. The flow run will not be\"\n \" cancellable.\"\n )\n\n self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n else:\n # If the run is not ready to submit, release the concurrency slot\n if self.limiter:\n self.limiter.release_on_behalf_of(flow_run.id)\n\n self.submitting_flow_run_ids.remove(flow_run.id)\n\n async def _submit_run_and_capture_errors(\n self,\n flow_run: FlowRun,\n infrastructure: Infrastructure,\n task_status: anyio.abc.TaskStatus = None,\n ) -> Union[InfrastructureResult, Exception]:\n # Note: There is not a clear way to determine if task_status.started() has been\n # called without peeking at the internal `_future`. Ideally we could just\n # check if the flow run id has been removed from `submitting_flow_run_ids`\n # but it is not so simple to guarantee that this coroutine yields back\n # to `submit_run` to execute that line when exceptions are raised during\n # submission.\n try:\n result = await infrastructure.run(task_status=task_status)\n except Exception as exc:\n if not task_status._future.done():\n # This flow run was being submitted and did not start successfully\n self.logger.exception(\n f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started(exc)\n await self._propose_crashed_state(\n flow_run, \"Flow run could not be submitted to infrastructure\"\n )\n else:\n self.logger.exception(\n f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n \"The flow run will not be marked as failed, but an issue may have \"\n \"occurred.\"\n )\n return exc\n finally:\n if self.limiter:\n self.limiter.release_on_behalf_of(flow_run.id)\n\n if not task_status._future.done():\n self.logger.error(\n f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n \"as started or raising an error. This behavior is not expected and \"\n \"generally indicates improper implementation of infrastructure. 
The \"\n \"flow run will not be marked as failed, but an issue may have occurred.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started()\n\n if result.status_code != 0:\n await self._propose_crashed_state(\n flow_run,\n (\n \"Flow run infrastructure exited with non-zero status code\"\n f\" {result.status_code}.\"\n ),\n )\n\n return result\n\n async def _propose_pending_state(self, flow_run: FlowRun) -> bool:\n state = flow_run.state\n try:\n state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)\n except Abort as exc:\n self.logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}'. \"\n f\"Server sent an abort signal: {exc}\"\n ),\n )\n return False\n except Exception:\n self.logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n return False\n\n if not state.is_pending():\n self.logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}': \"\n f\"Server returned a non-pending state {state.type.value!r}\"\n ),\n )\n return False\n\n return True\n\n async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:\n try:\n await propose_state(\n self.client,\n await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # We've already failed, no need to note the abort but we don't want it to\n # raise in the agent process\n pass\n except Exception:\n self.logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n\n async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:\n try:\n state = await propose_state(\n self.client,\n Crashed(message=message),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # Flow run already marked as failed\n pass\n except Exception:\n self.logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n else:\n if state.is_crashed():\n self.logger.info(\n f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n )\n\n async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n \"\"\"\n Schedule a background task to start after some time.\n\n These tasks will be run immediately when the agent exits instead of waiting.\n\n The function may be async or sync. 
Async functions will be awaited.\n \"\"\"\n\n async def wrapper(task_status):\n # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n # time or shutdown\n if self.started:\n with anyio.CancelScope() as scope:\n self.scheduled_task_scopes.add(scope)\n task_status.started()\n await anyio.sleep(__in_seconds)\n\n self.scheduled_task_scopes.remove(scope)\n else:\n task_status.started()\n\n result = fn(*args, **kwargs)\n if inspect.iscoroutine(result):\n await result\n\n await self.task_group.start(wrapper)\n\n # Context management ---------------------------------------------------------------\n\n async def start(self):\n self.started = True\n self.task_group = anyio.create_task_group()\n self.limiter = (\n anyio.CapacityLimiter(self.limit) if self.limit is not None else None\n )\n self.client = get_client()\n await self.client.__aenter__()\n await self.task_group.__aenter__()\n\n async def shutdown(self, *exc_info):\n self.started = False\n # We must cancel scheduled task scopes before closing the task group\n for scope in self.scheduled_task_scopes:\n scope.cancel()\n await self.task_group.__aexit__(*exc_info)\n await self.client.__aexit__(*exc_info)\n self.task_group = None\n self.client = None\n self.submitting_flow_run_ids.clear()\n self.cancelling_flow_run_ids.clear()\n self.scheduled_task_scopes.clear()\n self._work_queue_cache_expiration = None\n self._work_queue_cache = []\n\n async def __aenter__(self):\n await self.start()\n return self\n\n async def __aexit__(self, *exc_info):\n await self.shutdown(*exc_info)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run
async
","text":"Cancel a flow run by killing its infrastructure
Source code in prefect/agent.py
async def cancel_run(self, flow_run: FlowRun) -> None:\n \"\"\"\n Cancel a flow run by killing its infrastructure\n \"\"\"\n if not flow_run.infrastructure_pid:\n self.logger.error(\n f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n \" attached. Cancellation cannot be guaranteed.\"\n )\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"This flow run is missing infrastructure tracking information\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n infrastructure = await self.get_infrastructure(flow_run)\n if infrastructure.is_using_a_runner:\n self.logger.info(\n f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n \" using enhanced cancellation. A dedicated runner will handle\"\n \" cancellation.\"\n )\n return\n except Exception:\n self.logger.exception(\n f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n \"Flow run cannot be cancelled.\"\n )\n # Note: We leave this flow run in the cancelling set because it cannot be\n # cancelled and this will prevent additional attempts.\n return\n\n if not hasattr(infrastructure, \"kill\"):\n self.logger.error(\n f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n \"does not support killing created infrastructure. \"\n \"Cancellation cannot be guaranteed.\"\n )\n return\n\n self.logger.info(\n f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n f\"'{flow_run.id}'...\"\n )\n try:\n await infrastructure.kill(flow_run.infrastructure_pid)\n except InfrastructureNotFound as exc:\n self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._mark_flow_run_as_cancelled(flow_run)\n except InfrastructureNotAvailable as exc:\n self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n except Exception:\n self.logger.exception(\n \"Encountered exception while killing infrastructure for flow run \"\n f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self.cancelling_flow_run_ids.remove(flow_run.id)\n return\n else:\n await self._mark_flow_run_as_cancelled(flow_run)\n self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n
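For example, a minimal cancellation sketch, assuming a reachable API and a known flow run ID (flow_run_id below is a placeholder); the ID is pre-added to cancelling_flow_run_ids the way the agent's own polling loop would do it:
>>> from prefect.agent import PrefectAgent\n>>> async with PrefectAgent(work_queues=['default']) as agent:\n>>>     flow_run = await agent.client.read_flow_run(flow_run_id)  # placeholder id\n>>>     agent.cancelling_flow_run_ids.add(flow_run.id)  # normally done by the polling loop\n>>>     await agent.cancel_run(flow_run)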
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs
async
","text":"The principle method on agents. Queries for scheduled flow runs and submits them for execution in parallel.
Source code in prefect/agent.py
async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n \"\"\"\n The principle method on agents. Queries for scheduled flow runs and submits\n them for execution in parallel.\n \"\"\"\n if not self.started:\n raise RuntimeError(\n \"Agent is not started. Use `async with PrefectAgent()...`\"\n )\n\n self.logger.debug(\"Checking for scheduled flow runs...\")\n\n before = pendulum.now(\"utc\").add(\n seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n )\n\n submittable_runs: List[FlowRun] = []\n\n if self.work_pool_name:\n responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=self.work_pool_name,\n work_queue_names=[wq.name async for wq in self.get_work_queues()],\n scheduled_before=before,\n )\n submittable_runs.extend([response.flow_run for response in responses])\n\n else:\n # load runs from each work queue\n async for work_queue in self.get_work_queues():\n # print a nice message if the work queue is paused\n if work_queue.is_paused:\n self.logger.info(\n f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n )\n\n else:\n try:\n queue_runs = await self.client.get_runs_in_work_queue(\n id=work_queue.id, limit=10, scheduled_before=before\n )\n submittable_runs.extend(queue_runs)\n except ObjectNotFound:\n self.logger.error(\n f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n \" found.\"\n )\n except Exception as exc:\n self.logger.exception(exc)\n\n submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n for flow_run in submittable_runs:\n # don't resubmit a run\n if flow_run.id in self.submitting_flow_run_ids:\n continue\n\n try:\n if self.limiter:\n self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n except anyio.WouldBlock:\n self.logger.info(\n f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n \" in progress.\"\n )\n break\n else:\n self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n self.submitting_flow_run_ids.add(flow_run.id)\n self.task_group.start_soon(\n self.submit_run,\n flow_run,\n )\n\n return list(\n filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n )\n
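A single polling pass might look like the following sketch, assuming a work queue named 'default' exists:
>>> import asyncio\n>>> from prefect.agent import PrefectAgent\n>>> async def poll_once():\n>>>     async with PrefectAgent(work_queues=['default'], prefetch_seconds=10) as agent:\n>>>         submitted = await agent.get_and_submit_flow_runs()\n>>>         print(f'submitted {len(submitted)} flow run(s)')\n>>> asyncio.run(poll_once())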
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues
async
","text":"Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.
Source code in prefect/agent.py
async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n \"\"\"\n Loads the work queue objects corresponding to the agent's target work\n queues. If any of them don't exist, they are created.\n \"\"\"\n\n # if the queue cache has not expired, yield queues from the cache\n now = pendulum.now(\"UTC\")\n if (self._work_queue_cache_expiration or now) > now:\n for queue in self._work_queue_cache:\n yield queue\n return\n\n # otherwise clear the cache, set the expiration for 30 seconds, and\n # reload the work queues\n self._work_queue_cache.clear()\n self._work_queue_cache_expiration = now.add(seconds=30)\n\n await self.update_matched_agent_work_queues()\n\n for name in self.work_queues:\n try:\n work_queue = await self.client.read_work_queue_by_name(\n work_pool_name=self.work_pool_name, name=name\n )\n except (ObjectNotFound, Exception):\n work_queue = None\n\n # if the work queue wasn't found and the agent is NOT polling\n # for queues using a regex, try to create it\n if work_queue is None and not self.work_queue_prefix:\n try:\n work_queue = await self.client.create_work_queue(\n work_pool_name=self.work_pool_name, name=name\n )\n except Exception:\n # if creating it raises an exception, it was probably just\n # created by some other agent; rather than entering a re-read\n # loop with new error handling, we log the exception and\n # continue.\n self.logger.exception(f\"Failed to create work queue {name!r}.\")\n continue\n else:\n log_str = f\"Created work queue {name!r}\"\n if self.work_pool_name:\n log_str = (\n f\"Created work queue {name!r} in work pool\"\n f\" {self.work_pool_name!r}.\"\n )\n else:\n log_str = f\"Created work queue '{name}'.\"\n self.logger.info(log_str)\n\n if work_queue is None:\n self.logger.error(\n f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n \" found\"\n )\n else:\n self._work_queue_cache.append(work_queue)\n yield work_queue\n
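Since this is an async generator, consume it with async for; a sketch assuming a started agent:
>>> from prefect.agent import PrefectAgent\n>>> async with PrefectAgent(work_queues=['default']) as agent:\n>>>     async for queue in agent.get_work_queues():\n>>>         print(queue.name, queue.is_paused)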
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run
async
","text":"Submit a flow run to the infrastructure
Source code in prefect/agent.py
async def submit_run(self, flow_run: FlowRun) -> None:\n \"\"\"\n Submit a flow run to the infrastructure\n \"\"\"\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n try:\n infrastructure = await self.get_infrastructure(flow_run)\n except Exception as exc:\n self.logger.exception(\n f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n )\n await self._propose_failed_state(flow_run, exc)\n if self.limiter:\n self.limiter.release_on_behalf_of(flow_run.id)\n else:\n # Wait for submission to be completed. Note that the submission function\n # may continue to run in the background after this exits.\n readiness_result = await self.task_group.start(\n self._submit_run_and_capture_errors, flow_run, infrastructure\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n try:\n await self.client.update_flow_run(\n flow_run_id=flow_run.id,\n infrastructure_pid=str(readiness_result),\n )\n except Exception:\n self.logger.exception(\n \"An error occurred while setting the `infrastructure_pid`\"\n f\" on flow run {flow_run.id!r}. The flow run will not be\"\n \" cancellable.\"\n )\n\n self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n else:\n # If the run is not ready to submit, release the concurrency slot\n if self.limiter:\n self.limiter.release_on_behalf_of(flow_run.id)\n\n self.submitting_flow_run_ids.remove(flow_run.id)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts
","text":"Interface for creating and reading artifacts.
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact
async
","text":"Create a link artifact.
Parameters:
Name Type Description Defaultlink
str
The link to create.
requiredlink_text
Optional[str]
The link text.
None
key
Optional[str]
A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
None
description
Optional[str]
A user-specified description of the artifact.
None
Returns:
Type DescriptionUUID
The link artifact ID.
Source code in prefect/artifacts.py
@sync_compatible\nasync def create_link_artifact(\n link: str,\n link_text: Optional[str] = None,\n key: Optional[str] = None,\n description: Optional[str] = None,\n) -> UUID:\n \"\"\"\n Create a link artifact.\n\n Arguments:\n link: The link to create.\n link_text: The link text.\n key: A user-provided string identifier.\n Required for the artifact to show in the Artifacts page in the UI.\n The key must only contain lowercase letters, numbers, and dashes.\n description: A user-specified description of the artifact.\n\n\n Returns:\n The table artifact ID.\n \"\"\"\n formatted_link = f\"[{link_text}]({link})\" if link_text else f\"[{link}]({link})\"\n artifact = await _create_artifact(\n key=key,\n type=\"markdown\",\n description=description,\n data=formatted_link,\n )\n\n return artifact.id\n
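For example, the following sketch creates a link artifact from inside a flow (the URL and key are illustrative placeholders):
>>> from prefect import flow\n>>> from prefect.artifacts import create_link_artifact\n>>> @flow\n>>> def my_flow():\n>>>     create_link_artifact(\n>>>         link='https://example.com/report.html',  # placeholder URL\n>>>         link_text='Nightly report',\n>>>         key='nightly-report',\n>>>     )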
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact
async
","text":"Create a markdown artifact.
Parameters:
Name Type Description Defaultmarkdown
str
The markdown to create.
requiredkey
Optional[str]
A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
None
description
Optional[str]
A user-specified description of the artifact.
None
Returns:
Type DescriptionUUID
The markdown artifact ID.
Source code in prefect/artifacts.py
@sync_compatible\nasync def create_markdown_artifact(\n markdown: str,\n key: Optional[str] = None,\n description: Optional[str] = None,\n) -> UUID:\n \"\"\"\n Create a markdown artifact.\n\n Arguments:\n markdown: The markdown to create.\n key: A user-provided string identifier.\n Required for the artifact to show in the Artifacts page in the UI.\n The key must only contain lowercase letters, numbers, and dashes.\n description: A user-specified description of the artifact.\n\n Returns:\n The table artifact ID.\n \"\"\"\n artifact = await _create_artifact(\n key=key,\n type=\"markdown\",\n description=description,\n data=markdown,\n )\n\n return artifact.id\n
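For example (the key and markdown content are illustrative):
>>> from prefect import flow\n>>> from prefect.artifacts import create_markdown_artifact\n>>> @flow\n>>> def my_flow():\n>>>     create_markdown_artifact(\n>>>         markdown='**All checks passed.**',\n>>>         key='run-summary',\n>>>     )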
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact
async
","text":"Create a table artifact.
Parameters:
Name Type Description Defaulttable
Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]
The table to create.
requiredkey
Optional[str]
A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
None
description
Optional[str]
A user-specified description of the artifact.
None
Returns:
Type DescriptionUUID
The table artifact ID.
Source code in prefect/artifacts.py
@sync_compatible\nasync def create_table_artifact(\n table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],\n key: Optional[str] = None,\n description: Optional[str] = None,\n) -> UUID:\n \"\"\"\n Create a table artifact.\n\n Arguments:\n table: The table to create.\n key: A user-provided string identifier.\n Required for the artifact to show in the Artifacts page in the UI.\n The key must only contain lowercase letters, numbers, and dashes.\n description: A user-specified description of the artifact.\n\n Returns:\n The table artifact ID.\n \"\"\"\n\n def _sanitize_nan_values(item):\n \"\"\"\n Sanitize NaN values in a given item. The item can be a dict, list or float.\n \"\"\"\n\n if isinstance(item, list):\n return [_sanitize_nan_values(sub_item) for sub_item in item]\n\n elif isinstance(item, dict):\n return {k: _sanitize_nan_values(v) for k, v in item.items()}\n\n elif isinstance(item, float) and math.isnan(item):\n return None\n\n else:\n return item\n\n sanitized_table = _sanitize_nan_values(table)\n\n if isinstance(table, dict) and all(isinstance(v, list) for v in table.values()):\n pass\n elif isinstance(table, list) and all(isinstance(v, (list, dict)) for v in table):\n pass\n else:\n raise TypeError(INVALID_TABLE_TYPE_ERROR)\n\n formatted_table = json.dumps(sanitized_table)\n\n artifact = await _create_artifact(\n key=key,\n type=\"table\",\n description=description,\n data=formatted_table,\n )\n\n return artifact.id\n
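For example, passing a list of row dictionaries (the values here are illustrative):
>>> from prefect import flow\n>>> from prefect.artifacts import create_table_artifact\n>>> @flow\n>>> def my_flow():\n>>>     create_table_artifact(\n>>>         table=[{'name': 'a', 'value': 1}, {'name': 'b', 'value': 2}],\n>>>         key='example-table',\n>>>     )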
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context
","text":"Async and thread safe models for passing runtime context data.
These contexts should never be directly mutated by the user.
For more user-accessible information about the current run, see prefect.runtime
.
ContextModel
","text":" Bases: BaseModel
A base model for context data that forbids mutation and extra data while providing a context manager
Source code in prefect/context.py
class ContextModel(BaseModel):\n \"\"\"\n A base model for context data that forbids mutation and extra data while providing\n a context manager\n \"\"\"\n\n # The context variable for storing data must be defined by the child class\n __var__: ContextVar\n _token: Token = PrivateAttr(None)\n\n class Config:\n allow_mutation = False\n arbitrary_types_allowed = True\n extra = \"forbid\"\n\n def __enter__(self):\n if self._token is not None:\n raise RuntimeError(\n \"Context already entered. Context enter calls cannot be nested.\"\n )\n self._token = self.__var__.set(self)\n return self\n\n def __exit__(self, *_):\n if not self._token:\n raise RuntimeError(\n \"Asymmetric use of context. Context exit called without an enter.\"\n )\n self.__var__.reset(self._token)\n self._token = None\n\n @classmethod\n def get(cls: Type[T]) -> Optional[T]:\n return cls.__var__.get(None)\n\n def copy(self, **kwargs):\n \"\"\"\n Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n Attributes:\n include: Fields to include in new model.\n exclude: Fields to exclude from new model, as with values this takes precedence over include.\n update: Values to change/add in the new model. Note: the data is not validated before creating\n the new model - you should trust this data.\n deep: Set to `True` to make a deep copy of the model.\n\n Returns:\n A new model instance.\n \"\"\"\n # Remove the token on copy to avoid re-entrance errors\n new = super().copy(**kwargs)\n new._token = None\n return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy
","text":"Duplicate the context model, optionally choosing which fields to include, exclude, or change.
Attributes:
Name Type Descriptioninclude
Fields to include in new model.
exclude
Fields to exclude from new model, as with values this takes precedence over include.
update
Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
deep
Set to True
to make a deep copy of the model.
Returns:
Type DescriptionA new model instance.
Source code in prefect/context.py
def copy(self, **kwargs):\n \"\"\"\n Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n Attributes:\n include: Fields to include in new model.\n exclude: Fields to exclude from new model, as with values this takes precedence over include.\n update: Values to change/add in the new model. Note: the data is not validated before creating\n the new model - you should trust this data.\n deep: Set to `True` to make a deep copy of the model.\n\n Returns:\n A new model instance.\n \"\"\"\n # Remove the token on copy to avoid re-entrance errors\n new = super().copy(**kwargs)\n new._token = None\n return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.EngineContext","title":"EngineContext
","text":" Bases: RunContext
The context for a flow run. Data in this context is only available from within a flow run function.
Attributes:
Name Type Descriptionflow
Optional[Flow]
The flow instance associated with the run
flow_run
Optional[FlowRun]
The API metadata for the flow run
task_runner
BaseTaskRunner
The task runner instance being used for the flow run
task_run_futures
List[PrefectFuture]
A list of futures for task runs submitted within this flow run
task_run_states
List[State]
A list of states for task runs created within this flow run
task_run_results
Dict[int, State]
A mapping of result ids to task run states for this flow run
flow_run_states
List[State]
A list of states for flow runs created within this flow run
sync_portal
Optional[BlockingPortal]
A blocking portal for sync task/flow runs in an async flow
timeout_scope
Optional[CancelScope]
The cancellation scope for flow level timeouts
Source code in prefect/context.py
class EngineContext(RunContext):\n \"\"\"\n The context for a flow run. Data in this context is only available from within a\n flow run function.\n\n Attributes:\n flow: The flow instance associated with the run\n flow_run: The API metadata for the flow run\n task_runner: The task runner instance being used for the flow run\n task_run_futures: A list of futures for task runs submitted within this flow run\n task_run_states: A list of states for task runs created within this flow run\n task_run_results: A mapping of result ids to task run states for this flow run\n flow_run_states: A list of states for flow runs created within this flow run\n sync_portal: A blocking portal for sync task/flow runs in an async flow\n timeout_scope: The cancellation scope for flow level timeouts\n \"\"\"\n\n flow: Optional[\"Flow\"] = None\n flow_run: Optional[FlowRun] = None\n autonomous_task_run: Optional[TaskRun] = None\n task_runner: BaseTaskRunner\n log_prints: bool = False\n parameters: Optional[Dict[str, Any]] = None\n\n # Result handling\n result_factory: ResultFactory\n\n # Counter for task calls allowing unique\n task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)\n\n # Counter for flow pauses\n observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)\n\n # Tracking for objects created by this flow run\n task_run_futures: List[PrefectFuture] = Field(default_factory=list)\n task_run_states: List[State] = Field(default_factory=list)\n task_run_results: Dict[int, State] = Field(default_factory=dict)\n flow_run_states: List[State] = Field(default_factory=list)\n\n # The synchronous portal is only created for async flows for creating engine calls\n # from synchronous task and subflow calls\n sync_portal: Optional[anyio.abc.BlockingPortal] = None\n timeout_scope: Optional[anyio.abc.CancelScope] = None\n\n # Task group that can be used for background tasks during the flow run\n background_tasks: anyio.abc.TaskGroup\n\n # Events worker to emit events to Prefect Cloud\n events: Optional[EventsWorker] = None\n\n __var__ = ContextVar(\"flow_run\")\n
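In prefect.context this class is also exposed under the name FlowRunContext; a sketch of reading it inside a flow:
>>> from prefect import flow\n>>> from prefect.context import FlowRunContext\n>>> @flow\n>>> def my_flow():\n>>>     ctx = FlowRunContext.get()  # None when called outside a flow run\n>>>     print(ctx.flow_run.id)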
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry
","text":" Bases: ContextModel
A context that acts as a registry for all Prefect objects that are registered during load and execution.
Attributes:
Name Type Descriptionstart_time
DateTimeTZ
The time the object registry was created.
block_code_execution
bool
If set, flow calls will be ignored.
capture_failures
bool
If set, failures during __init__ will be silenced and tracked.
Source code in prefect/context.py
class PrefectObjectRegistry(ContextModel):\n \"\"\"\n A context that acts as a registry for all Prefect objects that are\n registered during load and execution.\n\n Attributes:\n start_time: The time the object registry was created.\n block_code_execution: If set, flow calls will be ignored.\n capture_failures: If set, failures during __init__ will be silenced and tracked.\n \"\"\"\n\n start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n\n _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(\n default_factory=lambda: defaultdict(list)\n )\n\n # Failures will be a tuple of (exception, instance, args, kwargs)\n _instance_init_failures: Dict[\n Type[T], List[Tuple[Exception, T, Tuple, Dict]]\n ] = PrivateAttr(default_factory=lambda: defaultdict(list))\n\n block_code_execution: bool = False\n capture_failures: bool = False\n\n __var__ = ContextVar(\"object_registry\")\n\n def get_instances(self, type_: Type[T]) -> List[T]:\n instances = []\n for registered_type, type_instances in self._instance_registry.items():\n if type_ in registered_type.mro():\n instances.extend(type_instances)\n return instances\n\n def get_instance_failures(\n self, type_: Type[T]\n ) -> List[Tuple[Exception, T, Tuple, Dict]]:\n failures = []\n for type__ in type_.mro():\n failures.extend(self._instance_init_failures[type__])\n return failures\n\n def register_instance(self, object):\n # TODO: Consider using a 'Set' to avoid duplicate entries\n self._instance_registry[type(object)].append(object)\n\n def register_init_failure(\n self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict\n ):\n self._instance_init_failures[type(object)].append(\n (exc, object, init_args, init_kwargs)\n )\n\n @classmethod\n def register_instances(cls, type_: Type[T]) -> Type[T]:\n \"\"\"\n Decorator for a class that adds registration to the `PrefectObjectRegistry`\n on initialization of instances.\n \"\"\"\n original_init = type_.__init__\n\n def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n registry = cls.get()\n try:\n original_init(__self__, *args, **kwargs)\n except Exception as exc:\n if not registry or not registry.capture_failures:\n raise\n else:\n registry.register_init_failure(exc, __self__, args, kwargs)\n else:\n if registry:\n registry.register_instance(__self__)\n\n update_wrapper(__register_init__, original_init)\n\n type_.__init__ = __register_init__\n return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances
classmethod
","text":"Decorator for a class that adds registration to the PrefectObjectRegistry
on initialization of instances.
prefect/context.py
@classmethod\ndef register_instances(cls, type_: Type[T]) -> Type[T]:\n \"\"\"\n Decorator for a class that adds registration to the `PrefectObjectRegistry`\n on initialization of instances.\n \"\"\"\n original_init = type_.__init__\n\n def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n registry = cls.get()\n try:\n original_init(__self__, *args, **kwargs)\n except Exception as exc:\n if not registry or not registry.capture_failures:\n raise\n else:\n registry.register_init_failure(exc, __self__, args, kwargs)\n else:\n if registry:\n registry.register_instance(__self__)\n\n update_wrapper(__register_init__, original_init)\n\n type_.__init__ = __register_init__\n return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext
","text":" Bases: ContextModel
The base context for a flow or task run. Data in this context will always be available when get_run_context
is called.
Attributes:
Name Type Descriptionstart_time
DateTimeTZ
The time the run context was entered
client
PrefectClient
The Prefect client instance being used for API communication
Source code in prefect/context.py
class RunContext(ContextModel):\n \"\"\"\n The base context for a flow or task run. Data in this context will always be\n available when `get_run_context` is called.\n\n Attributes:\n start_time: The time the run context was entered\n client: The Prefect client instance being used for API communication\n \"\"\"\n\n start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n input_keyset: Optional[Dict[str, Dict[str, str]]] = None\n client: PrefectClient\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext
","text":" Bases: ContextModel
The context for Prefect settings.
This allows for safe concurrent access and modification of settings.
Attributes:
Name Type Descriptionprofile
Profile
The profile that is in use.
settings
Settings
The complete settings model.
Source code in prefect/context.py
class SettingsContext(ContextModel):\n \"\"\"\n The context for a Prefect settings.\n\n This allows for safe concurrent access and modification of settings.\n\n Attributes:\n profile: The profile that is in use.\n settings: The complete settings model.\n \"\"\"\n\n profile: Profile\n settings: Settings\n\n __var__ = ContextVar(\"settings\")\n\n def __hash__(self) -> int:\n return hash(self.settings)\n\n def __enter__(self):\n \"\"\"\n Upon entrance, we ensure the home directory for the profile exists.\n \"\"\"\n return_value = super().__enter__()\n\n try:\n prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n prefect_home.mkdir(mode=0o0700, exist_ok=True)\n except OSError:\n warnings.warn(\n (\n \"Failed to create the Prefect home directory at \"\n f\"{self.settings.value_of(PREFECT_HOME)}\"\n ),\n stacklevel=2,\n )\n\n return return_value\n\n @classmethod\n def get(cls) -> \"SettingsContext\":\n # Return the global context instead of `None` if no context exists\n return super().get() or GLOBAL_SETTINGS_CONTEXT\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext
","text":" Bases: ContextModel
The context for prefect.tags
management.
Attributes:
Name Type Descriptioncurrent_tags
Set[str]
A set of current tags in the context
Source code in prefect/context.py
class TagsContext(ContextModel):\n \"\"\"\n The context for `prefect.tags` management.\n\n Attributes:\n current_tags: A set of current tags in the context\n \"\"\"\n\n current_tags: Set[str] = Field(default_factory=set)\n\n @classmethod\n def get(cls) -> \"TagsContext\":\n # Return an empty `TagsContext` instead of `None` if no context exists\n return cls.__var__.get(TagsContext())\n\n __var__ = ContextVar(\"tags\")\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext
","text":" Bases: RunContext
The context for a task run. Data in this context is only available from within a task run function.
Attributes:
Name Type Descriptiontask
Task
The task instance associated with the task run
task_run
TaskRun
The API metadata for this task run
Source code in prefect/context.py
class TaskRunContext(RunContext):\n \"\"\"\n The context for a task run. Data in this context is only available from within a\n task run function.\n\n Attributes:\n task: The task instance associated with the task run\n task_run: The API metadata for this task run\n \"\"\"\n\n task: \"Task\"\n task_run: TaskRun\n log_prints: bool = False\n parameters: Dict[str, Any]\n\n # Result handling\n result_factory: ResultFactory\n\n __var__ = ContextVar(\"task_run\")\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context
","text":"Get the current run context from within a task or flow function.
Returns:
Type DescriptionUnion[FlowRunContext, TaskRunContext]
A FlowRunContext
or TaskRunContext
depending on the function type.
Raises RuntimeError: If called outside of a flow or task run.
Source code in prefect/context.py
def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n \"\"\"\n Get the current run context from within a task or flow function.\n\n Returns:\n A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n Raises\n RuntimeError: If called outside of a flow or task run.\n \"\"\"\n task_run_ctx = TaskRunContext.get()\n if task_run_ctx:\n return task_run_ctx\n\n flow_run_ctx = FlowRunContext.get()\n if flow_run_ctx:\n return flow_run_ctx\n\n raise MissingContextError(\n \"No run context available. You are not in a flow or task run context.\"\n )\n
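For example:
>>> from prefect import flow, task\n>>> from prefect.context import get_run_context\n>>> @task\n>>> def my_task():\n>>>     ctx = get_run_context()  # a TaskRunContext inside a task\n>>>     print(ctx.task_run.id)\n>>> @flow\n>>> def my_flow():\n>>>     ctx = get_run_context()  # a FlowRunContext inside a flow\n>>>     print(ctx.flow_run.id)\n>>>     my_task()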
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context
","text":"Get the current settings context which contains profile information and the settings that are being used.
Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile
for more details.
prefect/context.py
def get_settings_context() -> SettingsContext:\n \"\"\"\n Get the current settings context which contains profile information and the\n settings that are being used.\n\n Generally, the settings that are being used are a combination of values from the\n profile and environment. See `prefect.context.use_profile` for more details.\n \"\"\"\n settings_ctx = SettingsContext.get()\n\n if not settings_ctx:\n raise MissingContextError(\"No settings context found.\")\n\n return settings_ctx\n
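For example:
>>> from prefect.context import get_settings_context\n>>> ctx = get_settings_context()\n>>> print(ctx.profile.name)  # name of the active profile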
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script
","text":"Return a fresh registry with instances populated from execution of a script.
Source code in prefect/context.py
def registry_from_script(\n path: str,\n block_code_execution: bool = True,\n capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n \"\"\"\n Return a fresh registry with instances populated from execution of a script.\n \"\"\"\n with PrefectObjectRegistry(\n block_code_execution=block_code_execution,\n capture_failures=capture_failures,\n ) as registry:\n load_script_as_module(path)\n\n return registry\n
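A sketch, where 'flows.py' is a placeholder path to a script that defines flows:
>>> from prefect import Flow\n>>> from prefect.context import registry_from_script\n>>> registry = registry_from_script('flows.py')  # placeholder path\n>>> registry.get_instances(Flow)  # Flow objects defined in the script, without running them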
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context
","text":"Return the settings context that will exist as the root context for the module.
The profile to use is determined with the following precedence: command line via 'prefect --profile <name>', environment variable via 'PREFECT_PROFILE', then the profiles file via the 'active' key.
Source code in prefect/context.py
def root_settings_context():\n \"\"\"\n Return the settings context that will exist as the root context for the module.\n\n The profile to use is determined with the following precedence\n - Command line via 'prefect --profile <name>'\n - Environment variable via 'PREFECT_PROFILE'\n - Profiles file via the 'active' key\n \"\"\"\n profiles = prefect.settings.load_profiles()\n active_name = profiles.active_name\n profile_source = \"in the profiles file\"\n\n if \"PREFECT_PROFILE\" in os.environ:\n active_name = os.environ[\"PREFECT_PROFILE\"]\n profile_source = \"by environment variable\"\n\n if (\n sys.argv[0].endswith(\"/prefect\")\n and len(sys.argv) >= 3\n and sys.argv[1] == \"--profile\"\n ):\n active_name = sys.argv[2]\n profile_source = \"by command line argument\"\n\n if active_name not in profiles.names:\n print(\n (\n f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n \"found. The default profile will be used instead. \"\n ),\n file=sys.stderr,\n )\n active_name = \"default\"\n\n with use_profile(\n profiles[active_name],\n # Override environment variables if the profile was set by the CLI\n override_environment_variables=profile_source == \"by command line argument\",\n ) as settings_context:\n return settings_context\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags
","text":"Context manager to add tags to flow and task run calls.
Tags are always combined with any existing tags.
Yields:
Type DescriptionSet[str]
The current set of tags
Examples:
>>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>> pass\n
Run a task with tags
>>> @flow\n>>> def my_flow():\n>>> with tags(\"a\", \"b\"):\n>>> my_task() # has tags: a, b\n
Run a flow with tags
>>> @flow\n>>> def my_flow():\n>>> pass\n>>> with tags(\"a\", \"b\"):\n>>> my_flow() # has tags: a, b\n
Run a task with nested tag contexts
>>> @flow\n>>> def my_flow():\n>>> with tags(\"a\", \"b\"):\n>>> with tags(\"c\", \"d\"):\n>>> my_task() # has tags: a, b, c, d\n>>> my_task() # has tags: a, b\n
Inspect the current tags
>>> @flow\n>>> def my_flow():\n>>> with tags(\"c\", \"d\"):\n>>> with tags(\"e\", \"f\") as current_tags:\n>>> print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>> my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
Source code in prefect/context.py
@contextmanager\ndef tags(*new_tags: str) -> Set[str]:\n \"\"\"\n Context manager to add tags to flow and task run calls.\n\n Tags are always combined with any existing tags.\n\n Yields:\n The current set of tags\n\n Examples:\n >>> from prefect import tags, task, flow\n >>> @task\n >>> def my_task():\n >>> pass\n\n Run a task with tags\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"a\", \"b\"):\n >>> my_task() # has tags: a, b\n\n Run a flow with tags\n\n >>> @flow\n >>> def my_flow():\n >>> pass\n >>> with tags(\"a\", \"b\"):\n >>> my_flow() # has tags: a, b\n\n Run a task with nested tag contexts\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"a\", \"b\"):\n >>> with tags(\"c\", \"d\"):\n >>> my_task() # has tags: a, b, c, d\n >>> my_task() # has tags: a, b\n\n Inspect the current tags\n\n >>> @flow\n >>> def my_flow():\n >>> with tags(\"c\", \"d\"):\n >>> with tags(\"e\", \"f\") as current_tags:\n >>> print(current_tags)\n >>> with tags(\"a\", \"b\"):\n >>> my_flow()\n {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n \"\"\"\n current_tags = TagsContext.get().current_tags\n new_tags = current_tags.union(new_tags)\n with TagsContext(current_tags=new_tags):\n yield new_tags\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile
","text":"Switch to a profile for the duration of this context.
Profile contexts are confined to an async context in a single thread.
Parameters:
Name Type Description Defaultprofile
Union[Profile, str]
The name of the profile to load or an instance of a Profile.
requiredoverride_environment_variables
bool
If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.
Falseinclude_current_context
bool
If set, the new settings will be constructed with the current settings context as a base. If not set, the base settings will be loaded from the environment and defaults.
True
Yields:
Type DescriptionThe created SettingsContext
object
prefect/context.py
@contextmanager\ndef use_profile(\n profile: Union[Profile, str],\n override_environment_variables: bool = False,\n include_current_context: bool = True,\n):\n \"\"\"\n Switch to a profile for the duration of this context.\n\n Profile contexts are confined to an async context in a single thread.\n\n Args:\n profile: The name of the profile to load or an instance of a Profile.\n override_environment_variable: If set, variables in the profile will take\n precedence over current environment variables. By default, environment\n variables will override profile settings.\n include_current_context: If set, the new settings will be constructed\n with the current settings context as a base. If not set, the use_base settings\n will be loaded from the environment and defaults.\n\n Yields:\n The created `SettingsContext` object\n \"\"\"\n if isinstance(profile, str):\n profiles = prefect.settings.load_profiles()\n profile = profiles[profile]\n\n if not isinstance(profile, Profile):\n raise TypeError(\n f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n \"Expected 'str' or 'Profile'.\"\n )\n\n # Create a copy of the profiles settings as we will mutate it\n profile_settings = profile.settings.copy()\n\n existing_context = SettingsContext.get()\n if existing_context and include_current_context:\n settings = existing_context.settings\n else:\n settings = prefect.settings.get_settings_from_env()\n\n if not override_environment_variables:\n for key in os.environ:\n if key in prefect.settings.SETTING_VARIABLES:\n profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n new_settings = settings.copy_with_update(updates=profile_settings)\n\n with SettingsContext(profile=profile, settings=new_settings) as ctx:\n yield ctx\n
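For example, assuming the built-in 'default' profile:
>>> from prefect.context import use_profile\n>>> with use_profile('default') as ctx:\n>>>     assert ctx.profile.name == 'default'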
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine
","text":"Client-side execution and orchestration of flows and tasks.
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"The flow is called by the user or an existing flow run is executed in a new process.
See Flow.__call__
and prefect.engine.__main__
(python -m prefect.engine
)
A synchronous function acts as an entrypoint to the engine. The engine executes on a dedicated \"global loop\" thread. For asynchronous flow calls, we return a coroutine from the entrypoint so the user can enter the engine without blocking their event loop.
See enter_flow_run_engine_from_flow_call
, enter_flow_run_engine_from_subprocess
The thread that calls the entrypoint waits until orchestration of the flow run completes. This thread is referred to as the \"user\" thread and is usually the \"main\" thread. The thread is not blocked while waiting \u2014 it allows the engine to send work back to it. This allows us to send calls back to the user thread from the global loop thread.
See wait_for_call_in_loop_thread
and call_soon_in_waiting_thread
The asynchronous engine branches depending on if the flow run exists already and if there is a parent flow run in the current context.
See create_then_begin_flow_run
, create_and_begin_subflow_run
, and retrieve_flow_then_begin_flow_run
The asynchronous engine prepares for execution of the flow run. This includes starting the task runner, preparing context, etc.
See begin_flow_run
The flow run is orchestrated through states, calling the user's function as necessary. Generally the user's function is sent for execution on the user thread. If the flow function cannot be safely executed on the user thread, e.g. it is a synchronous child in an asynchronous parent it will be scheduled on a worker thread instead.
See orchestrate_flow_run
, call_soon_in_waiting_thread
, call_soon_in_new_thread
The task is called or submitted by the user. We require that this is always within a flow.
See Task.__call__
and Task.submit
A synchronous function acts as an entrypoint to the engine. Unlike flow calls, this will not block until completion if submit
was used.
See enter_task_run_engine
A future is created for the task call. Creation of the task run and submission to the task runner is scheduled as a background task so submission of many tasks can occur concurrently.
See create_task_run_future
and create_task_run_then_submit
The engine branches depending on if a future, state, or result is requested. If a future is requested, it is returned immediately to the user thread. Otherwise, the engine will wait for the task run to complete and return the final state or result.
See get_task_call_return_value
An engine function is submitted to the task runner. The task runner will schedule this function for execution on a worker. When executed, it will prepare for orchestration and wait for completion of the run.
See create_task_run_then_submit
and begin_task_run
The task run is orchestrated through states, calling the user's function as necessary. The user's function is always executed in a worker thread for isolation.
See orchestrate_task_run
, call_soon_in_new_thread
_Ideally, for local and sequential task runners we would send the task run to the user thread as we do for flows. See #9855.
begin_flow_run
async
","text":"Begins execution of a flow run; blocks until completion of the flow run
Note that the flow_run
contains a parameters
attribute which is the serialized parameters sent to the backend while the parameters
argument here should be the deserialized and validated dictionary of python objects.
Returns:
Type DescriptionState
The final state of the run
Source code inprefect/engine.py
async def begin_flow_run(\n flow: Flow,\n flow_run: FlowRun,\n parameters: Dict[str, Any],\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Begins execution of a flow run; blocks until completion of the flow run\n\n - Starts a task runner\n - Determines the result storage block to use\n - Orchestrates the flow run (runs the user-function and generates tasks)\n - Waits for tasks to complete / shutsdown the task runner\n - Sets a terminal state for the flow run\n\n Note that the `flow_run` contains a `parameters` attribute which is the serialized\n parameters sent to the backend while the `parameters` argument here should be the\n deserialized and validated dictionary of python objects.\n\n Returns:\n The final state of the run\n \"\"\"\n logger = flow_run_logger(flow_run, flow)\n\n log_prints = should_log_prints(flow)\n flow_run_context = PartialModel(FlowRunContext, log_prints=log_prints)\n\n async with AsyncExitStack() as stack:\n await stack.enter_async_context(\n report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n )\n\n # Create a task group for background tasks\n flow_run_context.background_tasks = await stack.enter_async_context(\n anyio.create_task_group()\n )\n\n # If the flow is async, we need to provide a portal so sync tasks can run\n flow_run_context.sync_portal = (\n stack.enter_context(start_blocking_portal()) if flow.isasync else None\n )\n\n task_runner = flow.task_runner.duplicate()\n if task_runner is NotImplemented:\n # Backwards compatibility; will not support concurrent flow runs\n task_runner = flow.task_runner\n logger.warning(\n f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n \" `duplicate` method and will fail if used for concurrent execution of\"\n \" the same flow.\"\n )\n\n logger.debug(\n f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n )\n\n flow_run_context.task_runner = await stack.enter_async_context(\n task_runner.start()\n )\n\n flow_run_context.result_factory = await ResultFactory.from_flow(\n flow, client=client\n )\n\n if log_prints:\n stack.enter_context(patch_print())\n\n terminal_or_paused_state = await orchestrate_flow_run(\n flow,\n flow_run=flow_run,\n parameters=parameters,\n wait_for=None,\n client=client,\n partial_flow_run_context=flow_run_context,\n # Orchestration needs to be interruptible if it has a timeout\n interruptible=flow.timeout_seconds is not None,\n user_thread=user_thread,\n )\n\n if terminal_or_paused_state.is_paused():\n timeout = terminal_or_paused_state.state_details.pause_timeout\n msg = \"Currently paused and suspending execution.\"\n if timeout:\n msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n logger.log(level=logging.INFO, msg=msg)\n await APILogHandler.aflush()\n\n return terminal_or_paused_state\n else:\n terminal_state = terminal_or_paused_state\n\n # If debugging, use the more complete `repr` than the usual `str` description\n display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n logger.log(\n level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n\n # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n # during interpreter shutdown\n await APILogHandler.aflush()\n\n return terminal_state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map
async
","text":"Async entrypoint for task mapping
Source code inprefect/engine.py
async def begin_task_map(\n task: Task,\n flow_run_context: Optional[FlowRunContext],\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n task_runner: Optional[BaseTaskRunner],\n autonomous: bool = False,\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]]:\n \"\"\"Async entrypoint for task mapping\"\"\"\n # We need to resolve some futures to map over their data, collect the upstream\n # links beforehand to retain relationship tracking.\n task_inputs = {\n k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n }\n\n # Resolve the top-level parameters in order to get mappable data of a known length.\n # Nested parameters will be resolved in each mapped child where their relationships\n # will also be tracked.\n parameters = await resolve_inputs(parameters, max_depth=1)\n\n # Ensure that any parameters in kwargs are expanded before this check\n parameters = explode_variadic_parameter(task.fn, parameters)\n\n iterable_parameters = {}\n static_parameters = {}\n annotated_parameters = {}\n for key, val in parameters.items():\n if isinstance(val, (allow_failure, quote)):\n # Unwrap annotated parameters to determine if they are iterable\n annotated_parameters[key] = val\n val = val.unwrap()\n\n if isinstance(val, unmapped):\n static_parameters[key] = val.value\n elif isiterable(val):\n iterable_parameters[key] = list(val)\n else:\n static_parameters[key] = val\n\n if not len(iterable_parameters):\n raise MappingMissingIterable(\n \"No iterable parameters were received. Parameters for map must \"\n f\"include at least one iterable. Parameters: {parameters}\"\n )\n\n iterable_parameter_lengths = {\n key: len(val) for key, val in iterable_parameters.items()\n }\n lengths = set(iterable_parameter_lengths.values())\n if len(lengths) > 1:\n raise MappingLengthMismatch(\n \"Received iterable parameters with different lengths. Parameters for map\"\n f\" must all be the same length. Got lengths: {iterable_parameter_lengths}\"\n )\n\n map_length = list(lengths)[0]\n\n task_runs = []\n for i in range(map_length):\n call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n call_parameters.update({key: value for key, value in static_parameters.items()})\n\n # Add default values for parameters; these are skipped earlier since they should\n # not be mapped over\n for key, value in get_parameter_defaults(task.fn).items():\n call_parameters.setdefault(key, value)\n\n # Re-apply annotations to each key again\n for key, annotation in annotated_parameters.items():\n call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n # Collapse any previously exploded kwargs\n call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n if autonomous:\n task_runs.append(\n await create_autonomous_task_run(\n task=task,\n parameters=call_parameters,\n )\n )\n else:\n task_runs.append(\n partial(\n get_task_call_return_value,\n task=task,\n flow_run_context=flow_run_context,\n parameters=call_parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=task_runner,\n extra_task_inputs=task_inputs,\n )\n )\n\n if autonomous:\n return task_runs\n\n # Maintain the order of the task runs when using the sequential task runner\n runner = task_runner if task_runner else flow_run_context.task_runner\n if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n return [await task_run() for task_run in task_runs]\n\n return await gather(*task_runs)\n
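This backs Task.map; a user-facing sketch:
>>> from prefect import flow, task\n>>> @task\n>>> def double(x):\n>>>     return x * 2\n>>> @flow\n>>> def my_flow():\n>>>     return double.map([1, 2, 3])  # one task run per element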
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run
async
","text":"Entrypoint for task run execution.
This function is intended for submission to the task runner.
This method may be called from a worker so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:
main thread: --> Flow called with settings A --> begin_task_run
executes same event loop --> Profile A matches and is not entered again
However, with execution on a remote environment, we are going to need to ensure the settings for the task run are respected by entering the context:
main thread: --> Flow called with settings A --> begin_task_run
is scheduled on a remote worker, settings A is serialized remote worker: --> Remote worker imports Prefect (may not occur) --> Global settings is loaded with default settings --> begin_task_run
executes on a different event loop than the flow --> Current settings is not set or does not match, settings A is entered
prefect/engine.py
async def begin_task_run(\n task: Task,\n task_run: TaskRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n result_factory: ResultFactory,\n log_prints: bool,\n settings: prefect.context.SettingsContext,\n):\n \"\"\"\n Entrypoint for task run execution.\n\n This function is intended for submission to the task runner.\n\n This method may be called from a worker so we ensure the settings context has been\n entered. For example, with a runner that is executing tasks in the same event loop,\n we will likely not enter the context again because the current context already\n matches:\n\n main thread:\n --> Flow called with settings A\n --> `begin_task_run` executes in the same event loop\n --> Profile A matches and is not entered again\n\n However, with execution on a remote environment, we are going to need to ensure the\n settings for the task run are respected by entering the context:\n\n main thread:\n --> Flow called with settings A\n --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n remote worker:\n --> Remote worker imports Prefect (may not occur)\n --> Global settings are loaded with default settings\n --> `begin_task_run` executes on a different event loop than the flow\n --> Current settings are not set or do not match, so settings A is entered\n \"\"\"\n maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n async with AsyncExitStack() as stack:\n # The settings context may be null on a remote worker so we use the safe `.get`\n # method and compare it to the settings required for this task run\n if prefect.context.SettingsContext.get() != settings:\n stack.enter_context(settings)\n setup_logging()\n\n if maybe_flow_run_context:\n # Accessible if on a worker that is running in the same thread as the flow\n client = maybe_flow_run_context.client\n # Only run the task in an interruptible thread if it is in the same thread as\n # the flow _and_ the flow run has a timeout attached. If the task is on a\n # worker, the flow run timeout will not be raised in the worker process.\n interruptible = maybe_flow_run_context.timeout_scope is not None\n else:\n # Otherwise, retrieve a new client\n client = await stack.enter_async_context(get_client())\n interruptible = False\n await stack.enter_async_context(anyio.create_task_group())\n\n await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n # TODO: Use the background tasks group to manage logging for this task\n\n if log_prints:\n stack.enter_context(patch_print())\n\n await check_api_reachable(\n client, f\"Cannot orchestrate task run '{task_run.id}'\"\n )\n try:\n state = await orchestrate_task_run(\n task=task,\n task_run=task_run,\n parameters=parameters,\n wait_for=wait_for,\n result_factory=result_factory,\n log_prints=log_prints,\n interruptible=interruptible,\n client=client,\n )\n\n if not maybe_flow_run_context:\n # When a task run finishes on a remote worker, flush logs to prevent\n # loss if the process exits\n await APILogHandler.aflush()\n\n except Abort as abort:\n # Task run probably already completed, fetch its state\n task_run = await client.read_task_run(task_run.id)\n\n if task_run.state.is_final():\n task_run_logger(task_run).info(\n f\"Task run '{task_run.id}' already finished.\"\n )\n else:\n # TODO: This is a concerning case; we should determine when this occurs\n # 1. This can occur when the flow run is not in a running state\n task_run_logger(task_run).warning(\n f\"Task run '{task_run.id}' received abort during orchestration: \"\n f\"{abort} Task run is in {task_run.state.type.value} state.\"\n )\n state = task_run.state\n\n except Pause:\n # A pause signal here should mean the flow run suspended, so we\n # should do the same. We'll look up the flow run's pause state to\n # try and reuse it, so we capture any data like timeouts.\n flow_run = await client.read_flow_run(task_run.flow_run_id)\n if flow_run.state and flow_run.state.is_paused():\n state = flow_run.state\n else:\n state = Suspended()\n\n task_run_logger(task_run).info(\n \"Task run encountered a pause signal during orchestration.\"\n )\n\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.collect_task_run_inputs","title":"collect_task_run_inputs
async
","text":"This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found.
Examples:
>>> task_inputs = {\n>>> k: await collect_task_run_inputs(v) for k, v in parameters.items()\n>>> }\n
Source code in prefect/engine.py
async def collect_task_run_inputs(expr: Any, max_depth: int = -1) -> Set[TaskRunInput]:\n \"\"\"\n This function recurses through an expression to generate a set of any discernible\n task run inputs it finds in the data structure. It produces a set of all inputs\n found.\n\n Examples:\n >>> task_inputs = {\n >>> k: await collect_task_run_inputs(v) for k, v in parameters.items()\n >>> }\n \"\"\"\n # TODO: This function needs to be updated to detect parameters and constants\n\n inputs = set()\n futures = set()\n\n def add_futures_and_states_to_inputs(obj):\n if isinstance(obj, PrefectFuture):\n # We need to wait for futures to be submitted before we can get the task\n # run id but we want to do so asynchronously\n futures.add(obj)\n elif is_state(obj):\n if obj.state_details.task_run_id:\n inputs.add(TaskRunResult(id=obj.state_details.task_run_id))\n # Expressions inside quotes should not be traversed\n elif isinstance(obj, quote):\n raise StopVisiting\n else:\n state = get_state_for_result(obj)\n if state and state.state_details.task_run_id:\n inputs.add(TaskRunResult(id=state.state_details.task_run_id))\n\n visit_collection(\n expr,\n visit_fn=add_futures_and_states_to_inputs,\n return_data=False,\n max_depth=max_depth,\n )\n\n await asyncio.gather(*[future._wait_for_submission() for future in futures])\n for future in futures:\n inputs.add(TaskRunResult(id=future.task_run.id))\n\n return inputs\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run
async
","text":"Async entrypoint for flows calls within a flow run
Subflows differ from parent flows in that they:
- Resolve futures in passed parameters into values
- Create a dummy task for representation in the parent flow
- Retrieve default result storage from the parent flow rather than the server
Returns:
Type Description
Any
The final state of the run
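For orientation, a minimal sketch of a call that takes this path; the flow names here are illustrative, not part of the engine API:

from prefect import flow

@flow
def child(x: int) -> int:
    return x + 1

@flow
def parent() -> int:
    # Calling one flow inside another enters the subflow engine path, which
    # creates a dummy task in the parent run to represent the child run.
    return child(41)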
Source code in prefect/engine.py
@inject_client\nasync def create_and_begin_subflow_run(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> Any:\n \"\"\"\n Async entrypoint for flows calls within a flow run\n\n Subflows differ from parent flows in that they\n - Resolve futures in passed parameters into values\n - Create a dummy task for representation in the parent flow\n - Retrieve default result storage from the parent flow rather than the server\n\n Returns:\n The final state of the run\n \"\"\"\n parent_flow_run_context = FlowRunContext.get()\n parent_logger = get_run_logger(parent_flow_run_context)\n log_prints = should_log_prints(flow)\n terminal_state = None\n\n parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n if wait_for:\n task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n rerunning = parent_flow_run_context.flow_run.run_count > 1\n\n # Generate a task in the parent flow run to represent the result of the subflow run\n dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n parent_task_run = await client.create_task_run(\n task=dummy_task,\n flow_run_id=parent_flow_run_context.flow_run.id,\n dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n task_inputs=task_inputs,\n state=Pending(),\n )\n\n # Resolve any task futures in the input\n parameters = await resolve_inputs(parameters)\n\n if parent_task_run.state.is_final() and not (\n rerunning and not parent_task_run.state.is_completed()\n ):\n # Retrieve the most recent flow run from the database\n flow_runs = await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n parent_task_run_id={\"any_\": [parent_task_run.id]}\n ),\n sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n )\n flow_run = flow_runs[-1]\n\n # Set up variables required downstream\n terminal_state = flow_run.state\n logger = flow_run_logger(flow_run, flow)\n\n else:\n flow_run = await client.create_flow_run(\n flow,\n parameters=flow.serialize_parameters(parameters),\n parent_task_run_id=parent_task_run.id,\n state=parent_task_run.state if not rerunning else Pending(),\n tags=TagsContext.get().current_tags,\n )\n\n parent_logger.info(\n f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n )\n\n logger = flow_run_logger(flow_run, flow)\n ui_url = PREFECT_UI_URL.value()\n if ui_url:\n logger.info(\n f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n extra={\"send_to_api\": False},\n )\n\n result_factory = await ResultFactory.from_flow(\n flow, client=parent_flow_run_context.client\n )\n\n if flow.should_validate_parameters:\n try:\n parameters = flow.validate_parameters(parameters)\n except Exception:\n message = \"Validation of flow parameters failed with error:\"\n logger.exception(message)\n terminal_state = await propose_state(\n client,\n state=await exception_to_failed_state(\n message=message, result_factory=result_factory\n ),\n flow_run_id=flow_run.id,\n )\n\n if terminal_state is None or not terminal_state.is_final():\n async with AsyncExitStack() as stack:\n await stack.enter_async_context(\n report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n )\n\n task_runner = flow.task_runner.duplicate()\n if task_runner is NotImplemented:\n # Backwards compatibility; will not support concurrent flow runs\n task_runner = flow.task_runner\n logger.warning(\n f\"Task runner 
{type(task_runner).__name__!r} does not implement\"\n \" the `duplicate` method and will fail if used for concurrent\"\n \" execution of the same flow.\"\n )\n\n await stack.enter_async_context(task_runner.start())\n\n if log_prints:\n stack.enter_context(patch_print())\n\n terminal_state = await orchestrate_flow_run(\n flow,\n flow_run=flow_run,\n parameters=parameters,\n wait_for=wait_for,\n # If the parent flow run has a timeout, then this one needs to be\n # interruptible as well\n interruptible=parent_flow_run_context.timeout_scope is not None,\n client=client,\n partial_flow_run_context=PartialModel(\n FlowRunContext,\n sync_portal=parent_flow_run_context.sync_portal,\n task_runner=task_runner,\n background_tasks=parent_flow_run_context.background_tasks,\n result_factory=result_factory,\n log_prints=log_prints,\n ),\n user_thread=user_thread,\n )\n\n # Display the full state (including the result) if debugging\n display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n logger.log(\n level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n\n # Track the subflow state so the parent flow can use it to determine its final state\n parent_flow_run_context.flow_run_states.append(terminal_state)\n\n if return_type == \"state\":\n return terminal_state\n elif return_type == \"result\":\n return await terminal_state.result(fetch=True)\n else:\n raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_autonomous_task_run","title":"create_autonomous_task_run
async
","text":"Create a task run in the API for an autonomous task submission and store the provided parameters using the existing result storage mechanism.
Source code in prefect/engine.py
async def create_autonomous_task_run(task: Task, parameters: Dict[str, Any]) -> TaskRun:\n \"\"\"Create a task run in the API for an autonomous task submission and store\n the provided parameters using the existing result storage mechanism.\n \"\"\"\n async with get_client() as client:\n state = Scheduled()\n if parameters:\n parameters_id = uuid4()\n state.state_details.task_parameters_id = parameters_id\n\n # TODO: Improve use of result storage for parameter storage / reference\n task.persist_result = True\n\n factory = await ResultFactory.from_autonomous_task(task, client=client)\n await factory.store_parameters(parameters_id, parameters)\n\n task_run = await client.create_task_run(\n task=task,\n flow_run_id=None,\n dynamic_key=f\"{task.task_key}-{str(uuid4())[:NUM_CHARS_DYNAMIC_KEY]}\",\n state=state,\n )\n\n engine_logger.debug(f\"Submitted run of task {task.name!r} for execution\")\n\n return task_run\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run
async
","text":"Async entrypoint for flow calls
Creates the flow run in the backend, then enters the main flow run engine.
Source code in prefect/engine.py
@inject_client\nasync def create_then_begin_flow_run(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> Any:\n \"\"\"\n Async entrypoint for flow calls\n\n Creates the flow run in the backend, then enters the main flow run engine.\n \"\"\"\n # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n # the function signature to clarify this eventually.\n\n await check_api_reachable(client, \"Cannot create flow run\")\n\n state = Pending()\n if flow.should_validate_parameters:\n try:\n parameters = flow.validate_parameters(parameters)\n except Exception:\n state = await exception_to_failed_state(\n message=\"Validation of flow parameters failed with error:\"\n )\n\n flow_run = await client.create_flow_run(\n flow,\n # Send serialized parameters to the backend\n parameters=flow.serialize_parameters(parameters),\n state=state,\n tags=TagsContext.get().current_tags,\n )\n\n engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n logger = flow_run_logger(flow_run, flow)\n\n ui_url = PREFECT_UI_URL.value()\n if ui_url:\n logger.info(\n f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n extra={\"send_to_api\": False},\n )\n\n if state.is_failed():\n logger.error(state.message)\n engine_logger.info(\n f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n \" failed.\"\n )\n else:\n state = await begin_flow_run(\n flow=flow,\n flow_run=flow_run,\n parameters=parameters,\n client=client,\n user_thread=user_thread,\n )\n\n if return_type == \"state\":\n return state\n elif return_type == \"result\":\n return await state.result(fetch=True)\n else:\n raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call
","text":"Sync entrypoint for flow calls.
This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.
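A minimal sketch of a call that reaches this entrypoint; the flow is illustrative:

from prefect import flow

@flow
def greet(name: str) -> str:
    return f"Hello, {name}!"

# An ordinary call to a @flow-decorated function lands in this entrypoint,
# which routes to the top-level or subflow engine as appropriate.
greet("Marvin")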
Source code in prefect/engine.py
def enter_flow_run_engine_from_flow_call(\n flow: Flow,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n \"\"\"\n Sync entrypoint for flow calls.\n\n This function does the heavy lifting of ensuring we can get into an async context\n for flow run execution with minimal overhead.\n \"\"\"\n setup_logging()\n\n registry = PrefectObjectRegistry.get()\n if registry and registry.block_code_execution:\n engine_logger.warning(\n f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n \" Consider updating the script to only call the flow if executed\"\n f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n )\n return None\n\n if TaskRunContext.get():\n raise RuntimeError(\n \"Flows cannot be run from within tasks. Did you mean to call this \"\n \"flow in a flow?\"\n )\n\n parent_flow_run_context = FlowRunContext.get()\n is_subflow_run = parent_flow_run_context is not None\n\n if wait_for is not None and not is_subflow_run:\n raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n begin_run = create_call(\n create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n flow=flow,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n client=parent_flow_run_context.client if is_subflow_run else None,\n user_thread=threading.current_thread(),\n )\n\n # On completion of root flows, wait for the global thread to ensure that\n # any work there is complete\n done_callbacks = (\n [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n )\n\n # WARNING: You must define any context managers here to pass to our concurrency\n # api instead of entering them in here in the engine entrypoint. Otherwise, async\n # flows will not use the context as this function _exits_ to return an awaitable to\n # the user. Generally, you should enter contexts _within_ the async `begin_run`\n # instead but if you need to enter a context from the main thread you'll need to do\n # it here.\n contexts = [capture_sigterm()]\n\n if flow.isasync and (\n not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n ):\n # return a coro for the user to await if the flow is async\n # unless it is an async subflow called in a sync flow\n retval = from_async.wait_for_call_in_loop_thread(\n begin_run,\n done_callbacks=done_callbacks,\n contexts=contexts,\n )\n\n else:\n retval = from_sync.wait_for_call_in_loop_thread(\n begin_run,\n done_callbacks=done_callbacks,\n contexts=contexts,\n )\n\n return retval\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess
","text":"Sync entrypoint for flow runs that have been submitted for execution by an agent
Differs from enter_flow_run_engine_from_flow_call
in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.
Source code in prefect/engine.py
def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n \"\"\"\n Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n but not a flow object. The flow must be retrieved before execution can begin.\n Additionally, this assumes that the caller is always in a context without an event\n loop as this should be called from a fresh process.\n \"\"\"\n\n # Ensure collections are imported and have the opportunity to register types before\n # loading the user code from the deployment\n prefect.plugins.load_prefect_collections()\n\n setup_logging()\n\n state = from_sync.wait_for_call_in_loop_thread(\n create_call(\n retrieve_flow_then_begin_flow_run,\n flow_run_id,\n user_thread=threading.current_thread(),\n ),\n contexts=[capture_sigterm()],\n )\n\n APILogHandler.flush()\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine
","text":"Sync entrypoint for task calls
Source code in prefect/engine.py
def enter_task_run_engine(\n task: Task,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n return_type: EngineReturnType,\n task_runner: Optional[BaseTaskRunner],\n mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]:\n \"\"\"Sync entrypoint for task calls\"\"\"\n\n flow_run_context = FlowRunContext.get()\n\n if not flow_run_context:\n if (\n not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n or return_type == \"future\"\n or mapped\n ):\n raise RuntimeError(\n \"Tasks cannot be run outside of a flow by default.\"\n \" If you meant to submit an autonomous task, you need to set\"\n \" `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true`\"\n \" and use `your_task.submit()` instead of `your_task()`.\"\n \" Mapping autonomous tasks is not yet supported.\"\n )\n from prefect.task_engine import submit_autonomous_task_run_to_engine\n\n return submit_autonomous_task_run_to_engine(\n task=task,\n task_run=None,\n parameters=parameters,\n task_runner=task_runner,\n wait_for=wait_for,\n return_type=return_type,\n client=get_client(),\n )\n\n if TaskRunContext.get():\n raise RuntimeError(\n \"Tasks cannot be run from within tasks. Did you mean to call this \"\n \"task in a flow?\"\n )\n\n if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n raise TimeoutError(\"Flow run timed out\")\n\n begin_run = create_call(\n begin_task_map if mapped else get_task_call_return_value,\n task=task,\n flow_run_context=flow_run_context,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=task_runner,\n )\n\n if task.isasync and flow_run_context.flow.isasync:\n # return a coro for the user to await if an async task in an async flow\n return from_async.wait_for_call_in_loop_thread(begin_run)\n else:\n return from_sync.wait_for_call_in_loop_thread(begin_run)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.get_state_for_result","title":"get_state_for_result
","text":"Get the state related to a result object.
link_state_to_result
must have been called first.
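A rough sketch with illustrative names; the lookup only succeeds inside a flow run, for result objects the engine has already linked:

from prefect import flow, task
from prefect.engine import get_state_for_result

@task
def make_items() -> list:
    return ["a", "b"]

@flow
def my_flow():
    items = make_items()
    # The engine linked this result object to its task run state when it was
    # returned, so the state can be recovered from the object itself.
    state = get_state_for_result(items)
    return state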
Source code in prefect/engine.py
def get_state_for_result(obj: Any) -> Optional[State]:\n \"\"\"\n Get the state related to a result object.\n\n `link_state_to_result` must have been called first.\n \"\"\"\n flow_run_context = FlowRunContext.get()\n if flow_run_context:\n return flow_run_context.task_run_results.get(id(obj))\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.link_state_to_result","title":"link_state_to_result
","text":"Caches a link between a state and a result and its components using the id
of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run.
This allows dependency tracking to occur when results are passed around. Note: Because id
is used, we cannot cache links between singleton objects.
We only cache the relationship between components 1-layer deep. Example: Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be mapped to the state:
- [1, [\"a\",\"b\"], (\"c\",)]
- [\"a\",\"b\"]
- (\"c\",)

Note: the int `1` will not be mapped to the state because it is a singleton.
Other Notes: We do not hash the result because:
- If changes are made to the object in the flow between task calls, we can still track that they are related.
- Hashing can be expensive.
- Not all objects are hashable.

We do not set an attribute, e.g. __prefect_state__, on the result because:
- Mutating user's objects is dangerous.
- Unrelated equality comparisons can break unexpectedly.
- The field can be preserved on copy.
- We cannot set this attribute on Python built-ins.
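To see why singletons are excluded, note how CPython interns small integers, so id cannot distinguish separate uses of the same value:

>>> a = 1; b = 1
>>> id(a) == id(b)  # small ints are interned singletons; id() cannot tell them apart
True
>>> x = ["a"]; y = ["a"]
>>> id(x) == id(y)  # distinct containers have distinct ids and are trackable
False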
Source code in prefect/engine.py
def link_state_to_result(state: State, result: Any) -> None:\n \"\"\"\n Caches a link between a state and a result and its components using\n the `id` of the components to map to the state. The cache is persisted to the\n current flow run context since task relationships are limited to within a flow run.\n\n This allows dependency tracking to occur when results are passed around.\n Note: Because `id` is used, we cannot cache links between singleton objects.\n\n We only cache the relationship between components 1-layer deep.\n Example:\n Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be\n mapped to the state:\n - [1, [\"a\",\"b\"], (\"c\",)]\n - [\"a\",\"b\"]\n - (\"c\",)\n\n Note: the int `1` will not be mapped to the state because it is a singleton.\n\n Other Notes:\n We do not hash the result because:\n - If changes are made to the object in the flow between task calls, we can still\n track that they are related.\n - Hashing can be expensive.\n - Not all objects are hashable.\n\n We do not set an attribute, e.g. `__prefect_state__`, on the result because:\n\n - Mutating user's objects is dangerous.\n - Unrelated equality comparisons can break unexpectedly.\n - The field can be preserved on copy.\n - We cannot set this attribute on Python built-ins.\n \"\"\"\n\n flow_run_context = FlowRunContext.get()\n\n def link_if_trackable(obj: Any) -> None:\n \"\"\"Track connection between a task run result and its associated state if it has a unique ID.\n\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n\n This function will mutate the State if the object is an untrackable type by setting the value\n for `State.state_details.untrackable_result` to `True`.\n\n \"\"\"\n if (type(obj) in UNTRACKABLE_TYPES) or (\n isinstance(obj, int) and (-5 <= obj <= 256)\n ):\n state.state_details.untrackable_result = True\n return\n flow_run_context.task_run_results[id(obj)] = state\n\n if flow_run_context:\n visit_collection(expr=result, visit_fn=link_if_trackable, max_depth=1)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run
async
","text":"Executes a flow run.
Note on flow timeouts: Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will immediately return; otherwise, the next time it awaits it will exit. Sync flows are run by the task runner in a worker thread, which cannot be interrupted. The worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called
which allows it to raise a TimeoutError
to respect the timeout.
Returns:
Type Description
State
The final state of the run
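A small sketch of the timeout behavior described above, using illustrative names:

from time import sleep
from prefect import flow, task

@task
def slow_task():
    sleep(30)

@flow(timeout_seconds=5)
def my_flow():
    # The sync flow body runs in a worker thread that cannot be interrupted;
    # the cancelled timeout scope is observed at the next task call, which
    # raises TimeoutError to end the run as TimedOut.
    slow_task()
    slow_task()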
Source code in prefect/engine.py
async def orchestrate_flow_run(\n flow: Flow,\n flow_run: FlowRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n interruptible: bool,\n client: PrefectClient,\n partial_flow_run_context: PartialModel[FlowRunContext],\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Executes a flow run.\n\n Note on flow timeouts:\n Since async flows are run directly in the main event loop, timeout behavior will\n match that described by anyio. If the flow is awaiting something, it will\n immediately return; otherwise, the next time it awaits it will exit. Sync flows\n are being task runner in a worker thread, which cannot be interrupted. The worker\n thread will exit at the next task call. The worker thread also has access to the\n status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n which allows it to raise a `TimeoutError` to respect the timeout.\n\n Returns:\n The final state of the run\n \"\"\"\n\n logger = flow_run_logger(flow_run, flow)\n\n flow_run_context = None\n parent_flow_run_context = FlowRunContext.get()\n\n try:\n # Resolve futures in any non-data dependencies to ensure they are ready\n if wait_for is not None:\n await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n except UpstreamTaskError as upstream_exc:\n return await propose_state(\n client,\n Pending(name=\"NotReady\", message=str(upstream_exc)),\n flow_run_id=flow_run.id,\n # if orchestrating a run already in a pending state, force orchestration to\n # update the state name\n force=flow_run.state.is_pending(),\n )\n\n state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n # flag to ensure we only update the flow run name once\n run_name_set = False\n\n await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n while state.is_running():\n waited_for_task_runs = False\n\n # Update the flow run to the latest data\n flow_run = await client.read_flow_run(flow_run.id)\n try:\n with partial_flow_run_context.finalize(\n flow=flow,\n flow_run=flow_run,\n client=client,\n parameters=parameters,\n ) as flow_run_context:\n # update flow run name\n if not run_name_set and flow.flow_run_name:\n flow_run_name = _resolve_custom_flow_run_name(\n flow=flow, parameters=parameters\n )\n\n await client.update_flow_run(\n flow_run_id=flow_run.id, name=flow_run_name\n )\n logger.extra[\"flow_run_name\"] = flow_run_name\n logger.debug(\n f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n )\n flow_run.name = flow_run_name\n run_name_set = True\n\n args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n logger.debug(\n f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n )\n\n if PREFECT_DEBUG_MODE:\n logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n else:\n logger.debug(\n \"Beginning execution...\", extra={\"state_message\": True}\n )\n\n flow_call = create_call(flow.fn, *args, **kwargs)\n\n # This check for a parent call is needed for cases where the engine\n # was entered directly during testing\n parent_call = get_current_call()\n\n if parent_call and (\n not parent_flow_run_context\n or (\n parent_flow_run_context\n and parent_flow_run_context.flow.isasync == flow.isasync\n )\n ):\n from_async.call_soon_in_waiting_thread(\n flow_call, thread=user_thread, timeout=flow.timeout_seconds\n )\n else:\n from_async.call_soon_in_new_thread(\n flow_call, timeout=flow.timeout_seconds\n )\n\n result = await flow_call.aresult()\n\n waited_for_task_runs = await 
wait_for_task_runs_and_report_crashes(\n flow_run_context.task_run_futures, client=client\n )\n except PausedRun as exc:\n # could get raised either via utility or by returning Paused from a task run\n # if a task run pauses, we set its state as the flow's state\n # to preserve reschedule and timeout behavior\n paused_flow_run = await client.read_flow_run(flow_run.id)\n if paused_flow_run.state.is_running():\n state = await propose_state(\n client,\n state=exc.state,\n flow_run_id=flow_run.id,\n )\n\n return state\n paused_flow_run_state = paused_flow_run.state\n return paused_flow_run_state\n except CancelledError as exc:\n if not flow_call.timedout():\n # If the flow call was not cancelled by us; this is a crash\n raise\n # Construct a new exception as `TimeoutError`\n original = exc\n exc = TimeoutError()\n exc.__cause__ = original\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n result_factory=flow_run_context.result_factory,\n name=\"TimedOut\",\n )\n except Exception:\n # Generic exception in user code\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n message=\"Flow run encountered an exception.\",\n result_factory=flow_run_context.result_factory,\n )\n else:\n if result is None:\n # All tasks and subflows are reference tasks if there is no return value\n # If there are no tasks, use `None` instead of an empty iterable\n result = (\n flow_run_context.task_run_futures\n + flow_run_context.task_run_states\n + flow_run_context.flow_run_states\n ) or None\n\n terminal_state = await return_value_to_state(\n await resolve_futures_to_states(result),\n result_factory=flow_run_context.result_factory,\n )\n\n if not waited_for_task_runs:\n # An exception occurred that prevented us from waiting for task runs to\n # complete. Ensure that we wait for them before proposing a final state\n # for the flow run.\n await wait_for_task_runs_and_report_crashes(\n flow_run_context.task_run_futures, client=client\n )\n\n # Before setting the flow run state, store state.data using\n # block storage and send the resulting data document to the Prefect API instead.\n # This prevents the pickled return value of flow runs\n # from being sent to the Prefect API and stored in the Prefect database.\n # state.data is left as is, otherwise we would have to load\n # the data from block storage again after storing.\n state = await propose_state(\n client,\n state=terminal_state,\n flow_run_id=flow_run.id,\n )\n\n await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n logger.debug(\n (\n f\"Received new state {state} when proposing final state\"\n f\" {terminal_state}\"\n ),\n extra={\"send_to_api\": False},\n )\n\n if not state.is_final() and not state.is_paused():\n logger.info(\n (\n f\"Received non-final state {state.name!r} when proposing final\"\n f\" state {terminal_state.name!r} and will attempt to run again...\"\n ),\n )\n # Attempt to enter a running state again\n state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run
async
","text":"Execute a task run
This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.
Proposes a RUNNING state, then:
- if accepted, the task user function will be run
- if rejected, the received state will be returned
When the user function is run, the result will be used to determine a final state:
- if an exception is encountered, it is trapped and stored in a FAILED state
- otherwise, return_value_to_state is used to determine the state
If the final state is COMPLETED, we generate a cache key as specified by the task
The final state is then proposed:
- if accepted, this is the final state and will be returned
- if rejected and a new final state is provided, it will be returned
- if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again
Returns:
Type Description
State
The final state of the run
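A hedged sketch of the cache key behavior described above; task_input_hash is Prefect's built-in cache key function and the task is illustrative:

from datetime import timedelta
from prefect import flow, task
from prefect.tasks import task_input_hash

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def square(x: int) -> int:
    return x * x

@flow
def my_flow():
    square(3)  # first call computes the result and records the cache key
    square(3)  # orchestration can answer from cache with a COMPLETED state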
Source code in prefect/engine.py
async def orchestrate_task_run(\n task: Task,\n task_run: TaskRun,\n parameters: Dict[str, Any],\n wait_for: Optional[Iterable[PrefectFuture]],\n result_factory: ResultFactory,\n log_prints: bool,\n interruptible: bool,\n client: PrefectClient,\n) -> State:\n \"\"\"\n Execute a task run\n\n This function should be submitted to a task runner. We must construct the context\n here instead of receiving it already populated since we may be in a new environment.\n\n Proposes a RUNNING state, then\n - if accepted, the task user function will be run\n - if rejected, the received state will be returned\n\n When the user function is run, the result will be used to determine a final state\n - if an exception is encountered, it is trapped and stored in a FAILED state\n - otherwise, `return_value_to_state` is used to determine the state\n\n If the final state is COMPLETED, we generate a cache key as specified by the task\n\n The final state is then proposed\n - if accepted, this is the final state and will be returned\n - if rejected and a new final state is provided, it will be returned\n - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n state again\n\n Returns:\n The final state of the run\n \"\"\"\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run_context.flow_run\n else:\n flow_run = await client.read_flow_run(task_run.flow_run_id)\n logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n partial_task_run_context = PartialModel(\n TaskRunContext,\n task_run=task_run,\n task=task,\n client=client,\n result_factory=result_factory,\n log_prints=log_prints,\n )\n task_introspection_start_time = time.perf_counter()\n try:\n # Resolve futures in parameters into data\n resolved_parameters = await resolve_inputs(parameters)\n # Resolve futures in any non-data dependencies to ensure they are ready\n await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n except UpstreamTaskError as upstream_exc:\n return await propose_state(\n client,\n Pending(name=\"NotReady\", message=str(upstream_exc)),\n task_run_id=task_run.id,\n # if orchestrating a run already in a pending state, force orchestration to\n # update the state name\n force=task_run.state.is_pending(),\n )\n task_introspection_end_time = time.perf_counter()\n\n introspection_time = round(\n task_introspection_end_time - task_introspection_start_time, 3\n )\n threshold = PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD.value()\n if threshold and introspection_time > threshold:\n logger.warning(\n f\"Task parameter introspection took {introspection_time} seconds \"\n f\", exceeding `PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD` of {threshold}. \"\n \"Try wrapping large task parameters with \"\n \"`prefect.utilities.annotations.quote` for increased performance, \"\n \"e.g. `my_task(quote(param))`. 
To disable this message set \"\n \"`PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD=0`.\"\n )\n\n # Generate the cache key to attach to proposed states\n # The cache key uses a TaskRunContext that does not include a `timeout_context``\n cache_key = (\n task.cache_key_fn(\n partial_task_run_context.finalize(parameters=resolved_parameters),\n resolved_parameters,\n )\n if task.cache_key_fn\n else None\n )\n\n task_run_context = partial_task_run_context.finalize(parameters=resolved_parameters)\n\n # Ignore the cached results for a cache key, default = false\n # Setting on task level overrules the Prefect setting (env var)\n refresh_cache = (\n task.refresh_cache\n if task.refresh_cache is not None\n else PREFECT_TASKS_REFRESH_CACHE.value()\n )\n\n # Emit an event to capture that the task run was in the `PENDING` state.\n last_event = _emit_task_run_state_change_event(\n task_run=task_run, initial_state=None, validated_state=task_run.state\n )\n last_state = task_run.state\n\n # Completed states with persisted results should have result data. If it's missing,\n # this could be a manual state transition, so we should use the Unknown result type\n # to represent that we know we don't know the result.\n if (\n last_state\n and last_state.is_completed()\n and result_factory.persist_result\n and not last_state.data\n ):\n state = await propose_state(\n client,\n state=Completed(data=await UnknownResult.create()),\n task_run_id=task_run.id,\n force=True,\n )\n\n # Transition from `PENDING` -> `RUNNING`\n try:\n state = await propose_state(\n client,\n Running(\n state_details=StateDetails(\n cache_key=cache_key, refresh_cache=refresh_cache\n )\n ),\n task_run_id=task_run.id,\n )\n except Pause as exc:\n # We shouldn't get a pause signal without a state, but if this happens,\n # just use a Paused state to assume an in-process pause.\n state = exc.state if exc.state else Paused()\n\n # If a flow submits tasks and then pauses, we may reach this point due\n # to concurrency timing because the tasks will try to transition after\n # the flow run has paused. Orchestration will send back a Paused state\n # for the task runs.\n if state.state_details.pause_reschedule:\n # If we're being asked to pause and reschedule, we should exit the\n # task and expect to be resumed later.\n raise\n\n if state.is_paused():\n BACKOFF_MAX = 10 # Seconds\n backoff_count = 0\n\n async def tick():\n nonlocal backoff_count\n if backoff_count < BACKOFF_MAX:\n backoff_count += 1\n interval = 1 + backoff_count + random.random() * backoff_count\n await anyio.sleep(interval)\n\n # Enter a loop to wait for the task run to be resumed, i.e.\n # become Pending, and then propose a Running state again.\n while True:\n await tick()\n\n # Propose a Running state again. We do this instead of reading the\n # task run because if the flow run times out, this lets\n # orchestration fail the task run.\n try:\n state = await propose_state(\n client,\n Running(\n state_details=StateDetails(\n cache_key=cache_key, refresh_cache=refresh_cache\n )\n ),\n task_run_id=task_run.id,\n )\n except Pause as exc:\n if not exc.state:\n continue\n\n if exc.state.state_details.pause_reschedule:\n # If the pause state includes pause_reschedule, we should exit the\n # task and expect to be resumed later. We've already checked for this\n # above, but we check again here in case the state changed; e.g. 
the\n # flow run suspended.\n raise\n else:\n # Propose a Running state again.\n continue\n else:\n break\n\n # Emit an event to capture the result of proposing a `RUNNING` state.\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n # flag to ensure we only update the task run name once\n run_name_set = False\n\n # Only run the task if we enter a `RUNNING` state\n while state.is_running():\n # Retrieve the latest metadata for the task run context\n task_run = await client.read_task_run(task_run.id)\n\n with task_run_context.copy(\n update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n ):\n try:\n args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n # update task run name\n if not run_name_set and task.task_run_name:\n task_run_name = _resolve_custom_task_run_name(\n task=task, parameters=resolved_parameters\n )\n await client.set_task_run_name(\n task_run_id=task_run.id, name=task_run_name\n )\n logger.extra[\"task_run_name\"] = task_run_name\n logger.debug(\n f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n )\n task_run.name = task_run_name\n run_name_set = True\n\n if PREFECT_DEBUG_MODE.value():\n logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n else:\n logger.debug(\n \"Beginning execution...\", extra={\"state_message\": True}\n )\n\n call = from_async.call_soon_in_new_thread(\n create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n )\n result = await call.aresult()\n\n except (CancelledError, asyncio.CancelledError) as exc:\n if not call.timedout():\n # If the task call was not cancelled by us; this is a crash\n raise\n # Construct a new exception as `TimeoutError`\n original = exc\n exc = TimeoutError()\n exc.__cause__ = original\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=(\n f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n ),\n result_factory=task_run_context.result_factory,\n name=\"TimedOut\",\n )\n except Exception as exc:\n logger.exception(\"Encountered exception during execution:\")\n terminal_state = await exception_to_failed_state(\n exc,\n message=\"Task run encountered an exception\",\n result_factory=task_run_context.result_factory,\n )\n else:\n terminal_state = await return_value_to_state(\n result,\n result_factory=task_run_context.result_factory,\n )\n\n # for COMPLETED tasks, add the cache key and expiration\n if terminal_state.is_completed():\n terminal_state.state_details.cache_expiration = (\n (pendulum.now(\"utc\") + task.cache_expiration)\n if task.cache_expiration\n else None\n )\n terminal_state.state_details.cache_key = cache_key\n\n if terminal_state.is_failed():\n # Defer to user to decide whether failure is retriable\n terminal_state.state_details.retriable = (\n await _check_task_failure_retriable(task, task_run, terminal_state)\n )\n state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n await _run_task_hooks(\n task=task,\n task_run=task_run,\n state=state,\n )\n\n if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n logger.debug(\n (\n f\"Received new state {state} when proposing final state\"\n f\" {terminal_state}\"\n ),\n 
extra={\"send_to_api\": False},\n )\n\n if not state.is_final() and not state.is_paused():\n logger.info(\n (\n f\"Received non-final state {state.name!r} when proposing final\"\n f\" state {terminal_state.name!r} and will attempt to run\"\n \" again...\"\n ),\n )\n # Attempt to enter a running state again\n state = await propose_state(client, Running(), task_run_id=task_run.id)\n last_event = _emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=last_state,\n validated_state=state,\n follows=last_event,\n )\n last_state = state\n\n # If debugging, use the more complete `repr` than the usual `str` description\n display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n logger.log(\n level=logging.INFO if state.is_completed() else logging.ERROR,\n msg=f\"Finished in state {display_state}\",\n )\n return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run
async
","text":"Pauses the current flow run by blocking execution until resumed.
When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.
Parameters:
Name Type Description Default
flow_run_id
UUID
a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results
option.
None
timeout
int
the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
3600
poll_interval
int
The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.
10
reschedule
bool
Flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned) instead. To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results
option.
False
key
str
An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the \"reschedule\" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.
None
wait_for_input
Optional[Type[T]]
a subclass of RunInput
or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.
None
@task\ndef task_one():\n for i in range(3):\n sleep(1)\n\n@flow\ndef my_flow():\n terminal_state = task_one.submit(return_state=True)\n if terminal_state.type == StateType.COMPLETED:\n print(\"Task one succeeded! Pausing flow run..\")\n pause_flow_run(timeout=2)\n else:\n print(\"Task one failed. Skipping pause flow run..\")\n
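A hedged sketch of the experimental wait_for_input variant; the RunInput subclass here is illustrative:

from prefect import flow
from prefect.engine import pause_flow_run
from prefect.input import RunInput

class ApprovalInput(RunInput):
    approved: bool

@flow
def approval_flow():
    # Blocks until the run is resumed with a matching input payload.
    user_input = pause_flow_run(wait_for_input=ApprovalInput, timeout=600)
    if user_input.approved:
        print("Approved; continuing...")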
Source code in prefect/engine.py
@sync_compatible\n@deprecated_parameter(\n \"flow_run_id\", start_date=\"Dec 2023\", help=\"Use `suspend_flow_run` instead.\"\n)\n@deprecated_parameter(\n \"reschedule\",\n start_date=\"Dec 2023\",\n when=lambda p: p is True,\n help=\"Use `suspend_flow_run` instead.\",\n)\n@experimental_parameter(\n \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def pause_flow_run(\n wait_for_input: Optional[Type[T]] = None,\n flow_run_id: UUID = None,\n timeout: int = 3600,\n poll_interval: int = 10,\n reschedule: bool = False,\n key: str = None,\n) -> Optional[T]:\n \"\"\"\n Pauses the current flow run by blocking execution until resumed.\n\n When called within a flow run, execution will block and no downstream tasks will\n run until the flow is resumed. Task runs that have already started will continue\n running. A timeout parameter can be passed that will fail the flow run if it has not\n been resumed within the specified time.\n\n Args:\n flow_run_id: a flow run id. If supplied, this function will attempt to pause\n the specified flow run outside of the flow run process. When paused, the\n flow run will continue execution until the NEXT task is orchestrated, at\n which point the flow will exit. Any tasks that have already started will\n run until completion. When resumed, the flow run will be rescheduled to\n finish execution. In order pause a flow run in this way, the flow needs to\n have an associated deployment and results need to be configured with the\n `persist_results` option.\n timeout: the number of seconds to wait for the flow to be resumed before\n failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds\n any configured flow-level timeout, the flow might fail even after resuming.\n poll_interval: The number of seconds between checking whether the flow has been\n resumed. Defaults to 10 seconds.\n reschedule: Flag that will reschedule the flow run if resumed. Instead of\n blocking execution, the flow will gracefully exit (with no result returned)\n instead. To use this flag, a flow needs to have an associated deployment and\n results need to be configured with the `persist_results` option.\n key: An optional key to prevent calling pauses more than once. This defaults to\n the number of pauses observed by the flow so far, and prevents pauses that\n use the \"reschedule\" option from running the same pause twice. A custom key\n can be supplied for custom pausing behavior.\n wait_for_input: a subclass of `RunInput` or any type supported by\n Pydantic. If provided when the flow pauses, the flow will wait for the\n input to be provided before resuming. If the flow is resumed without\n providing the input, the flow will fail. If the flow is resumed with the\n input, the flow will resume and the input will be loaded and returned\n from this function.\n\n Example:\n ```python\n @task\n def task_one():\n for i in range(3):\n sleep(1)\n\n @flow\n def my_flow():\n terminal_state = task_one.submit(return_state=True)\n if terminal_state.type == StateType.COMPLETED:\n print(\"Task one succeeded! Pausing flow run..\")\n pause_flow_run(timeout=2)\n else:\n print(\"Task one failed. 
Skipping pause flow run..\")\n ```\n\n \"\"\"\n if flow_run_id:\n if wait_for_input is not None:\n raise RuntimeError(\"Cannot wait for input when pausing out of process.\")\n\n return await _out_of_process_pause(\n flow_run_id=flow_run_id,\n timeout=timeout,\n reschedule=reschedule,\n key=key,\n )\n else:\n return await _in_process_pause(\n timeout=timeout,\n poll_interval=poll_interval,\n reschedule=reschedule,\n key=key,\n wait_for_input=wait_for_input,\n )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.propose_state","title":"propose_state
async
","text":"Propose a new state for a flow run or task run, invoking Prefect orchestration logic.
If the proposed state is accepted, the provided state
will be augmented with details and returned.
If the proposed state is rejected, a new state returned by the Prefect API will be returned.
If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again.
If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised.
Parameters:
Name Type Description Default
state
State
a new state for the task or flow run
requiredtask_run_id
UUID
an optional task run id, used when proposing task run states
None
flow_run_id
UUID
an optional flow run id, used when proposing flow run states
None
Returns:
Type Description
State
a State model representation of the flow or task run state
Raises:
Type Description
ValueError
if neither task_run_id nor flow_run_id is provided
Abort
if an ABORT instruction is received from the Prefect API
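A hypothetical sketch of proposing a state by hand; mark_running is not part of the engine API:

from prefect import get_client
from prefect.engine import propose_state
from prefect.states import Running

# hypothetical helper: ask orchestration to move a flow run into RUNNING
async def mark_running(flow_run_id):
    async with get_client() as client:
        return await propose_state(client, Running(), flow_run_id=flow_run_id)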
Source code in prefect/engine.py
async def propose_state(\n client: PrefectClient,\n state: State,\n force: bool = False,\n task_run_id: UUID = None,\n flow_run_id: UUID = None,\n) -> State:\n \"\"\"\n Propose a new state for a flow run or task run, invoking Prefect orchestration logic.\n\n If the proposed state is accepted, the provided `state` will be augmented with\n details and returned.\n\n If the proposed state is rejected, a new state returned by the Prefect API will be\n returned.\n\n If the proposed state results in a WAIT instruction from the Prefect API, the\n function will sleep and attempt to propose the state again.\n\n If the proposed state results in an ABORT instruction from the Prefect API, an\n error will be raised.\n\n Args:\n state: a new state for the task or flow run\n task_run_id: an optional task run id, used when proposing task run states\n flow_run_id: an optional flow run id, used when proposing flow run states\n\n Returns:\n a [State model][prefect.client.schemas.objects.State] representation of the\n flow or task run state\n\n Raises:\n ValueError: if neither task_run_id or flow_run_id is provided\n prefect.exceptions.Abort: if an ABORT instruction is received from\n the Prefect API\n \"\"\"\n\n # Determine if working with a task run or flow run\n if not task_run_id and not flow_run_id:\n raise ValueError(\"You must provide either a `task_run_id` or `flow_run_id`\")\n\n # Handle task and sub-flow tracing\n if state.is_final():\n if isinstance(state.data, BaseResult) and state.data.has_cached_object():\n # Avoid fetching the result unless it is cached, otherwise we defeat\n # the purpose of disabling `cache_result_in_memory`\n result = await state.result(raise_on_failure=False, fetch=True)\n else:\n result = state.data\n\n link_state_to_result(state, result)\n\n # Handle repeated WAITs in a loop instead of recursively, to avoid\n # reaching max recursion depth in extreme cases.\n async def set_state_and_handle_waits(set_state_func) -> OrchestrationResult:\n response = await set_state_func()\n while response.status == SetStateStatus.WAIT:\n engine_logger.debug(\n f\"Received wait instruction for {response.details.delay_seconds}s: \"\n f\"{response.details.reason}\"\n )\n await anyio.sleep(response.details.delay_seconds)\n response = await set_state_func()\n return response\n\n # Attempt to set the state\n if task_run_id:\n set_state = partial(client.set_task_run_state, task_run_id, state, force=force)\n response = await set_state_and_handle_waits(set_state)\n elif flow_run_id:\n set_state = partial(client.set_flow_run_state, flow_run_id, state, force=force)\n response = await set_state_and_handle_waits(set_state)\n else:\n raise ValueError(\n \"Neither flow run id or task run id were provided. At least one must \"\n \"be given.\"\n )\n\n # Parse the response to return the new state\n if response.status == SetStateStatus.ACCEPT:\n # Update the state with the details if provided\n state.id = response.state.id\n state.timestamp = response.state.timestamp\n if response.state.state_details:\n state.state_details = response.state.state_details\n return state\n\n elif response.status == SetStateStatus.ABORT:\n raise prefect.exceptions.Abort(response.details.reason)\n\n elif response.status == SetStateStatus.REJECT:\n if response.state.is_paused():\n raise Pause(response.details.reason, state=response.state)\n return response.state\n\n else:\n raise ValueError(\n f\"Received unexpected `SetStateStatus` from server: {response.status!r}\"\n )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes
async
","text":"Detect flow run crashes during this context and update the run to a proper final state.
This context must reraise the exception to properly exit the run.
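A minimal sketch of how the engine applies this guard; the surrounding function and its arguments are assumptions for illustration:

from contextlib import AsyncExitStack
from prefect.engine import report_flow_run_crashes

# flow, flow_run, and client are assumed to be supplied by the caller
async def run_guarded(flow, flow_run, client):
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(
            report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)
        )
        ...  # execute the run; crashes are reported as CRASHED, then re-raised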
Source code in prefect/engine.py
@asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n \"\"\"\n Detect flow run crashes during this context and update the run to a proper final\n state.\n\n This context _must_ reraise the exception to properly exit the run.\n \"\"\"\n\n try:\n yield\n except (Abort, Pause):\n # Do not capture internal signals as crashes\n raise\n except BaseException as exc:\n state = await exception_to_crashed_state(exc)\n logger = flow_run_logger(flow_run)\n with anyio.CancelScope(shield=True):\n logger.error(f\"Crash detected! {state.message}\")\n logger.debug(\"Crash details:\", exc_info=exc)\n flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n engine_logger.debug(\n f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n )\n\n # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n # We call the hooks after the state change proposal to `CRASHED` is validated\n # or rejected (if it is in a `CANCELLING` state).\n await _run_flow_hooks(\n flow=flow,\n flow_run=flow_run,\n state=flow_run_state,\n )\n\n # Reraise the exception\n raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes
async
","text":"Detect task run crashes during this context and update the run to a proper final state.
This context must reraise the exception to properly exit the run.
Source code in prefect/engine.py
@asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n \"\"\"\n Detect task run crashes during this context and update the run to a proper final\n state.\n\n This context _must_ reraise the exception to properly exit the run.\n \"\"\"\n try:\n yield\n except (Abort, Pause):\n # Do not capture internal signals as crashes\n raise\n except BaseException as exc:\n state = await exception_to_crashed_state(exc)\n logger = task_run_logger(task_run)\n with anyio.CancelScope(shield=True):\n logger.error(f\"Crash detected! {state.message}\")\n logger.debug(\"Crash details:\", exc_info=exc)\n await client.set_task_run_state(\n state=state,\n task_run_id=task_run.id,\n force=True,\n )\n engine_logger.debug(\n f\"Reported crashed task run {task_run.name!r} successfully!\"\n )\n\n # Reraise the exception\n raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resolve_inputs","title":"resolve_inputs
async
","text":"Resolve any Quote
, PrefectFuture
, or State
types nested in parameters into data.
Returns:
Type Description
Dict[str, Any]
A copy of the parameters with resolved data
Raises:
Type Description
UpstreamTaskError
If any of the upstream states are not COMPLETED
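A hedged sketch with a hypothetical future argument, contrasting normal resolution with the quote escape hatch:

from prefect.engine import resolve_inputs
from prefect.utilities.annotations import quote

async def demo(some_future):
    # The future under "x" is resolved to its task run's result, while the
    # quote() wrapper under "y" is skipped and passed through unchanged.
    return await resolve_inputs({"x": some_future, "y": quote(some_future)})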
prefect/engine.py
async def resolve_inputs(\n parameters: Dict[str, Any], return_data: bool = True, max_depth: int = -1\n) -> Dict[str, Any]:\n \"\"\"\n Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into\n data.\n\n Returns:\n A copy of the parameters with resolved data\n\n Raises:\n UpstreamTaskError: If any of the upstream states are not `COMPLETED`\n \"\"\"\n\n futures = set()\n states = set()\n result_by_state = {}\n\n if not parameters:\n return {}\n\n def collect_futures_and_states(expr, context):\n # Expressions inside quotes should not be traversed\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n futures.add(expr)\n if is_state(expr):\n states.add(expr)\n\n return expr\n\n visit_collection(\n parameters,\n visit_fn=collect_futures_and_states,\n return_data=False,\n max_depth=max_depth,\n context={},\n )\n\n # Wait for all futures so we do not block when we retrieve the state in `resolve_input`\n states.update(await asyncio.gather(*[future._wait() for future in futures]))\n\n # Only retrieve the result if requested as it may be expensive\n if return_data:\n finished_states = [state for state in states if state.is_final()]\n\n state_results = await asyncio.gather(\n *[\n state.result(raise_on_failure=False, fetch=True)\n for state in finished_states\n ]\n )\n\n for state, result in zip(finished_states, state_results):\n result_by_state[state] = result\n\n def resolve_input(expr, context):\n state = None\n\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n state = expr._final_state\n elif is_state(expr):\n state = expr\n else:\n return expr\n\n # Do not allow uncompleted upstreams except failures when `allow_failure` has\n # been used\n if not state.is_completed() and not (\n # TODO: Note that the contextual annotation here is only at the current level\n # if `allow_failure` is used then another annotation is used, this will\n # incorrectly evaluate to false \u2014 to resolve this, we must track all\n # annotations wrapping the current expression but this is not yet\n # implemented.\n isinstance(context.get(\"annotation\"), allow_failure) and state.is_failed()\n ):\n raise UpstreamTaskError(\n f\"Upstream task run '{state.state_details.task_run_id}' did not reach a\"\n \" 'COMPLETED' state.\"\n )\n\n return result_by_state.get(state)\n\n resolved_parameters = {}\n for parameter, value in parameters.items():\n try:\n resolved_parameters[parameter] = visit_collection(\n value,\n visit_fn=resolve_input,\n return_data=return_data,\n # we're manually going 1 layer deeper here\n max_depth=max_depth - 1,\n remove_annotations=True,\n context={},\n )\n except UpstreamTaskError:\n raise\n except Exception as exc:\n raise PrefectException(\n f\"Failed to resolve inputs in parameter {parameter!r}. If your\"\n \" parameter type is not supported, consider using the `quote`\"\n \" annotation to skip resolution of inputs.\"\n ) from exc\n\n return resolved_parameters\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run
async
","text":"Resumes a paused flow.
Parameters:
Name Type Description Default
flow_run_id
the flow_run_id to resume
required
run_input
Optional[Dict]
a dictionary of inputs to provide to the flow run.
None
Source code in prefect/engine.py
@sync_compatible\nasync def resume_flow_run(flow_run_id, run_input: Optional[Dict] = None):\n \"\"\"\n Resumes a paused flow.\n\n Args:\n flow_run_id: the flow_run_id to resume\n run_input: a dictionary of inputs to provide to the flow run.\n \"\"\"\n client = get_client()\n async with client:\n flow_run = await client.read_flow_run(flow_run_id)\n\n if not flow_run.state.is_paused():\n raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n response = await client.resume_flow_run(flow_run_id, run_input=run_input)\n\n if response.status == SetStateStatus.REJECT:\n if response.state.type == StateType.FAILED:\n raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n else:\n raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
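A minimal usage sketch; the flow run ID shown is a placeholder:
from prefect.engine import resume_flow_run\n\n# Resume a paused flow run, optionally supplying input for a waiting flow\nresume_flow_run(\"c0ffee00-0000-0000-0000-000000000000\", run_input={\"approved\": True})\n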
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run
async
","text":"Async entrypoint for flow runs that have been submitted for execution by an agent
prefect/engine.py
@inject_client\nasync def retrieve_flow_then_begin_flow_run(\n flow_run_id: UUID,\n client: PrefectClient,\n user_thread: threading.Thread,\n) -> State:\n \"\"\"\n Async entrypoint for flow runs that have been submitted for execution by an agent\n\n - Retrieves the deployment information\n - Loads the flow object using deployment information\n - Updates the flow run version\n \"\"\"\n flow_run = await client.read_flow_run(flow_run_id)\n\n entrypoint = os.environ.get(\"PREFECT__FLOW_ENTRYPOINT\")\n\n try:\n flow = (\n load_flow_from_entrypoint(entrypoint)\n if entrypoint\n else await load_flow_from_flow_run(flow_run, client=client)\n )\n except Exception:\n message = (\n \"Flow could not be retrieved from\"\n f\" {'entrypoint' if entrypoint else 'deployment'}.\"\n )\n flow_run_logger(flow_run).exception(message)\n state = await exception_to_failed_state(message=message)\n await client.set_flow_run_state(\n state=state, flow_run_id=flow_run_id, force=True\n )\n return state\n\n # Update the flow run policy defaults to match settings on the flow\n # Note: Mutating the flow run object prevents us from performing another read\n # operation if these properties are used by the client downstream\n if flow_run.empirical_policy.retry_delay is None:\n flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n if flow_run.empirical_policy.retries is None:\n flow_run.empirical_policy.retries = flow.retries\n\n await client.update_flow_run(\n flow_run_id=flow_run_id,\n flow_version=flow.version,\n empirical_policy=flow_run.empirical_policy,\n )\n\n if flow.should_validate_parameters:\n failed_state = None\n try:\n parameters = flow.validate_parameters(flow_run.parameters)\n except Exception:\n message = \"Validation of flow parameters failed with error: \"\n flow_run_logger(flow_run).exception(message)\n failed_state = await exception_to_failed_state(message=message)\n\n if failed_state is not None:\n await propose_state(\n client,\n state=failed_state,\n flow_run_id=flow_run_id,\n )\n return failed_state\n else:\n parameters = flow_run.parameters\n\n # Ensure default values are populated\n parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n return await begin_flow_run(\n flow=flow,\n flow_run=flow_run,\n parameters=parameters,\n client=client,\n user_thread=user_thread,\n )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.suspend_flow_run","title":"suspend_flow_run
async
","text":"Suspends a flow run by stopping code execution until resumed.
When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results
option.
Parameters:
Name Type Description Default
flow_run_id
Optional[UUID]
a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied will attempt to suspend the current flow run.
None
timeout
Optional[int]
the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
3600
key
Optional[str]
An optional key to prevent calling suspend more than once. This defaults to a random string and prevents the same suspend from running twice. A custom key can be supplied for custom suspending behavior.
None
wait_for_input
Optional[Type[T]]
a subclass of RunInput
or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.
None
Source code in prefect/engine.py
@sync_compatible\n@inject_client\n@experimental_parameter(\n    \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def suspend_flow_run(\n    wait_for_input: Optional[Type[T]] = None,\n    flow_run_id: Optional[UUID] = None,\n    timeout: Optional[int] = 3600,\n    key: Optional[str] = None,\n    client: PrefectClient = None,\n) -> Optional[T]:\n    \"\"\"\n    Suspends a flow run by stopping code execution until resumed.\n\n    When suspended, the flow run will continue execution until the NEXT task is\n    orchestrated, at which point the flow will exit. Any tasks that have\n    already started will run until completion. When resumed, the flow run will\n    be rescheduled to finish execution. In order to suspend a flow run in this\n    way, the flow needs to have an associated deployment and results need to be\n    configured with the `persist_results` option.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to\n            suspend the specified flow run. If not supplied will attempt to\n            suspend the current flow run.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 1 hour (3600 seconds). If the pause timeout\n            exceeds any configured flow-level timeout, the flow might fail even\n            after resuming.\n        key: An optional key to prevent calling suspend more than once. This\n            defaults to a random string and prevents the same suspend from\n            running twice. A custom key can be supplied for custom\n            suspending behavior.\n        wait_for_input: a subclass of `RunInput` or any type supported by\n            Pydantic. If provided when the flow suspends, the flow will remain\n            suspended until receiving the input before resuming. If the flow is\n            resumed without providing the input, the flow will fail. If the flow is\n            resumed with the input, the flow will resume and the input will be\n            loaded and returned from this function.\n    \"\"\"\n    context = FlowRunContext.get()\n\n    if flow_run_id is None:\n        if TaskRunContext.get():\n            raise RuntimeError(\"Cannot suspend task runs.\")\n\n        if context is None or context.flow_run is None:\n            raise RuntimeError(\n                \"Flow runs can only be suspended from within a flow run.\"\n            )\n\n        logger = get_run_logger(context=context)\n        logger.info(\n            \"Suspending flow run, execution will be rescheduled when this flow run is\"\n            \" resumed.\"\n        )\n        flow_run_id = context.flow_run.id\n        suspending_current_flow_run = True\n        pause_counter = _observed_flow_pauses(context)\n        pause_key = key or str(pause_counter)\n    else:\n        # Since we're suspending another flow run we need to generate a pause\n        # key that won't conflict with whatever suspends/pauses that flow may\n        # have. Since this method won't be called during that flow run it's\n        # okay that this is non-deterministic.\n        suspending_current_flow_run = False\n        pause_key = key or str(uuid4())\n\n    proposed_state = Suspended(timeout_seconds=timeout, pause_key=pause_key)\n\n    if wait_for_input:\n        wait_for_input = run_input_subclass_from_type(wait_for_input)\n        run_input_keyset = keyset_from_paused_state(proposed_state)\n        proposed_state.state_details.run_input_keyset = run_input_keyset\n\n    try:\n        state = await propose_state(\n            client=client,\n            state=proposed_state,\n            flow_run_id=flow_run_id,\n        )\n    except Abort as exc:\n        # Aborted requests mean the suspension is not allowed\n        raise RuntimeError(f\"Flow run cannot be suspended: {exc}\")\n\n    if state.is_running():\n        # The orchestrator rejected the suspended state which means that this\n        # suspend has happened before and the flow run has been resumed.\n        if wait_for_input:\n            # The flow run wanted input, so we need to load it and return it\n            # to the user.\n            return await wait_for_input.load(run_input_keyset)\n        return\n\n    if not state.is_paused():\n        # If we receive anything but a PAUSED state, we are unable to continue\n        raise RuntimeError(\n            f\"Flow run cannot be suspended. Received unexpected state from API: {state}\"\n        )\n\n    if wait_for_input:\n        await wait_for_input.save(run_input_keyset)\n\n    if suspending_current_flow_run:\n        # Exit this process so the run can be resubmitted later\n        raise Pause()\n
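A minimal sketch of suspending the current flow run until input is provided; Approval is a hypothetical RunInput subclass, and the flow is assumed to be backed by a deployment with result persistence configured:
from prefect import flow\nfrom prefect.engine import suspend_flow_run\nfrom prefect.input import RunInput\n\nclass Approval(RunInput):\n    approved: bool\n\n@flow\ndef my_flow():\n    # The run exits here and is rescheduled once resumed with an Approval input\n    approval = suspend_flow_run(wait_for_input=Approval, timeout=600)\n    print(approval.approved)\n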
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/events/","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events","title":"prefect.events
","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event","title":"Event
","text":" Bases: PrefectBaseModel
The client-side view of an event that has happened to a Resource
Source code in prefect/events/schemas.py
class Event(PrefectBaseModel):\n \"\"\"The client-side view of an event that has happened to a Resource\"\"\"\n\n occurred: DateTimeTZ = Field(\n default_factory=pendulum.now,\n description=\"When the event happened from the sender's perspective\",\n )\n event: str = Field(\n description=\"The name of the event that happened\",\n )\n resource: Resource = Field(\n description=\"The primary Resource this event concerns\",\n )\n related: List[RelatedResource] = Field(\n default_factory=list,\n description=\"A list of additional Resources involved in this event\",\n )\n payload: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"An open-ended set of data describing what happened\",\n )\n id: UUID = Field(\n default_factory=uuid4,\n description=\"The client-provided identifier of this event\",\n )\n follows: Optional[UUID] = Field(\n None,\n description=(\n \"The ID of an event that is known to have occurred prior to this \"\n \"one. If set, this may be used to establish a more precise \"\n \"ordering of causally-related events when they occur close enough \"\n \"together in time that the system may receive them out-of-order.\"\n ),\n )\n\n @property\n def involved_resources(self) -> Iterable[Resource]:\n return [self.resource] + list(self.related)\n\n @validator(\"related\")\n def enforce_maximum_related_resources(cls, value: List[RelatedResource]):\n if len(value) > MAXIMUM_RELATED_RESOURCES:\n raise ValueError(\n \"The maximum number of related resources \"\n f\"is {MAXIMUM_RELATED_RESOURCES}\"\n )\n\n return value\n
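For illustration, a sketch of constructing an Event with the required resource labels; the event name and identifiers are illustrative, and plain dicts are assumed to validate into Resource and RelatedResource:
from prefect.events import Event\n\nevent = Event(\n    event=\"my-app.user.signed-up\",\n    resource={\"prefect.resource.id\": \"my-app.user.123\"},\n    related=[\n        {\n            \"prefect.resource.id\": \"my-app.team.7\",\n            # related resources must carry a non-empty role label\n            \"prefect.resource.role\": \"team\",\n        }\n    ],\n)\n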
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RelatedResource","title":"RelatedResource
","text":" Bases: Resource
A Resource with a specific role in an Event
Source code in prefect/events/schemas.py
class RelatedResource(Resource):\n \"\"\"A Resource with a specific role in an Event\"\"\"\n\n @root_validator(pre=True)\n def requires_resource_role(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n labels = cast(Dict[str, str], labels)\n\n if \"prefect.resource.role\" not in labels:\n raise ValueError(\n \"Related Resources must include the prefect.resource.role label\"\n )\n if not labels[\"prefect.resource.role\"]:\n raise ValueError(\"The prefect.resource.role label must be non-empty\")\n\n return values\n\n @property\n def role(self) -> str:\n return self[\"prefect.resource.role\"]\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Resource","title":"Resource
","text":" Bases: Labelled
An observable business object of interest to the user
Source code in prefect/events/schemas.py
class Resource(Labelled):\n \"\"\"An observable business object of interest to the user\"\"\"\n\n @root_validator(pre=True)\n def enforce_maximum_labels(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n if len(labels) > MAXIMUM_LABELS_PER_RESOURCE:\n raise ValueError(\n \"The maximum number of labels per resource \"\n f\"is {MAXIMUM_LABELS_PER_RESOURCE}\"\n )\n\n return values\n\n @root_validator(pre=True)\n def requires_resource_id(cls, values: Dict[str, Any]):\n labels = values.get(\"__root__\")\n if not isinstance(labels, dict):\n return values\n\n labels = cast(Dict[str, str], labels)\n\n if \"prefect.resource.id\" not in labels:\n raise ValueError(\"Resources must include the prefect.resource.id label\")\n if not labels[\"prefect.resource.id\"]:\n raise ValueError(\"The prefect.resource.id label must be non-empty\")\n\n return values\n\n @property\n def id(self) -> str:\n return self[\"prefect.resource.id\"]\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.emit_event","title":"emit_event
","text":"Send an event to Prefect Cloud.
Parameters:
Name Type Description Default
event
str
The name of the event that happened.
required
resource
Dict[str, str]
The primary Resource this event concerns.
required
occurred
Optional[DateTimeTZ]
When the event happened from the sender's perspective. Defaults to the current datetime.
None
related
Optional[Union[List[Dict[str, str]], List[RelatedResource]]]
A list of additional Resources involved in this event.
None
payload
Optional[Dict[str, Any]]
An open-ended set of data describing what happened.
None
id
Optional[UUID]
The sender-provided identifier for this event. Defaults to a random UUID.
None
follows
Optional[Event]
The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event the follows relationship will not be set.
None
Returns:
Type Description
Optional[Event]
The event that was emitted if the worker is using a client that emits
Optional[Event]
events, otherwise None.
Source code in prefect/events/utilities.py
def emit_event(\n event: str,\n resource: Dict[str, str],\n occurred: Optional[DateTimeTZ] = None,\n related: Optional[Union[List[Dict[str, str]], List[RelatedResource]]] = None,\n payload: Optional[Dict[str, Any]] = None,\n id: Optional[UUID] = None,\n follows: Optional[Event] = None,\n) -> Optional[Event]:\n \"\"\"\n Send an event to Prefect Cloud.\n\n Args:\n event: The name of the event that happened.\n resource: The primary Resource this event concerns.\n occurred: When the event happened from the sender's perspective.\n Defaults to the current datetime.\n related: A list of additional Resources involved in this event.\n payload: An open-ended set of data describing what happened.\n id: The sender-provided identifier for this event. Defaults to a random\n UUID.\n follows: The event that preceded this one. If the preceding event\n happened more than 5 minutes prior to this event the follows\n relationship will not be set.\n\n Returns:\n The event that was emitted if worker is using a client that emit\n events, otherwise None.\n \"\"\"\n if not emit_events_to_cloud():\n return None\n\n operational_clients = [AssertingEventsClient, PrefectCloudEventsClient]\n worker_instance = EventsWorker.instance()\n\n if worker_instance.client_type not in operational_clients:\n return None\n\n event_kwargs = {\n \"event\": event,\n \"resource\": resource,\n }\n\n if occurred is None:\n occurred = pendulum.now(\"UTC\")\n event_kwargs[\"occurred\"] = occurred\n\n if related is not None:\n event_kwargs[\"related\"] = related\n\n if payload is not None:\n event_kwargs[\"payload\"] = payload\n\n if id is not None:\n event_kwargs[\"id\"] = id\n\n if follows is not None:\n if -TIGHT_TIMING < (occurred - follows.occurred) < TIGHT_TIMING:\n event_kwargs[\"follows\"] = follows.id\n\n event_obj = Event(**event_kwargs)\n worker_instance.send(event_obj)\n\n return event_obj\n
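A minimal sketch of emitting a custom event; the event name, resource ID, and payload are illustrative:
from prefect.events import emit_event\n\nemit_event(\n    event=\"my-app.model.trained\",\n    resource={\"prefect.resource.id\": \"my-app.model.classifier-v1\"},\n    payload={\"accuracy\": 0.95},\n)\n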
","tags":["Python API","events"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions
","text":"Prefect-specific exceptions.
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort
","text":" Bases: PrefectSignal
Raised when the API sends an 'ABORT' instruction during state proposal.
Indicates that the run should exit immediately.
Source code in prefect/exceptions.py
class Abort(PrefectSignal):\n \"\"\"\n Raised when the API sends an 'ABORT' instruction during state proposal.\n\n Indicates that the run should exit immediately.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities
","text":" Bases: PrefectException
Raised when a block does not have required capabilities for a given operation.
Source code in prefect/exceptions.py
class BlockMissingCapabilities(PrefectException):\n \"\"\"\n Raised when a block does not have required capabilities for a given operation.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun
","text":" Bases: PrefectException
Raised when the result from a cancelled run is retrieved and an exception is not attached.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class CancelledRun(PrefectException):\n \"\"\"\n Raised when the result from a cancelled run is retrieved and an exception\n is not attached.\n\n This occurs when a string is attached to the state instead of an exception\n or if the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun
","text":" Bases: PrefectException
Raised when the result from a crashed run is retrieved.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class CrashedRun(PrefectException):\n \"\"\"\n Raised when the result from a crashed run is retrieved.\n\n This occurs when a string is attached to the state instead of an exception or if\n the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal
","text":" Bases: BaseException
Base type for external signal-like exceptions that should never be caught by users.
Source code in prefect/exceptions.py
class ExternalSignal(BaseException):\n \"\"\"\n Base type for external signal-like exceptions that should never be caught by users.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun
","text":" Bases: PrefectException
Raised when the result from a failed run is retrieved and an exception is not attached.
This occurs when a string is attached to the state instead of an exception or if the state's data is null.
Source code in prefect/exceptions.py
class FailedRun(PrefectException):\n \"\"\"\n Raised when the result from a failed run is retrieved and an exception is not\n attached.\n\n This occurs when a string is attached to the state instead of an exception or if\n the state's data is null.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout
","text":" Bases: PrefectException
Raised when a flow pause times out
Source code in prefect/exceptions.py
class FlowPauseTimeout(PrefectException):\n \"\"\"Raised when a flow pause times out\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowRunWaitTimeout","title":"FlowRunWaitTimeout
","text":" Bases: PrefectException
Raised when a flow run takes longer than a given timeout
Source code in prefect/exceptions.py
class FlowRunWaitTimeout(PrefectException):\n \"\"\"Raised when a flow run takes longer than a given timeout\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError
","text":" Bases: PrefectException
Raised when a script errors during evaluation while attempting to load a flow.
Source code in prefect/exceptions.py
class FlowScriptError(PrefectException):\n \"\"\"\n Raised when a script errors during evaluation while attempting to load a flow.\n \"\"\"\n\n def __init__(\n self,\n user_exc: Exception,\n script_path: str,\n ) -> None:\n message = f\"Flow script at {script_path!r} encountered an exception\"\n super().__init__(message)\n\n self.user_exc = user_exc\n\n def rich_user_traceback(self, **kwargs):\n trace = Traceback.extract(\n type(self.user_exc),\n self.user_exc,\n self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n )\n return Traceback(trace, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError
","text":" Bases: PrefectException
A base class for exceptions related to infrastructure blocks
Source code in prefect/exceptions.py
class InfrastructureError(PrefectException):\n \"\"\"\n A base class for exceptions related to infrastructure blocks\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable
","text":" Bases: PrefectException
Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed.
Source code in prefect/exceptions.py
class InfrastructureNotAvailable(PrefectException):\n \"\"\"\n Raised when infrastructure is not accessible from the current machine. For example,\n if a process was spawned on another machine it cannot be managed.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound
","text":" Bases: PrefectException
Raised when infrastructure is missing, likely because it has exited or been deleted.
Source code in prefect/exceptions.py
class InfrastructureNotFound(PrefectException):\n \"\"\"\n Raised when infrastructure is missing, likely because it has exited or been\n deleted.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError
","text":" Bases: PrefectException
, ValueError
Raised when a name contains characters that are not permitted.
Source code in prefect/exceptions.py
class InvalidNameError(PrefectException, ValueError):\n \"\"\"\n Raised when a name contains characters that are not permitted.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError
","text":" Bases: PrefectException
Raised when an incorrect URL is provided to a GitHub filesystem block.
Source code in prefect/exceptions.py
class InvalidRepositoryURLError(PrefectException):\n \"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch
","text":" Bases: PrefectException
Raised when attempting to call Task.map with arguments of different lengths.
Source code in prefect/exceptions.py
class MappingLengthMismatch(PrefectException):\n \"\"\"\n Raised when attempting to call Task.map with arguments of different lengths.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable
","text":" Bases: PrefectException
Raised when attempting to call Task.map with all static arguments
Source code in prefect/exceptions.py
class MappingMissingIterable(PrefectException):\n \"\"\"\n Raised when attempting to call Task.map with all static arguments\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError
","text":" Bases: PrefectException
, RuntimeError
Raised when a method is called that requires a task or flow run context to be active but one cannot be found.
Source code in prefect/exceptions.py
class MissingContextError(PrefectException, RuntimeError):\n \"\"\"\n Raised when a method is called that requires a task or flow run context to be\n active but one cannot be found.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError
","text":" Bases: PrefectException
Raised when a given flow name is not found in the expected script.
Source code in prefect/exceptions.py
class MissingFlowError(PrefectException):\n \"\"\"\n Raised when a given flow name is not found in the expected script.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError
","text":" Bases: PrefectException
, ValueError
Raised when a profile name does not exist.
Source code in prefect/exceptions.py
class MissingProfileError(PrefectException, ValueError):\n \"\"\"\n Raised when a profile name does not exist.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult
","text":" Bases: PrefectException
Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.
Source code in prefect/exceptions.py
class MissingResult(PrefectException):\n \"\"\"\n Raised when a result is missing from a state; often when result persistence is\n disabled and the state is retrieved from the API.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError
","text":" Bases: PrefectException
Raised when attempting to unpause a run that isn't paused.
Source code in prefect/exceptions.py
class NotPausedError(PrefectException):\n \"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists
","text":" Bases: PrefectException
Raised when the client receives a 409 (conflict) from the API.
Source code in prefect/exceptions.py
class ObjectAlreadyExists(PrefectException):\n \"\"\"\n Raised when the client receives a 409 (conflict) from the API.\n \"\"\"\n\n def __init__(self, http_exc: Exception, *args, **kwargs):\n self.http_exc = http_exc\n super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound
","text":" Bases: PrefectException
Raised when the client receives a 404 (not found) from the API.
Source code in prefect/exceptions.py
class ObjectNotFound(PrefectException):\n \"\"\"\n Raised when the client receives a 404 (not found) from the API.\n \"\"\"\n\n def __init__(self, http_exc: Exception, *args, **kwargs):\n self.http_exc = http_exc\n super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError
","text":" Bases: TypeError
, PrefectException
Raised when args and kwargs cannot be converted to parameters.
Source code in prefect/exceptions.py
class ParameterBindError(TypeError, PrefectException):\n \"\"\"\n Raised when args and kwargs cannot be converted to parameters.\n \"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_bind_failure(\n cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n ) -> Self:\n fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n msg = f\"{base}.\\n{signature} but {received}.\"\n return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError
","text":" Bases: PrefectException
Raised when a parameter does not pass Pydantic type validation.
Source code in prefect/exceptions.py
class ParameterTypeError(PrefectException):\n \"\"\"\n Raised when a parameter does not pass Pydantic type validation.\n \"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_validation_error(cls, exc: pydantic.ValidationError) -> Self:\n bad_params = [f'{err[\"loc\"][0]}: {err[\"msg\"]}' for err in exc.errors()]\n msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause
","text":" Bases: PrefectSignal
Raised when a flow run is PAUSED and needs to exit for resubmission.
Source code in prefect/exceptions.py
class Pause(PrefectSignal):\n \"\"\"\n Raised when a flow run is PAUSED and needs to exit for resubmission.\n \"\"\"\n\n def __init__(self, *args, state=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun
","text":" Bases: PrefectException
Raised when the result from a paused run is retrieved.
Source code in prefect/exceptions.py
class PausedRun(PrefectException):\n \"\"\"\n Raised when the result from a paused run is retrieved.\n \"\"\"\n\n def __init__(self, *args, state=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException
","text":" Bases: Exception
Base exception type for Prefect errors.
Source code in prefect/exceptions.py
class PrefectException(Exception):\n \"\"\"\n Base exception type for Prefect errors.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError
","text":" Bases: HTTPStatusError
Raised when client receives a Response
that contains an HTTPStatusError.
Used to include API error details in the error messages that the client provides users.
Source code in prefect/exceptions.py
class PrefectHTTPStatusError(HTTPStatusError):\n \"\"\"\n Raised when client receives a `Response` that contains an HTTPStatusError.\n\n Used to include API error details in the error messages that the client provides users.\n \"\"\"\n\n @classmethod\n def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n \"\"\"\n Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n \"\"\"\n try:\n details = httpx_error.response.json()\n except Exception:\n details = None\n\n error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n if details:\n message_components = [error_message, f\"Response: {details}\", *more_info]\n else:\n message_components = [error_message, *more_info]\n\n new_message = \"\\n\".join(message_components)\n\n return cls(\n new_message, request=httpx_error.request, response=httpx_error.response\n )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error
classmethod
","text":"Generate a PrefectHTTPStatusError
from an httpx.HTTPStatusError
.
prefect/exceptions.py
@classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n \"\"\"\n Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n \"\"\"\n try:\n details = httpx_error.response.json()\n except Exception:\n details = None\n\n error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n if details:\n message_components = [error_message, f\"Response: {details}\", *more_info]\n else:\n message_components = [error_message, *more_info]\n\n new_message = \"\\n\".join(message_components)\n\n return cls(\n new_message, request=httpx_error.request, response=httpx_error.response\n )\n
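For illustration, a sketch of wrapping an httpx error so the response details appear in the message; the URL is a placeholder:
import httpx\nfrom prefect.exceptions import PrefectHTTPStatusError\n\ntry:\n    response = httpx.get(\"https://example.com/missing\")\n    response.raise_for_status()\nexcept httpx.HTTPStatusError as exc:\n    # Re-raise with the API error details included in the message\n    raise PrefectHTTPStatusError.from_httpx_error(exc)\n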
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal
","text":" Bases: BaseException
Base type for signal-like exceptions that should never be caught by users.
Source code in prefect/exceptions.py
class PrefectSignal(BaseException):\n \"\"\"\n Base type for signal-like exceptions that should never be caught by users.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError
","text":" Bases: PrefectException
Raised when an operation is prevented due to block protection.
Source code in prefect/exceptions.py
class ProtectedBlockError(PrefectException):\n \"\"\"\n Raised when an operation is prevented due to block protection.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError
","text":" Bases: PrefectException
, TypeError
Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature
Source code inprefect/exceptions.py
class ReservedArgumentError(PrefectException, TypeError):\n \"\"\"\n Raised when a function used with Prefect has an argument with a name that is\n reserved for a Prefect feature\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError
","text":" Bases: PrefectException
Raised when a script errors during evaluation while attempting to load data
Source code in prefect/exceptions.py
class ScriptError(PrefectException):\n \"\"\"\n Raised when a script errors during evaluation while attempting to load data\n \"\"\"\n\n def __init__(\n self,\n user_exc: Exception,\n path: str,\n ) -> None:\n message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n super().__init__(message)\n self.user_exc = user_exc\n\n # Strip script run information from the traceback\n self.user_exc.__traceback__ = _trim_traceback(\n self.user_exc.__traceback__,\n remove_modules=[prefect.utilities.importtools],\n )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError
","text":" Bases: PrefectException
, TypeError
Raised when parameters passed to a function do not match its signature.
Source code in prefect/exceptions.py
class SignatureMismatchError(PrefectException, TypeError):\n \"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n def __init__(self, msg: str):\n super().__init__(msg)\n\n @classmethod\n def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n msg = (\n f\"Function expects parameters {expected_params} but was provided with\"\n f\" parameters {provided_params}\"\n )\n return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal
","text":" Bases: ExternalSignal
Raised when a flow run receives a termination signal.
Source code in prefect/exceptions.py
class TerminationSignal(ExternalSignal):\n \"\"\"\n Raised when a flow run receives a termination signal.\n \"\"\"\n\n def __init__(self, signal: int):\n self.signal = signal\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun
","text":" Bases: PrefectException
Raised when the result from a run that is not finished is retrieved.
For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.
Source code in prefect/exceptions.py
class UnfinishedRun(PrefectException):\n \"\"\"\n Raised when the result from a run that is not finished is retrieved.\n\n For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError
","text":" Bases: PrefectException
Raised when multiple flows are found in the expected script and no name is given.
Source code in prefect/exceptions.py
class UnspecifiedFlowError(PrefectException):\n \"\"\"\n Raised when multiple flows are found in the expected script and no name is given.\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError
","text":" Bases: PrefectException
Raised when a task relies on the result of another task but that task is not 'COMPLETE'
Source code in prefect/exceptions.py
class UpstreamTaskError(PrefectException):\n \"\"\"\n Raised when a task relies on the result of another task but that task is not\n 'COMPLETE'\n \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback
","text":"Convert an exception to a printable string with a traceback
Source code in prefect/exceptions.py
def exception_traceback(exc: Exception) -> str:\n \"\"\"\n Convert an exception to a printable string with a traceback\n \"\"\"\n tb = traceback.TracebackException.from_exception(exc)\n return \"\".join(list(tb.format()))\n
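A minimal usage sketch:
from prefect.exceptions import exception_traceback\n\ntry:\n    1 / 0\nexcept Exception as exc:\n    # Render the exception and its traceback as a string, e.g. for logging\n    print(exception_traceback(exc))\n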
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems
","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on Azure Datalake and Azure Blob Storage.
Example
Load stored Azure config:
from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class Azure(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on Azure Datalake and Azure Blob Storage.\n\n    Example:\n        Load stored Azure config:\n        ```python\n        from prefect.filesystems import Azure\n\n        az_block = Azure.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An Azure storage bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    azure_storage_connection_string: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage connection string\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n        ),\n    )\n    azure_storage_account_name: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account name\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n        ),\n    )\n    azure_storage_account_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account key\",\n        description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n    )\n    azure_storage_tenant_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage tenant ID\",\n        description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n    )\n    azure_storage_client_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client ID\",\n        description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n    )\n    azure_storage_client_secret: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client secret\",\n        description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n    )\n    azure_storage_anon: bool = Field(\n        default=True,\n        title=\"Azure storage anonymous connection\",\n        description=(\n            \"Set the 'anon' flag for ADLFS. This should be False for systems that\"\n            \" require ADLFS to use DefaultAzureCredentials.\"\n        ),\n    )\n    azure_storage_container: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage container\",\n        description=(\n            \"Blob Container in Azure Storage Account. If set the 'bucket_path' will\"\n            \" be interpreted using the following URL format:\"\n            \"'az://<container>@<storage_account>.dfs.core.windows.net/<bucket_path>'.\"\n        ),\n    )\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        if self.azure_storage_container:\n            return (\n                f\"az://{self.azure_storage_container.get_secret_value()}\"\n                f\"@{self.azure_storage_account_name.get_secret_value()}\"\n                f\".dfs.core.windows.net/{self.bucket_path}\"\n            )\n        else:\n            return f\"az://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.azure_storage_connection_string:\n            settings[\n                \"connection_string\"\n            ] = self.azure_storage_connection_string.get_secret_value()\n        if self.azure_storage_account_name:\n            settings[\n                \"account_name\"\n            ] = self.azure_storage_account_name.get_secret_value()\n        if self.azure_storage_account_key:\n            settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n        if self.azure_storage_tenant_id:\n            settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n        if self.azure_storage_client_id:\n            settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n        if self.azure_storage_client_secret:\n            settings[\n                \"client_secret\"\n            ] = self.azure_storage_client_secret.get_secret_value()\n        settings[\"anon\"] = self.azure_storage_anon\n        self._remote_file_system = RemoteFileSystem(\n            basepath=self.basepath, settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
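A sketch of round-tripping a directory with a previously saved block; the block name and paths are placeholders:
from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n# Upload the current working directory, then download it elsewhere\naz_block.put_directory(local_path=\".\", to_path=\"my-project\")\naz_block.get_directory(from_path=\"my-project\", local_path=\"./downloaded\")\n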
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on Google Cloud Storage.
Example
Load stored GCS config:
from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class GCS(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on Google Cloud Storage.\n\n Example:\n Load stored GCS config:\n ```python\n from prefect.filesystems import GCS\n\n gcs_block = GCS.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/422d13bb838cf247eb2b2cf229ce6a2e717d601b-256x256.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n bucket_path: str = Field(\n default=...,\n description=\"A GCS bucket path.\",\n example=\"my-bucket/a-directory-within\",\n )\n service_account_info: Optional[SecretStr] = Field(\n default=None,\n description=\"The contents of a service account keyfile as a JSON string.\",\n )\n project: Optional[str] = Field(\n default=None,\n description=(\n \"The project the GCS bucket resides in. If not provided, the project will\"\n \" be inferred from the credentials or environment.\"\n ),\n )\n\n @property\n def basepath(self) -> str:\n return f\"gcs://{self.bucket_path}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.service_account_info:\n try:\n settings[\"token\"] = json.loads(\n self.service_account_info.get_secret_value()\n )\n except json.JSONDecodeError:\n raise ValueError(\n \"Unable to load provided service_account_info. Please make sure\"\n \" that the provided value is a valid JSON string.\"\n )\n remote_file_system = RemoteFileSystem(\n basepath=f\"gcs://{self.bucket_path}\", settings=settings\n )\n return remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub
","text":" Bases: ReadableDeploymentStorage
Interact with files stored on GitHub repositories.
Source code in prefect/filesystems.py
class GitHub(ReadableDeploymentStorage):\n    \"\"\"\n    Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n            \" format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    access_token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A GitHub Personal Access Token (PAT) with repo scope.\"\n            \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n        ),\n    )\n    include_git_objects: bool = Field(\n        default=True,\n        description=(\n            \"Whether to include git objects when copying the repo contents to a\"\n            \" directory.\"\n        ),\n    )\n\n    @validator(\"access_token\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\n\n        Note: validates `access_token` specifically so that it only fires when\n        private repositories are used.\n        \"\"\"\n        if v is not None:\n            if urllib.parse.urlparse(values[\"repository\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    \"Credentials can only be used with GitHub repositories \"\n                    \"using the 'HTTPS' format. You must either remove the \"\n                    \"credential if you wish to use the 'SSH' format and are not \"\n                    \"using a private repository, or you must change the repository \"\n                    \"URL to the 'HTTPS' format.\"\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme == \"https\" and self.access_token is not None:\n            updated_components = url_components._replace(\n                netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitHub project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            ignore_func = None\n            if not self.include_git_objects:\n                ignore_func = ignore_patterns(\".git\")\n\n            copytree(\n                src=content_source,\n                dst=content_destination,\n                dirs_exist_ok=True,\n                ignore=ignore_func,\n            )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory
async
","text":"Clones a GitHub project specified in from_path
to the provided local_path
; defaults to cloning the repository reference configured on the Block to the present working directory.
Parameters:
Name Type Description Defaultfrom_path
Optional[str]
If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.
None
local_path
Optional[str]
A local path to clone to; defaults to present working directory.
None
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n \"\"\"\n Clones a GitHub project specified in `from_path` to the provided `local_path`;\n defaults to cloning the repository reference configured on the Block to the\n present working directory.\n\n Args:\n from_path: If provided, interpreted as a subdirectory of the underlying\n repository that will be copied to the provided local path.\n local_path: A local path to clone to; defaults to present working directory.\n \"\"\"\n # CONSTRUCT COMMAND\n cmd = [\"git\", \"clone\", self._create_repo_url()]\n if self.reference:\n cmd += [\"-b\", self.reference]\n\n # Limit git history\n cmd += [\"--depth\", \"1\"]\n\n # Clone to a temporary directory and move the subdirectory over\n with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n cmd.append(tmp_dir)\n\n err_stream = io.StringIO()\n out_stream = io.StringIO()\n process = await run_process(cmd, stream_output=(out_stream, err_stream))\n if process.returncode != 0:\n err_stream.seek(0)\n raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n content_source, content_destination = self._get_paths(\n dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n )\n\n ignore_func = None\n if not self.include_git_objects:\n ignore_func = ignore_patterns(\".git\")\n\n copytree(\n src=content_source,\n dst=content_destination,\n dirs_exist_ok=True,\n ignore=ignore_func,\n )\n
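A hedged usage sketch with hypothetical names, loading a saved block and pulling a single subdirectory:
from prefect.filesystems import GitHub

github_block = GitHub.load("BLOCK_NAME")

# Shallow-clones the repository and copies only the "flows" subdirectory;
# per _get_paths above, the contents land at ./local-flows/flows.
github_block.get_directory(from_path="flows", local_path="./local-flows")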
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on a local file system.
Example
Load stored local file system config:
from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on a local file system.\n\n Example:\n Load stored local file system config:\n ```python\n from prefect.filesystems import LocalFileSystem\n\n local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Local File System\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/ad39089fa66d273b943394a68f003f7a19aa850e-48x48.png\"\n _documentation_url = (\n \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n )\n\n basepath: Optional[str] = Field(\n default=None, description=\"Default local path for this block to write to.\"\n )\n\n @validator(\"basepath\", pre=True)\n def cast_pathlib(cls, value):\n if isinstance(value, Path):\n return str(value)\n return value\n\n def _resolve_path(self, path: str) -> Path:\n # Only resolve the base path at runtime, default to the current directory\n basepath = (\n Path(self.basepath).expanduser().resolve()\n if self.basepath\n else Path(\".\").resolve()\n )\n\n # Determine the path to access relative to the base path, ensuring that paths\n # outside of the base path are off limits\n if path is None:\n return basepath\n\n path: Path = Path(path).expanduser()\n\n if not path.is_absolute():\n path = basepath / path\n else:\n path = path.resolve()\n if basepath not in path.parents and (basepath != path):\n raise ValueError(\n f\"Provided path {path} is outside of the base path {basepath}.\"\n )\n\n return path\n\n @sync_compatible\n async def get_directory(\n self, from_path: str = None, local_path: str = None\n ) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if not from_path:\n from_path = Path(self.basepath).expanduser().resolve()\n else:\n from_path = self._resolve_path(from_path)\n\n if not local_path:\n local_path = Path(\".\").resolve()\n else:\n local_path = Path(local_path).resolve()\n\n if from_path == local_path:\n # If the paths are the same there is no need to copy\n # and we avoid shutil.copytree raising an error\n return\n\n # .prefectignore exists in the original location, not the current location which\n # is most likely temporary\n if (from_path / Path(\".prefectignore\")).exists():\n ignore_func = await self._get_ignore_func(\n local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n )\n else:\n ignore_func = None\n\n copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n async def _get_ignore_func(self, local_path: str, ignore_file: str):\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n def ignore_func(directory, files):\n relative_path = Path(directory).relative_to(local_path)\n\n files_to_ignore = [\n f for f in files if str(relative_path / f) not in included_files\n ]\n return files_to_ignore\n\n return ignore_func\n\n @sync_compatible\n async def put_directory(\n self, local_path: str = None, to_path: str = None, ignore_file: str = None\n ) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the current working directory to the block's basepath.\n An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n \"\"\"\n destination_path = 
self._resolve_path(to_path)\n\n if not local_path:\n local_path = Path(\".\").absolute()\n\n if ignore_file:\n ignore_func = await self._get_ignore_func(\n local_path=local_path, ignore_file=ignore_file\n )\n else:\n ignore_func = None\n\n if local_path == destination_path:\n pass\n else:\n copytree(\n src=local_path,\n dst=destination_path,\n ignore=ignore_func,\n dirs_exist_ok=True,\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n path: Path = self._resolve_path(path)\n\n # Check if the path exists\n if not path.exists():\n raise ValueError(f\"Path {path} does not exist.\")\n\n # Validate that it's a file\n if not path.is_file():\n raise ValueError(f\"Path {path} is not a file.\")\n\n async with await anyio.open_file(str(path), mode=\"rb\") as f:\n content = await f.read()\n\n return content\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n path: Path = self._resolve_path(path)\n\n # Create the parent directory if it does not exist\n path.parent.mkdir(exist_ok=True, parents=True)\n\n # Check if the file already exists\n if path.exists() and not path.is_file():\n raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n async with await anyio.open_file(path, mode=\"wb\") as f:\n await f.write(content)\n # Leave path stringify to the OS\n return str(path)\n
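A small sketch of round-tripping a file through this block (the basepath below is a hypothetical example); paths are resolved relative to the basepath, and anything outside it is rejected:
from prefect.filesystems import LocalFileSystem

fs = LocalFileSystem(basepath="/tmp/prefect-data")  # hypothetical basepath

# Both methods are sync-compatible, so they can be called without awaiting
# from synchronous code; write_path creates missing parent directories.
fs.write_path("results/output.txt", b"hello")
assert fs.read_path("results/output.txt") == b"hello"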
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory
async
","text":"Copies a directory from one place to another on the local filesystem.
Defaults to copying the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: str = None, local_path: str = None\n) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if not from_path:\n from_path = Path(self.basepath).expanduser().resolve()\n else:\n from_path = self._resolve_path(from_path)\n\n if not local_path:\n local_path = Path(\".\").resolve()\n else:\n local_path = Path(local_path).resolve()\n\n if from_path == local_path:\n # If the paths are the same there is no need to copy\n # and we avoid shutil.copytree raising an error\n return\n\n # .prefectignore exists in the original location, not the current location which\n # is most likely temporary\n if (from_path / Path(\".prefectignore\")).exists():\n ignore_func = await self._get_ignore_func(\n local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n )\n else:\n ignore_func = None\n\n copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory
async
","text":"Copies a directory from one place to another on the local filesystem.
Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file
path may be provided that can include gitignore style expressions for filepaths to ignore.
prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n \"\"\"\n Copies a directory from one place to another on the local filesystem.\n\n Defaults to copying the entire contents of the current working directory to the block's basepath.\n An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n \"\"\"\n destination_path = self._resolve_path(to_path)\n\n if not local_path:\n local_path = Path(\".\").absolute()\n\n if ignore_file:\n ignore_func = await self._get_ignore_func(\n local_path=local_path, ignore_file=ignore_file\n )\n else:\n ignore_func = None\n\n if local_path == destination_path:\n pass\n else:\n copytree(\n src=local_path,\n dst=destination_path,\n ignore=ignore_func,\n dirs_exist_ok=True,\n )\n
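For example, assuming a gitignore-style .prefectignore file exists in the current directory (a hypothetical name), a sketch of copying the working directory into the block's basepath while skipping matched files:
from prefect.filesystems import LocalFileSystem

fs = LocalFileSystem(basepath="/tmp/prefect-deployments")  # hypothetical basepath

# Copies the current working directory to the basepath, skipping any file
# matched by the gitignore-style patterns in .prefectignore.
fs.put_directory(ignore_file=".prefectignore")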
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on a remote file system.
Supports any remote file system supported by fsspec
. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.
Load stored remote file system config:
from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on a remote file system.\n\n Supports any remote file system supported by `fsspec`. The file system is specified\n using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n Example:\n Load stored remote file system config:\n ```python\n from prefect.filesystems import RemoteFileSystem\n\n remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Remote File System\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/e86b41bc0f9c99ba9489abeee83433b43d5c9365-48x48.png\"\n _documentation_url = (\n \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n )\n\n basepath: str = Field(\n default=...,\n description=\"Default path for this block to write to.\",\n example=\"s3://my-bucket/my-folder/\",\n )\n settings: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Additional settings to pass through to fsspec.\",\n )\n\n # Cache for the configured fsspec file system used for access\n _filesystem: fsspec.AbstractFileSystem = None\n\n @validator(\"basepath\")\n def check_basepath(cls, value):\n scheme, netloc, _, _, _ = urllib.parse.urlsplit(value)\n\n if not scheme:\n raise ValueError(f\"Base path must start with a scheme. Got {value!r}.\")\n\n if not netloc:\n raise ValueError(\n f\"Base path must include a location after the scheme. Got {value!r}.\"\n )\n\n if scheme == \"file\":\n raise ValueError(\n \"Base path scheme cannot be 'file'. Use `LocalFileSystem` instead for\"\n \" local file access.\"\n )\n\n return value\n\n def _resolve_path(self, path: str) -> str:\n base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n self.basepath\n )\n scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n # Confirm that absolute paths are valid\n if scheme:\n if scheme != base_scheme:\n raise ValueError(\n f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n f\" the base path {base_scheme!r}.\"\n )\n\n if netloc:\n if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n raise ValueError(\n f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n )\n\n return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> None:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if from_path is None:\n from_path = str(self.basepath)\n else:\n from_path = self._resolve_path(from_path)\n\n if local_path is None:\n local_path = Path(\".\").absolute()\n\n # validate that from_path has a trailing slash for proper fsspec behavior across versions\n if not from_path.endswith(\"/\"):\n from_path += \"/\"\n\n return self.filesystem.get(from_path, local_path, recursive=True)\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n overwrite: bool = True,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n if to_path is None:\n to_path = str(self.basepath)\n else:\n to_path = self._resolve_path(to_path)\n\n if local_path is 
None:\n local_path = \".\"\n\n included_files = None\n if ignore_file:\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n\n included_files = filter_files(\n local_path, ignore_patterns, include_dirs=True\n )\n\n counter = 0\n for f in Path(local_path).rglob(\"*\"):\n relative_path = f.relative_to(local_path)\n if included_files and str(relative_path) not in included_files:\n continue\n\n if to_path.endswith(\"/\"):\n fpath = to_path + relative_path.as_posix()\n else:\n fpath = to_path + \"/\" + relative_path.as_posix()\n\n if f.is_dir():\n pass\n else:\n f = f.as_posix()\n if overwrite:\n self.filesystem.put_file(f, fpath, overwrite=True)\n else:\n self.filesystem.put_file(f, fpath)\n\n counter += 1\n\n return counter\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n path = self._resolve_path(path)\n\n with self.filesystem.open(path, \"rb\") as file:\n content = await run_sync_in_worker_thread(file.read)\n\n return content\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n path = self._resolve_path(path)\n dirpath = path[: path.rindex(\"/\")]\n\n self.filesystem.makedirs(dirpath, exist_ok=True)\n\n with self.filesystem.open(path, \"wb\") as file:\n await run_sync_in_worker_thread(file.write, content)\n return path\n\n @property\n def filesystem(self) -> fsspec.AbstractFileSystem:\n if not self._filesystem:\n scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n try:\n self._filesystem = fsspec.filesystem(scheme, **self.settings)\n except ImportError as exc:\n # The path is a remote file system that uses a lib that is not installed\n raise RuntimeError(\n f\"File system created with scheme {scheme!r} from base path \"\n f\"{self.basepath!r} could not be created. \"\n \"You are likely missing a Python module required to use the given \"\n \"storage protocol.\"\n ) from exc\n\n return self._filesystem\n
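Since settings is passed straight through to fsspec, protocol-specific options can be supplied at construction time; a hedged sketch using s3fs-style credentials (the bucket and values are hypothetical placeholders):
from prefect.filesystems import RemoteFileSystem

remote = RemoteFileSystem(
    basepath="s3://my-bucket/my-folder/",
    # The keys here depend on the fsspec implementation for the scheme;
    # s3fs accepts "key" and "secret" (placeholder values shown).
    settings={"key": "ACCESS_KEY_ID", "secret": "SECRET_ACCESS_KEY"},
)
remote.write_path("nested/result.json", b"{}")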
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n if from_path is None:\n from_path = str(self.basepath)\n else:\n from_path = self._resolve_path(from_path)\n\n if local_path is None:\n local_path = Path(\".\").absolute()\n\n # validate that from_path has a trailing slash for proper fsspec behavior across versions\n if not from_path.endswith(\"/\"):\n from_path += \"/\"\n\n return self.filesystem.get(from_path, local_path, recursive=True)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n overwrite: bool = True,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n if to_path is None:\n to_path = str(self.basepath)\n else:\n to_path = self._resolve_path(to_path)\n\n if local_path is None:\n local_path = \".\"\n\n included_files = None\n if ignore_file:\n with open(ignore_file, \"r\") as f:\n ignore_patterns = f.readlines()\n\n included_files = filter_files(\n local_path, ignore_patterns, include_dirs=True\n )\n\n counter = 0\n for f in Path(local_path).rglob(\"*\"):\n relative_path = f.relative_to(local_path)\n if included_files and str(relative_path) not in included_files:\n continue\n\n if to_path.endswith(\"/\"):\n fpath = to_path + relative_path.as_posix()\n else:\n fpath = to_path + \"/\" + relative_path.as_posix()\n\n if f.is_dir():\n pass\n else:\n f = f.as_posix()\n if overwrite:\n self.filesystem.put_file(f, fpath, overwrite=True)\n else:\n self.filesystem.put_file(f, fpath)\n\n counter += 1\n\n return counter\n
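The return value is the number of files uploaded, which can be useful for logging; a sketch with a hypothetical bucket and ignore file, assuming credentials come from the runtime environment:
from prefect.filesystems import RemoteFileSystem

remote = RemoteFileSystem(basepath="s3://my-bucket/my-folder/")

# Uploads the current working directory, skipping files matched by the
# gitignore-style patterns in .prefectignore (hypothetical file name).
uploaded = remote.put_directory(local_path=".", ignore_file=".prefectignore")
print(f"Uploaded {uploaded} files")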
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on AWS S3.
Example
Load stored S3 config:
from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class S3(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on AWS S3.\n\n Example:\n Load stored S3 config:\n ```python\n from prefect.filesystems import S3\n\n s3_block = S3.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"S3\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n bucket_path: str = Field(\n default=...,\n description=\"An S3 bucket path.\",\n example=\"my-bucket/a-directory-within\",\n )\n aws_access_key_id: Optional[SecretStr] = Field(\n default=None,\n title=\"AWS Access Key ID\",\n description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n example=\"AKIAIOSFODNN7EXAMPLE\",\n )\n aws_secret_access_key: Optional[SecretStr] = Field(\n default=None,\n title=\"AWS Secret Access Key\",\n description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n example=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n )\n\n _remote_file_system: RemoteFileSystem = None\n\n @property\n def basepath(self) -> str:\n return f\"s3://{self.bucket_path}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.aws_access_key_id:\n settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n if self.aws_secret_access_key:\n settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n self._remote_file_system = RemoteFileSystem(\n basepath=f\"s3://{self.bucket_path}\", settings=settings\n )\n return self._remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
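A sketch of creating and saving this block with explicit credentials; the values shown are the placeholder examples from the fields above, and the block name is hypothetical:
from prefect.filesystems import S3

s3_block = S3(
    bucket_path="my-bucket/a-directory-within",
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",  # placeholder, not a real key
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
)
s3_block.save("BLOCK_NAME", overwrite=True)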
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory.
Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory.
Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path, to_path=to_path, ignore_file=ignore_file\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB
","text":" Bases: WritableFileSystem
, WritableDeploymentStorage
Store data as a file on an SMB share.
Example
Load stored SMB config:
from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class SMB(WritableFileSystem, WritableDeploymentStorage):\n \"\"\"\n Store data as a file on an SMB share.\n\n Example:\n Load stored SMB config:\n\n ```python\n from prefect.filesystems import SMB\n smb_block = SMB.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"SMB\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3f624663f7beb97d011d011bffd51ecf6c499efc-195x195.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n share_path: str = Field(\n default=...,\n description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n example=\"/SHARE/dir/subdir\",\n )\n smb_username: Optional[SecretStr] = Field(\n default=None,\n title=\"SMB Username\",\n description=\"Username with access to the target SMB SHARE.\",\n )\n smb_password: Optional[SecretStr] = Field(\n default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n )\n smb_host: str = Field(\n default=..., title=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n )\n smb_port: Optional[int] = Field(\n default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n )\n\n _remote_file_system: RemoteFileSystem = None\n\n @property\n def basepath(self) -> str:\n return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n @property\n def filesystem(self) -> RemoteFileSystem:\n settings = {}\n if self.smb_username:\n settings[\"username\"] = self.smb_username.get_secret_value()\n if self.smb_password:\n settings[\"password\"] = self.smb_password.get_secret_value()\n if self.smb_host:\n settings[\"host\"] = self.smb_host\n if self.smb_port:\n settings[\"port\"] = self.smb_port\n self._remote_file_system = RemoteFileSystem(\n basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n settings=settings,\n )\n return self._remote_file_system\n\n @sync_compatible\n async def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n ) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n\n @sync_compatible\n async def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n ) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path,\n to_path=to_path,\n ignore_file=ignore_file,\n overwrite=False,\n )\n\n @sync_compatible\n async def read_path(self, path: str) -> bytes:\n return await self.filesystem.read_path(path)\n\n @sync_compatible\n async def write_path(self, path: str, content: bytes) -> str:\n return await self.filesystem.write_path(path=path, content=content)\n
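A sketch of configuring the block; the host, share, and credentials are hypothetical, and the fsspec SMB dependency is assumed to be installed. Note in the source above that put_directory always passes overwrite=False, so existing remote files are not replaced:
from prefect.filesystems import SMB

smb_block = SMB(
    share_path="/SHARE/dir/subdir",
    smb_host="smb.example.com",
    smb_username="user",
    smb_password="s3cr3t",
)
# Downloads the entire share path into ./downloaded
smb_block.get_directory(local_path="./downloaded")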
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory
async
","text":"Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.
Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n \"\"\"\n Downloads a directory from a given remote path to a local directory.\n Defaults to downloading the entire contents of the block's basepath to the current working directory.\n \"\"\"\n return await self.filesystem.get_directory(\n from_path=from_path, local_path=local_path\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory
async
","text":"Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.
Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n self,\n local_path: Optional[str] = None,\n to_path: Optional[str] = None,\n ignore_file: Optional[str] = None,\n) -> int:\n \"\"\"\n Uploads a directory from a given local path to a remote directory.\n Defaults to uploading the entire contents of the current working directory to the block's basepath.\n \"\"\"\n return await self.filesystem.put_directory(\n local_path=local_path,\n to_path=to_path,\n ignore_file=ignore_file,\n overwrite=False,\n )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flow_runs/","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs","title":"prefect.flow_runs
","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs.wait_for_flow_run","title":"wait_for_flow_run
async
","text":"Waits for the prefect flow run to finish and returns the FlowRun
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run ID for the flow run to wait for.
requiredtimeout
Optional[int]
The wait timeout in seconds. Defaults to 10800 (3 hours).
10800
poll_interval
int
The poll interval in seconds. Defaults to 5.
5
Returns:
Name Type DescriptionFlowRun
FlowRun
The finished flow run.
Raises:
Type DescriptionFlowRunWaitTimeout
If the flow run exceeds the timeout.
Examples:
Create a flow run for a deployment and wait for it to finish:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
Trigger multiple flow runs and wait for them to finish:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\nif __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n
Source code in prefect/flow_runs.py
@inject_client\nasync def wait_for_flow_run(\n flow_run_id: UUID,\n timeout: Optional[int] = 10800,\n poll_interval: int = 5,\n client: Optional[PrefectClient] = None,\n log_states: bool = False,\n) -> FlowRun:\n \"\"\"\n Waits for the prefect flow run to finish and returns the FlowRun\n\n Args:\n flow_run_id: The flow run ID for the flow run to wait for.\n timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n poll_interval: The poll interval in seconds. Defaults to 5.\n\n Returns:\n FlowRun: The finished flow run.\n\n Raises:\n prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n Examples:\n Create a flow run for a deployment and wait for it to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main():\n async with get_client() as client:\n flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n print(flow_run.state)\n\n if __name__ == \"__main__\":\n asyncio.run(main())\n\n ```\n\n Trigger multiple flow runs and wait for them to finish:\n ```python\n import asyncio\n\n from prefect import get_client\n from prefect.flow_runs import wait_for_flow_run\n\n async def main(num_runs: int):\n async with get_client() as client:\n flow_runs = [\n await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n for _\n in range(num_runs)\n ]\n coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n finished_flow_runs = await asyncio.gather(*coros)\n print([flow_run.state for flow_run in finished_flow_runs])\n\n if __name__ == \"__main__\":\n asyncio.run(main(num_runs=10))\n\n ```\n \"\"\"\n assert client is not None, \"Client injection failed\"\n logger = get_logger()\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if log_states:\n logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n raise FlowRunWaitTimeout(\n f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n )\n
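Bound the wait and handle expiry; a hedged sketch assuming FlowRunWaitTimeout (the exception raised in the source above) is importable from prefect.exceptions, with a hypothetical deployment ID:
import asyncio

from prefect import get_client
from prefect.exceptions import FlowRunWaitTimeout  # assumed import location
from prefect.flow_runs import wait_for_flow_run

async def main():
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")
        try:
            # Poll every 10 seconds and give up after 10 minutes
            flow_run = await wait_for_flow_run(
                flow_run_id=flow_run.id, timeout=600, poll_interval=10
            )
            print(flow_run.state)
        except FlowRunWaitTimeout:
            print("Flow run did not finish within 10 minutes")

if __name__ == "__main__":
    asyncio.run(main())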
","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows
","text":"Module containing the base workflow class and decorator - for most use cases, using the @flow
decorator is preferred.
Flow
","text":" Bases: Generic[P, R]
A Prefect workflow definition.
Note
We recommend using the @flow
decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P
and R
for \"Parameters\" and \"Returns\" respectively.
Parameters:
Name Type Description Defaultfn
Callable[P, R]
The function defining the workflow.
requiredname
Optional[str]
An optional name for the flow; if not provided, the name will be inferred from the given function.
None
version
Optional[str]
An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
None
flow_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
None
task_runner
Union[Type[BaseTaskRunner], BaseTaskRunner]
An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner
will be used.
ConcurrentTaskRunner
description
str
An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
None
timeout_seconds
Union[int, float]
An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
None
validate_parameters
bool
By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int
and \"5\" is passed, it will be resolved to 5
. If set to False
, no validation will be performed on flow parameters.
True
retries
Optional[int]
An optional number of times to retry on flow run failure.
None
retry_delay_seconds
Optional[Union[int, float]]
An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero.
None
persist_result
Optional[bool]
An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None
, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
None
result_storage
Optional[ResultStorage]
An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
result_serializer
Optional[ResultSerializer]
An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER
will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
on_failure
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a failed state.
None
on_completion
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a completed state.
None
on_cancellation
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a cancelling state.
None
on_crashed
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a crashed state.
None
on_running
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of callables to run when the flow enters a running state.
None
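To illustrate a few of the parameters above with the recommended decorator (the flow name and values are hypothetical):
from prefect import flow

@flow(name="my-etl", retries=2, retry_delay_seconds=10, timeout_seconds=3600)
def my_etl(source: str = "s3://my-bucket/raw"):
    # Retried up to twice on failure, 10 seconds apart; marked failed
    # if it runs longer than an hour.
    print(f"processing {source}")

if __name__ == "__main__":
    my_etl()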
Source code in prefect/flows.py
@PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n \"\"\"\n A Prefect workflow definition.\n\n !!! note\n We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n \"Returns\" respectively.\n\n Args:\n fn: The function defining the workflow.\n name: An optional name for the flow; if not provided, the name will be inferred\n from the given function.\n version: An optional version string for the flow; if not provided, we will\n attempt to create a version string as a hash of the file containing the\n wrapped function; if the file cannot be located, the version will be null.\n flow_run_name: An optional name to distinguish runs of this flow; this name can\n be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n task_runner: An optional task runner to use for task execution within the flow;\n if not provided, a `ConcurrentTaskRunner` will be used.\n description: An optional string description for the flow; if not provided, the\n description will be pulled from the docstring for the decorated function.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the flow. If the flow exceeds this runtime, it will be marked as failed.\n Flow execution may continue until the next task is called.\n validate_parameters: By default, parameters passed to flows are validated by\n Pydantic. This will check that input values conform to the annotated types\n on the function. Where possible, values will be coerced into the correct\n type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n it will be resolved to `5`. If set to `False`, no validation will be\n performed on flow parameters.\n retries: An optional number of times to retry on flow run failure.\n retry_delay_seconds: An optional number of seconds to wait before retrying the\n flow after failure. This is only applicable if `retries` is nonzero.\n persist_result: An optional toggle indicating whether the result of this flow\n should be persisted to result storage. Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this flow.\n This value will be used as the default for any tasks in this flow.\n If not provided, the local file system will be used unless called as\n a subflow, at which point the default will be loaded from the parent flow.\n result_serializer: An optional serializer to use to serialize the result of this\n flow for persistence. This value will be used as the default for any tasks\n in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n will be used unless called as a subflow, at which point the default will be\n loaded from the parent flow.\n on_failure: An optional list of callables to run when the flow enters a failed state.\n on_completion: An optional list of callables to run when the flow enters a completed state.\n on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n on_crashed: An optional list of callables to run when the flow enters a crashed state.\n on_running: An optional list of callables to run when the flow enters a running state.\n \"\"\"\n\n # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n # exactly in the @flow decorator\n def __init__(\n self,\n fn: Callable[P, R],\n name: Optional[str] = None,\n version: Optional[str] = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[Union[int, float]] = None,\n task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n description: str = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = True,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_serializer: Optional[ResultSerializer] = None,\n cache_result_in_memory: bool = True,\n log_prints: Optional[bool] = None,\n on_completion: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n ):\n if name is not None and not isinstance(name, str):\n raise TypeError(\n \"Expected string for flow parameter 'name'; got {} instead. {}\".format(\n type(name).__name__,\n (\n \"Perhaps you meant to call it? e.g.\"\n \" '@flow(name=get_flow_run_name())'\"\n if callable(name)\n else \"\"\n ),\n )\n )\n\n # Validate if hook passed is list and contains callables\n hook_categories = [\n on_completion,\n on_failure,\n on_cancellation,\n on_crashed,\n on_running,\n ]\n hook_names = [\n \"on_completion\",\n \"on_failure\",\n \"on_cancellation\",\n \"on_crashed\",\n \"on_running\",\n ]\n for hooks, hook_name in zip(hook_categories, hook_names):\n if hooks is not None:\n if not hooks:\n raise ValueError(f\"Empty list passed for '{hook_name}'\")\n try:\n hooks = list(hooks)\n except TypeError:\n raise TypeError(\n f\"Expected iterable for '{hook_name}'; got\"\n f\" {type(hooks).__name__} instead. Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n for hook in hooks:\n if not callable(hook):\n raise TypeError(\n f\"Expected callables in '{hook_name}'; got\"\n f\" {type(hook).__name__} instead. 
Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n if not callable(fn):\n raise TypeError(\"'fn' must be callable\")\n\n # Validate name if given\n if name:\n raise_on_name_with_banned_characters(name)\n\n self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n if flow_run_name is not None:\n if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n raise TypeError(\n \"Expected string or callable for 'flow_run_name'; got\"\n f\" {type(flow_run_name).__name__} instead.\"\n )\n self.flow_run_name = flow_run_name\n\n task_runner = task_runner or ConcurrentTaskRunner()\n self.task_runner = (\n task_runner() if isinstance(task_runner, type) else task_runner\n )\n\n self.log_prints = log_prints\n\n self.description = description or inspect.getdoc(fn)\n update_wrapper(self, fn)\n self.fn = fn\n self.isasync = is_async_fn(self.fn)\n\n raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n # Version defaults to a hash of the function's file\n flow_file = inspect.getsourcefile(self.fn)\n if not version:\n try:\n version = file_hash(flow_file)\n except (FileNotFoundError, TypeError, OSError):\n pass # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n self.version = version\n\n self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n # FlowRunPolicy settings\n # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic bound checks to\n # validate that the user passes positive numbers here\n self.retries = (\n retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n )\n\n self.retry_delay_seconds = (\n retry_delay_seconds\n if retry_delay_seconds is not None\n else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n )\n\n self.parameters = parameter_schema(self.fn)\n self.should_validate_parameters = validate_parameters\n\n if self.should_validate_parameters:\n # Try to create the validated function now so that incompatibility can be\n # raised at declaration time rather than at runtime\n # We cannot, however, store the validated function on the flow because it\n # is not picklable in some environments\n try:\n ValidatedFunction(self.fn, config={\"arbitrary_types_allowed\": True})\n except pydantic.ConfigError as exc:\n raise ValueError(\n \"Flow function is not compatible with `validate_parameters`. \"\n \"Disable validation or change the argument names.\"\n ) from exc\n\n self.persist_result = persist_result\n self.result_storage = result_storage\n self.result_serializer = result_serializer\n self.cache_result_in_memory = cache_result_in_memory\n\n # Check for collision in the registry\n registry = PrefectObjectRegistry.get()\n\n if registry and any(\n other\n for other in registry.get_instances(Flow)\n if other.name == self.name and id(other.fn) != id(self.fn)\n ):\n file = inspect.getsourcefile(self.fn)\n line_number = inspect.getsourcelines(self.fn)[1]\n warnings.warn(\n f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n \"conflicts with another flow. 
Consider specifying a unique `name` \"\n \"parameter in the flow definition:\\n\\n \"\n \"`@flow(name='my_unique_name', ...)`\"\n )\n self.on_completion = on_completion\n self.on_failure = on_failure\n self.on_cancellation = on_cancellation\n self.on_crashed = on_crashed\n self.on_running = on_running\n\n # Used for flows loaded from remote storage\n self._storage: Optional[RunnerStorage] = None\n self._entrypoint: Optional[str] = None\n\n module = fn.__module__\n if module in (\"__main__\", \"__prefect_loader__\"):\n module_name = inspect.getfile(fn)\n module = module_name if module_name != \"__main__\" else module\n\n self._entrypoint = f\"{module}:{fn.__name__}\"\n\n def with_options(\n self,\n *,\n name: str = None,\n version: str = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[Union[int, float]] = None,\n description: str = None,\n flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n timeout_seconds: Union[int, float] = None,\n validate_parameters: bool = None,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n cache_result_in_memory: bool = None,\n log_prints: Optional[bool] = NotSet,\n on_completion: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_cancellation: Optional[\n List[Callable[[FlowSchema, FlowRun, State], None]]\n ] = None,\n on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n ) -> Self:\n \"\"\"\n Create a new flow from the current object, updating provided options.\n\n Args:\n name: A new name for the flow.\n version: A new version for the flow.\n description: A new description for the flow.\n flow_run_name: An optional name to distinguish runs of this flow; this name\n can be provided as a string template with the flow's parameters as variables,\n or a function that returns a string.\n task_runner: A new task runner for the flow.\n timeout_seconds: A new number of seconds to fail the flow after if still\n running.\n validate_parameters: A new value indicating if flow calls should validate\n given parameters.\n retries: A new number of times to retry on flow run failure.\n retry_delay_seconds: A new number of seconds to wait before retrying the\n flow after failure. 
This is only applicable if `retries` is nonzero.\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n cache_result_in_memory: A new value indicating if the flow's result should\n be cached in memory.\n on_failure: A new list of callables to run when the flow enters a failed state.\n on_completion: A new list of callables to run when the flow enters a completed state.\n on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n on_crashed: A new list of callables to run when the flow enters a crashed state.\n on_running: A new list of callables to run when the flow enters a running state.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n\n Create a new flow from an existing flow and update the name:\n\n >>> @flow(name=\"My flow\")\n >>> def my_flow():\n >>> return 1\n >>>\n >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n Create a new flow from an existing flow, update the task runner, and call\n it without an intermediate variable:\n\n >>> from prefect.task_runners import SequentialTaskRunner\n >>>\n >>> @flow\n >>> def my_flow(x, y):\n >>> return x + y\n >>>\n >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n >>> assert state.result() == 4\n\n \"\"\"\n new_flow = Flow(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n flow_run_name=flow_run_name or self.flow_run_name,\n version=version or self.version,\n task_runner=task_runner or self.task_runner,\n retries=retries if retries is not None else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not None\n else self.retry_delay_seconds\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n validate_parameters=(\n validate_parameters\n if validate_parameters is not None\n else self.should_validate_parameters\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n on_cancellation=on_cancellation or self.on_cancellation,\n on_crashed=on_crashed or self.on_crashed,\n on_running=on_running or self.on_running,\n )\n new_flow._storage = self._storage\n new_flow._entrypoint = self._entrypoint\n return new_flow\n\n def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n associated types specified by the function's type annotations.\n\n Returns:\n A new dict of parameters that have been cast to the appropriate types\n\n Raises:\n ParameterTypeError: if the provided parameters are not valid\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n if HAS_PYDANTIC_V2:\n has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n isinstance(o, V1BaseModel) for o in kwargs.values()\n )\n has_v2_types = 
any(is_v2_type(o) for o in args) or any(\n is_v2_type(o) for o in kwargs.values()\n )\n\n if has_v1_models and has_v2_types:\n raise ParameterTypeError(\n \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n )\n\n if has_v1_models:\n validated_fn = V1ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n else:\n validated_fn = V2ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n else:\n validated_fn = ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n try:\n model = validated_fn.init_model_instance(*args, **kwargs)\n except pydantic.ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n except V2ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n\n # Get the updated parameter dict with cast values from the model\n cast_parameters = {\n k: v\n for k, v in model._iter()\n if k in model.__fields_set__ or model.__fields__[k].default_factory\n }\n return cast_parameters\n\n def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert parameters to a serializable form.\n\n Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n converting everything directly to a string. This maintains basic types like\n integers during API roundtrips.\n \"\"\"\n serialized_parameters = {}\n for key, value in parameters.items():\n try:\n serialized_parameters[key] = jsonable_encoder(value)\n except (TypeError, ValueError):\n logger.debug(\n f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n f\"type {type(value).__name__!r} and will not be stored \"\n \"in the backend.\"\n )\n serialized_parameters[key] = f\"<{type(value).__name__}>\"\n return serialized_parameters\n\n @sync_compatible\n @deprecated_parameter(\n \"schedule\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `schedules` instead.\",\n )\n @deprecated_parameter(\n \"is_schedule_active\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `paused` instead.\",\n )\n async def to_deployment(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Creates a runner deployment object for this flow.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the new 
deployment. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this deployment.\n rrule: An rrule schedule of when to execute runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n triggers: A list of triggers that will kick off runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided, the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified in the default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n from prefect.deployments.runner import RunnerDeployment\n\n if not name.endswith(\".py\"):\n raise_on_name_with_banned_characters(name)\n if self._storage and self._entrypoint:\n return await RunnerDeployment.from_storage(\n storage=self._storage,\n entrypoint=self._entrypoint,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n else:\n return RunnerDeployment.from_flow(\n self,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n @sync_compatible\n async def serve(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n webserver: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ):\n \"\"\"\n Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options such as `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n If False, the schedules will continue running.\n print_starting_message: Whether or not to print the starting message when flow is served.\n limit: The maximum number of runs that can be executed concurrently.\n webserver: Whether or not to start a monitoring webserver for this flow.\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Serve a flow:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\")\n ```\n\n Serve a flow and run it every hour:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\", interval=3600)\n ```\n \"\"\"\n from prefect.runner import Runner\n\n # Handling for my_flow.serve(__file__)\n # Will set name to name of file where my_flow.serve() without the extension\n # Non filepath strings will pass through unchanged\n name = Path(name).stem\n\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n deployment_id = await runner.add_flow(\n self,\n name=name,\n triggers=triggers,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n if print_starting_message:\n help_message = (\n f\"[green]Your flow {self.name!r} is being served and polling for\"\n \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{self.name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n console = Console()\n console.print(help_message, soft_wrap=True)\n await runner.start(webserver=webserver)\n\n @classmethod\n @sync_compatible\n async def from_source(\n cls: Type[F],\n source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n entrypoint: str,\n ) -> F:\n \"\"\"\n Loads a flow from a remote source.\n\n Args:\n source: Either a URL to a git repository or a storage object.\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n Load a flow from a public git repository:\n\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n\n Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"access_token\": Secret.load(\"github-access-token\")}\n ),\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n \"\"\"\n if isinstance(source, str):\n storage = create_storage_from_url(source)\n elif isinstance(source, RunnerStorage):\n storage = source\n elif hasattr(source, \"get_directory\"):\n storage = BlockStorageAdapter(source)\n else:\n raise TypeError(\n f\"Unsupported source type {type(source).__name__!r}. 
Please provide a\"\n \" URL to remote storage or a storage object.\"\n )\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n flow._storage = storage\n flow._entrypoint = entrypoint\n\n return flow\n\n @sync_compatible\n async def deploy(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[dict] = None,\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n print_next_steps: bool = True,\n ) -> UUID:\n \"\"\"\n Deploys a flow to run on dynamic infrastructure via a work pool.\n\n By default, calling this method will build a Docker image for the flow, push it to a registry,\n and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n name: The name to give the created deployment.\n work_pool_name: The name of the work pool to use for this deployment. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n The ID of the created/updated deployment.\n\n Examples:\n Deploy a local flow to a work pool:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n\n Deploy a remotely stored flow to a work pool:\n\n ```python\n from prefect import flow\n\n if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n deployment = await self.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n paused=paused,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n deployment_ids = await deploy(\n deployment,\n work_pool_name=work_pool_name,\n image=image,\n build=build,\n push=push,\n print_next_steps_message=False,\n )\n\n if print_next_steps:\n console = Console()\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from this deployment, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo schedule a run for this deployment, use the following command:\"\n )\n console.print(\n f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n style=\"blue\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n )\n\n return deployment_ids[0]\n\n @overload\n def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n ) -> Awaitable[T]:\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> T:\n ...\n\n @overload\n def __call__(\n self: \"Flow[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def __call__(\n self,\n *args: \"P.args\",\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: \"P.kwargs\",\n ):\n \"\"\"\n Run the flow and return its result.\n\n\n Flow parameter values must be serializable by Pydantic.\n\n If writing an async flow, this call must be awaited.\n\n This will create a new flow run in the API.\n\n Args:\n *args: Arguments to run the flow with.\n return_state: Return a Prefect State containing the result of the\n flow run.\n wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n **kwargs: Keyword arguments to run the flow with.\n\n Returns:\n If `return_state` is False, returns the result of the flow run.\n If `return_state` is True, returns the result of the flow run\n wrapped in a Prefect State which provides error handling.\n\n Examples:\n\n Define a flow\n\n >>> @flow\n >>> def my_flow(name):\n >>> print(f\"hello {name}\")\n >>> return f\"goodbye {name}\"\n\n Run a flow\n\n >>> my_flow(\"marvin\")\n hello marvin\n \"goodbye marvin\"\n\n Run a flow with additional tags\n\n >>> from prefect import tags\n >>> with tags(\"db\", \"blue\"):\n >>> my_flow(\"foo\")\n \"\"\"\n from prefect.engine import enter_flow_run_engine_from_flow_call\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return_type = 
\"state\" if return_state else \"result\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n # this is a subflow, for now return a single task and do not go further\n # we can add support for exploring subflows for tasks in the future.\n return track_viz_task(self.isasync, self.name, parameters)\n\n return enter_flow_run_engine_from_flow_call(\n self,\n parameters,\n wait_for=wait_for,\n return_type=return_type,\n )\n\n @overload\n def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def _run(\n self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n ) -> Awaitable[T]:\n ...\n\n @overload\n def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n ...\n\n def _run(\n self,\n *args: \"P.args\",\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: \"P.kwargs\",\n ):\n \"\"\"\n Run the flow and return its final state.\n\n Examples:\n\n Run a flow and get the returned result\n\n >>> state = my_flow._run(\"marvin\")\n >>> state.result()\n \"goodbye marvin\"\n \"\"\"\n from prefect.engine import enter_flow_run_engine_from_flow_call\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return enter_flow_run_engine_from_flow_call(\n self,\n parameters,\n wait_for=wait_for,\n return_type=\"state\",\n )\n\n @sync_compatible\n async def visualize(self, *args, **kwargs):\n \"\"\"\n Generates a graphviz object representing the current flow. In IPython notebooks,\n it's rendered inline, otherwise in a new window as a PNG.\n\n Raises:\n - ImportError: If `graphviz` isn't installed.\n - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n - FlowVisualizationError: If the flow can't be visualized for any other reason.\n \"\"\"\n if not PREFECT_UNIT_TEST_MODE:\n warnings.warn(\n \"`flow.visualize()` will execute code inside of your flow that is not\"\n \" decorated with `@task` or `@flow`.\"\n )\n\n try:\n with TaskVizTracker() as tracker:\n if self.isasync:\n await self.fn(*args, **kwargs)\n else:\n self.fn(*args, **kwargs)\n\n graph = build_task_dependencies(tracker)\n\n visualize_task_dependencies(graph, self.name)\n\n except GraphvizImportError:\n raise\n except GraphvizExecutableNotFoundError:\n raise\n except VisualizationUnsupportedError:\n raise\n except FlowVisualizationError:\n raise\n except Exception as e:\n msg = (\n \"It's possible you are trying to visualize a flow that contains \"\n \"code that directly interacts with the result of a task\"\n \" inside of the flow. \\nTry passing a `viz_return_value` \"\n \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n )\n\n new_exception = type(e)(str(e) + \"\\n\" + msg)\n # Copy traceback information from the original exception\n new_exception.__traceback__ = e.__traceback__\n raise new_exception\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.deploy","title":"deploy
async
","text":"Deploys a flow to run on dynamic infrastructure via a work pool.
By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.
If you want to use an existing image, you can pass build=False to skip building and pushing an image.
Parameters:
name (str, required): The name to give the created deployment.
work_pool_name (Optional[str], default None): The name of the work pool to use for this deployment. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
image (Optional[Union[str, DeploymentImage]], default None): The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
build (bool, default True): Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
push (bool, default True): Whether or not to push the built image to a registry.
work_queue_name (Optional[str], default None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
job_variables (Optional[dict], default None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
interval (Optional[Union[int, float, timedelta]], default None): An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
cron (Optional[str], default None): A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
rrule (Optional[str], default None): An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
triggers (Optional[List[DeploymentTrigger]], default None): A list of triggers that will kick off runs of this deployment.
paused (Optional[bool], default None): Whether or not to set this deployment as paused.
schedules (Optional[List[MinimalDeploymentSchedule]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options like timezone.
is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
print_next_steps (bool, default True): Whether or not to print a message with next steps after creating the deployment.
Returns:
UUID: The ID of the created/updated deployment.
Examples:
Deploy a local flow to a work pool:
from prefect import flow\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n
Deploy a remotely stored flow to a work pool:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n
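If the image already exists in your registry, a minimal sketch (placeholder image and work pool names) that reuses it without rebuilding or pushing:
from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # build=False reuses the existing image, which is pulled at runtime\n    my_flow.deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:dev\",\n        build=False,\n        push=False,\n    )\n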
Source code in prefect/flows.py
@sync_compatible\nasync def deploy(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[dict] = None,\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n print_next_steps: bool = True,\n) -> UUID:\n \"\"\"\n Deploys a flow to run on dynamic infrastructure via a work pool.\n\n By default, calling this method will build a Docker image for the flow, push it to a registry,\n and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n name: The name to give the created deployment.\n work_pool_name: The name of the work pool to use for this deployment. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. 
If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n The ID of the created/updated deployment.\n\n Examples:\n Deploy a local flow to a work pool:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n\n Deploy a remotely stored flow to a work pool:\n\n ```python\n from prefect import flow\n\n if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n \"example-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my-repository/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n deployment = await self.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n paused=paused,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n\n deployment_ids = await deploy(\n deployment,\n work_pool_name=work_pool_name,\n image=image,\n build=build,\n push=push,\n print_next_steps_message=False,\n )\n\n if print_next_steps:\n console = Console()\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from this deployment, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo schedule a run for this deployment, use the following command:\"\n )\n console.print(\n f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n style=\"blue\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n )\n\n return deployment_ids[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.from_source","title":"from_source
async
classmethod
","text":"Loads a flow from a remote source.
Parameters:
source (Union[str, RunnerStorage, ReadableDeploymentStorage], required): Either a URL to a git repository or a storage object.
entrypoint (str, required): The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
Returns:
F: A new Flow instance.
Examples:
Load a flow from a public git repository:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n
Load a flow from a private git repository using an access token stored in a Secret block:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"access_token\": Secret.load(\"github-access-token\")}\n ),\n entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n
Source code in prefect/flows.py
@classmethod\n@sync_compatible\nasync def from_source(\n cls: Type[F],\n source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n entrypoint: str,\n) -> F:\n \"\"\"\n Loads a flow from a remote source.\n\n Args:\n source: Either a URL to a git repository or a storage object.\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n\n Returns:\n A new `Flow` instance.\n\n Examples:\n Load a flow from a public git repository:\n\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n\n Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n ```python\n from prefect import flow\n from prefect.runner.storage import GitRepository\n from prefect.blocks.system import Secret\n\n my_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"access_token\": Secret.load(\"github-access-token\")}\n ),\n entrypoint=\"flows.py:my_flow\",\n )\n\n my_flow()\n ```\n \"\"\"\n if isinstance(source, str):\n storage = create_storage_from_url(source)\n elif isinstance(source, RunnerStorage):\n storage = source\n elif hasattr(source, \"get_directory\"):\n storage = BlockStorageAdapter(source)\n else:\n raise TypeError(\n f\"Unsupported source type {type(source).__name__!r}. Please provide a\"\n \" URL to remote storage or a storage object.\"\n )\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n flow._storage = storage\n flow._entrypoint = entrypoint\n\n return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters
","text":"Convert parameters to a serializable form.
Uses FastAPI's jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.
Source code in prefect/flows.py
def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert parameters to a serializable form.\n\n Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n converting everything directly to a string. This maintains basic types like\n integers during API roundtrips.\n \"\"\"\n serialized_parameters = {}\n for key, value in parameters.items():\n try:\n serialized_parameters[key] = jsonable_encoder(value)\n except (TypeError, ValueError):\n logger.debug(\n f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n f\"type {type(value).__name__!r} and will not be stored \"\n \"in the backend.\"\n )\n serialized_parameters[key] = f\"<{type(value).__name__}>\"\n return serialized_parameters\n
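For illustration, a sketch of the underlying encoding behavior using FastAPI's jsonable_encoder directly (hypothetical parameter values):
from datetime import datetime\n\nfrom fastapi.encoders import jsonable_encoder\n\n# Integers survive the round trip; datetimes become ISO 8601 strings\nprint(jsonable_encoder({\"retries\": 3, \"run_date\": datetime(2024, 1, 1)}))\n# {'retries': 3, 'run_date': '2024-01-01T00:00:00'}\n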
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serve","title":"serve
async
","text":"Creates a deployment for this flow and starts a runner to monitor for scheduled work.
Parameters:
name (str, required): The name to give the created deployment.
interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default None): An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
cron (Optional[Union[Iterable[str], str]], default None): A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
rrule (Optional[Union[Iterable[str], str]], default None): An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
triggers (Optional[List[DeploymentTrigger]], default None): A list of triggers that will kick off runs of this deployment.
paused (Optional[bool], default None): Whether or not to set this deployment as paused.
schedules (Optional[List[FlexibleScheduleList]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options such as timezone.
is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
pause_on_shutdown (bool, default True): If True, provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running.
print_starting_message (bool, default True): Whether or not to print the starting message when the flow is served.
limit (Optional[int], default None): The maximum number of runs that can be executed concurrently.
webserver (bool, default False): Whether or not to start a monitoring webserver for this flow.
entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
Examples:
Serve a flow:
from prefect import flow\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\")\n
Serve a flow and run it every hour:
from prefect import flow\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\", interval=3600)\n
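Serve a flow on a cron schedule (for example, 9am on weekdays):
from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # cron schedule: 9:00 on Monday through Friday\n    my_flow.serve(\"example-deployment\", cron=\"0 9 * * 1-5\")\n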
Source code in prefect/flows.py
@sync_compatible\nasync def serve(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n parameters: Optional[dict] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n webserver: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n):\n \"\"\"\n Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the deployment. Accepts a number or a\n timedelta object to create a single schedule. If a number is given, it will be\n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n multiple schedules.\n cron: A cron schedule string of when to execute runs of this deployment.\n Also accepts an iterable of cron schedule strings to create multiple schedules.\n rrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\n triggers: A list of triggers that will kick off runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options such as `timezone`.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n If False, the schedules will continue running.\n print_starting_message: Whether or not to print the starting message when flow is served.\n limit: The maximum number of runs that can be executed concurrently.\n webserver: Whether or not to start a monitoring webserver for this flow.\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Serve a flow:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\")\n ```\n\n Serve a flow and run it every hour:\n\n ```python\n from prefect import flow\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n if __name__ == \"__main__\":\n my_flow.serve(\"example-deployment\", interval=3600)\n ```\n \"\"\"\n from prefect.runner import Runner\n\n # Handling for my_flow.serve(__file__)\n # Will set name to name of file where my_flow.serve() without the extension\n # Non filepath strings will pass through unchanged\n name = Path(name).stem\n\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n deployment_id = await runner.add_flow(\n self,\n name=name,\n triggers=triggers,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n if print_starting_message:\n help_message = (\n f\"[green]Your flow {self.name!r} is being served and polling for\"\n \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{self.name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n console = Console()\n console.print(help_message, soft_wrap=True)\n await runner.start(webserver=webserver)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.to_deployment","title":"to_deployment
async
","text":"Creates a runner deployment object for this flow.
Parameters:
name (str, required): The name to give the created deployment.
interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default None): An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
cron (Optional[Union[Iterable[str], str]], default None): A cron schedule of when to execute runs of this deployment.
rrule (Optional[Union[Iterable[str], str]], default None): An rrule schedule of when to execute runs of this deployment.
paused (Optional[bool], default None): Whether or not to set this deployment as paused.
schedules (Optional[List[FlexibleScheduleList]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.
schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment.
is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
triggers (Optional[List[DeploymentTrigger]], default None): A list of triggers that will kick off runs of this deployment.
description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
work_pool_name (Optional[str], default None): The name of the work pool to use for this deployment.
work_queue_name (Optional[str], default None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
job_variables (Optional[Dict[str, Any]], default None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
Examples:
Prepare two deployments and serve them:
from prefect import flow, serve\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n serve(hello_deploy, bye_deploy)\n
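A schedule can be attached at creation time as well; a sketch giving one deployment a ten-minute interval:
from prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # A numeric interval is interpreted as seconds\n    hello_deploy = my_flow.to_deployment(\"hello\", interval=600)\n    serve(hello_deploy)\n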
Source code in prefect/flows.py
@sync_compatible\n@deprecated_parameter(\n \"schedule\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `schedules` instead.\",\n)\n@deprecated_parameter(\n \"is_schedule_active\",\n start_date=\"Mar 2023\",\n when=lambda p: p is not None,\n help=\"Use `paused` instead.\",\n)\nasync def to_deployment(\n self,\n name: str,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n \"\"\"\n Creates a runner deployment object for this flow.\n\n Args:\n name: The name to give the created deployment.\n interval: An interval on which to execute the new deployment. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this deployment.\n rrule: An rrule schedule of when to execute runs of this deployment.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\n schedule: A schedule object defining when to execute runs of this deployment.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n parameters: A dictionary of default parameter values to pass to runs of this deployment.\n triggers: A list of triggers that will kick off runs of this deployment.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for the created deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n from prefect.deployments.runner import RunnerDeployment\n\n if not name.endswith(\".py\"):\n raise_on_name_with_banned_characters(name)\n if self._storage and self._entrypoint:\n return await RunnerDeployment.from_storage(\n storage=self._storage,\n entrypoint=self._entrypoint,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n else:\n return RunnerDeployment.from_flow(\n self,\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n paused=paused,\n schedules=schedules,\n schedule=schedule,\n is_schedule_active=is_schedule_active,\n tags=tags,\n triggers=triggers,\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n entrypoint_type=entrypoint_type,\n )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters
","text":"Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.
Returns:
Dict[str, Any]: A new dict of parameters that have been cast to the appropriate types.
Raises:
ParameterTypeError: if the provided parameters are not valid.
Source code in prefect/flows.py
def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n associated types specified by the function's type annotations.\n\n Returns:\n A new dict of parameters that have been cast to the appropriate types\n\n Raises:\n ParameterTypeError: if the provided parameters are not valid\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n if HAS_PYDANTIC_V2:\n has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n isinstance(o, V1BaseModel) for o in kwargs.values()\n )\n has_v2_types = any(is_v2_type(o) for o in args) or any(\n is_v2_type(o) for o in kwargs.values()\n )\n\n if has_v1_models and has_v2_types:\n raise ParameterTypeError(\n \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n )\n\n if has_v1_models:\n validated_fn = V1ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n else:\n validated_fn = V2ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n else:\n validated_fn = ValidatedFunction(\n self.fn, config={\"arbitrary_types_allowed\": True}\n )\n\n try:\n model = validated_fn.init_model_instance(*args, **kwargs)\n except pydantic.ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n except V2ValidationError as exc:\n # We capture the pydantic exception and raise our own because the pydantic\n # exception is not picklable when using a cythonized pydantic installation\n raise ParameterTypeError.from_validation_error(exc) from None\n\n # Get the updated parameter dict with cast values from the model\n cast_parameters = {\n k: v\n for k, v in model._iter()\n if k in model.__fields_set__ or model.__fields__[k].default_factory\n }\n return cast_parameters\n
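A usage sketch with a hypothetical flow; note that, per the filtering above, unset parameters without a default factory are omitted from the returned dict:
from prefect import flow\n\n@flow\ndef add(x: int, y: int = 1):\n    return x + y\n\n# The string is cast to int per the annotation; the unset default y is omitted\nprint(add.validate_parameters({\"x\": \"41\"}))\n# {'x': 41}\n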
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.visualize","title":"visualize
async
","text":"Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.
Raises:
Type DescriptionImportError
If graphviz
isn't installed.
GraphvizExecutableNotFoundError
If the dot
executable isn't found.
FlowVisualizationError
If the flow can't be visualized for any other reason.
Source code inprefect/flows.py
@sync_compatible\nasync def visualize(self, *args, **kwargs):\n \"\"\"\n Generates a graphviz object representing the current flow. In IPython notebooks,\n it's rendered inline, otherwise in a new window as a PNG.\n\n Raises:\n - ImportError: If `graphviz` isn't installed.\n - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n - FlowVisualizationError: If the flow can't be visualized for any other reason.\n \"\"\"\n if not PREFECT_UNIT_TEST_MODE:\n warnings.warn(\n \"`flow.visualize()` will execute code inside of your flow that is not\"\n \" decorated with `@task` or `@flow`.\"\n )\n\n try:\n with TaskVizTracker() as tracker:\n if self.isasync:\n await self.fn(*args, **kwargs)\n else:\n self.fn(*args, **kwargs)\n\n graph = build_task_dependencies(tracker)\n\n visualize_task_dependencies(graph, self.name)\n\n except GraphvizImportError:\n raise\n except GraphvizExecutableNotFoundError:\n raise\n except VisualizationUnsupportedError:\n raise\n except FlowVisualizationError:\n raise\n except Exception as e:\n msg = (\n \"It's possible you are trying to visualize a flow that contains \"\n \"code that directly interacts with the result of a task\"\n \" inside of the flow. \\nTry passing a `viz_return_value` \"\n \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n )\n\n new_exception = type(e)(str(e) + \"\\n\" + msg)\n # Copy traceback information from the original exception\n new_exception.__traceback__ = e.__traceback__\n raise new_exception\n
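A minimal sketch of visualizing a flow, assuming `graphviz` and the `dot` executable are installed; `viz_return_value` supplies a stand-in result so the task graph can be traced without running real work:

```python
from prefect import flow, task

@task(viz_return_value=[1, 2, 3])
def get_data():
    return [1, 2, 3]

@task
def count(data):
    return len(data)

@flow
def my_flow():
    count(get_data())

if __name__ == "__main__":
    my_flow.visualize()  # renders inline in IPython, otherwise opens a PNG
```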
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options
","text":"Create a new flow from the current object, updating provided options.
Parameters:
Name Type Description Defaultname
str
A new name for the flow.
None
version
str
A new version for the flow.
None
description
str
A new description for the flow.
None
flow_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
None
task_runner
Union[Type[BaseTaskRunner], BaseTaskRunner]
A new task runner for the flow.
None
timeout_seconds
Union[int, float]
A new number of seconds after which to fail the flow if still running.
None
validate_parameters
bool
A new value indicating if flow calls should validate given parameters.
None
retries
Optional[int]
A new number of times to retry on flow run failure.
None
retry_delay_seconds
Optional[Union[int, float]]
A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero.
None
persist_result
Optional[bool]
A new option for enabling or disabling result persistence.
NotSet
result_storage
Optional[ResultStorage]
A new storage type to use for results.
NotSet
result_serializer
Optional[ResultSerializer]
A new serializer to use for results.
NotSet
cache_result_in_memory
bool
A new value indicating if the flow's result should be cached in memory.
None
on_failure
Optional[List[Callable[[Flow, FlowRun, State], None]]]
A new list of callables to run when the flow enters a failed state.
None
on_completion
Optional[List[Callable[[Flow, FlowRun, State], None]]]
A new list of callables to run when the flow enters a completed state.
None
on_cancellation
Optional[List[Callable[[Flow, FlowRun, State], None]]]
A new list of callables to run when the flow enters a cancelling state.
None
on_crashed
Optional[List[Callable[[Flow, FlowRun, State], None]]]
A new list of callables to run when the flow enters a crashed state.
None
on_running
Optional[List[Callable[[Flow, FlowRun, State], None]]]
A new list of callables to run when the flow enters a running state.
None
Returns:
Type DescriptionSelf
A new Flow
instance.
Create a new flow from an existing flow and update the name:\n\n>>> @flow(name=\"My flow\")\n>>> def my_flow():\n>>> return 1\n>>>\n>>> new_flow = my_flow.with_options(name=\"My new flow\")\n\nCreate a new flow from an existing flow, update the task runner, and call\nit without an intermediate variable:\n\n>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow\n>>> def my_flow(x, y):\n>>> return x + y\n>>>\n>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n>>> assert state.result() == 4\n
Source code in prefect/flows.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    version: str = None,\n    retries: Optional[int] = None,\n    retry_delay_seconds: Optional[Union[int, float]] = None,\n    description: str = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = None,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    cache_result_in_memory: bool = None,\n    log_prints: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n) -> Self:\n    \"\"\"\n    Create a new flow from the current object, updating provided options.\n\n    Args:\n        name: A new name for the flow.\n        version: A new version for the flow.\n        description: A new description for the flow.\n        flow_run_name: An optional name to distinguish runs of this flow; this name\n            can be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: A new task runner for the flow.\n        timeout_seconds: A new number of seconds after which to fail the flow if still\n            running.\n        validate_parameters: A new value indicating if flow calls should validate\n            given parameters.\n        retries: A new number of times to retry on flow run failure.\n        retry_delay_seconds: A new number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        cache_result_in_memory: A new value indicating if the flow's result should\n            be cached in memory.\n        on_failure: A new list of callables to run when the flow enters a failed state.\n        on_completion: A new list of callables to run when the flow enters a completed state.\n        on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n        on_crashed: A new list of callables to run when the flow enters a crashed state.\n        on_running: A new list of callables to run when the flow enters a running state.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n\n        Create a new flow from an existing flow and update the name:\n\n        >>> @flow(name=\"My flow\")\n        >>> def my_flow():\n        >>>     return 1\n        >>>\n        >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n        Create a new flow from an existing flow, update the task runner, and call\n        it without an intermediate variable:\n\n        >>> from prefect.task_runners import SequentialTaskRunner\n        >>>\n        >>> @flow\n        >>> def my_flow(x, y):\n        >>>     return x + y\n        >>>\n        >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n        >>> assert state.result() == 4\n\n    \"\"\"\n    new_flow = Flow(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        flow_run_name=flow_run_name or self.flow_run_name,\n        version=version or self.version,\n        task_runner=task_runner or self.task_runner,\n        retries=retries if retries is not None else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else self.retry_delay_seconds\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        validate_parameters=(\n            validate_parameters\n            if validate_parameters is not None\n            else self.should_validate_parameters\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        on_cancellation=on_cancellation or self.on_cancellation,\n        on_crashed=on_crashed or self.on_crashed,\n        on_running=on_running or self.on_running,\n    )\n    new_flow._storage = self._storage\n    new_flow._entrypoint = self._entrypoint\n    return new_flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow
","text":"Decorator to designate a function as a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Flow parameters must be serializable by Pydantic.
Parameters:
Name Type Description Defaultname
Optional[str]
An optional name for the flow; if not provided, the name will be inferred from the given function.
None
version
Optional[str]
An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
None
flow_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
None
retries
int
An optional number of times to retry on flow run failure.
None
retry_delay_seconds
Union[int, float]
An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero.
None
task_runner
BaseTaskRunner
An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner
will be instantiated.
ConcurrentTaskRunner
description
str
An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
None
timeout_seconds
Union[int, float]
An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
None
validate_parameters
bool
By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int
and \"5\" is passed, it will be resolved to 5
. If set to False
, no validation will be performed on flow parameters.
True
persist_result
Optional[bool]
An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None
, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
None
result_storage
Optional[ResultStorage]
An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
result_serializer
Optional[ResultSerializer]
An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER
will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
None
cache_result_in_memory
bool
An optional toggle indicating whether the cached result of running the flow should be stored in memory. Defaults to True
.
True
log_prints
Optional[bool]
If set, print
statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None
, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS
setting.
None
on_completion
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
None
on_failure
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
None
on_cancellation
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.
None
on_crashed
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.
None
on_running
Optional[List[Callable[[Flow, FlowRun, State], None]]]
An optional list of functions to call when the flow run is started. Each function should accept three arguments: the flow, the flow run, and the current state.
None
Returns:
Type DescriptionA callable Flow
object which, when called, will run the flow and return its
final state.
Examples:
Define a simple flow
>>> from prefect import flow\n>>> @flow\n>>> def add(x, y):\n>>> return x + y\n
Define an async flow
>>> @flow\n>>> async def add(x, y):\n>>> return x + y\n
Define a flow with a version and description
>>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n>>> def my_flow():\n>>> pass\n
Define a flow with a custom name
>>> @flow(name=\"The Ultimate Flow\")\n>>> def my_flow():\n>>> pass\n
Define a flow that submits its tasks to dask
>>> from prefect_dask.task_runners import DaskTaskRunner\n>>>\n>>> @flow(task_runner=DaskTaskRunner)\n>>> def my_flow():\n>>> pass\n
Source code in prefect/flows.py
def flow(\n    __fn=None,\n    *,\n    name: Optional[str] = None,\n    version: Optional[str] = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[int, float] = None,\n    task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n    description: str = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = True,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    log_prints: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n    \"\"\"\n    Decorator to designate a function as a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Flow parameters must be serializable by Pydantic.\n\n    Args:\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        task_runner: An optional task runner to use for task execution within the flow; if\n            not provided, a `ConcurrentTaskRunner` will be instantiated.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        cache_result_in_memory: An optional toggle indicating whether the cached result of\n            running the flow should be stored in memory. Defaults to `True`.\n        log_prints: If set, `print` statements in the flow will be redirected to the\n            Prefect logger for the flow run. Defaults to `None`, which indicates that\n            the value from the parent flow should be used. If this is a parent flow,\n            the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n        on_completion: An optional list of functions to call when the flow run is\n            completed. Each function should accept three arguments: the flow, the flow\n            run, and the final state of the flow run.\n        on_failure: An optional list of functions to call when the flow run fails. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_cancellation: An optional list of functions to call when the flow run is\n            cancelled. These functions will be passed the flow, flow run, and final state.\n        on_crashed: An optional list of functions to call when the flow run crashes. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_running: An optional list of functions to call when the flow run is started. Each\n            function should accept three arguments: the flow, the flow run, and the current state.\n\n    Returns:\n        A callable `Flow` object which, when called, will run the flow and return its\n        final state.\n\n    Examples:\n        Define a simple flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async flow\n\n        >>> @flow\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a flow with a version and description\n\n        >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow with a custom name\n\n        >>> @flow(name=\"The Ultimate Flow\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow that submits its tasks to dask\n\n        >>> from prefect_dask.task_runners import DaskTaskRunner\n        >>>\n        >>> @flow(task_runner=DaskTaskRunner)\n        >>> def my_flow():\n        >>>     pass\n    \"\"\"\n    if __fn:\n        return cast(\n            Flow[P, R],\n            Flow(\n                fn=__fn,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Flow[P, R]],\n            partial(\n                flow,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n
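As a quick illustration of the parameters above, a minimal sketch of a hypothetical flow that templates its run name from a parameter and retries on failure:

```python
from prefect import flow

# `flow_run_name` may be a template over the flow's parameters,
# or a callable that returns a string.
@flow(flow_run_name="process-{customer_id}", retries=2, retry_delay_seconds=10)
def process(customer_id: str):
    print(f"processing {customer_id}")
```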
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint
","text":"Extract a flow object from a script at an entrypoint by running all of the code in the file.
Parameters:
Name Type Description Defaultentrypoint
str
a string in the format <path_to_script>:<flow_func_name>
or a module path to a flow function
Returns:
Type DescriptionFlow
The flow object from the script
Raises:
Type DescriptionFlowScriptError
If an exception is encountered while running the script
MissingFlowError
If the flow function specified in the entrypoint does not exist
Source code inprefect/flows.py
def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n \"\"\"\n Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n Args:\n entrypoint: a string in the format `<path_to_script>:<flow_func_name>` or a module path\n to a flow function\n\n Returns:\n The flow object from the script\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n MissingFlowError: If the flow function specified in the entrypoint does not exist\n \"\"\"\n with PrefectObjectRegistry(\n block_code_execution=True,\n capture_failures=True,\n ):\n if \":\" in entrypoint:\n # split by the last colon once to handle Windows paths with drive letters i.e C:\\path\\to\\file.py:do_stuff\n path, func_name = entrypoint.rsplit(\":\", maxsplit=1)\n else:\n path, func_name = entrypoint.rsplit(\".\", maxsplit=1)\n try:\n flow = import_object(entrypoint)\n except AttributeError as exc:\n raise MissingFlowError(\n f\"Flow function with name {func_name!r} not found in {path!r}. \"\n ) from exc\n\n if not isinstance(flow, Flow):\n raise MissingFlowError(\n f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n \"decorated with '@flow'.\"\n )\n\n return flow\n
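For example (the paths below are hypothetical):

```python
from prefect.flows import load_flow_from_entrypoint

# script path and flow function name, separated by a colon
flow_obj = load_flow_from_entrypoint("path/to/flows.py:my_flow")

# or a dotted module path to the flow function
flow_obj = load_flow_from_entrypoint("my_package.flows.my_flow")
```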
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script
","text":"Extract a flow object from a script by running all of the code in the file.
If the script has multiple flows in it, a flow name must be provided to specify the flow to return.
Parameters:
Name Type Description Defaultpath
str
A path to a Python script containing flows
requiredflow_name
str
An optional flow name to look for in the script
None
Returns:
Type DescriptionFlow
The flow object from the script
Raises:
Type DescriptionFlowScriptError
If an exception is encountered while running the script
MissingFlowError
If no flows exist in the iterable
MissingFlowError
If a flow name is provided and that flow does not exist
UnspecifiedFlowError
If multiple flows exist but no flow name was provided
Source code inprefect/flows.py
def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n \"\"\"\n Extract a flow object from a script by running all of the code in the file.\n\n If the script has multiple flows in it, a flow name must be provided to specify\n the flow to return.\n\n Args:\n path: A path to a Python script containing flows\n flow_name: An optional flow name to look for in the script\n\n Returns:\n The flow object from the script\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n MissingFlowError: If no flows exist in the iterable\n MissingFlowError: If a flow name is provided and that flow does not exist\n UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n \"\"\"\n return select_flow(\n load_flows_from_script(path),\n flow_name=flow_name,\n from_message=f\"in script '{path}'\",\n )\n
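A minimal sketch, assuming a script at a hypothetical path; note that the name matched is the flow's name (which defaults to the function name with underscores replaced by dashes), not the function name itself:

```python
from prefect.flows import load_flow_from_script

# `flow_name` is only required when the script contains multiple flows;
# omitting it in that case raises UnspecifiedFlowError.
flow_obj = load_flow_from_script("path/to/flows.py", flow_name="my-flow")
```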
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text
","text":"Load a flow from a text script.
The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
Source code inprefect/flows.py
def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n \"\"\"\n Load a flow from a text script.\n\n The script will be written to a temporary local file path so errors can refer\n to line numbers and contextual tracebacks can be provided.\n \"\"\"\n with NamedTemporaryFile(\n mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n prefix=f\"flow-script-{flow_name}\",\n suffix=\".py\",\n delete=False,\n ) as tmpfile:\n tmpfile.write(script_contents)\n tmpfile.flush()\n try:\n flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n finally:\n # windows compat\n tmpfile.close()\n os.remove(tmpfile.name)\n return flow\n
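A minimal sketch, passing the script contents along with the flow's name (set explicitly here so the lookup is unambiguous):

```python
from prefect.flows import load_flow_from_text

script = """
from prefect import flow

@flow(name="inline-flow")
def my_flow():
    return 1
"""

flow_obj = load_flow_from_text(script, flow_name="inline-flow")
```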
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script
","text":"Load all flow objects from the given python script. All of the code in the file will be executed.
Returns:
Type DescriptionList[Flow]
A list of flows
Raises:
Type DescriptionFlowScriptError
If an exception is encountered while running the script
Source code inprefect/flows.py
def load_flows_from_script(path: str) -> List[Flow]:\n \"\"\"\n Load all flow objects from the given python script. All of the code in the file\n will be executed.\n\n Returns:\n A list of flows\n\n Raises:\n FlowScriptError: If an exception is encountered while running the script\n \"\"\"\n return registry_from_script(path).get_instances(Flow)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow
","text":"Select the only flow in an iterable or a flow specified by name.
Returns: A single flow object
Raises:
Type DescriptionMissingFlowError
If no flows exist in the iterable
MissingFlowError
If a flow name is provided and that flow does not exist
UnspecifiedFlowError
If multiple flows exist but no flow name was provided
Source code inprefect/flows.py
def select_flow(\n    flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n    \"\"\"\n    Select the only flow in an iterable or a flow specified by name.\n\n    Returns:\n        A single flow object\n\n    Raises:\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    # Convert to flows by name\n    flows = {f.name: f for f in flows}\n\n    # Add a leading space if given, otherwise use an empty string\n    from_message = (\" \" + from_message) if from_message else \"\"\n    if not flows:\n        raise MissingFlowError(f\"No flows found{from_message}.\")\n\n    elif flow_name and flow_name not in flows:\n        raise MissingFlowError(\n            f\"Flow {flow_name!r} not found{from_message}. \"\n            f\"Found the following flows: {listrepr(flows.keys())}. \"\n            \"Check to make sure that your flow function is decorated with `@flow`.\"\n        )\n\n    elif not flow_name and len(flows) > 1:\n        raise UnspecifiedFlowError(\n            (\n                f\"Found {len(flows)} flows{from_message}:\"\n                f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n                \" flow.\"\n            ),\n        )\n\n    if flow_name:\n        return flows[flow_name]\n    else:\n        return list(flows.values())[0]\n
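For example:

```python
from prefect import flow
from prefect.flows import select_flow

@flow
def extract():
    ...

@flow
def transform():
    ...

# With multiple flows, a name must be given; otherwise
# UnspecifiedFlowError is raised.
chosen = select_flow([extract, transform], flow_name="transform")
```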
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures
","text":"Futures represent the execution of a task and allow retrieval of the task run's state.
This module contains the definition for futures as well as utilities for resolving futures in nested data structures.
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture
","text":" Bases: Generic[R, A]
Represents the result of a computation happening in a task runner.
When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.
Examples:
Define a task that returns a string
>>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>> return \"hello\"\n
Calls of this task in a flow will return a future
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit() # PrefectFuture[str, Sync] includes result type\n>>> future.task_run.id # UUID for the task run\n
Wait for the task to complete
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> final_state = future.wait()\n
Wait N seconds for the task to complete
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> final_state = future.wait(0.1)\n>>> if final_state:\n>>> ... # Task done\n>>> else:\n>>> ... # Task not done yet\n
Wait for a task to complete and retrieve its result
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> result = future.result()\n>>> assert result == \"hello\"\n
Wait N seconds for a task to complete and retrieve its result
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> result = future.result(timeout=5)\n>>> assert result == \"hello\"\n
Retrieve the state of a task without waiting for completion
>>> @flow\n>>> def my_flow():\n>>> future = my_task.submit()\n>>> state = future.get_state()\n
Source code in prefect/futures.py
class PrefectFuture(Generic[R, A]):\n    \"\"\"\n    Represents the result of a computation happening in a task runner.\n\n    When tasks are called, they are submitted to a task runner which creates a future\n    for access to the state and result of the task.\n\n    Examples:\n        Define a task that returns a string\n\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task() -> str:\n        >>>     return \"hello\"\n\n        Calls of this task in a flow will return a future\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n        >>>     future.task_run.id  # UUID for the task run\n\n        Wait for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait()\n\n        Wait N seconds for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait(0.1)\n        >>>     if final_state:\n        >>>         ...  # Task done\n        >>>     else:\n        >>>         ...  # Task not done yet\n\n        Wait for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result()\n        >>>     assert result == \"hello\"\n\n        Wait N seconds for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result(timeout=5)\n        >>>     assert result == \"hello\"\n\n        Retrieve the state of a task without waiting for completion\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     state = future.get_state()\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        key: UUID,\n        task_runner: \"BaseTaskRunner\",\n        asynchronous: A = True,\n        _final_state: State[R] = None,  # Exposed for testing\n    ) -> None:\n        self.key = key\n        self.name = name\n        self.asynchronous = asynchronous\n        self.task_run = None\n        self._final_state = _final_state\n        self._exception: Optional[Exception] = None\n        self._task_runner = task_runner\n        self._submitted = anyio.Event()\n\n        self._loop = asyncio.get_running_loop()\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: None = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: float\n    ) -> Awaitable[Optional[State[R]]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n        ...\n\n    def wait(self, timeout=None):\n        \"\"\"\n        Wait for the run to finish and return the final state\n\n        If the timeout is reached before the run reaches a final state,\n        `None` is returned.\n        \"\"\"\n        wait = create_call(self._wait, timeout=timeout)\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(wait).aresult()\n        else:\n            # type checking cannot handle the overloaded timeout passing\n            return from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n\n    @overload\n    async def _wait(self, timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    async def _wait(self, timeout: float) -> Optional[State[R]]:\n        ...\n\n    async def _wait(self, timeout=None):\n        \"\"\"\n        Async implementation for `wait`\n        \"\"\"\n        await self._wait_for_submission()\n\n        if self._final_state:\n            return self._final_state\n\n        self._final_state = await self._task_runner.wait(self.key, timeout)\n        return self._final_state\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> R:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Union[R, Exception]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> Awaitable[R]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Awaitable[Union[R, Exception]]:\n        ...\n\n    def result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Wait for the run to finish and return the final state.\n\n        If the timeout is reached before the run reaches a final state, a `TimeoutError`\n        will be raised.\n\n        If `raise_on_failure` is `True` and the task run failed, the task run's\n        exception will be raised.\n        \"\"\"\n        result = create_call(\n            self._result, timeout=timeout, raise_on_failure=raise_on_failure\n        )\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(result).aresult()\n        else:\n            return from_sync.call_soon_in_loop_thread(result).result()\n\n    async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Async implementation of `result`\n        \"\"\"\n        final_state = await self._wait(timeout=timeout)\n        if not final_state:\n            raise TimeoutError(\"Call timed out before task finished.\")\n        return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n    ) -> State[R]:\n        ...\n\n    def get_state(self, client: PrefectClient = None):\n        \"\"\"\n        Get the current state of the task run.\n        \"\"\"\n        if self.asynchronous:\n            return cast(Awaitable[State[R]], self._get_state(client=client))\n        else:\n            return cast(State[R], sync(self._get_state, client=client))\n\n    @inject_client\n    async def _get_state(self, client: PrefectClient = None) -> State[R]:\n        assert client is not None  # always injected\n\n        # We must wait for the task run id to be populated\n        await self._wait_for_submission()\n\n        task_run = await client.read_task_run(self.task_run.id)\n\n        if not task_run:\n            raise RuntimeError(\"Future has no associated task run in the server.\")\n\n        # Update the task run reference\n        self.task_run = task_run\n        return task_run.state\n\n    async def _wait_for_submission(self):\n        await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n    def __hash__(self) -> int:\n        return hash(self.key)\n\n    def __repr__(self) -> str:\n        return f\"PrefectFuture({self.name!r})\"\n\n    def __bool__(self) -> bool:\n        warnings.warn(\n            (\n                \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n                \"did you mean to check the result of the task instead? \"\n                \"e.g. `if my_task().result(): ...`\"\n            ),\n            stacklevel=2,\n        )\n        return True\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state
","text":"Get the current state of the task run.
Source code inprefect/futures.py
def get_state(self, client: PrefectClient = None):\n \"\"\"\n Get the current state of the task run.\n \"\"\"\n if self.asynchronous:\n return cast(Awaitable[State[R]], self._get_state(client=client))\n else:\n return cast(State[R], sync(self._get_state, client=client))\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result
","text":"Wait for the run to finish and return the final state.
If the timeout is reached before the run reaches a final state, a TimeoutError
will be raised.
If raise_on_failure
is True
and the task run failed, the task run's exception will be raised.
prefect/futures.py
def result(self, timeout: float = None, raise_on_failure: bool = True):\n \"\"\"\n Wait for the run to finish and return the final state.\n\n If the timeout is reached before the run reaches a final state, a `TimeoutError`\n will be raised.\n\n If `raise_on_failure` is `True` and the task run failed, the task run's\n exception will be raised.\n \"\"\"\n result = create_call(\n self._result, timeout=timeout, raise_on_failure=raise_on_failure\n )\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(result).aresult()\n else:\n return from_sync.call_soon_in_loop_thread(result).result()\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait
","text":"Wait for the run to finish and return the final state
If the timeout is reached before the run reaches a final state, None
is returned.
prefect/futures.py
def wait(self, timeout=None):\n \"\"\"\n Wait for the run to finish and return the final state\n\n If the timeout is reached before the run reaches a final state,\n `None` is returned.\n \"\"\"\n wait = create_call(self._wait, timeout=timeout)\n if self.asynchronous:\n return from_async.call_soon_in_loop_thread(wait).aresult()\n else:\n # type checking cannot handle the overloaded timeout passing\n return from_sync.call_soon_in_loop_thread(wait).result() # type: ignore\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr
","text":"Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
Source code inprefect/futures.py
def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n \"\"\"\n Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n \"\"\"\n\n name = __fn.__name__\n\n # TODO: If this computation is concerningly expensive, we can iterate checking the\n # length at each arg or avoid calling `repr` on args with large amounts of\n # data\n call_args = \", \".join(\n [repr(arg) for arg in args]\n + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n )\n\n # Enforce a maximum length\n if len(call_args) > 100:\n call_args = call_args[:100] + \"...\"\n\n return f\"{name}({call_args})\"\n
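For instance:

```python
from prefect.futures import call_repr

def greet(name, punctuation="!"):
    return f"hello {name}{punctuation}"

# Produces: "greet('world', punctuation='?')"
print(call_repr(greet, "world", punctuation="?"))
```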
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data
async
","text":"Given a Python built-in collection, recursively find PrefectFutures
and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.
Unsupported object types will be returned without modification.
Source code inprefect/futures.py
async def resolve_futures_to_data(\n expr: Union[PrefectFuture[R, Any], Any],\n raise_on_failure: bool = True,\n) -> Union[R, Any]:\n \"\"\"\n Given a Python built-in collection, recursively find `PrefectFutures` and build a\n new collection with the same structure with futures resolved to their results.\n Resolving futures to their results may wait for execution to complete and require\n communication with the API.\n\n Unsupported object types will be returned without modification.\n \"\"\"\n futures: Set[PrefectFuture] = set()\n\n maybe_expr = visit_collection(\n expr,\n visit_fn=partial(_collect_futures, futures),\n return_data=False,\n context={},\n )\n if maybe_expr is not None:\n expr = maybe_expr\n\n # Get results\n results = await asyncio.gather(\n *[\n # We must wait for the future in the thread it was created in\n from_async.call_soon_in_loop_thread(\n create_call(future._result, raise_on_failure=raise_on_failure)\n ).aresult()\n for future in futures\n ]\n )\n\n results_by_future = dict(zip(futures, results))\n\n def replace_futures_with_results(expr, context):\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n return results_by_future[expr]\n else:\n return expr\n\n return visit_collection(\n expr,\n visit_fn=replace_futures_with_results,\n return_data=True,\n context={},\n )\n
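A minimal sketch inside an async flow; the task and the shape of the collection are illustrative:

```python
from prefect import flow, task
from prefect.futures import resolve_futures_to_data

@task
def double(x):
    return x * 2

@flow
async def my_flow():
    payload = {"a": double.submit(1), "b": [double.submit(2)]}
    # Futures nested anywhere in the collection are replaced by their results
    return await resolve_futures_to_data(payload)  # {"a": 2, "b": [4]}
```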
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states
async
","text":"Given a Python built-in collection, recursively find PrefectFutures
and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.
Unsupported object types will be returned without modification.
Source code inprefect/futures.py
async def resolve_futures_to_states(\n expr: Union[PrefectFuture[R, Any], Any],\n) -> Union[State[R], Any]:\n \"\"\"\n Given a Python built-in collection, recursively find `PrefectFutures` and build a\n new collection with the same structure with futures resolved to their final states.\n Resolving futures to their final states may wait for execution to complete.\n\n Unsupported object types will be returned without modification.\n \"\"\"\n futures: Set[PrefectFuture] = set()\n\n visit_collection(\n expr,\n visit_fn=partial(_collect_futures, futures),\n return_data=False,\n context={},\n )\n\n # Get final states for each future\n states = await asyncio.gather(\n *[\n # We must wait for the future in the thread it was created in\n from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n for future in futures\n ]\n )\n\n states_by_future = dict(zip(futures, states))\n\n def replace_futures_with_states(expr, context):\n # Expressions inside quotes should not be modified\n if isinstance(context.get(\"annotation\"), quote):\n raise StopVisiting()\n\n if isinstance(expr, PrefectFuture):\n return states_by_future[expr]\n else:\n return expr\n\n return visit_collection(\n expr,\n visit_fn=replace_futures_with_states,\n return_data=True,\n context={},\n )\n
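The state-resolving variant works the same way but yields final State objects instead of results; a minimal sketch:

```python
from prefect import flow, task
from prefect.futures import resolve_futures_to_states

@task
def double(x):
    return x * 2

@flow
async def my_flow():
    futures = [double.submit(1), double.submit(2)]
    # Each future is replaced with its final State rather than its result
    states = await resolve_futures_to_states(futures)
    return [state.is_completed() for state in states]
```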
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure
","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer
","text":" Bases: Infrastructure
Runs a command in a container.
Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.
Click here to see a tutorial.
Attributes:
Name Type Descriptionauto_remove
bool
If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.
command
List[str]
A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
env
Dict[str, str]
Environment variables to set for the container.
image
str
An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.
image_pull_policy
Optional[ImagePullPolicy]
Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.
image_registry
Optional[DockerRegistry]
A DockerRegistry
block containing credentials to use if image
is stored in a private image registry.
labels
Dict[str, str]
An optional dictionary of labels, mapping name to value.
name
Optional[str]
An optional name for the container.
network_mode
Optional[str]
Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.
networks
List[str]
An optional list of strings specifying Docker networks to connect the container to.
stream_output
bool
If set, stream output from the container to local standard output.
volumes
List[str]
An optional list of volume mount strings in the format of \"local_path:container_path\".
memswap_limit
Union[int, str]
Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit
is also set. If mem_limit
is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit
is 300m and memswap_limit
is not set, the container can use 600m in total of memory and swap.
mem_limit
Union[float, str]
Memory limit of the created container. Accepts float values to enforce a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.
privileged
bool
Give extended privileges to this container.
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"If using a local API URL on Linux, we will update the network mode default to 'host' to enable connectivity. If using another OS or an alternative network mode is used, we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally, this will enable connectivity, but the API URL can be provided as an environment variable to override inference in more complex use-cases.
Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not necessary and the API is connectable while bound to localhost.
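A minimal sketch of configuring and saving this block (note that it is deprecated in favor of the Docker worker, per the notice in the source below); the image tag and volume paths are illustrative:

```python
from prefect.infrastructure import DockerContainer

container = DockerContainer(
    image="prefecthq/prefect:2-latest",
    auto_remove=True,
    volumes=["/local/data:/opt/data"],  # "local_path:container_path" format
    stream_output=True,
)
container.save("my-docker-infrastructure", overwrite=True)
```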
Source code inprefect/infrastructure/container.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the Docker worker from prefect-docker instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass DockerContainer(Infrastructure):\n    \"\"\"\n    Runs a command in a container.\n\n    Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n    the environment.\n\n    Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n    Attributes:\n        auto_remove: If set, the container will be removed on completion. Otherwise,\n            the container will remain after exit for inspection.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the container.\n        image: An optional string specifying the tag of a Docker image to use.\n            Defaults to the Prefect image.\n        image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n            'NEVER', 'IF_NOT_PRESENT'.\n        image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n            image registry.\n        labels: An optional dictionary of labels, mapping name to value.\n        name: An optional name for the container.\n        network_mode: Set the network mode for the created container. Defaults to 'host'\n            if a local API url is detected, otherwise the Docker default of 'bridge' is\n            used. If 'networks' is set, this cannot be set.\n        networks: An optional list of strings specifying Docker networks to connect the\n            container to.\n        stream_output: If set, stream output from the container to local standard output.\n        volumes: An optional list of volume mount strings in the format of\n            \"local_path:container_path\".\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n            600m in total of memory and swap.\n        mem_limit: Memory limit of the created container. Accepts float values to enforce\n            a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n            If a string is given without a unit, bytes are assumed.\n        privileged: Give extended privileges to this container.\n\n    ## Connecting to a locally hosted Prefect API\n\n    If using a local API URL on Linux, we will update the network mode default to 'host'\n    to enable connectivity. If using another OS or an alternative network mode is used,\n    we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n    this will enable connectivity, but the API URL can be provided as an environment\n    variable to override inference in more complex use-cases.\n\n    Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n    to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not\n    necessary and the API is connectable while bound to localhost.\n    \"\"\"\n\n    type: Literal[\"docker-container\"] = Field(\n        default=\"docker-container\", description=\"The type of infrastructure.\"\n    )\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"Tag of a Docker image to use. Defaults to the Prefect image.\",\n    )\n    image_pull_policy: Optional[ImagePullPolicy] = Field(\n        default=None, description=\"Specifies if the image should be pulled.\"\n    )\n    image_registry: Optional[DockerRegistry] = None\n    networks: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of strings specifying Docker networks to connect the container to.\"\n        ),\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created container (e.g. host, bridge). If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, the container will be removed on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of volume mount strings in the format of\"\n            ' \"local_path:container_path\".'\n        ),\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output will be streamed from the container to local standard\"\n            \" output.\"\n        ),\n    )\n    memswap_limit: Union[int, str] = Field(\n        default=None,\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n    mem_limit: Union[float, str] = Field(\n        default=None,\n        description=(\n            \"Memory limit of the created container. Accepts float values to enforce \"\n            \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. \"\n            \"If a string is given without a unit, bytes are assumed.\"\n        ),\n    )\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to this container.\",\n    )\n\n    _block_type_name = \"Docker Container\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n    @validator(\"labels\")\n    def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    @validator(\"volumes\")\n    def check_volume_format(cls, volumes):\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. \"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> Optional[bool]:\n        if not self.command:\n            raise ValueError(\"Docker container cannot be run with empty command.\")\n\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container = await run_sync_in_worker_thread(self._create_and_start_container)\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerContainerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        docker_client = self._get_client()\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        try:\n            container = docker_client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        try:\n            container.stop(timeout=grace_seconds)\n        except Exception:\n            raise\n\n    def preview(self):\n        # TODO: build and document a more sophisticated preview\n        docker_client = self._get_client()\n        try:\n            return json.dumps(self._build_container_settings(docker_client))\n        finally:\n            docker_client.close()\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type()\n        )\n        if base_job_template is None:\n            return await super().generate_work_pool_base_job_template()\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key == \"image_registry\":\n                self.logger.warning(\n                    \"Image registry blocks are not supported by Docker\"\n                    \" work pools. Please authenticate to your registry using\"\n                    \" the `docker login` command on your worker instances.\"\n                )\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"image_pull_policy\":\n                new_value = None\n                if value == ImagePullPolicy.ALWAYS:\n                    new_value = \"Always\"\n                elif value == ImagePullPolicy.NEVER:\n                    new_value = \"Never\"\n                elif value == ImagePullPolicy.IF_NOT_PRESENT:\n                    new_value = \"IfNotPresent\"\n\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = new_value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Docker work pools. Skipping.\"\n                )\n\n        return base_job_template\n\n    def get_corresponding_worker_type(self):\n        return \"docker\"\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n        \"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n        \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        self,\n        docker_client: \"DockerClient\",\n    ) -> Dict:\n        network_mode = self._get_network_mode()\n        return dict(\n            image=self.image,\n            network=self.networks[0] if self.networks else None,\n            network_mode=network_mode,\n            command=self.command,\n            environment=self._get_environment_variables(network_mode),\n            auto_remove=self.auto_remove,\n            labels={**CONTAINER_LABELS, **self.labels},\n            extra_hosts=self._get_extra_hosts(docker_client),\n            name=self._get_container_name(),\n            volumes=self.volumes,\n            mem_limit=self.mem_limit,\n            memswap_limit=self.memswap_limit,\n            privileged=self.privileged,\n        )\n\n    def _create_and_start_container(self) -> \"Container\":\n        if self.image_registry:\n            # If an image registry block was supplied, load an authenticated Docker\n            # client from the block. Otherwise, use an unauthenticated client to\n            # pull images from public registries.\n            docker_client = self.image_registry.get_docker_client()\n        else:\n            docker_client = self._get_client()\n        container_settings = self._build_container_settings(docker_client)\n\n        if self._should_pull_image(docker_client):\n            self.logger.info(f\"Pulling image {self.image!r}...\")\n            self._pull_image(docker_client)\n\n        container = self._create_container(docker_client, **container_settings)\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(self.networks) > 1:\n            for network_name in self.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container\n\n    def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n        return parse_image_tag(self.image)\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n        \"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = self._get_image_and_tag()\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return self.image_pull_policy\n\n    def _get_network_mode(self) -> Optional[str]:\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n        \"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = self._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(self.image)\n            except docker.errors.ImageNotFound:\n                self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n                return True\n        return False\n\n    def _pull_image(self, docker_client: \"DockerClient\"):\n        \"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = self._get_image_and_tag()\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n        \"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the container with retries on name conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            from docker.errors import APIError\n\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self.logger.info(f\"Creating Docker container {display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except APIError as exc:\n                if \"Conflict\" in str(exc) and 
\"container name\" in str(exc):\n self.logger.info(\n f\"Docker container name {display_name} already exists; \"\n \"retrying...\"\n )\n index += 1\n name = f\"{original_name}-{index}\"\n else:\n raise\n\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n return container\n\n def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n # Monitor the container capturing the latest snapshot while capturing\n # not found errors\n docker_client = self._get_client()\n\n try:\n for latest_container in self._watch_container(docker_client, container.id):\n container = latest_container\n except docker.errors.NotFound:\n # The container was removed during watching\n self.logger.warning(\n f\"Docker container {container.name} was removed before we could wait \"\n \"for its completion.\"\n )\n finally:\n docker_client.close()\n\n return container\n\n def _watch_container(\n self, docker_client: \"DockerClient\", container_id: str\n ) -> Generator[None, None, \"Container\"]:\n container: \"Container\" = docker_client.containers.get(container_id)\n\n status = container.status\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n yield container\n\n if self.stream_output:\n try:\n for log in container.logs(stream=True):\n log: bytes\n print(log.decode().rstrip())\n except docker.errors.APIError as exc:\n if \"marked for removal\" in str(exc):\n self.logger.warning(\n f\"Docker container {container.name} was marked for removal\"\n \" before logs could be retrieved. Output will not be\"\n \" streamed. \"\n )\n else:\n self.logger.exception(\n \"An unexpected Docker API error occurred while streaming\"\n f\" output from container {container.name}.\"\n )\n\n container.reload()\n if container.status != status:\n self.logger.info(\n f\"Docker container {container.name!r} has status\"\n f\" {container.status!r}\"\n )\n yield container\n\n container.wait()\n self.logger.info(\n f\"Docker container {container.name!r} has status {container.status!r}\"\n )\n yield container\n\n def _get_client(self):\n try:\n with warnings.catch_warnings():\n # Silence warnings due to use of deprecated methods within dockerpy\n # See https://github.com/docker/docker-py/pull/2931\n warnings.filterwarnings(\n \"ignore\",\n message=\"distutils Version classes are deprecated.*\",\n category=DeprecationWarning,\n )\n\n docker_client = docker.from_env()\n\n except docker.errors.DockerException as exc:\n raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n return docker_client\n\n def _get_container_name(self) -> Optional[str]:\n \"\"\"\n Generates a container name to match the configured name, ensuring it is Docker\n compatible.\n \"\"\"\n # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n if not self.name:\n return None\n\n return (\n slugify(\n self.name,\n lowercase=False,\n # Docker does not limit length but URL limits apply eventually so\n # limit the length for safety\n max_length=250,\n # Docker allows these characters for container names\n regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n ).lstrip(\n # Docker does not allow leading underscore, dash, or period\n \"_-.\"\n )\n # Docker does not allow 0 character names so cast to null if the name is\n # empty after slufification\n or None\n )\n\n def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n \"\"\"\n A host.docker.internal -> host-gateway mapping is necessary for communicating\n with the API on Linux machines. 
Docker Desktop on macOS will automatically\n already have this mapping.\n \"\"\"\n if sys.platform == \"linux\" and (\n # Do not warn if the user has specified a host manually that does not use\n # a local address\n \"PREFECT_API_URL\" not in self.env\n or re.search(\n \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n self.env[\"PREFECT_API_URL\"],\n )\n ):\n user_version = packaging.version.parse(\n format_outlier_version_name(docker_client.version()[\"Version\"])\n )\n required_version = packaging.version.parse(\"20.10.0\")\n\n if user_version < required_version:\n warnings.warn(\n \"`host.docker.internal` could not be automatically resolved to\"\n \" your local ip address. This feature is not supported on Docker\"\n f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n \" encounter issues.\"\n )\n return {}\n else:\n # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n # Only supported by Docker v20.10.0+ which is our minimum recommend version\n return {\"host.docker.internal\": \"host-gateway\"}\n\n def _get_environment_variables(self, network_mode):\n # If the API URL has been set by the base environment rather than the by the\n # user, update the value to ensure connectivity when using a bridge network by\n # updating local connections to use the docker internal host unless the\n # network mode is \"host\" where localhost is available already.\n env = {**self._base_environment(), **self.env}\n\n if (\n \"PREFECT_API_URL\" in env\n and \"PREFECT_API_URL\" not in self.env\n and network_mode != \"host\"\n ):\n env[\"PREFECT_API_URL\"] = (\n env[\"PREFECT_API_URL\"]\n .replace(\"localhost\", \"host.docker.internal\")\n .replace(\"127.0.0.1\", \"host.docker.internal\")\n )\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n
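Example: A minimal sketch of running a one-off command with this block; the image and command are illustrative placeholders, not defaults taken from this class:
from prefect.infrastructure import DockerContainer\n\ncontainer = DockerContainer(\n    image=\"prefecthq/prefect:2-latest\",  # any pullable image reference\n    command=[\"echo\", \"hello\"],\n    stream_output=True,\n)\n\n# `run` is sync-compatible and returns a DockerContainerResult\nresult = container.run()\n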
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult
","text":" Bases: InfrastructureResult
Contains information about a completed Docker container.
Source code in prefect/infrastructure/container.py
class DockerContainerResult(InfrastructureResult):\n \"\"\"Contains information about a completed Docker container\"\"\"\n
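Example: A typical (illustrative) way to consume this result is to inspect the exit status after run() returns; `container` is assumed to be a configured DockerContainer:
result = container.run()\nif result.status_code != 0:\n    raise RuntimeError(\n        f\"Container {result.identifier} exited with code {result.status_code}\"\n    )\n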
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure
","text":" Bases: Block
, ABC
Source code in prefect/infrastructure/base.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the `BaseWorker` class to create custom infrastructure integrations instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Infrastructure(Block, abc.ABC):\n _block_schema_capabilities = [\"run-infrastructure\"]\n\n type: str\n\n env: Dict[str, Optional[str]] = pydantic.Field(\n default_factory=dict,\n title=\"Environment\",\n description=\"Environment variables to set in the configured infrastructure.\",\n )\n labels: Dict[str, str] = pydantic.Field(\n default_factory=dict,\n description=\"Labels applied to the infrastructure for metadata purposes.\",\n )\n name: Optional[str] = pydantic.Field(\n default=None,\n description=\"Name applied to the infrastructure for identification.\",\n )\n command: Optional[List[str]] = pydantic.Field(\n default=None,\n description=\"The command to run in the infrastructure.\",\n )\n\n async def generate_work_pool_base_job_template(self):\n if self._block_document_id is None:\n raise BlockNotSavedError(\n \"Cannot publish as work pool, block has not been saved. Please call\"\n \" `.save()` on your block before publishing.\"\n )\n\n block_schema = self.__class__.schema()\n return {\n \"job_configuration\": {\"block\": \"{{ block }}\"},\n \"variables\": {\n \"type\": \"object\",\n \"properties\": {\n \"block\": {\n \"title\": \"Block\",\n \"description\": (\n \"The infrastructure block to use for job creation.\"\n ),\n \"allOf\": [{\"$ref\": f\"#/definitions/{self.__class__.__name__}\"}],\n \"default\": {\n \"$ref\": {\"block_document_id\": str(self._block_document_id)}\n },\n }\n },\n \"required\": [\"block\"],\n \"definitions\": {self.__class__.__name__: block_schema},\n },\n }\n\n def get_corresponding_worker_type(self):\n return \"block\"\n\n @sync_compatible\n async def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n \"\"\"\n Creates a work pool configured to use the given block as the job creator.\n\n Used to migrate from a agents setup to a worker setup.\n\n Args:\n work_pool_name: The name to give to the created work pool. 
If not provided, the name of the current\n block will be used.\n \"\"\"\n\n base_job_template = await self.generate_work_pool_base_job_template()\n work_pool_name = work_pool_name or self._block_document_name\n\n if work_pool_name is None:\n raise ValueError(\n \"`work_pool_name` must be provided if the block has not been saved.\"\n )\n\n console = Console()\n\n try:\n async with prefect.get_client() as client:\n work_pool = await client.create_work_pool(\n work_pool=WorkPoolCreate(\n name=work_pool_name,\n type=self.get_corresponding_worker_type(),\n base_job_template=base_job_template,\n )\n )\n except ObjectAlreadyExists:\n console.print(\n (\n f\"Work pool with name {work_pool_name!r} already exists, please use\"\n \" a different name.\"\n ),\n style=\"red\",\n )\n return\n\n console.print(\n f\"Work pool {work_pool.name} created!\",\n style=\"green\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"You see your new work pool in the UI at\"\n f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n )\n\n deploy_script = (\n \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n )\n if not hasattr(self, \"image\"):\n deploy_script = (\n \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n )\n console.print(\n \"\\nYou can deploy a flow to this work pool by calling\"\n f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n )\n console.print(\n \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n )\n console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n\n @abc.abstractmethod\n async def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n ) -> InfrastructureResult:\n \"\"\"\n Run the infrastructure.\n\n If provided a `task_status`, the status will be reported as started when the\n infrastructure is successfully created. 
The status return value will be an\n identifier for the infrastructure.\n\n The call will then monitor the created infrastructure, returning a result at\n the end containing a status code indicating if the infrastructure exited cleanly\n or encountered an error.\n \"\"\"\n # Note: implementations should include `sync_compatible`\n\n @abc.abstractmethod\n def preview(self) -> str:\n \"\"\"\n View a preview of the infrastructure that would be run.\n \"\"\"\n\n @property\n def logger(self):\n return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n @property\n def is_using_a_runner(self):\n return self.command is not None and \"prefect flow-run execute\" in shlex.join(\n self.command\n )\n\n @classmethod\n def _base_environment(cls) -> Dict[str, str]:\n \"\"\"\n Environment variables that should be passed to all created infrastructure.\n\n These values should be overridable with the `env` field.\n \"\"\"\n return get_current_settings().to_environment_variables(exclude_unset=True)\n\n def prepare_for_flow_run(\n self: Self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"Deployment\"] = None,\n flow: Optional[\"Flow\"] = None,\n ) -> Self:\n \"\"\"\n Return an infrastructure block that is prepared to execute a flow run.\n \"\"\"\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n return self.copy(\n update={\n \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n \"labels\": {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n },\n \"name\": self.name or flow_run.name,\n \"command\": self.command or self._base_flow_run_command(),\n }\n )\n\n @staticmethod\n def _base_flow_run_command() -> List[str]:\n \"\"\"\n Generate a command for a flow run job.\n \"\"\"\n if experiment_enabled(\"enhanced_cancellation\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Enhanced flow run cancellation\",\n group=\"enhanced_cancellation\",\n help=\"\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n return [\"prefect\", \"flow-run\", \"execute\"]\n\n return [\"python\", \"-m\", \"prefect.engine\"]\n\n @staticmethod\n def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of labels for a flow run job.\n \"\"\"\n return {\n \"prefect.io/flow-run-id\": str(flow_run.id),\n \"prefect.io/flow-run-name\": flow_run.name,\n \"prefect.io/version\": prefect.__version__,\n }\n\n @staticmethod\n def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of environment variables for a flow run job.\n \"\"\"\n environment = {}\n environment[\"PREFECT__FLOW_RUN_ID\"] = str(flow_run.id)\n return environment\n\n @staticmethod\n def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n labels = {\n \"prefect.io/deployment-name\": deployment.name,\n }\n if deployment.updated is not None:\n labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n \"utc\"\n ).to_iso8601_string()\n return labels\n\n @staticmethod\n def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n return {\n \"prefect.io/flow-name\": flow.name,\n }\n
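Example: Any concrete infrastructure block can be loaded and previewed before use; the block name here is hypothetical:
from prefect.infrastructure import DockerContainer\n\nblock = DockerContainer.load(\"my-docker-block\")  # hypothetical saved block name\nprint(block.preview())\n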
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run
","text":"Return an infrastructure block that is prepared to execute a flow run.
Source code in prefect/infrastructure/base.py
def prepare_for_flow_run(\n self: Self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"Deployment\"] = None,\n flow: Optional[\"Flow\"] = None,\n) -> Self:\n \"\"\"\n Return an infrastructure block that is prepared to execute a flow run.\n \"\"\"\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n return self.copy(\n update={\n \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n \"labels\": {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n },\n \"name\": self.name or flow_run.name,\n \"command\": self.command or self._base_flow_run_command(),\n }\n )\n
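Example: A minimal sketch of the intended call pattern, assuming `block` is a concrete infrastructure block and `flow_run` is a FlowRun fetched from the API:
prepared = block.prepare_for_flow_run(flow_run)\n\n# The returned copy carries the flow run's environment, labels, name, and command\nprepared.run()\n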
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.preview","title":"preview
abstractmethod
","text":"View a preview of the infrastructure that would be run.
Source code in prefect/infrastructure/base.py
@abc.abstractmethod\ndef preview(self) -> str:\n \"\"\"\n View a preview of the infrastructure that would be run.\n \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.publish_as_work_pool","title":"publish_as_work_pool
async
","text":"Creates a work pool configured to use the given block as the job creator.
Used to migrate from an agent-based setup to a worker-based setup.
Parameters:
work_pool_name (Optional[str]): The name to give to the created work pool. If not provided, the name of the current block will be used. Defaults to None.
Source code in prefect/infrastructure/base.py
@sync_compatible\nasync def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n \"\"\"\n Creates a work pool configured to use the given block as the job creator.\n\n Used to migrate from a agents setup to a worker setup.\n\n Args:\n work_pool_name: The name to give to the created work pool. If not provided, the name of the current\n block will be used.\n \"\"\"\n\n base_job_template = await self.generate_work_pool_base_job_template()\n work_pool_name = work_pool_name or self._block_document_name\n\n if work_pool_name is None:\n raise ValueError(\n \"`work_pool_name` must be provided if the block has not been saved.\"\n )\n\n console = Console()\n\n try:\n async with prefect.get_client() as client:\n work_pool = await client.create_work_pool(\n work_pool=WorkPoolCreate(\n name=work_pool_name,\n type=self.get_corresponding_worker_type(),\n base_job_template=base_job_template,\n )\n )\n except ObjectAlreadyExists:\n console.print(\n (\n f\"Work pool with name {work_pool_name!r} already exists, please use\"\n \" a different name.\"\n ),\n style=\"red\",\n )\n return\n\n console.print(\n f\"Work pool {work_pool.name} created!\",\n style=\"green\",\n )\n if PREFECT_UI_URL:\n console.print(\n \"You see your new work pool in the UI at\"\n f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n )\n\n deploy_script = (\n \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n )\n if not hasattr(self, \"image\"):\n deploy_script = (\n \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n )\n console.print(\n \"\\nYou can deploy a flow to this work pool by calling\"\n f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n )\n console.print(\n \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n )\n console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n
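Example: A sketch of the migration path this method supports; the block and work pool names are hypothetical:
from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob.load(\"my-k8s-job\")  # hypothetical saved block name\nk8s_job.publish_as_work_pool(work_pool_name=\"my-k8s-pool\")  # sync-compatible\n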
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.run","title":"run
abstractmethod
async
","text":"Run the infrastructure.
If provided a task_status, the status will be reported as started when the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.
The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.
Source code in prefect/infrastructure/base.py
@abc.abstractmethod\nasync def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n \"\"\"\n Run the infrastructure.\n\n If provided a `task_status`, the status will be reported as started when the\n infrastructure is successfully created. The status return value will be an\n identifier for the infrastructure.\n\n The call will then monitor the created infrastructure, returning a result at\n the end containing a status code indicating if the infrastructure exited cleanly\n or encountered an error.\n \"\"\"\n
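Example: A compressed sketch of the contract described above, with hypothetical `create_infrastructure` and `watch_until_exit` helpers standing in for real provisioning and monitoring logic:
async def run(self, task_status=None) -> InfrastructureResult:\n    infra_id = await create_infrastructure()  # provision (hypothetical helper)\n    if task_status:\n        task_status.started(infra_id)  # report the identifier once created\n    status_code = await watch_until_exit(infra_id)  # monitor (hypothetical helper)\n    return InfrastructureResult(identifier=infra_id, status_code=status_code)\n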
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig
","text":" Bases: Block
Stores configuration for interaction with Kubernetes clusters.
See from_file for creation.
Attributes:
config (Dict): The entire loaded YAML contents of a kubectl config file.
context_name (str): The name of the kubectl context to use.
Example: Load a saved Kubernetes cluster config:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n \"\"\"\n Stores configuration for interaction with Kubernetes clusters.\n\n See `from_file` for creation.\n\n Attributes:\n config: The entire loaded YAML contents of a kubectl config file\n context_name: The name of the kubectl context to use\n\n Example:\n Load a saved Kubernetes cluster config:\n ```python\n from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Kubernetes Cluster Config\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n config: Dict = Field(\n default=..., description=\"The entire contents of a kubectl config file.\"\n )\n context_name: str = Field(\n default=..., description=\"The name of the kubectl context to use.\"\n )\n\n @validator(\"config\", pre=True)\n def parse_yaml_config(cls, value):\n if isinstance(value, str):\n return yaml.safe_load(value)\n return value\n\n @classmethod\n def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n\n def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n\n def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.configure_client","title":"configure_client
","text":"Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
Source code in prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
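Example: After loading a saved config block (name hypothetical), activating it lets Kubernetes clients created afterwards use that context:
import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"my-cluster-config\")  # hypothetical block name\ncluster_config.configure_client()\n\n# Clients created after this call use the config's context\nv1 = kubernetes.client.CoreV1Api()\n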
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.from_file","title":"from_file
classmethod
","text":"Create a cluster config from the a Kubernetes config file.
By default, the current context in the default Kubernetes config file will be used.
An alternative file or context may be specified.
The entire config file will be loaded and stored.
Source code in prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n
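Example: A short sketch; the context name and block name are illustrative:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Current context from the default kubeconfig location\nconfig = KubernetesClusterConfig.from_file()\n\n# Or a specific context, persisted as a block for reuse\nconfig = KubernetesClusterConfig.from_file(context_name=\"dev-cluster\")  # hypothetical context\nconfig.save(\"dev-cluster-config\")  # hypothetical block name\n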
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.get_api_client","title":"get_api_client
","text":"Returns a Kubernetes API client for this cluster config.
Source code in prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n
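Example: The returned client can be handed to any Kubernetes API class; `cluster_config` is assumed to be a loaded KubernetesClusterConfig:
import kubernetes\n\napi_client = cluster_config.get_api_client()\nv1 = kubernetes.client.CoreV1Api(api_client=api_client)\npods = v1.list_namespaced_pod(namespace=\"default\")  # illustrative call\n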
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob
","text":" Bases: Infrastructure
Runs a command as a Kubernetes Job.
For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob infrastructure concepts.
Attributes:
cluster_config (Optional[KubernetesClusterConfig]): An optional Kubernetes cluster config to use for this job.
command (Optional[List[str]]): A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
customizations (JsonPatch): A list of JSON 6902 patches to apply to the base Job manifest.
env (Dict[str, Optional[str]]): Environment variables to set for the container.
finished_job_ttl (Optional[int]): The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.
image (Optional[str]): An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.
image_pull_policy (Optional[KubernetesImagePullPolicy]): The Kubernetes image pull policy to use for job containers.
job (KubernetesManifest): The base manifest for the Kubernetes Job.
job_watch_timeout_seconds (Optional[int]): Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None, which means no timeout will be enforced.
labels (Dict[str, str]): An optional dictionary of labels to add to the job.
name (Optional[str]): An optional name for the job.
namespace (Optional[str]): An optional string signifying the Kubernetes namespace to use.
pod_watch_timeout_seconds (int): Number of seconds to watch for pod creation before timing out (default 60).
service_account_name (Optional[str]): An optional string specifying which Kubernetes service account to use.
stream_output (bool): If set, stream output from the job to local standard output.
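Example: An illustrative configuration of this block; all values are placeholders, not defaults:
from prefect.infrastructure import KubernetesJob\n\njob = KubernetesJob(\n    image=\"docker.io/prefecthq/prefect:2-latest\",\n    namespace=\"prefect\",\n    env={\"EXTRA_PIP_PACKAGES\": \"s3fs\"},\n    finished_job_ttl=300,\n    customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\"limits\": {\"memory\": \"2Gi\"}},\n        }\n    ],\n)\n\n# Render the manifest that would be submitted, without running anything\nprint(job.preview())\n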
Source code in prefect/infrastructure/kubernetes.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the Kubernetes worker from prefect-kubernetes instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass KubernetesJob(Infrastructure):\n \"\"\"\n Runs a command as a Kubernetes Job.\n\n For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n Attributes:\n cluster_config: An optional Kubernetes cluster config to use for this job.\n command: A list of strings specifying the command to run in the container to\n start the flow run. In most cases you should not override this.\n customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n env: Environment variables to set for the container.\n finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n manually removed.\n image: An optional string specifying the image reference of a container image\n to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n Defaults to the Prefect image.\n image_pull_policy: The Kubernetes image pull policy to use for job containers.\n job: The base manifest for the Kubernetes Job.\n job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n before marking it as crashed. Defaults to `None`, which means no timeout will be enforced.\n labels: An optional dictionary of labels to add to the job.\n name: An optional name for the job.\n namespace: An optional string signifying the Kubernetes namespace to use.\n pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n service_account_name: An optional string specifying which Kubernetes service account to use.\n stream_output: If set, stream output from the job to local standard output.\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n type: Literal[\"kubernetes-job\"] = Field(\n default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n )\n # shortcuts for the most common user-serviceable settings\n image: Optional[str] = Field(\n default=None,\n description=(\n \"The image reference of a container image to use for the job, for example,\"\n \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n \" the Kubernetes documentation and uses the latest version of Prefect by\"\n \" default, unless an image is already present in a provided job manifest.\"\n ),\n )\n namespace: Optional[str] = Field(\n default=None,\n description=(\n \"The Kubernetes namespace to use for this job. 
Defaults to 'default' \"\n \"unless a namespace is already present in a provided job manifest.\"\n ),\n )\n service_account_name: Optional[str] = Field(\n default=None, description=\"The Kubernetes service account to use for this job.\"\n )\n image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n default=None,\n description=\"The Kubernetes image pull policy to use for job containers.\",\n )\n\n # connection to a cluster\n cluster_config: Optional[KubernetesClusterConfig] = Field(\n default=None, description=\"The Kubernetes cluster config to use for this job.\"\n )\n\n # settings allowing full customization of the Job\n job: KubernetesManifest = Field(\n default_factory=lambda: KubernetesJob.base_job_manifest(),\n description=\"The base manifest for the Kubernetes Job.\",\n title=\"Base Job Manifest\",\n )\n customizations: JsonPatch = Field(\n default_factory=lambda: JsonPatch([]),\n description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n )\n\n # controls the behavior of execution\n job_watch_timeout_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"Number of seconds to wait for the job to complete before marking it as\"\n \" crashed. Defaults to `None`, which means no timeout will be enforced.\"\n ),\n )\n pod_watch_timeout_seconds: int = Field(\n default=60,\n description=\"Number of seconds to watch for pod creation before timing out.\",\n )\n stream_output: bool = Field(\n default=True,\n description=(\n \"If set, output will be streamed from the job to local standard output.\"\n ),\n )\n finished_job_ttl: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to retain jobs after completion. If set, finished\"\n \" jobs will be cleaned up by Kubernetes after the given delay. If None\"\n \" (default), jobs will need to be manually removed.\"\n ),\n )\n\n # internal-use only right now\n _api_dns_name: Optional[str] = None # Replaces 'localhost' in API URL\n\n _block_type_name = \"Kubernetes Job\"\n\n @validator(\"job\")\n def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n if missing_paths:\n raise ValueError(\n \"Job is missing required attributes at the following paths: \"\n f\"{', '.join(missing_paths)}\"\n )\n return value\n\n @validator(\"job\")\n def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n incompatible = sorted(\n [\n f\"{op['path']} must have value {op['value']!r}\"\n for op in patch\n if op[\"op\"] == \"replace\"\n ]\n )\n if incompatible:\n raise ValueError(\n \"Job has incompatible values for the following attributes: \"\n f\"{', '.join(incompatible)}\"\n )\n return value\n\n @validator(\"customizations\", pre=True)\n def cast_customizations_to_a_json_patch(\n cls, value: Union[List[Dict], JsonPatch, str]\n ) -> JsonPatch:\n if isinstance(value, list):\n return JsonPatch(value)\n elif isinstance(value, str):\n try:\n return JsonPatch(json.loads(value))\n except json.JSONDecodeError as exc:\n raise ValueError(\n f\"Unable to parse customizations as JSON: {value}. 
Please make sure\"\n \" that the provided value is a valid JSON string.\"\n ) from exc\n return value\n\n @root_validator\n def default_namespace(cls, values):\n job = values.get(\"job\")\n\n namespace = values.get(\"namespace\")\n job_namespace = job[\"metadata\"].get(\"namespace\") if job else None\n\n if not namespace and not job_namespace:\n values[\"namespace\"] = \"default\"\n\n return values\n\n @root_validator\n def default_image(cls, values):\n job = values.get(\"job\")\n image = values.get(\"image\")\n job_image = (\n job[\"spec\"][\"template\"][\"spec\"][\"containers\"][0].get(\"image\")\n if job\n else None\n )\n\n if not image and not job_image:\n values[\"image\"] = get_prefect_image_name()\n\n return values\n\n # Support serialization of the 'JsonPatch' type\n class Config:\n arbitrary_types_allowed = True\n json_encoders = {JsonPatch: lambda p: p.patch}\n\n def dict(self, *args, **kwargs) -> Dict:\n d = super().dict(*args, **kwargs)\n d[\"customizations\"] = self.customizations.patch\n return d\n\n @classmethod\n def base_job_manifest(cls) -> KubernetesManifest:\n \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n return {\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"metadata\": {\"labels\": {}},\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"parallelism\": 1,\n \"completions\": 1,\n \"restartPolicy\": \"Never\",\n \"containers\": [\n {\n \"name\": \"prefect-job\",\n \"env\": [],\n }\n ],\n }\n }\n },\n }\n\n # Note that we're using the yaml package to load both YAML and JSON files below.\n # This works because YAML is a strict superset of JSON:\n #\n # > The YAML 1.23 specification was published in 2009. Its primary focus was\n # > making YAML a strict superset of JSON. It also removed many of the problematic\n # > implicit typing recommendations.\n #\n # https://yaml.org/spec/1.2.2/#12-yaml-history\n\n @classmethod\n def job_from_file(cls, filename: str) -> KubernetesManifest:\n \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return yaml.load(f, yaml.SafeLoader)\n\n @classmethod\n def customize_from_file(cls, filename: str) -> JsonPatch:\n \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n @sync_compatible\n async def run(\n self,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> KubernetesJobResult:\n if not self.command:\n raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n self._configure_kubernetes_library_client()\n manifest = self.build_job()\n job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n # Indicate that the job has started\n if task_status is not None:\n task_status.started(pid)\n\n # Monitor the job until completion\n status_code = await run_sync_in_worker_thread(\n self._watch_job, job.metadata.name\n )\n return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n self._configure_kubernetes_library_client()\n job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n infrastructure_pid\n )\n\n if not job_namespace == self.namespace:\n raise InfrastructureNotAvailable(\n f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n f\"{job_namespace!r} but this block is configured to use \"\n 
f\"{self.namespace!r}.\"\n )\n\n current_cluster_uid = self._get_cluster_uid()\n if job_cluster_uid != current_cluster_uid:\n raise InfrastructureNotAvailable(\n f\"Unable to kill job {job_name!r}: The job is running on another \"\n \"cluster.\"\n )\n\n with self.get_batch_client() as batch_client:\n try:\n batch_client.delete_namespaced_job(\n name=job_name,\n namespace=job_namespace,\n grace_period_seconds=grace_seconds,\n # Foreground propagation deletes dependent objects before deleting owner objects.\n # This ensures that the pods are cleaned up before the job is marked as deleted.\n # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n propagation_policy=\"Foreground\",\n )\n except kubernetes.client.exceptions.ApiException as exc:\n if exc.status == 404:\n raise InfrastructureNotFound(\n f\"Unable to kill job {job_name!r}: The job was not found.\"\n ) from exc\n else:\n raise\n\n def preview(self):\n return yaml.dump(self.build_job())\n\n def get_corresponding_worker_type(self):\n return \"kubernetes\"\n\n async def generate_work_pool_base_job_template(self):\n from prefect.workers.utilities import (\n get_default_base_job_template_for_infrastructure_type,\n )\n\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n self.get_corresponding_worker_type()\n )\n assert (\n base_job_template is not None\n ), \"Failed to retrieve default base job template.\"\n for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n if key == \"command\":\n base_job_template[\"variables\"][\"properties\"][\"command\"][\n \"default\"\n ] = shlex.join(value)\n elif key in [\n \"type\",\n \"block_type_slug\",\n \"_block_document_id\",\n \"_block_document_name\",\n \"_is_anonymous\",\n \"job\",\n \"customizations\",\n ]:\n continue\n elif key == \"image_pull_policy\":\n base_job_template[\"variables\"][\"properties\"][\"image_pull_policy\"][\n \"default\"\n ] = value.value\n elif key == \"cluster_config\":\n base_job_template[\"variables\"][\"properties\"][\"cluster_config\"][\n \"default\"\n ] = {\n \"$ref\": {\n \"block_document_id\": str(self.cluster_config._block_document_id)\n }\n }\n elif key in base_job_template[\"variables\"][\"properties\"]:\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n else:\n self.logger.warning(\n f\"Variable {key!r} is not supported by Kubernetes work pools.\"\n \" Skipping.\"\n )\n\n custom_job_manifest = self.dict(exclude_unset=True, exclude_defaults=True).get(\n \"job\"\n )\n if custom_job_manifest:\n job_manifest = self.build_job()\n else:\n job_manifest = copy.deepcopy(\n base_job_template[\"job_configuration\"][\"job_manifest\"]\n )\n job_manifest = self.customizations.apply(job_manifest)\n base_job_template[\"job_configuration\"][\"job_manifest\"] = job_manifest\n\n return base_job_template\n\n def build_job(self) -> KubernetesManifest:\n \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n job_manifest = copy.copy(self.job)\n job_manifest = self._shortcut_customizations().apply(job_manifest)\n job_manifest = self.customizations.apply(job_manifest)\n return job_manifest\n\n @contextmanager\n def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n with kubernetes.client.ApiClient() as client:\n try:\n yield kubernetes.client.BatchV1Api(api_client=client)\n finally:\n client.rest_client.pool_manager.clear()\n\n @contextmanager\n def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n with kubernetes.client.ApiClient() as 
client:\n try:\n yield kubernetes.client.CoreV1Api(api_client=client)\n finally:\n client.rest_client.pool_manager.clear()\n\n def _get_infrastructure_pid(self, job: \"V1Job\") -> str:\n \"\"\"\n Generates a Kubernetes infrastructure PID.\n\n The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n \"\"\"\n cluster_uid = self._get_cluster_uid()\n pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n return pid\n\n def _parse_infrastructure_pid(\n self, infrastructure_pid: str\n ) -> Tuple[str, str, str]:\n \"\"\"\n Parse a Kubernetes infrastructure PID into its component parts.\n\n Returns a cluster UID, namespace, and job name.\n \"\"\"\n cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n return cluster_uid, namespace, job_name\n\n def _get_cluster_uid(self) -> str:\n \"\"\"\n Gets a unique id for the current cluster being used.\n\n There is no real unique identifier for a cluster. However, the `kube-system`\n namespace is immutable and has a persistence UID that we use instead.\n\n PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n namespace cannot be read e.g. when a cluster role cannot be created. If set,\n this variable will be used and we will not attempt to read the `kube-system`\n namespace.\n\n See https://github.com/kubernetes/kubernetes/issues/44954\n \"\"\"\n # Default to an environment variable\n env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n if env_cluster_uid:\n return env_cluster_uid\n\n # Read the UID from the cluster namespace\n with self.get_client() as client:\n namespace = client.read_namespace(\"kube-system\")\n cluster_uid = namespace.metadata.uid\n\n return cluster_uid\n\n def _configure_kubernetes_library_client(self) -> None:\n \"\"\"\n Set the correct kubernetes client configuration.\n\n WARNING: This action is not threadsafe and may override the configuration\n specified by another `KubernetesJob` instance.\n \"\"\"\n # TODO: Investigate returning a configured client so calls on other threads\n # will not invalidate the config needed here\n\n # if a k8s cluster block is provided to the flow runner, use that\n if self.cluster_config:\n self.cluster_config.configure_client()\n else:\n # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n # work, try to load the configuration from the local environment, allowing\n # any further ConfigExceptions to bubble up.\n try:\n kubernetes.config.load_incluster_config()\n except kubernetes.config.ConfigException:\n kubernetes.config.load_kube_config()\n\n def _shortcut_customizations(self) -> JsonPatch:\n \"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n image and namespace, which we offer as top-level parameters (with sensible\n default values)\"\"\"\n shortcuts = []\n\n if self.namespace:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/namespace\",\n \"value\": self.namespace,\n }\n )\n\n if self.image:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/image\",\n \"value\": self.image,\n }\n )\n\n shortcuts += [\n {\n \"op\": \"add\",\n \"path\": (\n f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n ),\n \"value\": self._slugify_label_value(value),\n }\n for key, value in self.labels.items()\n ]\n\n shortcuts += [\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/env/-\",\n \"value\": {\"name\": key, \"value\": value},\n }\n for key, value in self._get_environment_variables().items()\n ]\n\n if self.image_pull_policy:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n \"value\": self.image_pull_policy.value,\n }\n )\n\n if self.service_account_name:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/serviceAccountName\",\n \"value\": self.service_account_name,\n }\n )\n\n if self.finished_job_ttl is not None:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/ttlSecondsAfterFinished\",\n \"value\": self.finished_job_ttl,\n }\n )\n\n if self.command:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/args\",\n \"value\": self.command,\n }\n )\n\n if self.name:\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/generateName\",\n \"value\": self._slugify_name(self.name) + \"-\",\n }\n )\n else:\n # Generate name is required\n shortcuts.append(\n {\n \"op\": \"add\",\n \"path\": \"/metadata/generateName\",\n \"value\": (\n \"prefect-job-\"\n # We generate a name using a hash of the primary job settings\n + stable_hash(\n *self.command,\n *self.env.keys(),\n *[v for v in self.env.values() if v is not None],\n )\n + \"-\"\n ),\n }\n )\n\n return JsonPatch(shortcuts)\n\n def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n with self.get_batch_client() as batch_client:\n try:\n job = batch_client.read_namespaced_job(job_id, self.namespace)\n except kubernetes.client.exceptions.ApiException:\n self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n return None\n return job\n\n def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n \"\"\"Get the first running pod for a job.\"\"\"\n\n # Wait until we find a running pod for the job\n # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n watch = kubernetes.watch.Watch()\n self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n last_phase = None\n with self.get_client() as client:\n for event in watch.stream(\n func=client.list_namespaced_pod,\n namespace=self.namespace,\n label_selector=f\"job-name={job_name}\",\n timeout_seconds=self.pod_watch_timeout_seconds,\n ):\n phase = event[\"object\"].status.phase\n if phase != last_phase:\n self.logger.info(f\"Job {job_name!r}: Pod has status 
{phase!r}.\")\n\n if phase != \"Pending\":\n watch.stop()\n return event[\"object\"]\n\n last_phase = phase\n\n self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n def _watch_job(self, job_name: str) -> int:\n \"\"\"\n Watch a job.\n\n Return the final status code of the first container.\n \"\"\"\n self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n job = self._get_job(job_name)\n if not job:\n return -1\n\n pod = self._get_job_pod(job_name)\n if not pod:\n return -1\n\n # Calculate the deadline before streaming output\n deadline = (\n (time.monotonic() + self.job_watch_timeout_seconds)\n if self.job_watch_timeout_seconds is not None\n else None\n )\n\n if self.stream_output:\n with self.get_client() as client:\n logs = client.read_namespaced_pod_log(\n pod.metadata.name,\n self.namespace,\n follow=True,\n _preload_content=False,\n container=\"prefect-job\",\n )\n try:\n for log in logs.stream():\n print(log.decode().rstrip())\n\n # Check if we have passed the deadline and should stop streaming\n # logs\n remaining_time = (\n deadline - time.monotonic() if deadline else None\n )\n if deadline and remaining_time <= 0:\n break\n\n except Exception:\n self.logger.warning(\n (\n \"Error occurred while streaming logs - \"\n \"Job will continue to run but logs will \"\n \"no longer be streamed to stdout.\"\n ),\n exc_info=True,\n )\n\n with self.get_batch_client() as batch_client:\n # Check if the job is completed before beginning a watch\n job = batch_client.read_namespaced_job(\n name=job_name, namespace=self.namespace\n )\n completed = job.status.completion_time is not None\n\n while not completed:\n remaining_time = (\n math.ceil(deadline - time.monotonic()) if deadline else None\n )\n if deadline and remaining_time <= 0:\n self.logger.error(\n f\"Job {job_name!r}: Job did not complete within \"\n f\"timeout of {self.job_watch_timeout_seconds}s.\"\n )\n return -1\n\n watch = kubernetes.watch.Watch()\n # The kubernetes library will disable retries if the timeout kwarg is\n # present regardless of the value so we do not pass it unless given\n # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n timeout_seconds = (\n {\"timeout_seconds\": remaining_time} if deadline else {}\n )\n\n for event in watch.stream(\n func=batch_client.list_namespaced_job,\n field_selector=f\"metadata.name={job_name}\",\n namespace=self.namespace,\n **timeout_seconds,\n ):\n if event[\"type\"] == \"DELETED\":\n self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n completed = True\n elif event[\"object\"].status.completion_time:\n if not event[\"object\"].status.succeeded:\n # Job failed, exit while loop and return pod exit code\n self.logger.error(f\"Job {job_name!r}: Job failed.\")\n completed = True\n # Check if the job has reached its backoff limit\n # and stop watching if it has\n elif (\n event[\"object\"].spec.backoff_limit is not None\n and event[\"object\"].status.failed is not None\n and event[\"object\"].status.failed\n > event[\"object\"].spec.backoff_limit\n ):\n self.logger.error(\n f\"Job {job_name!r}: Job reached backoff limit.\"\n )\n completed = True\n # If the job has no backoff limit, check if it has failed\n # and stop watching if it has\n elif (\n not event[\"object\"].spec.backoff_limit\n and event[\"object\"].status.failed\n ):\n completed = True\n\n if completed:\n watch.stop()\n break\n\n with self.get_client() as core_client:\n # Get all pods for the job\n pods = 
core_client.list_namespaced_pod(\n namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n )\n # Get the status for only the most recently used pod\n pods.items.sort(\n key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n )\n most_recent_pod = pods.items[0] if pods.items else None\n first_container_status = (\n most_recent_pod.status.container_statuses[0]\n if most_recent_pod\n else None\n )\n if not first_container_status:\n self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n return -1\n\n # In some cases, such as spot instance evictions, the pod will be forcibly\n # terminated and not report a status correctly.\n elif (\n first_container_status.state is None\n or first_container_status.state.terminated is None\n or first_container_status.state.terminated.exit_code is None\n ):\n self.logger.error(\n f\"Could not determine exit code for {job_name!r}.\"\n \"Exit code will be reported as -1.\"\n \"First container status info did not report an exit code.\"\n f\"First container info: {first_container_status}.\"\n )\n return -1\n\n return first_container_status.state.terminated.exit_code\n\n def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n \"\"\"\n Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n cluster and return its name.\n \"\"\"\n with self.get_batch_client() as batch_client:\n job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n return job\n\n def _slugify_name(self, name: str) -> str:\n \"\"\"\n Slugify text for use as a name.\n\n Keeps only alphanumeric characters and dashes, and caps the length\n of the slug at 45 chars.\n\n The 45 character length allows room for the k8s utility\n \"generateName\" to generate a unique name from the slug while\n keeping the total length of a name below 63 characters, which is\n the limit for e.g. 
label names that follow RFC 1123 (hostnames) and\n RFC 1035 (domain names).\n\n Args:\n name: The name of the job\n\n Returns:\n the slugified job name\n \"\"\"\n slug = slugify(\n name,\n max_length=45, # Leave enough space for generateName\n regex_pattern=r\"[^a-zA-Z0-9-]+\",\n )\n\n # TODO: Handle the case that the name is an empty string after being\n # slugified.\n\n return slug\n\n def _slugify_label_key(self, key: str) -> str:\n \"\"\"\n Slugify text for use as a label key.\n\n Keys are composed of an optional prefix and name, separated by a slash (/).\n\n Keeps only alphanumeric characters, dashes, underscores, and periods.\n Limits the length of the label prefix to 253 characters.\n Limits the length of the label name to 63 characters.\n\n See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n Args:\n key: The label key\n\n Returns:\n The slugified label key\n \"\"\"\n if \"/\" in key:\n prefix, name = key.split(\"/\", maxsplit=1)\n else:\n prefix = None\n name = key\n\n name_slug = (\n slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n \"_-.\" # Must start or end with alphanumeric characters\n )\n or name\n )\n # Fallback to the original if we end up with an empty slug, this will allow\n # Kubernetes to throw the validation error\n\n if prefix:\n prefix_slug = (\n slugify(\n prefix,\n max_length=253,\n regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n ).strip(\"_-.\") # Must start or end with alphanumeric characters\n or prefix\n )\n\n return f\"{prefix_slug}/{name_slug}\"\n\n return name_slug\n\n def _slugify_label_value(self, value: str) -> str:\n \"\"\"\n Slugify text for use as a label value.\n\n Keeps only alphanumeric characters, dashes, underscores, and periods.\n Limits the total length of label text to below 63 characters.\n\n See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n Args:\n value: The text for the label\n\n Returns:\n The slugified value\n \"\"\"\n slug = (\n slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n \"_-.\" # Must start or end with alphanumeric characters\n )\n or value\n )\n # Fallback to the original if we end up with an empty slug, this will allow\n # Kubernetes to throw the validation error\n\n return slug\n\n def _get_environment_variables(self):\n # If the API URL has been set by the base environment rather than the by the\n # user, update the value to ensure connectivity when using a bridge network by\n # updating local connections to use the internal host\n env = {**self._base_environment(), **self.env}\n\n if (\n \"PREFECT_API_URL\" in env\n and \"PREFECT_API_URL\" not in self.env\n and self._api_dns_name\n ):\n env[\"PREFECT_API_URL\"] = (\n env[\"PREFECT_API_URL\"]\n .replace(\"localhost\", self._api_dns_name)\n .replace(\"127.0.0.1\", self._api_dns_name)\n )\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.base_job_manifest","title":"base_job_manifest
classmethod
","text":"Produces the bare minimum allowed Job manifest
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n return {\n \"apiVersion\": \"batch/v1\",\n \"kind\": \"Job\",\n \"metadata\": {\"labels\": {}},\n \"spec\": {\n \"template\": {\n \"spec\": {\n \"parallelism\": 1,\n \"completions\": 1,\n \"restartPolicy\": \"Never\",\n \"containers\": [\n {\n \"name\": \"prefect-job\",\n \"env\": [],\n }\n ],\n }\n }\n },\n }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.build_job","title":"build_job
","text":"Builds the Kubernetes Job Manifest
Source code inprefect/infrastructure/kubernetes.py
def build_job(self) -> KubernetesManifest:\n \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n job_manifest = copy.copy(self.job)\n job_manifest = self._shortcut_customizations().apply(job_manifest)\n job_manifest = self.customizations.apply(job_manifest)\n return job_manifest\n
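For illustration, a minimal sketch of building a manifest with the shortcut parameters (the specific namespace, image, and label values here are assumptions, not part of the reference source):

from prefect.infrastructure import KubernetesJob

# The shortcut parameters below are applied to the base manifest as an
# RFC 6902 patch by _shortcut_customizations() before user customizations.
job = KubernetesJob(
    namespace="prefect",
    image="prefecthq/prefect:2-latest",
    labels={"team": "data"},
)
manifest = job.build_job()
print(manifest["metadata"]["namespace"])  # "prefect"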
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.customize_from_file","title":"customize_from_file
classmethod
","text":"Load an RFC 6902 JSON patch from a YAML or JSON file.
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
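For example, a minimal sketch of loading a patch file (the file name and patch contents are illustrative assumptions):

from prefect.infrastructure import KubernetesJob

# patch.yaml might contain a list of RFC 6902 operations, e.g.:
#   - op: add
#     path: /spec/template/spec/containers/0/resources
#     value: {limits: {memory: 512Mi}}
customizations = KubernetesJob.customize_from_file("patch.yaml")

# The loaded JsonPatch is then applied on top of the generated manifest.
job = KubernetesJob(customizations=customizations)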
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.job_from_file","title":"job_from_file
classmethod
","text":"Load a Kubernetes Job manifest from a YAML or JSON file.
Source code inprefect/infrastructure/kubernetes.py
@classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n return yaml.load(f, yaml.SafeLoader)\n
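Similarly, a sketch of starting from a full manifest on disk (the file name is an illustrative assumption):

from prefect.infrastructure import KubernetesJob

# Use a hand-written Job manifest as the starting point instead of
# KubernetesJob.base_job_manifest(); shortcut parameters and
# customizations are still applied on top when the job is built.
manifest = KubernetesJob.job_from_file("job.yaml")
job = KubernetesJob(job=manifest)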
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult
","text":" Bases: InfrastructureResult
Contains information about the final state of a completed Kubernetes Job
Source code inprefect/infrastructure/kubernetes.py
class KubernetesJobResult(InfrastructureResult):\n \"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process
","text":" Bases: Infrastructure
Run a command in a new process.
Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
Attributes:
Name Type Description
command
A list of strings specifying the command used to start the flow run. In most cases you should not override this.
env
Environment variables to set for the new process.
labels
Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.
name
A name for the process. For display purposes only.
stream_output
bool
Whether to stream output to local stdout.
working_dir
Union[str, Path, None]
Working directory where the process should be opened. If not set, a temporary directory will be used.
Source code inprefect/infrastructure/process.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use the process worker instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Process(Infrastructure):\n \"\"\"\n Run a command in a new process.\n\n Current environment variables and Prefect settings will be included in the created\n process. Configured environment variables will override any current environment\n variables.\n\n Attributes:\n command: A list of strings specifying the command to run in the container to\n start the flow run. In most cases you should not override this.\n env: Environment variables to set for the new process.\n labels: Labels for the process. Labels are for metadata purposes only and\n cannot be attached to the process itself.\n name: A name for the process. For display purposes only.\n stream_output: Whether to stream output to local stdout.\n working_dir: Working directory where the process should be opened. If not set,\n a tmp directory will be used.\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/356e6766a91baf20e1d08bbe16e8b5aaef4d8643-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n type: Literal[\"process\"] = Field(\n default=\"process\", description=\"The type of infrastructure.\"\n )\n stream_output: bool = Field(\n default=True,\n description=(\n \"If set, output will be streamed from the process to local standard output.\"\n ),\n )\n working_dir: Union[str, Path, None] = Field(\n default=None,\n description=(\n \"If set, the process will open within the specified path as the working\"\n \" directory. Otherwise, a temporary directory will be created.\"\n ),\n ) # Underlying accepted types are str, bytes, PathLike[str], None\n\n @sync_compatible\n async def run(\n self,\n task_status: anyio.abc.TaskStatus = None,\n ) -> \"ProcessResult\":\n if not self.command:\n raise ValueError(\"Process cannot be run with empty command.\")\n\n _use_threaded_child_watcher()\n display_name = f\" {self.name!r}\" if self.name else \"\"\n\n # Open a subprocess to execute the flow run\n self.logger.info(f\"Opening process{display_name}...\")\n working_dir_ctx = (\n tempfile.TemporaryDirectory(suffix=\"prefect\")\n if not self.working_dir\n else contextlib.nullcontext(self.working_dir)\n )\n with working_dir_ctx as working_dir:\n self.logger.debug(\n f\"Process{display_name} running command: {' '.join(self.command)} in\"\n f\" {working_dir}\"\n )\n\n # We must add creationflags to a dict so it is only passed as a function\n # parameter on Windows, because the presence of creationflags causes\n # errors on Unix even if set to None\n kwargs: Dict[str, object] = {}\n if sys.platform == \"win32\":\n kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n process = await run_process(\n self.command,\n stream_output=self.stream_output,\n task_status=task_status,\n task_status_handler=_infrastructure_pid_from_process,\n env=self._get_environment_variables(),\n cwd=working_dir,\n **kwargs,\n )\n\n # Use the pid for display if no name was given\n display_name = display_name or f\" {process.pid}\"\n\n if process.returncode:\n help_message = None\n if process.returncode == -9:\n help_message = (\n \"This indicates that the process exited due to a SIGKILL signal. 
\"\n \"Typically, this is either caused by manual cancellation or \"\n \"high memory usage causing the operating system to \"\n \"terminate the process.\"\n )\n if process.returncode == -15:\n help_message = (\n \"This indicates that the process exited due to a SIGTERM signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n elif process.returncode == 247:\n help_message = (\n \"This indicates that the process was terminated due to high \"\n \"memory usage.\"\n )\n elif (\n sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n ):\n help_message = (\n \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n\n self.logger.error(\n f\"Process{display_name} exited with status code: {process.returncode}\"\n + (f\"; {help_message}\" if help_message else \"\")\n )\n else:\n self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n return ProcessResult(\n status_code=process.returncode, identifier=str(process.pid)\n )\n\n async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n if hostname != socket.gethostname():\n raise InfrastructureNotAvailable(\n f\"Unable to kill process {pid!r}: The process is running on a different\"\n f\" host {hostname!r}.\"\n )\n\n # In a non-windows environment first send a SIGTERM, then, after\n # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n if sys.platform == \"win32\":\n try:\n os.kill(pid, signal.CTRL_BREAK_EVENT)\n except (ProcessLookupError, WindowsError):\n raise InfrastructureNotFound(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n else:\n try:\n os.kill(pid, signal.SIGTERM)\n except ProcessLookupError:\n raise InfrastructureNotFound(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n\n # Throttle how often we check if the process is still alive to keep\n # from making too many system calls in a short period of time.\n check_interval = max(grace_seconds / 10, 1)\n\n with anyio.move_on_after(grace_seconds):\n while True:\n await anyio.sleep(check_interval)\n\n # Detect if the process is still alive. 
If not do an early\n # return as the process respected the SIGTERM from above.\n try:\n os.kill(pid, 0)\n except ProcessLookupError:\n return\n\n try:\n os.kill(pid, signal.SIGKILL)\n except OSError:\n # We shouldn't ever end up here, but it's possible that the\n # process ended right after the check above.\n return\n\n def preview(self):\n environment = self._get_environment_variables(include_os_environ=False)\n return \" \\\\\\n\".join(\n [f\"{key}={value}\" for key, value in environment.items()]\n + [\" \".join(self.command)]\n )\n\n def _get_environment_variables(self, include_os_environ: bool = True):\n os_environ = os.environ if include_os_environ else {}\n # The base environment must override the current environment or\n # the Prefect settings context may not be respected\n env = {**os_environ, **self._base_environment(), **self.env}\n\n # Drop null values allowing users to \"unset\" variables\n return {key: value for key, value in env.items() if value is not None}\n\n def _base_flow_run_command(self):\n return [get_sys_executable(), \"-m\", \"prefect.engine\"]\n\n def get_corresponding_worker_type(self):\n return \"process\"\n\n async def generate_work_pool_base_job_template(self):\n from prefect.workers.utilities import (\n get_default_base_job_template_for_infrastructure_type,\n )\n\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n self.get_corresponding_worker_type(),\n )\n assert (\n base_job_template is not None\n ), \"Failed to generate default base job template for Process worker.\"\n for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n if key == \"command\":\n base_job_template[\"variables\"][\"properties\"][\"command\"][\n \"default\"\n ] = shlex.join(value)\n elif key in [\n \"type\",\n \"block_type_slug\",\n \"_block_document_id\",\n \"_block_document_name\",\n \"_is_anonymous\",\n ]:\n continue\n elif key in base_job_template[\"variables\"][\"properties\"]:\n base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n else:\n self.logger.warning(\n f\"Variable {key!r} is not supported by Process work pools.\"\n \" Skipping.\"\n )\n\n return base_job_template\n
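As a usage sketch (note the class is deprecated in favor of the process worker, per the decorator above; the command shown is an arbitrary assumption):

from prefect.infrastructure import Process

# run() is sync-compatible, so it can be called directly from sync code;
# output is streamed to stdout because stream_output defaults to True.
process = Process(command=["python", "-c", "print('hello')"])
result = process.run()
assert result.status_code == 0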
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult
","text":" Bases: InfrastructureResult
Contains information about the final state of a completed process
Source code inprefect/infrastructure/process.py
class ProcessResult(InfrastructureResult):\n \"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/logging/","title":"Logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging","title":"prefect.logging
","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
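For example, a brief sketch (the logger name is an illustrative assumption):

from prefect.logging import get_logger

# Returns a child of the "prefect" logger with an API-key obfuscation
# filter attached; the resulting logger is named "prefect.my_subsystem".
logger = get_logger("my_subsystem")
logger.debug("internal detail")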
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Default
context
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type Description
RuntimeError
If no context can be found
Source code inprefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
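For example, a minimal sketch of retrieving run loggers inside a flow and a task (the flow and task names are illustrative assumptions):

from prefect import flow, task, get_run_logger

@task
def my_task():
    # Named "prefect.task_runs"; records carry task and flow run metadata.
    get_run_logger().info("inside a task run")

@flow
def my_flow():
    # Named "prefect.flow_runs"; extra kwargs are attached to each record.
    logger = get_run_logger(team="data")
    logger.info("inside a flow run")
    my_task()

if __name__ == "__main__":
    my_flow()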
","tags":["Python API","logging"]},{"location":"api-ref/prefect/manifests/","title":"prefect.manifests","text":"","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests","title":"prefect.manifests
","text":"Manifests are portable descriptions of one or more workflows within a given directory structure.
They are the foundational building blocks for defining Flow Deployments.
","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests.Manifest","title":"Manifest
","text":" Bases: BaseModel
A JSON representation of a flow.
Source code inprefect/manifests.py
class Manifest(BaseModel):\n \"\"\"A JSON representation of a flow.\"\"\"\n\n flow_name: str = Field(default=..., description=\"The name of the flow.\")\n import_path: str = Field(\n default=..., description=\"The relative import path for the flow.\"\n )\n parameter_openapi_schema: ParameterSchema = Field(\n default=..., description=\"The OpenAPI schema of the flow's parameters.\"\n )\n
","tags":["Python API","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers
","text":"Serializer implementations for converting objects to bytes and bytes to objects.
All serializers are based on the Serializer
class and include a type
string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer
with the string \"json\". Some serializers support additional settings for configuration of serialization. These are stored on the instance so the same settings can be used to load saved objects.
All serializers must implement dumps
and loads
which convert objects to bytes and bytes to an object respectively.
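A short sketch of the type-string lookup described above (the dispatch behavior follows from the add_type_dispatch decorator on the base class shown below; treat this as illustrative rather than exhaustive):

from prefect.serializers import JSONSerializer, Serializer

# Instantiating the base class with a type string dispatches to the
# registered subclass for that type.
serializer = Serializer(type="json")
assert isinstance(serializer, JSONSerializer)

blob = serializer.dumps({"a": 1})  # bytes
assert serializer.loads(blob) == {"a": 1}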
CompressedJSONSerializer
","text":" Bases: CompressedSerializer
A compressed serializer preconfigured to use the json serializer.
Source code inprefect/serializers.py
class CompressedJSONSerializer(CompressedSerializer):\n \"\"\"\n A compressed serializer preconfigured to use the json serializer.\n \"\"\"\n\n type: Literal[\"compressed/json\"] = \"compressed/json\"\n serializer: Serializer = pydantic.Field(default_factory=JSONSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer
","text":" Bases: CompressedSerializer
A compressed serializer preconfigured to use the pickle serializer.
Source code inprefect/serializers.py
class CompressedPickleSerializer(CompressedSerializer):\n \"\"\"\n A compressed serializer preconfigured to use the pickle serializer.\n \"\"\"\n\n type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n serializer: Serializer = pydantic.Field(default_factory=PickleSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer
","text":" Bases: Serializer
Wraps another serializer, compressing its output. Uses lzma
by default. See compressionlib
for using alternative libraries.
Attributes:
Name Type Description
serializer
Serializer
The serializer to use before compression.
compressionlib
str
The import path of a compression module to use. Must have methods compress(bytes) -> bytes
and decompress(bytes) -> bytes
.
level
str
If not null, the level of compression to pass to compress
.
prefect/serializers.py
class CompressedSerializer(Serializer):\n \"\"\"\n Wraps another serializer, compressing its output.\n Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n Attributes:\n serializer: The serializer to use before compression.\n compressionlib: The import path of a compression module to use.\n Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n level: If not null, the level of compression to pass to `compress`.\n \"\"\"\n\n type: Literal[\"compressed\"] = \"compressed\"\n\n serializer: Serializer\n compressionlib: str = \"lzma\"\n\n @pydantic.validator(\"serializer\", pre=True)\n def cast_type_names_to_serializers(cls, value):\n if isinstance(value, str):\n return Serializer(type=value)\n return value\n\n @pydantic.validator(\"compressionlib\")\n def check_compressionlib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has compress/decompress\n methods.\n \"\"\"\n try:\n compressor = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested compression library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(compressor, \"compress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'compress' method.\"\n )\n\n if not callable(getattr(compressor, \"decompress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'decompress' method.\"\n )\n\n return value\n\n def dumps(self, obj: Any) -> bytes:\n blob = self.serializer.dumps(obj)\n compressor = from_qualified_name(self.compressionlib)\n return base64.encodebytes(compressor.compress(blob))\n\n def loads(self, blob: bytes) -> Any:\n compressor = from_qualified_name(self.compressionlib)\n uncompressed = compressor.decompress(base64.decodebytes(blob))\n return self.serializer.loads(uncompressed)\n
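For example, a minimal round-trip sketch (zlib is used here only to illustrate compressionlib; any module with compress/decompress methods works):

from prefect.serializers import CompressedSerializer, JSONSerializer

serializer = CompressedSerializer(
    serializer=JSONSerializer(),  # serialize first...
    compressionlib="zlib",        # ...then compress and base64-encode
)
data = {"numbers": list(range(100))}
blob = serializer.dumps(data)
assert serializer.loads(blob) == data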
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer.check_compressionlib","title":"check_compressionlib
","text":"Check that the given pickle library is importable and has compress/decompress methods.
Source code inprefect/serializers.py
@pydantic.validator(\"compressionlib\")\ndef check_compressionlib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has compress/decompress\n methods.\n \"\"\"\n try:\n compressor = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested compression library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(compressor, \"compress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'compress' method.\"\n )\n\n if not callable(getattr(compressor, \"decompress\", None)):\n raise ValueError(\n f\"Compression library at {value!r} does not have a 'decompress' method.\"\n )\n\n return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer
","text":" Bases: Serializer
Serializes data to JSON.
Input types must be compatible with the stdlib json library.
Wraps the json
library to serialize to UTF-8 bytes instead of string types.
prefect/serializers.py
class JSONSerializer(Serializer):\n \"\"\"\n Serializes data to JSON.\n\n Input types must be compatible with the stdlib json library.\n\n Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n \"\"\"\n\n type: Literal[\"json\"] = \"json\"\n jsonlib: str = \"json\"\n object_encoder: Optional[str] = pydantic.Field(\n default=\"prefect.serializers.prefect_json_object_encoder\",\n description=(\n \"An optional callable to use when serializing objects that are not \"\n \"supported by the JSON encoder. By default, this is set to a callable that \"\n \"adds support for all types supported by Pydantic.\"\n ),\n )\n object_decoder: Optional[str] = pydantic.Field(\n default=\"prefect.serializers.prefect_json_object_decoder\",\n description=(\n \"An optional callable to use when deserializing objects. This callable \"\n \"is passed each dictionary encountered during JSON deserialization. \"\n \"By default, this is set to a callable that deserializes content created \"\n \"by our default `object_encoder`.\"\n ),\n )\n dumps_kwargs: dict = pydantic.Field(default_factory=dict)\n loads_kwargs: dict = pydantic.Field(default_factory=dict)\n\n @pydantic.validator(\"dumps_kwargs\")\n def dumps_kwargs_cannot_contain_default(cls, value):\n # `default` is set by `object_encoder`. A user provided callable would make this\n # class unserializable anyway.\n if \"default\" in value:\n raise ValueError(\n \"`default` cannot be provided. Use `object_encoder` instead.\"\n )\n return value\n\n @pydantic.validator(\"loads_kwargs\")\n def loads_kwargs_cannot_contain_object_hook(cls, value):\n # `object_hook` is set by `object_decoder`. A user provided callable would make\n # this class unserializable anyway.\n if \"object_hook\" in value:\n raise ValueError(\n \"`object_hook` cannot be provided. Use `object_decoder` instead.\"\n )\n return value\n\n def dumps(self, data: Any) -> bytes:\n json = from_qualified_name(self.jsonlib)\n kwargs = self.dumps_kwargs.copy()\n if self.object_encoder:\n kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n result = json.dumps(data, **kwargs)\n if isinstance(result, str):\n # The standard library returns str but others may return bytes directly\n result = result.encode()\n return result\n\n def loads(self, blob: bytes) -> Any:\n json = from_qualified_name(self.jsonlib)\n kwargs = self.loads_kwargs.copy()\n if self.object_decoder:\n kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n return json.loads(blob.decode(), **kwargs)\n
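A minimal round-trip sketch:

from prefect.serializers import JSONSerializer

serializer = JSONSerializer()
blob = serializer.dumps({"x": 1, "y": [1, 2, 3]})
assert isinstance(blob, bytes)  # UTF-8 bytes, not str
assert serializer.loads(blob) == {"x": 1, "y": [1, 2, 3]}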
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer
","text":" Bases: Serializer
Serializes objects using the pickle protocol.
cloudpickle
by default. See picklelib
for using alternative libraries.prefect/serializers.py
class PickleSerializer(Serializer):\n \"\"\"\n Serializes objects using the pickle protocol.\n\n - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n - Stores the version of the pickle library to check for compatibility during\n deserialization.\n - Wraps pickles in base64 for safe transmission.\n \"\"\"\n\n type: Literal[\"pickle\"] = \"pickle\"\n\n picklelib: str = \"cloudpickle\"\n picklelib_version: str = None\n\n @pydantic.validator(\"picklelib\")\n def check_picklelib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has dumps/loads methods.\n \"\"\"\n try:\n pickler = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested pickle library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(pickler, \"dumps\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n )\n\n if not callable(getattr(pickler, \"loads\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'loads' method.\"\n )\n\n return value\n\n @pydantic.root_validator\n def check_picklelib_version(cls, values):\n \"\"\"\n Infers a default value for `picklelib_version` if null or ensures it matches\n the version retrieved from the `pickelib`.\n \"\"\"\n picklelib = values.get(\"picklelib\")\n picklelib_version = values.get(\"picklelib_version\")\n\n if not picklelib:\n raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n pickler = from_qualified_name(picklelib)\n pickler_version = getattr(pickler, \"__version__\", None)\n\n if not picklelib_version:\n values[\"picklelib_version\"] = pickler_version\n elif picklelib_version != pickler_version:\n warnings.warn(\n (\n f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n f\" environment but {picklelib_version} was requested. This may\"\n \" cause the serializer to fail.\"\n ),\n RuntimeWarning,\n stacklevel=3,\n )\n\n return values\n\n def dumps(self, obj: Any) -> bytes:\n pickler = from_qualified_name(self.picklelib)\n blob = pickler.dumps(obj)\n return base64.encodebytes(blob)\n\n def loads(self, blob: bytes) -> Any:\n pickler = from_qualified_name(self.picklelib)\n return pickler.loads(base64.decodebytes(blob))\n
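A minimal round-trip sketch (the lambda is used only to show that the default cloudpickle backend handles objects the stdlib pickler cannot):

from prefect.serializers import PickleSerializer

serializer = PickleSerializer()
blob = serializer.dumps(lambda x: x + 1)  # base64-wrapped pickle bytes
assert serializer.loads(blob)(1) == 2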
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib","title":"check_picklelib
","text":"Check that the given pickle library is importable and has dumps/loads methods.
Source code inprefect/serializers.py
@pydantic.validator(\"picklelib\")\ndef check_picklelib(cls, value):\n \"\"\"\n Check that the given pickle library is importable and has dumps/loads methods.\n \"\"\"\n try:\n pickler = from_qualified_name(value)\n except (ImportError, AttributeError) as exc:\n raise ValueError(\n f\"Failed to import requested pickle library: {value!r}.\"\n ) from exc\n\n if not callable(getattr(pickler, \"dumps\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n )\n\n if not callable(getattr(pickler, \"loads\", None)):\n raise ValueError(\n f\"Pickle library at {value!r} does not have a 'loads' method.\"\n )\n\n return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib_version","title":"check_picklelib_version
","text":"Infers a default value for picklelib_version
if null or ensures it matches the version retrieved from the picklelib
.
prefect/serializers.py
@pydantic.root_validator\ndef check_picklelib_version(cls, values):\n \"\"\"\n Infers a default value for `picklelib_version` if null or ensures it matches\n the version retrieved from the `pickelib`.\n \"\"\"\n picklelib = values.get(\"picklelib\")\n picklelib_version = values.get(\"picklelib_version\")\n\n if not picklelib:\n raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n pickler = from_qualified_name(picklelib)\n pickler_version = getattr(pickler, \"__version__\", None)\n\n if not picklelib_version:\n values[\"picklelib_version\"] = pickler_version\n elif picklelib_version != pickler_version:\n warnings.warn(\n (\n f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n f\" environment but {picklelib_version} was requested. This may\"\n \" cause the serializer to fail.\"\n ),\n RuntimeWarning,\n stacklevel=3,\n )\n\n return values\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer
","text":" Bases: BaseModel
, Generic[D]
, ABC
A serializer that can encode objects of type 'D' into bytes.
Source code inprefect/serializers.py
@add_type_dispatch\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n \"\"\"\n A serializer that can encode objects of type 'D' into bytes.\n \"\"\"\n\n type: str\n\n @abc.abstractmethod\n def dumps(self, obj: D) -> bytes:\n \"\"\"Encode the object into a blob of bytes.\"\"\"\n\n @abc.abstractmethod\n def loads(self, blob: bytes) -> D:\n \"\"\"Decode the blob of bytes into an object.\"\"\"\n\n class Config:\n extra = \"forbid\"\n
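A toy subclass sketch, following the same pattern as the concrete serializers above (the type string chosen here is an assumption):

from typing import Literal

from prefect.serializers import Serializer

class UTF8Serializer(Serializer):
    """Encodes strings to UTF-8 bytes and back."""

    type: Literal["utf8-example"] = "utf8-example"

    def dumps(self, obj: str) -> bytes:
        return obj.encode("utf-8")

    def loads(self, blob: bytes) -> str:
        return blob.decode("utf-8")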
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps
abstractmethod
","text":"Encode the object into a blob of bytes.
Source code inprefect/serializers.py
@abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n \"\"\"Encode the object into a blob of bytes.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads
abstractmethod
","text":"Decode the blob of bytes into an object.
Source code inprefect/serializers.py
@abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n \"\"\"Decode the blob of bytes into an object.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder
","text":"JSONDecoder.object_hook
for decoding objects from JSON when previously encoded with prefect_json_object_encoder
prefect/serializers.py
def prefect_json_object_decoder(result: dict):\n \"\"\"\n `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n with `prefect_json_object_encoder`\n \"\"\"\n if \"__class__\" in result:\n return pydantic.parse_obj_as(\n from_qualified_name(result[\"__class__\"]), result[\"data\"]\n )\n elif \"__exc_type__\" in result:\n return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n else:\n return result\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder
","text":"JSONEncoder.default
for encoding objects into JSON with extended type support.
Raises a TypeError
to fallback on other encoders on failure.
prefect/serializers.py
def prefect_json_object_encoder(obj: Any) -> Any:\n \"\"\"\n `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n Raises a `TypeError` to fallback on other encoders on failure.\n \"\"\"\n if isinstance(obj, BaseException):\n return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n else:\n return {\n \"__class__\": to_qualified_name(obj.__class__),\n \"data\": pydantic_encoder(obj),\n }\n
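Together with prefect_json_object_decoder, this enables round trips of otherwise unserializable objects, e.g.:

import json

from prefect.serializers import (
    prefect_json_object_decoder,
    prefect_json_object_encoder,
)

# Exceptions survive the round trip via the __exc_type__ envelope.
blob = json.dumps(ValueError("boom"), default=prefect_json_object_encoder)
restored = json.loads(blob, object_hook=prefect_json_object_decoder)
assert isinstance(restored, ValueError) and str(restored) == "boom"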
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings
","text":"Prefect settings management.
Each setting is defined as a Setting
type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.
All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings
. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.
The current instance of Settings
being used by the application is stored in a SettingsContext
model which allows each instance of the Settings
class to be accessed in an async-safe manner.
Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution where differing settings may be used concurrently in the same process and during testing where we need to override settings to ensure their value is respected as intended.
The SettingsContext
is set when the prefect
module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings
object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context
for details on determining the active profile.
Another SettingsContext
may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.
Generally, settings should be accessed with SETTING_VARIABLE.value()
which will pull the current Settings
instance from the current SettingsContext
and retrieve the value of the relevant setting.
Accessing a setting's value will also call the Setting.value_callback
which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
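For example:

from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_LEVEL

# value() reads from the Settings instance in the active SettingsContext
# and runs any value_callback attached to the setting.
api_url = PREFECT_API_URL.value()
log_level = PREFECT_LOGGING_LEVEL.value()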
PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path)
module-attribute
","text":"Prefect's home directory. Defaults to ~/.prefect
. This directory may be created automatically when required.
PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='')
module-attribute
","text":"Modules for Prefect to import when Prefect is imported.
Values should be separated by commas, e.g. my_module,my_other_module
. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object
. If a callable object is provided, it will be called with no arguments on import.
PREFECT_DEBUG_MODE = Setting(bool, default=False)
module-attribute
","text":"If True
, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False
.
PREFECT_CLI_COLORS = Setting(bool, default=True)
module-attribute
","text":"If True
, use colors in CLI output. If False
, output will not include color codes. Defaults to True
.
PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None)
module-attribute
","text":"If True
, use interactive prompts in CLI commands. If False
, no interactive prompts will be used. If None
, the value will be dynamically determined based on the presence of an interactive-enabled terminal.
PREFECT_CLI_WRAP_LINES = Setting(bool, default=True)
module-attribute
","text":"If True
, wrap text by inserting new lines in long lines in CLI output. If False
, output will not be wrapped. Defaults to True
.
PREFECT_TEST_MODE = Setting(bool, default=False)
module-attribute
","text":"If True
, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False
.
PREFECT_UNIT_TEST_MODE = Setting(bool, default=False)
module-attribute
","text":"This variable only exists to facilitate unit testing. If True
, code is executing in a unit test context. Defaults to False
.
PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode)
module-attribute
","text":"This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE
is not set, None
is returned.
PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False)
module-attribute
","text":"If True
, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.
PREFECT_API_URL = Setting(str, default=None)
module-attribute
","text":"If provided, the URL of a hosted Prefect API. Defaults to None
.
When using Prefect Cloud, this will include an account and workspace.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SILENCE_API_URL_MISCONFIGURATION","title":"PREFECT_SILENCE_API_URL_MISCONFIGURATION = Setting(bool, default=False)
module-attribute
","text":"If True
, disable the warning when a user accidentally misconfigure its PREFECT_API_URL
Sometimes when a user manually set PREFECT_API_URL
to a custom url,reverse-proxy for example, we would like to silence this warning so we will set it to FALSE
.
PREFECT_API_KEY = Setting(str, default=None, is_secret=True)
module-attribute
","text":"API key used to authenticate with a the Prefect API. Defaults to None
.
PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True)
module-attribute
","text":"If true, enable support for HTTP/2 for communicating with an API.
If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5)
module-attribute
","text":"The maximum number of retries to perform on failed HTTP requests.
Defaults to 5. Set to 0 to disable retries.
See PREFECT_CLIENT_RETRY_EXTRA_CODES
for details on which HTTP status codes are retried.
PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2)
module-attribute
","text":"A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.
Set to 0 to disable jitter. See clamped_poisson_interval
for details on the how jitter can affect retry lengths.
PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range)
module-attribute
","text":"A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429, 502 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url)
module-attribute
","text":"API URL for Prefect Cloud. Used for authentication.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.')
module-attribute
","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url)
module-attribute
","text":"The URL for the UI. By default, this is inferred from the PREFECT_API_URL.
When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None
.
PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url)
module-attribute
","text":"The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.
PREFECT_UI_URL will be workspace specific and will be usable in the open source too.In contrast, this value is only valid for Cloud and will not include the workspace.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=60.0)
module-attribute
","text":"The default timeout for requests to the API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True)
module-attribute
","text":"If enabled, warn on usage of experimental features.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a profiles configuration files.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle')
module-attribute
","text":"The default serializer to use when not otherwise specified.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False)
module-attribute
","text":"The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False)
module-attribute
","text":"If True
, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False
.
PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0)
module-attribute
","text":"This value sets the default number of retries for all tasks. This value does not overwrite individually set retries values on tasks
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0)
module-attribute
","text":"This value sets the default number of retries for all flows. This value does not overwrite individually set retries values on a flow
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0)
module-attribute
","text":"This value sets the retry delay seconds for all flows. This value does not overwrite individually set retry delay seconds
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0)
module-attribute
","text":"This value sets the default retry delay seconds for all tasks. This value does not overwrite individually set retry delay seconds
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS","title":"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS = Setting(int, default=30)
module-attribute
","text":"The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a block storage directory to store things in.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to the memo store file.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True)
module-attribute
","text":"Controls whether or not block auto-registration on start up should be memoized. Setting to False may result in slower server start up times.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level)
module-attribute
","text":"The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level)
module-attribute
","text":"The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING')
module-attribute
","text":"The default logging level for the Prefect API server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME))
module-attribute
","text":"The path to a custom YAML logging configuration file. If no file is found, the default logging.yml
is used. Defaults to a logging.yml in the Prefect home directory.
PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers)
module-attribute
","text":"Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False)
module-attribute
","text":"If set, print
statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.
PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Toggles sending logs to the API. If False
, logs sent to the API log handler will not be sent to the API.
PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0)
module-attribute
","text":"The number of seconds between batched writes of logs to the API.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000)
module-attribute
","text":"The maximum size in bytes for a batch of logs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000)
module-attribute
","text":"The maximum size in bytes for a single log.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn')
module-attribute
","text":"Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.
All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.
The following options are available: \"warn\" (log a warning message), \"error\" (raise an error), and \"ignore\" (do not log a warning or raise an error).
PREFECT_SQLALCHEMY_POOL_SIZE = Setting(int, default=None)
module-attribute
","text":"Controls connection pool size when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy pool size will be used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_MAX_OVERFLOW","title":"PREFECT_SQLALCHEMY_MAX_OVERFLOW = Setting(int, default=None)
module-attribute
","text":"Controls maximum overflow of the connection pool when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy maximum overflow value will be used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True)
module-attribute
","text":"Whether to style console logs with color.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False)
module-attribute
","text":"Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]
. However, if enabled, strings that contain square brackets may be misinterpreted and produce incomplete output, e.g. DROP TABLE [dbo].[SomeTable];
outputs DROP TABLE .[SomeTable];
.
PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD = Setting(float, default=10.0)
module-attribute
","text":"Threshold time in seconds for logging a warning if task parameter introspection exceeds this duration. Parameter introspection can be a significant performance hit when the parameter is a large collection object, e.g. a large dictionary or DataFrame, and each element needs to be inspected. See prefect.utilities.annotations.quote
for more details. Defaults to 10.0
. Set to 0
to disable logging the warning.
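A minimal sketch of skipping introspection for a large argument with quote (the task and flow names are illustrative):
from prefect import flow, task\nfrom prefect.utilities.annotations import quote\n\n@task\ndef process(df):\n    return len(df)\n\n@flow\ndef my_flow(big_df):\n    # wrapping the argument in quote() bypasses parameter introspection\n    return process(quote(big_df))\n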
PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15)
module-attribute
","text":"The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15
.
PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15)
module-attribute
","text":"Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15
.
PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False)
module-attribute
","text":"Determines whether State.result()
fetches results automatically or not. In Prefect 2.6.0, the State.result()
method was updated to be async to facilitate automatic retrieval of results from storage which means when writing async code you must await
the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt into this per call by passing fetch=True
or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows. This setting does not affect retrieval of results when using Future.result()
.
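A sketch of the per-call opt-in from async code (the flow and task names are illustrative):
from prefect import flow, task\n\n@task\nasync def add(x, y):\n    return x + y\n\n@flow\nasync def my_flow():\n    state = await add(1, 2, return_state=True)\n    # await the call and pass fetch=True to retrieve the result from storage\n    return await state.result(fetch=True)\n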
PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True)
module-attribute
","text":"If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True)
module-attribute
","text":"Password to template into the PREFECT_API_DATABASE_CONNECTION_URL
. This is useful if the password must be provided separately from the connection URL. To use this setting, you must template it into your connection URL; see PREFECT_API_DATABASE_CONNECTION_URL for an example.
PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True)
module-attribute
","text":"A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite
and for Postgres use postgresql+asyncpg
.
SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false
, which will allow the database to be accessed by multiple threads. Note that in-memory databases cannot be accessed from multiple processes and should only be used for simple tests.
Defaults to a sqlite database stored in the Prefect home directory.
If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD
setting. For example:
PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False)
module-attribute
","text":"If True
, SQLAlchemy will log all SQL issued to the database. Defaults to False
.
PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True)
module-attribute
","text":"If True
, the database will be upgraded on application creation. If False
, the database will need to be upgraded manually.
PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0)
module-attribute
","text":"A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5)
module-attribute
","text":"A connection timeout, in seconds, applied to database connections. Defaults to 5
.
PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60)
module-attribute
","text":"The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60
.
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100)
module-attribute
","text":"The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds
until it has visited every deployment once. Defaults to 100
.
PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100)
module-attribute
","text":"The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that runs may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time
. Defaults to 100
.
PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3)
module-attribute
","text":"The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that runs may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time
. Defaults to 3
.
PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100))
module-attribute
","text":"The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs
: if a flow runs once a month and scheduler_max_scheduled_time
is three months, then only three runs will be scheduled. Defaults to 100 days (8640000
seconds).
PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1))
module-attribute
","text":"The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs
: if a flow runs every hour and scheduler_min_scheduled_time
is three hours, then three runs will be scheduled even if scheduler_min_runs
is 1. Defaults to 1 hour (3600
seconds).
PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500)
module-attribute
","text":"The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500
.
PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5)
module-attribute
","text":"The late runs service will look for runs to mark as late this often. Defaults to 5
.
PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=5))
module-attribute
","text":"The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 5
seconds.
PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5)
module-attribute
","text":"The pause expiration service will look for runs to mark as failed this often. Defaults to 5
.
PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20)
module-attribute
","text":"The cancellation cleanup service will look non-terminal tasks and subflows this often. Defaults to 20
.
PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200)
module-attribute
","text":"The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter
.
PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1')
module-attribute
","text":"The API's host address (defaults to 127.0.0.1
).
PREFECT_SERVER_API_PORT = Setting(int, default=4200)
module-attribute
","text":"The API's port address (defaults to 4200
).
PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5)
module-attribute
","text":"The API's keep alive timeout (defaults to 5
). Refer to https://www.uvicorn.org/settings/#timeouts for details.
When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.
Note this setting only applies when calling prefect server start
; if hosting the API with another tool you will need to configure this there instead.
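For example, assuming a load balancer with a 60-second idle timeout, you might start the server with a larger keep-alive value:
PREFECT_SERVER_API_KEEPALIVE_TIMEOUT=75 prefect server start\n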
PREFECT_UI_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to serve the Prefect UI.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url)
module-attribute
","text":"The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL
if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST
and PREFECT_SERVER_API_PORT
. If providing a custom value, the aforementioned settings may be templated into the given string.
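As a hypothetical example of templating the aforementioned settings into a custom value (the /api path here is illustrative):
PREFECT_UI_API_URL='http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api'\n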
PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000)
module-attribute
","text":"The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side, it must be set on the server.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True)
module-attribute
","text":"Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES = Setting(int, default=10000)
module-attribute
","text":"The maximum size of a flow run graph on the v2 API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS = Setting(int, default=10000)
module-attribute
","text":"The maximum number of artifacts to show on a flow run graph on the v2 API
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable artifacts on the flow run graph.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable flow run states on the flow run graph.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect work pools.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect work pools are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect work pools.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect work pools are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect workers.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect workers are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_VISUALIZE","title":"PREFECT_EXPERIMENTAL_WARN_VISUALIZE = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect visualize is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental enhanced flow run cancellation.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_DEPLOYMENT_PARAMETERS","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_DEPLOYMENT_PARAMETERS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable enhanced deployment parameters.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental enhanced flow run cancellation is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable deployment status in the UI
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when deployment status is used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable flow run input.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable flow run input.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_PROCESS_LIMIT","title":"PREFECT_RUNNER_PROCESS_LIMIT = Setting(int, default=5)
module-attribute
","text":"Maximum number of processes a runner will execute in parallel.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_POLL_FREQUENCY","title":"PREFECT_RUNNER_POLL_FREQUENCY = Setting(int, default=10)
module-attribute
","text":"Number of seconds a runner should wait between queries for scheduled work.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE","title":"PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE = Setting(int, default=2)
module-attribute
","text":"Number of missed polls before a runner is considered unhealthy by its webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_HOST","title":"PREFECT_RUNNER_SERVER_HOST = Setting(str, default='localhost')
module-attribute
","text":"The host address the runner's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_PORT","title":"PREFECT_RUNNER_SERVER_PORT = Setting(int, default=8080)
module-attribute
","text":"The port the runner's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_LOG_LEVEL","title":"PREFECT_RUNNER_SERVER_LOG_LEVEL = Setting(str, default='error')
module-attribute
","text":"The log level of the runner's webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_ENABLE","title":"PREFECT_RUNNER_SERVER_ENABLE = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable the runner's webserver.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30)
module-attribute
","text":"Number of seconds a worker should wait between sending a heartbeat.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10)
module-attribute
","text":"Number of seconds a worker should wait between queries for scheduled flow runs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10)
module-attribute
","text":"The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_HOST","title":"PREFECT_WORKER_WEBSERVER_HOST = Setting(str, default='0.0.0.0')
module-attribute
","text":"The host address the worker's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_PORT","title":"PREFECT_WORKER_WEBSERVER_PORT = Setting(int, default=8080)
module-attribute
","text":"The port the worker's webserver should bind to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK","title":"PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK = Setting(str, default='local-file-system/prefect-task-scheduling')
module-attribute
","text":"The block-type/block-document
slug of a block to use as the default storage for autonomous tasks.
PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to delete failed task submissions from the database.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE = Setting(int, default=1000)
module-attribute
","text":"The maximum number of scheduled tasks to queue for submission.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE = Setting(int, default=100)
module-attribute
","text":"The maximum number of retries to queue for submission.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT","title":"PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT = Setting(timedelta, default=timedelta(seconds=30))
module-attribute
","text":"How long before a PENDING task are made available to another task server. In practice, a task server should move a task from PENDING to RUNNING very quickly, so runs stuck in PENDING for a while is a sign that the task server may have crashed.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES","title":"PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable infrastructure overrides made on flow runs.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES = Setting(bool, default=True)
module-attribute
","text":"Whether or not to warn infrastructure when experimental flow runs overrides are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS","title":"PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable experimental worker webserver endpoints.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental Prefect artifacts.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when experimental Prefect artifacts are used.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable the experimental workspace dashboard.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False)
module-attribute
","text":"Whether or not to warn when the experimental workspace dashboard is enabled.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING","title":"PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING = Setting(bool, default=False)
module-attribute
","text":"Whether or not to enable experimental task scheduling.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS = Setting(bool, default=True)
module-attribute
","text":"Whether or not to enable experimental work queue status in-place of work queue health.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_RESULT_STORAGE_BLOCK","title":"PREFECT_DEFAULT_RESULT_STORAGE_BLOCK = Setting(str, default=None)
module-attribute
","text":"The block-type/block-document
slug of a block to use as the default result storage.
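For example, assuming a block document named my-results registered for a hypothetical s3 block type, the slug would be:
PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='s3/my-results'\n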
PREFECT_DEFAULT_WORK_POOL_NAME = Setting(str, default=None)
module-attribute
","text":"The default work pool to deploy to.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE","title":"PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE = Setting(str, default=None)
module-attribute
","text":"The default Docker namespace to use when building images.
Can be either an organization/username or a registry URL with an organization/username.
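For example (both values are illustrative):
PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE='my-username'\n# or, with a registry URL\nPREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE='registry.example.com/my-org'\n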
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_SERVE_BASE","title":"PREFECT_UI_SERVE_BASE = Setting(str, default='/')
module-attribute
","text":"The base URL path to serve the Prefect UI from.
Defaults to the root path.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_STATIC_DIRECTORY","title":"PREFECT_UI_STATIC_DIRECTORY = Setting(str, default=None)
module-attribute
","text":"The directory to serve static files from. This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example when running in a Docker container)
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting
","text":" Bases: Generic[T]
Setting definition type.
Source code inprefect/settings.py
class Setting(Generic[T]):\n \"\"\"\n Setting definition type.\n \"\"\"\n\n def __init__(\n self,\n type: Type[T],\n *,\n deprecated: bool = False,\n deprecated_start_date: Optional[str] = None,\n deprecated_end_date: Optional[str] = None,\n deprecated_help: str = \"\",\n deprecated_when_message: str = \"\",\n deprecated_when: Optional[Callable[[Any], bool]] = None,\n deprecated_renamed_to: Optional[\"Setting[T]\"] = None,\n value_callback: Optional[Callable[[\"Settings\", T], T]] = None,\n is_secret: bool = False,\n **kwargs: Any,\n ) -> None:\n self.field: fields.FieldInfo = Field(**kwargs)\n self.type = type\n self.value_callback = value_callback\n self._name = None\n self.is_secret = is_secret\n self.deprecated = deprecated\n self.deprecated_start_date = deprecated_start_date\n self.deprecated_end_date = deprecated_end_date\n self.deprecated_help = deprecated_help\n self.deprecated_when = deprecated_when or (lambda _: True)\n self.deprecated_when_message = deprecated_when_message\n self.deprecated_renamed_to = deprecated_renamed_to\n self.deprecated_renamed_from = None\n self.__doc__ = self.field.description\n\n # Validate the deprecation settings, will throw an error at setting definition\n # time if the developer has not configured it correctly\n if deprecated:\n generate_deprecation_message(\n name=\"...\", # setting names not populated until after init\n start_date=self.deprecated_start_date,\n end_date=self.deprecated_end_date,\n help=self.deprecated_help,\n when=self.deprecated_when_message,\n )\n\n if deprecated_renamed_to is not None:\n # Track the deprecation both ways\n deprecated_renamed_to.deprecated_renamed_from = self\n\n def value(self, bypass_callback: bool = False) -> T:\n \"\"\"\n Get the current value of a setting.\n\n Example:\n ```python\n from prefect.settings import PREFECT_API_URL\n PREFECT_API_URL.value()\n ```\n \"\"\"\n return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n \"\"\"\n Get the value of a setting from a settings object\n\n Example:\n ```python\n from prefect.settings import get_default_settings\n PREFECT_API_URL.value_from(get_default_settings())\n ```\n \"\"\"\n value = settings.value_of(self, bypass_callback=bypass_callback)\n\n if not bypass_callback and self.deprecated and self.deprecated_when(value):\n # Check if this setting is deprecated and someone is accessing the value\n # via the old name\n warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n # If the the value is empty, return the new setting's value for compat\n if value is None and self.deprecated_renamed_to is not None:\n return self.deprecated_renamed_to.value_from(settings)\n\n if not bypass_callback and self.deprecated_renamed_from is not None:\n # Check if this setting is a rename of a deprecated setting and the\n # deprecated setting is set and should be used for compatibility\n deprecated_value = self.deprecated_renamed_from.value_from(\n settings, bypass_callback=True\n )\n if deprecated_value is not None:\n warnings.warn(\n (\n f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n f\" instead of {self.name!r} for backwards compatibility.\"\n ),\n DeprecationWarning,\n stacklevel=3,\n )\n return deprecated_value or value\n\n return value\n\n @property\n def name(self):\n if self._name:\n return self._name\n\n # Lookup the name on first access\n for name, val in 
tuple(globals().items()):\n if val == self:\n self._name = name\n return name\n\n raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n @name.setter\n def name(self, value: str):\n self._name = value\n\n @property\n def deprecated_message(self):\n return generate_deprecation_message(\n name=f\"Setting {self.name!r}\",\n start_date=self.deprecated_start_date,\n end_date=self.deprecated_end_date,\n help=self.deprecated_help,\n when=self.deprecated_when_message,\n )\n\n def __repr__(self) -> str:\n return f\"<{self.name}: {self.type.__name__}>\"\n\n def __bool__(self) -> bool:\n \"\"\"\n Returns a truthy check of the current value.\n \"\"\"\n return bool(self.value())\n\n def __eq__(self, __o: object) -> bool:\n return __o.__eq__(self.value())\n\n def __hash__(self) -> int:\n return hash((type(self), self.name))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value
","text":"Get the current value of a setting.
Example:
from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n
Source code in prefect/settings.py
def value(self, bypass_callback: bool = False) -> T:\n \"\"\"\n Get the current value of a setting.\n\n Example:\n ```python\n from prefect.settings import PREFECT_API_URL\n PREFECT_API_URL.value()\n ```\n \"\"\"\n return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from
","text":"Get the value of a setting from a settings object
Example:
from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n
Source code in prefect/settings.py
def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n \"\"\"\n Get the value of a setting from a settings object\n\n Example:\n ```python\n from prefect.settings import get_default_settings\n PREFECT_API_URL.value_from(get_default_settings())\n ```\n \"\"\"\n value = settings.value_of(self, bypass_callback=bypass_callback)\n\n if not bypass_callback and self.deprecated and self.deprecated_when(value):\n # Check if this setting is deprecated and someone is accessing the value\n # via the old name\n warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n # If the the value is empty, return the new setting's value for compat\n if value is None and self.deprecated_renamed_to is not None:\n return self.deprecated_renamed_to.value_from(settings)\n\n if not bypass_callback and self.deprecated_renamed_from is not None:\n # Check if this setting is a rename of a deprecated setting and the\n # deprecated setting is set and should be used for compatibility\n deprecated_value = self.deprecated_renamed_from.value_from(\n settings, bypass_callback=True\n )\n if deprecated_value is not None:\n warnings.warn(\n (\n f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n f\" instead of {self.name!r} for backwards compatibility.\"\n ),\n DeprecationWarning,\n stacklevel=3,\n )\n return deprecated_value or value\n\n return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings
","text":" Bases: SettingsFieldsMixin
Contains validated Prefect settings.
Settings should be accessed using the relevant Setting
object. For example:
from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n
Accessing a setting attribute directly will ignore any value_callback
mutations. This is not recommended:
from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH # PosixPath('${PREFECT_HOME}/profiles.toml')\n
Source code in prefect/settings.py
@add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n \"\"\"\n Contains validated Prefect settings.\n\n Settings should be accessed using the relevant `Setting` object. For example:\n ```python\n from prefect.settings import PREFECT_HOME\n PREFECT_HOME.value()\n ```\n\n Accessing a setting attribute directly will ignore any `value_callback` mutations.\n This is not recommended:\n ```python\n from prefect.settings import Settings\n Settings().PREFECT_PROFILES_PATH # PosixPath('${PREFECT_HOME}/profiles.toml')\n ```\n \"\"\"\n\n def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n \"\"\"\n Retrieve a setting's value.\n \"\"\"\n value = getattr(self, setting.name)\n if setting.value_callback and not bypass_callback:\n value = setting.value_callback(self, value)\n return value\n\n @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n def check_valid_log_level(cls, value):\n if isinstance(value, str):\n value = value.upper()\n logging._checkLevel(value)\n return value\n\n @root_validator\n def post_root_validators(cls, values):\n \"\"\"\n Add root validation functions for settings here.\n \"\"\"\n # TODO: We could probably register these dynamically but this is the simpler\n # approach for now. We can explore more interesting validation features\n # in the future.\n values = max_log_size_smaller_than_batch_size(values)\n values = warn_on_database_password_value_without_usage(values)\n if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n values = warn_on_misconfigured_api_url(values)\n return values\n\n def copy_with_update(\n self,\n updates: Mapping[Setting, Any] = None,\n set_defaults: Mapping[Setting, Any] = None,\n restore_defaults: Iterable[Setting] = None,\n ) -> \"Settings\":\n \"\"\"\n Create a new `Settings` object with validation.\n\n Arguments:\n updates: A mapping of settings to new values. Existing values for the\n given settings will be overridden.\n set_defaults: A mapping of settings to new default values. Existing values for\n the given settings will only be overridden if they were not set.\n restore_defaults: An iterable of settings to restore to their default values.\n\n Returns:\n A new `Settings` object.\n \"\"\"\n updates = updates or {}\n set_defaults = set_defaults or {}\n restore_defaults = restore_defaults or set()\n restore_defaults_names = {setting.name for setting in restore_defaults}\n\n return self.__class__(\n **{\n **{setting.name: value for setting, value in set_defaults.items()},\n **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n **{setting.name: value for setting, value in updates.items()},\n }\n )\n\n def with_obfuscated_secrets(self):\n \"\"\"\n Returns a copy of this settings object with secret setting values obfuscated.\n \"\"\"\n settings = self.copy(\n update={\n setting.name: obfuscate(self.value_of(setting))\n for setting in SETTING_VARIABLES.values()\n if setting.is_secret\n # Exclude deprecated settings with null values to avoid warnings\n and not (setting.deprecated and self.value_of(setting) is None)\n }\n )\n # Ensure that settings that have not been marked as \"set\" before are still so\n # after we have updated their value above\n settings.__fields_set__.intersection_update(self.__fields_set__)\n return settings\n\n def hash_key(self) -> str:\n \"\"\"\n Return a hash key for the settings object. This is needed since some\n settings may be unhashable. 
An example is lists.\n \"\"\"\n env_variables = self.to_environment_variables()\n return str(hash(tuple((key, value) for key, value in env_variables.items())))\n\n def to_environment_variables(\n self, include: Iterable[Setting] = None, exclude_unset: bool = False\n ) -> Dict[str, str]:\n \"\"\"\n Convert the settings object to environment variables.\n\n Note that setting values will not be run through their `value_callback` allowing\n dynamic resolution to occur when loaded from the returned environment.\n\n Args:\n include_keys: An iterable of settings to include in the return value.\n If not set, all settings are used.\n exclude_unset: Only include settings that have been set (i.e. the value is\n not from the default). If set, unset keys will be dropped even if they\n are set in `include_keys`.\n\n Returns:\n A dictionary of settings with values cast to strings\n \"\"\"\n include = set(include or SETTING_VARIABLES.values())\n\n if exclude_unset:\n set_keys = {\n # Collect all of the \"set\" keys and cast to `Setting` objects\n SETTING_VARIABLES[key]\n for key in self.dict(exclude_unset=True)\n }\n include.intersection_update(set_keys)\n\n # Validate the types of items in `include` to prevent exclusion bugs\n for key in include:\n if not isinstance(key, Setting):\n raise TypeError(\n \"Invalid type {type(key).__name__!r} for key in `include`.\"\n )\n\n env = {\n # Use `getattr` instead of `value_of` to avoid value callback resolution\n key: getattr(self, key)\n for key, setting in SETTING_VARIABLES.items()\n if setting in include\n }\n\n # Cast to strings and drop null values\n return {key: str(value) for key, value in env.items() if value is not None}\n\n class Config:\n frozen = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of
","text":"Retrieve a setting's value.
Source code inprefect/settings.py
def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n \"\"\"\n Retrieve a setting's value.\n \"\"\"\n value = getattr(self, setting.name)\n if setting.value_callback and not bypass_callback:\n value = setting.value_callback(self, value)\n return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators
","text":"Add root validation functions for settings here.
Source code inprefect/settings.py
@root_validator\ndef post_root_validators(cls, values):\n \"\"\"\n Add root validation functions for settings here.\n \"\"\"\n # TODO: We could probably register these dynamically but this is the simpler\n # approach for now. We can explore more interesting validation features\n # in the future.\n values = max_log_size_smaller_than_batch_size(values)\n values = warn_on_database_password_value_without_usage(values)\n if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n values = warn_on_misconfigured_api_url(values)\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets
","text":"Returns a copy of this settings object with secret setting values obfuscated.
Source code inprefect/settings.py
def with_obfuscated_secrets(self):\n \"\"\"\n Returns a copy of this settings object with secret setting values obfuscated.\n \"\"\"\n settings = self.copy(\n update={\n setting.name: obfuscate(self.value_of(setting))\n for setting in SETTING_VARIABLES.values()\n if setting.is_secret\n # Exclude deprecated settings with null values to avoid warnings\n and not (setting.deprecated and self.value_of(setting) is None)\n }\n )\n # Ensure that settings that have not been marked as \"set\" before are still so\n # after we have updated their value above\n settings.__fields_set__.intersection_update(self.__fields_set__)\n return settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.hash_key","title":"hash_key
","text":"Return a hash key for the settings object. This is needed since some settings may be unhashable. An example is lists.
Source code inprefect/settings.py
def hash_key(self) -> str:\n \"\"\"\n Return a hash key for the settings object. This is needed since some\n settings may be unhashable. An example is lists.\n \"\"\"\n env_variables = self.to_environment_variables()\n return str(hash(tuple((key, value) for key, value in env_variables.items())))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables
","text":"Convert the settings object to environment variables.
Note that setting values will not be run through their value_callback
allowing dynamic resolution to occur when loaded from the returned environment.
Parameters:
Name Type Description Defaultinclude_keys
An iterable of settings to include in the return value. If not set, all settings are used.
requiredexclude_unset
bool
Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are set in include_keys
.
False
Returns:
Type DescriptionDict[str, str]
A dictionary of settings with values cast to strings
Source code inprefect/settings.py
def to_environment_variables(\n self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n \"\"\"\n Convert the settings object to environment variables.\n\n Note that setting values will not be run through their `value_callback` allowing\n dynamic resolution to occur when loaded from the returned environment.\n\n Args:\n include_keys: An iterable of settings to include in the return value.\n If not set, all settings are used.\n exclude_unset: Only include settings that have been set (i.e. the value is\n not from the default). If set, unset keys will be dropped even if they\n are set in `include_keys`.\n\n Returns:\n A dictionary of settings with values cast to strings\n \"\"\"\n include = set(include or SETTING_VARIABLES.values())\n\n if exclude_unset:\n set_keys = {\n # Collect all of the \"set\" keys and cast to `Setting` objects\n SETTING_VARIABLES[key]\n for key in self.dict(exclude_unset=True)\n }\n include.intersection_update(set_keys)\n\n # Validate the types of items in `include` to prevent exclusion bugs\n for key in include:\n if not isinstance(key, Setting):\n raise TypeError(\n \"Invalid type {type(key).__name__!r} for key in `include`.\"\n )\n\n env = {\n # Use `getattr` instead of `value_of` to avoid value callback resolution\n key: getattr(self, key)\n for key, setting in SETTING_VARIABLES.items()\n if setting in include\n }\n\n # Cast to strings and drop null values\n return {key: str(value) for key, value in env.items() if value is not None}\n
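As a quick sketch, you can round-trip the current settings through environment variables:
from prefect.settings import get_current_settings\n\n# only include settings whose values were explicitly set\nenv = get_current_settings().to_environment_variables(exclude_unset=True)\nfor key, value in env.items():\n    print(f\"{key}={value}\")\n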
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile
","text":" Bases: BaseModel
A user profile containing settings.
Source code inprefect/settings.py
class Profile(BaseModel):\n \"\"\"\n A user profile containing settings.\n \"\"\"\n\n name: str\n settings: Dict[Setting, Any] = Field(default_factory=dict)\n source: Optional[Path]\n\n @validator(\"settings\", pre=True)\n def map_names_to_settings(cls, value):\n if value is None:\n return value\n\n # Cast string setting names to variables\n validated = {}\n for setting, val in value.items():\n if isinstance(setting, str) and setting in SETTING_VARIABLES:\n validated[SETTING_VARIABLES[setting]] = val\n elif isinstance(setting, Setting):\n validated[setting] = val\n else:\n raise ValueError(f\"Unknown setting {setting!r}.\")\n\n return validated\n\n def validate_settings(self) -> None:\n \"\"\"\n Validate the settings contained in this profile.\n\n Raises:\n pydantic.ValidationError: When settings do not have valid values.\n \"\"\"\n # Create a new `Settings` instance with the settings from this profile relying\n # on Pydantic validation to raise an error.\n # We do not return the `Settings` object because this is not the recommended\n # path for constructing settings with a profile. See `use_profile` instead.\n Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n \"\"\"\n Update settings in place to replace deprecated settings with new settings when\n renamed.\n\n Returns a list of tuples with the old and new setting.\n \"\"\"\n changed = []\n for setting in tuple(self.settings):\n if (\n setting.deprecated\n and setting.deprecated_renamed_to\n and setting.deprecated_renamed_to not in self.settings\n ):\n self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n setting\n )\n changed.append((setting, setting.deprecated_renamed_to))\n return changed\n\n class Config:\n arbitrary_types_allowed = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings
","text":"Validate the settings contained in this profile.
Raises:
Type DescriptionValidationError
When settings do not have valid values.
Source code inprefect/settings.py
def validate_settings(self) -> None:\n \"\"\"\n Validate the settings contained in this profile.\n\n Raises:\n pydantic.ValidationError: When settings do not have valid values.\n \"\"\"\n # Create a new `Settings` instance with the settings from this profile relying\n # on Pydantic validation to raise an error.\n # We do not return the `Settings` object because this is not the recommended\n # path for constructing settings with a profile. See `use_profile` instead.\n Settings(**{setting.name: value for setting, value in self.settings.items()})\n
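A minimal sketch, using an illustrative profile name:
from prefect.settings import PREFECT_API_URL, Profile\n\nprofile = Profile(name=\"example\", settings={PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\nprofile.validate_settings()  # raises pydantic.ValidationError if a value is invalid\n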
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings
","text":"Update settings in place to replace deprecated settings with new settings when renamed.
Returns a list of tuples with the old and new setting.
Source code inprefect/settings.py
def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n \"\"\"\n Update settings in place to replace deprecated settings with new settings when\n renamed.\n\n Returns a list of tuples with the old and new setting.\n \"\"\"\n changed = []\n for setting in tuple(self.settings):\n if (\n setting.deprecated\n and setting.deprecated_renamed_to\n and setting.deprecated_renamed_to not in self.settings\n ):\n self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n setting\n )\n changed.append((setting, setting.deprecated_renamed_to))\n return changed\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection
","text":"\" A utility class for working with a collection of profiles.
Profiles in the collection must have unique names.
The collection may store the name of the active profile.
Source code inprefect/settings.py
class ProfilesCollection:\n \"\"\" \"\n A utility class for working with a collection of profiles.\n\n Profiles in the collection must have unique names.\n\n The collection may store the name of the active profile.\n \"\"\"\n\n def __init__(\n self, profiles: Iterable[Profile], active: Optional[str] = None\n ) -> None:\n self.profiles_by_name = {profile.name: profile for profile in profiles}\n self.active_name = active\n\n @property\n def names(self) -> Set[str]:\n \"\"\"\n Return a set of profile names in this collection.\n \"\"\"\n return set(self.profiles_by_name.keys())\n\n @property\n def active_profile(self) -> Optional[Profile]:\n \"\"\"\n Retrieve the active profile in this collection.\n \"\"\"\n if self.active_name is None:\n return None\n return self[self.active_name]\n\n def set_active(self, name: Optional[str], check: bool = True):\n \"\"\"\n Set the active profile name in the collection.\n\n A null value may be passed to indicate that this collection does not determine\n the active profile.\n \"\"\"\n if check and name is not None and name not in self.names:\n raise ValueError(f\"Unknown profile name {name!r}.\")\n self.active_name = name\n\n def update_profile(\n self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n ) -> Profile:\n \"\"\"\n Add a profile to the collection or update the existing on if the name is already\n present in this collection.\n\n If updating an existing profile, the settings will be merged. Settings can\n be dropped from the existing profile by setting them to `None` in the new\n profile.\n\n Returns the new profile object.\n \"\"\"\n existing = self.profiles_by_name.get(name)\n\n # Convert the input to a `Profile` to cast settings to the correct type\n profile = Profile(name=name, settings=settings, source=source)\n\n if existing:\n new_settings = {**existing.settings, **profile.settings}\n\n # Drop null keys to restore to default\n for key, value in tuple(new_settings.items()):\n if value is None:\n new_settings.pop(key)\n\n new_profile = Profile(\n name=profile.name,\n settings=new_settings,\n source=source or profile.source,\n )\n else:\n new_profile = profile\n\n self.profiles_by_name[new_profile.name] = new_profile\n\n return new_profile\n\n def add_profile(self, profile: Profile) -> None:\n \"\"\"\n Add a profile to the collection.\n\n If the profile name already exists, an exception will be raised.\n \"\"\"\n if profile.name in self.profiles_by_name:\n raise ValueError(\n f\"Profile name {profile.name!r} already exists in collection.\"\n )\n\n self.profiles_by_name[profile.name] = profile\n\n def remove_profile(self, name: str) -> None:\n \"\"\"\n Remove a profile from the collection.\n \"\"\"\n self.profiles_by_name.pop(name)\n\n def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n \"\"\"\n Remove profiles that were loaded from a given path.\n\n Returns a new collection.\n \"\"\"\n return ProfilesCollection(\n [\n profile\n for profile in self.profiles_by_name.values()\n if profile.source != path\n ],\n active=self.active_name,\n )\n\n def to_dict(self):\n \"\"\"\n Convert to a dictionary suitable for writing to disk.\n \"\"\"\n return {\n \"active\": self.active_name,\n \"profiles\": {\n profile.name: {\n setting.name: value for setting, value in profile.settings.items()\n }\n for profile in self.profiles_by_name.values()\n },\n }\n\n def __getitem__(self, name: str) -> Profile:\n return self.profiles_by_name[name]\n\n def __iter__(self):\n return self.profiles_by_name.__iter__()\n\n 
def items(self):\n return self.profiles_by_name.items()\n\n def __eq__(self, __o: object) -> bool:\n if not isinstance(__o, ProfilesCollection):\n return False\n\n return (\n self.profiles_by_name == __o.profiles_by_name\n and self.active_name == __o.active_name\n )\n\n def __repr__(self) -> str:\n return (\n f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n f\" active={self.active_name!r})\"\n )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str]
property
","text":"Return a set of profile names in this collection.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile]
property
","text":"Retrieve the active profile in this collection.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active
","text":"Set the active profile name in the collection.
A null value may be passed to indicate that this collection does not determine the active profile.
Source code inprefect/settings.py
def set_active(self, name: Optional[str], check: bool = True):\n \"\"\"\n Set the active profile name in the collection.\n\n A null value may be passed to indicate that this collection does not determine\n the active profile.\n \"\"\"\n if check and name is not None and name not in self.names:\n raise ValueError(f\"Unknown profile name {name!r}.\")\n self.active_name = name\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile
","text":"Add a profile to the collection or update the existing on if the name is already present in this collection.
If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None
in the new profile.
Returns the new profile object.
Source code inprefect/settings.py
def update_profile(\n self, name: str, settings: Mapping[Union[Setting, str], Any], source: Path = None\n) -> Profile:\n \"\"\"\n Add a profile to the collection or update the existing one if the name is already\n present in this collection.\n\n If updating an existing profile, the settings will be merged. Settings can\n be dropped from the existing profile by setting them to `None` in the new\n profile.\n\n Returns the new profile object.\n \"\"\"\n existing = self.profiles_by_name.get(name)\n\n # Convert the input to a `Profile` to cast settings to the correct type\n profile = Profile(name=name, settings=settings, source=source)\n\n if existing:\n new_settings = {**existing.settings, **profile.settings}\n\n # Drop null keys to restore to default\n for key, value in tuple(new_settings.items()):\n if value is None:\n new_settings.pop(key)\n\n new_profile = Profile(\n name=profile.name,\n settings=new_settings,\n source=source or profile.source,\n )\n else:\n new_profile = profile\n\n self.profiles_by_name[new_profile.name] = new_profile\n\n return new_profile\n
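For illustration, a minimal sketch of the merge-and-drop behavior; the profile name and settings chosen here are arbitrary:
>>> from prefect.settings import PREFECT_LOGGING_LEVEL, Profile, ProfilesCollection\n>>> profiles = ProfilesCollection([Profile(name=\"dev\", settings={})])\n>>> profiles.update_profile(\"dev\", {PREFECT_LOGGING_LEVEL: \"DEBUG\"})  # merge a new setting\n>>> profiles.update_profile(\"dev\", {PREFECT_LOGGING_LEVEL: None})  # drop it to restore the default\n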
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile
","text":"Add a profile to the collection.
If the profile name already exists, an exception will be raised.
Source code inprefect/settings.py
def add_profile(self, profile: Profile) -> None:\n \"\"\"\n Add a profile to the collection.\n\n If the profile name already exists, an exception will be raised.\n \"\"\"\n if profile.name in self.profiles_by_name:\n raise ValueError(\n f\"Profile name {profile.name!r} already exists in collection.\"\n )\n\n self.profiles_by_name[profile.name] = profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile
","text":"Remove a profile from the collection.
Source code inprefect/settings.py
def remove_profile(self, name: str) -> None:\n \"\"\"\n Remove a profile from the collection.\n \"\"\"\n self.profiles_by_name.pop(name)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source
","text":"Remove profiles that were loaded from a given path.
Returns a new collection.
Source code inprefect/settings.py
def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n \"\"\"\n Remove profiles that were loaded from a given path.\n\n Returns a new collection.\n \"\"\"\n return ProfilesCollection(\n [\n profile\n for profile in self.profiles_by_name.values()\n if profile.source != path\n ],\n active=self.active_name,\n )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers
","text":"value_callback
for PREFECT_LOGGING_EXTRA_LOGGERS
that parses the CSV string into a list and trims whitespace from logger names.
prefect/settings.py
def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n \"\"\"\n `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS` that parses the CSV string into a\n list and trims whitespace from logger names.\n \"\"\"\n return [name.strip() for name in value.split(\",\")] if value else []\n
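As a quick illustration of the parsing (the first argument is unused by this callback, so None is passed here):
>>> from prefect.settings import get_extra_loggers\n>>> get_extra_loggers(None, \" sqlalchemy , httpx \")\n['sqlalchemy', 'httpx']\n>>> get_extra_loggers(None, \"\")\n[]\n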
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level
","text":"value_callback
for PREFECT_LOGGING_LEVEL
that overrides the log level to DEBUG when debug mode is enabled.
prefect/settings.py
def debug_mode_log_level(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n when debug mode is enabled.\n \"\"\"\n if PREFECT_DEBUG_MODE.value_from(settings):\n return \"DEBUG\"\n else:\n return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode
","text":"value_callback
for PREFECT_TEST_SETTING
that only allows access during test mode.
prefect/settings.py
def only_return_value_in_test_mode(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n \"\"\"\n if PREFECT_TEST_MODE.value_from(settings):\n return value\n else:\n return None\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url
","text":"value_callback
for PREFECT_UI_API_URL
that sets the default value to relative path '/api', otherwise it constructs an API URL from the API settings.
prefect/settings.py
def default_ui_api_url(settings, value):\n \"\"\"\n `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n relative path '/api', otherwise it constructs an API URL from the API settings.\n \"\"\"\n if value is None:\n # Set a default value\n value = \"/api\"\n\n return template_with_settings(\n PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n )(settings, value)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range
","text":"value_callback
for PREFECT_CLIENT_RETRY_EXTRA_CODES
that ensures status codes are integers in the range 100-599.
prefect/settings.py
def status_codes_as_integers_in_range(_, value):\n \"\"\"\n `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n are integers in the range 100-599.\n \"\"\"\n if value == \"\":\n return set()\n\n values = {v.strip() for v in value.split(\",\")}\n\n if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n raise ValueError(\n \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n \"integers between 100 and 599.\"\n )\n\n values = {int(v) for v in values}\n return values\n
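For example, assuming the callback is invoked directly with the raw string value of the setting:
>>> from prefect.settings import status_codes_as_integers_in_range\n>>> assert status_codes_as_integers_in_range(None, \"409, 502\") == {409, 502}\n>>> status_codes_as_integers_in_range(None, \"99\")  # raises ValueError\n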
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings
","text":"Returns a value_callback
that will template the given settings into the runtime value for the setting.
prefect/settings.py
def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n \"\"\"\n Returns a `value_callback` that will template the given settings into the runtime\n value for the setting.\n \"\"\"\n\n def templater(settings, value):\n if value is None:\n return value # Do not attempt to template a null string\n\n original_type = type(value)\n template_values = {\n setting.name: setting.value_from(settings) for setting in upstream_settings\n }\n template = string.Template(str(value))\n return original_type(template.substitute(template_values))\n\n return templater\n
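A minimal sketch of the returned callback; placeholders use string.Template syntax, and the output shown assumes default server settings:
>>> from prefect.settings import PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, get_current_settings, template_with_settings\n>>> templater = template_with_settings(PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT)\n>>> templater(get_current_settings(), \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\")\n'http://127.0.0.1:4200/api'\n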
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size
","text":"Validator for settings asserting the batch size and match log size are compatible
Source code inprefect/settings.py
def max_log_size_smaller_than_batch_size(values):\n \"\"\"\n Validator for settings asserting the batch size and max log size are compatible\n \"\"\"\n if (\n values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n ):\n raise ValueError(\n \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n )\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage
","text":"Validator for settings warning if the database password is set but not used.
Source code inprefect/settings.py
def warn_on_database_password_value_without_usage(values):\n \"\"\"\n Validator for settings warning if the database password is set but not used.\n \"\"\"\n value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n if (\n value\n and not value.startswith(OBFUSCATED_PREFIX)\n and (\n \"PREFECT_API_DATABASE_PASSWORD\"\n not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n )\n ):\n warnings.warn(\n \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n \"The provided password will be ignored.\"\n )\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_misconfigured_api_url","title":"warn_on_misconfigured_api_url
","text":"Validator for settings warning if the API URL is misconfigured.
Source code inprefect/settings.py
def warn_on_misconfigured_api_url(values):\n \"\"\"\n Validator for settings warning if the API URL is misconfigured.\n \"\"\"\n api_url = values[\"PREFECT_API_URL\"]\n if api_url is not None:\n misconfigured_mappings = {\n \"app.prefect.cloud\": (\n \"`PREFECT_API_URL` points to `app.prefect.cloud`. Did you\"\n \" mean `api.prefect.cloud`?\"\n ),\n \"account/\": (\n \"`PREFECT_API_URL` uses `/account/` but should use `/accounts/`.\"\n ),\n \"workspace/\": (\n \"`PREFECT_API_URL` uses `/workspace/` but should use `/workspaces/`.\"\n ),\n }\n warnings_list = []\n\n for misconfig, warning in misconfigured_mappings.items():\n if misconfig in api_url:\n warnings_list.append(warning)\n\n parsed_url = urlparse(api_url)\n if parsed_url.path and not parsed_url.path.startswith(\"/api\"):\n warnings_list.append(\n \"`PREFECT_API_URL` should have `/api` after the base URL.\"\n )\n\n if warnings_list:\n example = 'e.g. PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"'\n warnings_list.append(example)\n\n warnings.warn(\"\\n\".join(warnings_list), stacklevel=2)\n\n return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings
","text":"Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.
Source code inprefect/settings.py
def get_current_settings() -> Settings:\n \"\"\"\n Returns a settings object populated with values from the current settings context\n or, if no settings context is active, the environment.\n \"\"\"\n from prefect.context import SettingsContext\n\n settings_context = SettingsContext.get()\n if settings_context is not None:\n return settings_context.settings\n\n return get_settings_from_env()\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env
","text":"Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.
Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.
Source code inprefect/settings.py
def get_settings_from_env() -> Settings:\n \"\"\"\n Returns a settings object populated with default values and overrides from\n environment variables, ignoring any values in profiles.\n\n Calls with the same environment return a cached object instead of reconstructing\n to avoid validation overhead.\n \"\"\"\n # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n # must be careful to avoid hashing a generator instead of a tuple\n cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n if cache_key not in _FROM_ENV_CACHE:\n _FROM_ENV_CACHE[cache_key] = Settings()\n\n return _FROM_ENV_CACHE[cache_key]\n
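For example, an environment override is picked up the next time a settings object is constructed (assuming PREFECT_LOGGING_LEVEL is not pinned elsewhere):
>>> import os\n>>> from prefect.settings import PREFECT_LOGGING_LEVEL, get_settings_from_env\n>>> os.environ[\"PREFECT_LOGGING_LEVEL\"] = \"DEBUG\"\n>>> get_settings_from_env().value_of(PREFECT_LOGGING_LEVEL)\n'DEBUG'\n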
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings
","text":"Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.
This is cached since the defaults should not change during the lifetime of the module.
Source code inprefect/settings.py
def get_default_settings() -> Settings:\n \"\"\"\n Returns a settings object populated with default values, ignoring any overrides\n from environment variables or profiles.\n\n This is cached since the defaults should not change during the lifetime of the\n module.\n \"\"\"\n global _DEFAULTS_CACHE\n\n if not _DEFAULTS_CACHE:\n old = os.environ\n try:\n os.environ = {}\n settings = get_settings_from_env()\n finally:\n os.environ = old\n\n _DEFAULTS_CACHE = settings\n\n return _DEFAULTS_CACHE\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings
","text":"Temporarily override the current settings by entering a new profile.
See Settings.copy_with_update
for details on different argument behavior.
Examples:
>>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>>     assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>     with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>         assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>     with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>>         assert PREFECT_API_URL.value() is None\n>>>\n>>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>             assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
Source code in prefect/settings.py
@contextmanager\ndef temporary_settings(\n updates: Mapping[Setting, Any] = None,\n set_defaults: Mapping[Setting, Any] = None,\n restore_defaults: Iterable[Setting] = None,\n) -> Settings:\n \"\"\"\n Temporarily override the current settings by entering a new profile.\n\n See `Settings.copy_with_update` for details on different argument behavior.\n\n Examples:\n >>> from prefect.settings import PREFECT_API_URL\n >>>\n >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n >>> assert PREFECT_API_URL.value() == \"foo\"\n >>>\n >>> with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n >>> assert PREFECT_API_URL.value() == \"foo\"\n >>>\n >>> with temporary_settings(restore_defaults={PREFECT_API_URL}):\n >>> assert PREFECT_API_URL.value() is None\n >>>\n >>> with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n >>> assert PREFECT_API_URL.value() == \"bar\"\n >>> assert PREFECT_API_URL.value() is None\n \"\"\"\n import prefect.context\n\n context = prefect.context.get_settings_context()\n\n new_settings = context.settings.copy_with_update(\n updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n )\n\n with prefect.context.SettingsContext(\n profile=context.profile, settings=new_settings\n ):\n yield new_settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles
","text":"Load all profiles from the default and current profile paths.
Source code inprefect/settings.py
def load_profiles() -> ProfilesCollection:\n \"\"\"\n Load all profiles from the default and current profile paths.\n \"\"\"\n profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n user_profiles_path = PREFECT_PROFILES_PATH.value()\n if user_profiles_path.exists():\n user_profiles = _read_profiles_from(user_profiles_path)\n\n # Merge all of the user profiles with the defaults\n for name in user_profiles:\n profiles.update_profile(\n name,\n settings=user_profiles[name].settings,\n source=user_profiles[name].source,\n )\n\n if user_profiles.active_name:\n profiles.set_active(user_profiles.active_name, check=False)\n\n return profiles\n
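For example, on a fresh installation where only the default profile exists:
>>> from prefect.settings import load_profiles\n>>> profiles = load_profiles()\n>>> profiles.names\n{'default'}\n>>> profiles.active_name\n'default'\n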
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile
","text":"Load the current profile from the default and current profile paths.
This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be included.
Source code inprefect/settings.py
def load_current_profile():\n \"\"\"\n Load the current profile from the default and current profile paths.\n\n This will _not_ include settings from the current settings context. Only settings\n that have been persisted to the profiles file will be included.\n \"\"\"\n from prefect.context import SettingsContext\n\n profiles = load_profiles()\n context = SettingsContext.get()\n\n if context:\n profiles.set_active(context.profile.name)\n\n return profiles.active_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles
","text":"Writes all non-default profiles to the current profiles path.
Source code inprefect/settings.py
def save_profiles(profiles: ProfilesCollection) -> None:\n \"\"\"\n Writes all non-default profiles to the current profiles path.\n \"\"\"\n profiles_path = PREFECT_PROFILES_PATH.value()\n profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n return _write_profiles_to(profiles_path, profiles)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile
","text":"Load a single profile by name.
Source code inprefect/settings.py
def load_profile(name: str) -> Profile:\n \"\"\"\n Load a single profile by name.\n \"\"\"\n profiles = load_profiles()\n try:\n return profiles[name]\n except KeyError:\n raise ValueError(f\"Profile {name!r} not found.\")\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile
","text":"Update the persisted data for the profile currently in-use.
If the profile does not exist in the profiles file, it will be created.
Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile
.
Returns:
Type DescriptionProfile
The new profile.
Source code inprefect/settings.py
def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n \"\"\"\n Update the persisted data for the profile currently in-use.\n\n If the profile does not exist in the profiles file, it will be created.\n\n Given settings will be merged with the existing settings as described in\n `ProfilesCollection.update_profile`.\n\n Returns:\n The new profile.\n \"\"\"\n import prefect.context\n\n current_profile = prefect.context.get_settings_context().profile\n\n if not current_profile:\n raise MissingProfileError(\"No profile is currently in use.\")\n\n profiles = load_profiles()\n\n # Ensure the current profile's settings are present\n profiles.update_profile(current_profile.name, current_profile.settings)\n # Then merge the new settings in\n new_profile = profiles.update_profile(current_profile.name, settings)\n\n # Validate before saving\n new_profile.validate_settings()\n\n save_profiles(profiles)\n\n return profiles[current_profile.name]\n
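For example, to persist an API URL to the profile currently in use (the URL here is illustrative):
>>> from prefect.settings import PREFECT_API_URL, update_current_profile\n>>> update_current_profile({PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\n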
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/software/","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/software/#prefect.software","title":"prefect.software
","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states
","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry
","text":"Convenience function for creating AwaitingRetry
states.
Returns:
Name Type DescriptionState
State
an AwaitingRetry state
Source code inprefect/states.py
def AwaitingRetry(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n Returns:\n State: an AwaitingRetry state\n \"\"\"\n return Scheduled(\n cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled
","text":"Convenience function for creating Cancelled
states.
Returns:
Name Type DescriptionState
State
a Cancelled state
Source code inprefect/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelled` states.\n\n Returns:\n State: a Cancelled state\n \"\"\"\n return cls(type=StateType.CANCELLED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling
","text":"Convenience function for creating Cancelling
states.
Returns:
Name Type DescriptionState
State
a Cancelling state
Source code inprefect/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelling` states.\n\n Returns:\n State: a Cancelling state\n \"\"\"\n return cls(type=StateType.CANCELLING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed
","text":"Convenience function for creating Completed
states.
Returns:
Name Type DescriptionState
State
a Completed state
Source code inprefect/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Completed` states.\n\n Returns:\n State: a Completed state\n \"\"\"\n return cls(type=StateType.COMPLETED, **kwargs)\n
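These constructors all accept the usual State keyword arguments; a minimal example:
>>> from prefect.states import Completed\n>>> state = Completed(message=\"All good\")\n>>> state.is_completed()\nTrue\n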
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed
","text":"Convenience function for creating Crashed
states.
Returns:
Name Type DescriptionState
State
a Crashed state
Source code inprefect/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Crashed` states.\n\n Returns:\n State: a Crashed state\n \"\"\"\n return cls(type=StateType.CRASHED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed
","text":"Convenience function for creating Failed
states.
Returns:
Name Type DescriptionState
State
a Failed state
Source code inprefect/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Failed` states.\n\n Returns:\n State: a Failed state\n \"\"\"\n return cls(type=StateType.FAILED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late
","text":"Convenience function for creating Late
states.
Returns:
Name Type DescriptionState
State
a Late state
Source code inprefect/states.py
def Late(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Late` states.\n\n Returns:\n State: a Late state\n \"\"\"\n return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused
","text":"Convenience function for creating Paused
states.
Returns:
Name Type DescriptionState
State
a Paused state
Source code inprefect/states.py
def Paused(\n cls: Type[State] = State,\n timeout_seconds: Optional[int] = None,\n pause_expiration_time: Optional[datetime.datetime] = None,\n reschedule: bool = False,\n pause_key: Optional[str] = None,\n **kwargs,\n) -> State:\n \"\"\"Convenience function for creating `Paused` states.\n\n Returns:\n State: a Paused state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n if state_details.pause_timeout:\n raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n if pause_expiration_time is not None and timeout_seconds is not None:\n raise ValueError(\n \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n )\n\n if pause_expiration_time is None and timeout_seconds is None:\n pass\n else:\n state_details.pause_timeout = pause_expiration_time or (\n pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n )\n\n state_details.pause_reschedule = reschedule\n state_details.pause_key = pause_key\n\n return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
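For example, a pause that expires five minutes from creation (the timeout is stored as a pause_timeout on the state details):
>>> from prefect.states import Paused\n>>> state = Paused(timeout_seconds=300)\n>>> state.state_details.pause_timeout  # roughly five minutes from now\n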
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending
","text":"Convenience function for creating Pending
states.
Returns:
Name Type DescriptionState
State
a Pending state
Source code inprefect/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Pending` states.\n\n Returns:\n State: a Pending state\n \"\"\"\n return cls(type=StateType.PENDING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying
","text":"Convenience function for creating Retrying
states.
Returns:
Name Type DescriptionState
State
a Retrying state
Source code inprefect/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Retrying` states.\n\n Returns:\n State: a Retrying state\n \"\"\"\n return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running
","text":"Convenience function for creating Running
states.
Returns:
Name Type DescriptionState
State
a Running state
Source code inprefect/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Running` states.\n\n Returns:\n State: a Running state\n \"\"\"\n return cls(type=StateType.RUNNING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled
","text":"Convenience function for creating Scheduled
states.
Returns:
Name Type DescriptionState
State
a Scheduled state
Source code inprefect/states.py
def Scheduled(\n cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Scheduled` states.\n\n Returns:\n State: a Scheduled state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n elif state_details.scheduled_time:\n raise ValueError(\"An extra scheduled_time was provided in state_details\")\n state_details.scheduled_time = scheduled_time\n\n return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
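For example, to schedule a run one hour from now:
>>> import pendulum\n>>> from prefect.states import Scheduled\n>>> state = Scheduled(scheduled_time=pendulum.now(\"UTC\").add(hours=1))\n>>> state.state_details.scheduled_time\n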
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Suspended","title":"Suspended
","text":"Convenience function for creating Suspended
states.
Returns:
Name Type DescriptionState
a Suspended state
Source code inprefect/states.py
def Suspended(\n cls: Type[State] = State,\n timeout_seconds: Optional[int] = None,\n pause_expiration_time: Optional[datetime.datetime] = None,\n pause_key: Optional[str] = None,\n **kwargs,\n):\n \"\"\"Convenience function for creating `Suspended` states.\n\n Returns:\n State: a Suspended state\n \"\"\"\n return Paused(\n cls=cls,\n name=\"Suspended\",\n reschedule=True,\n timeout_seconds=timeout_seconds,\n pause_expiration_time=pause_expiration_time,\n pause_key=pause_key,\n **kwargs,\n )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state
async
","text":"Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.
Source code inprefect/states.py
async def exception_to_crashed_state(\n exc: BaseException,\n result_factory: Optional[ResultFactory] = None,\n) -> State:\n \"\"\"\n Takes an exception that occurs _outside_ of user code and converts it to a\n 'Crash' exception with a 'Crashed' state.\n \"\"\"\n state_message = None\n\n if isinstance(exc, anyio.get_cancelled_exc_class()):\n state_message = \"Execution was cancelled by the runtime environment.\"\n\n elif isinstance(exc, KeyboardInterrupt):\n state_message = \"Execution was aborted by an interrupt signal.\"\n\n elif isinstance(exc, TerminationSignal):\n state_message = \"Execution was aborted by a termination signal.\"\n\n elif isinstance(exc, SystemExit):\n state_message = \"Execution was aborted by Python system exit call.\"\n\n elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n try:\n request: httpx.Request = exc.request\n except RuntimeError:\n # The request property is not set\n state_message = (\n \"Request failed while attempting to contact the server:\"\n f\" {format_exception(exc)}\"\n )\n else:\n # TODO: We can check if this is actually our API url\n state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n else:\n state_message = (\n \"Execution was interrupted by an unexpected exception:\"\n f\" {format_exception(exc)}\"\n )\n\n if result_factory:\n data = await result_factory.create_result(exc)\n else:\n # Attach the exception for local usage, will not be available when retrieved\n # from the API\n data = exc\n\n return Crashed(message=state_message, data=data)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state
async
","text":"Convenience function for creating Failed
states from exceptions
prefect/states.py
async def exception_to_failed_state(\n exc: Optional[BaseException] = None,\n result_factory: Optional[ResultFactory] = None,\n **kwargs,\n) -> State:\n \"\"\"\n Convenience function for creating `Failed` states from exceptions\n \"\"\"\n if not exc:\n _, exc, _ = sys.exc_info()\n if exc is None:\n raise ValueError(\n \"Exception was not passed and no active exception could be found.\"\n )\n else:\n pass\n\n if result_factory:\n data = await result_factory.create_result(exc)\n else:\n # Attach the exception for local usage, will not be available when retrieved\n # from the API\n data = exc\n\n existing_message = kwargs.pop(\"message\", \"\")\n if existing_message and not existing_message.endswith(\" \"):\n existing_message += \" \"\n\n # TODO: Consider if we want to include traceback information, it is intentionally\n # excluded from messages for now\n message = existing_message + format_exception(exc)\n\n return Failed(data=data, message=message, **kwargs)\n
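A minimal sketch: when called with no arguments inside an except block, the active exception is picked up from sys.exc_info; with no result factory, the exception is attached to the state directly:
>>> import asyncio\n>>> from prefect.states import exception_to_failed_state\n>>> async def demo():\n...     try:\n...         1 / 0\n...     except ZeroDivisionError:\n...         return await exception_to_failed_state()\n>>> asyncio.run(demo()).is_failed()\nTrue\n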
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception
async
","text":"If not given a FAILED or CRASHED state, this raise a value error.
If the state result is a state, its exception will be returned.
If the state result is an iterable of states, the exception of the first failure will be returned.
If the state result is a string, a wrapper exception will be returned with the string as the message.
If the state result is null, a wrapper exception will be returned with the state message attached.
If the state result is not of a known type, a TypeError
will be raised.
When a wrapper exception is returned, the type will be: - FailedRun
if the state type is FAILED. - CrashedRun
if the state type is CRASHED. - CancelledRun
if the state type is CANCELLED.
prefect/states.py
@sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n \"\"\"\n If not given a FAILED or CRASHED state, this raises a value error.\n\n If the state result is a state, its exception will be returned.\n\n If the state result is an iterable of states, the exception of the first failure\n will be returned.\n\n If the state result is a string, a wrapper exception will be returned with the\n string as the message.\n\n If the state result is null, a wrapper exception will be returned with the state\n message attached.\n\n If the state result is not of a known type, a `TypeError` will be raised.\n\n When a wrapper exception is returned, the type will be:\n - `FailedRun` if the state type is FAILED.\n - `CrashedRun` if the state type is CRASHED.\n - `CancelledRun` if the state type is CANCELLED.\n \"\"\"\n\n if state.is_failed():\n wrapper = FailedRun\n default_message = \"Run failed.\"\n elif state.is_crashed():\n wrapper = CrashedRun\n default_message = \"Run crashed.\"\n elif state.is_cancelled():\n wrapper = CancelledRun\n default_message = \"Run cancelled.\"\n else:\n raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n if isinstance(state.data, BaseResult):\n result = await state.data.get()\n elif state.data is None:\n result = None\n else:\n result = state.data\n\n if result is None:\n return wrapper(state.message or default_message)\n\n if isinstance(result, Exception):\n return result\n\n elif isinstance(result, BaseException):\n return result\n\n elif isinstance(result, str):\n return wrapper(result)\n\n elif is_state(result):\n # Return the exception from the inner state\n return await get_state_exception(result)\n\n elif is_state_iterable(result):\n # Return the first failure\n for state in result:\n if state.is_failed() or state.is_crashed() or state.is_cancelled():\n return await get_state_exception(state)\n\n raise ValueError(\n \"Failed state result was an iterable of states but none were failed.\"\n )\n\n else:\n raise TypeError(\n f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n f\"{type(result).__name__} cannot be resolved into an exception\"\n )\n
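Because this function is wrapped with sync_compatible, it may also be called from synchronous code; for example, a Failed state with no result yields a wrapper exception carrying the state message:
>>> from prefect.states import Failed, get_state_exception\n>>> get_state_exception(Failed(message=\"boom\"))\nFailedRun('boom')\n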
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result
","text":"Get the result from a state.
See State.result()
prefect/states.py
def get_state_result(\n state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n \"\"\"\n Get the result from a state.\n\n See `State.result()`\n \"\"\"\n\n if fetch is None and (\n PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n ):\n # Fetch defaults to `True` for sync users or async users who have opted in\n fetch = True\n\n if not fetch:\n if fetch is None and in_async_main_thread():\n warnings.warn(\n (\n \"State.result() was called from an async context but not awaited. \"\n \"This method will be updated to return a coroutine by default in \"\n \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n \"this warning.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n # Backwards compatibility\n if isinstance(state.data, DataDocument):\n return result_from_state_with_data_document(\n state, raise_on_failure=raise_on_failure\n )\n else:\n return state.data\n else:\n return _get_state_result(state, raise_on_failure=raise_on_failure)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state
","text":"Check if the given object is a state instance
Source code inprefect/states.py
def is_state(obj: Any) -> TypeGuard[State]:\n \"\"\"\n Check if the given object is a state instance\n \"\"\"\n # We may want to narrow this to client-side state types but for now this provides\n # backwards compatibility\n try:\n from prefect.server.schemas.states import State as State_\n\n classes_ = (State, State_)\n except ImportError:\n classes_ = State\n\n # return isinstance(obj, (State, State_))\n return isinstance(obj, classes_)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable
","text":"Check if a the given object is an iterable of states types
Supported iterables are: - set - list - tuple
Other iterables will return False
even if they contain states.
prefect/states.py
def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n \"\"\"\n Check if the given object is an iterable of state types\n\n Supported iterables are:\n - set\n - list\n - tuple\n\n Other iterables will return `False` even if they contain states.\n \"\"\"\n # We do not check for arbitrary iterables because this is not intended to be used\n # for things like dictionaries, dataframes, or pydantic models\n if (\n not isinstance(obj, BaseAnnotation)\n and isinstance(obj, (list, set, tuple))\n and obj\n ):\n return all([is_state(o) for o in obj])\n else:\n return False\n
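For example:
>>> from prefect.states import Completed, is_state_iterable\n>>> is_state_iterable([Completed(), Completed()])\nTrue\n>>> is_state_iterable([])  # empty iterables are not treated as state iterables\nFalse\n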
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception
async
","text":"Given a FAILED or CRASHED state, raise the contained exception.
Source code inprefect/states.py
@sync_compatible\nasync def raise_state_exception(state: State) -> None:\n \"\"\"\n Given a FAILED or CRASHED state, raise the contained exception.\n \"\"\"\n if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n return None\n\n raise await get_state_exception(state)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state
async
","text":"Given a return value from a user's function, create a State
the run should be placed in.
- If data is returned, we create a 'COMPLETED' state with the data
- If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids)
- If an upstream state or iterable of upstream states is returned, we apply the aggregate rule
The aggregate rule says that given multiple states we will determine the final state such that:
- If any states are not COMPLETED the final state is FAILED
- If all of the states are COMPLETED the final state is COMPLETED
- The states will be placed in the final state data attribute
Callers should resolve all futures into states before passing return values to this function.
Source code inprefect/states.py
async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n \"\"\"\n Given a return value from a user's function, create a `State` the run should\n be placed in.\n\n - If data is returned, we create a 'COMPLETED' state with the data\n - If a single, manually created state is returned, we use that state as given\n (manual creation is determined by the lack of ids)\n - If an upstream state or iterable of upstream states is returned, we apply the\n aggregate rule\n\n The aggregate rule says that given multiple states we will determine the final state\n such that:\n\n - If any states are not COMPLETED the final state is FAILED\n - If all of the states are COMPLETED the final state is COMPLETED\n - The states will be placed in the final state `data` attribute\n\n Callers should resolve all futures into states before passing return values to this\n function.\n \"\"\"\n\n if (\n is_state(retval)\n # Check for manual creation\n and not retval.state_details.flow_run_id\n and not retval.state_details.task_run_id\n ):\n state = retval\n\n # Do not modify states with data documents attached; backwards compatibility\n if isinstance(state.data, DataDocument):\n return state\n\n # Unless the user has already constructed a result explicitly, use the factory\n # to update the data to the correct type\n if not isinstance(state.data, BaseResult):\n state.data = await result_factory.create_result(state.data)\n\n return state\n\n # Determine a new state from the aggregate of contained states\n if is_state(retval) or is_state_iterable(retval):\n states = StateGroup(ensure_iterable(retval))\n\n # Determine the new state type\n if states.all_completed():\n new_state_type = StateType.COMPLETED\n elif states.any_cancelled():\n new_state_type = StateType.CANCELLED\n elif states.any_paused():\n new_state_type = StateType.PAUSED\n else:\n new_state_type = StateType.FAILED\n\n # Generate a nice message for the aggregate\n if states.all_completed():\n message = \"All states completed.\"\n elif states.any_cancelled():\n message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n elif states.any_paused():\n message = f\"{states.paused_count}/{states.total_count} states paused.\"\n elif states.any_failed():\n message = f\"{states.fail_count}/{states.total_count} states failed.\"\n elif not states.all_final():\n message = (\n f\"{states.not_final_count}/{states.total_count} states are not final.\"\n )\n else:\n message = \"Given states: \" + states.counts_message()\n\n # TODO: We may actually want to set the data to a `StateGroup` object and just\n # allow it to be unpacked into a tuple and such so users can interact with\n # it\n return State(\n type=new_state_type,\n message=message,\n data=await result_factory.create_result(retval),\n )\n\n # Generators aren't portable, implicitly convert them to a list.\n if isinstance(retval, GeneratorType):\n data = list(retval)\n else:\n data = retval\n\n # Otherwise, they just gave data and this is a completed retval\n return Completed(data=await result_factory.create_result(data))\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners
","text":"Interface and implementations of various task runners.
Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.
Example>>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n... print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n... print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n... for name in names:\n... say_hello(name)\n... say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
Switching to a DaskTaskRunner
:
>>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n
For usage details, see the Task Runners documentation.
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner
","text":"Source code in prefect/task_runners.py
class BaseTaskRunner(metaclass=abc.ABCMeta):\n def __init__(self) -> None:\n self.logger = get_logger(f\"task_runner.{self.name}\")\n self._started: bool = False\n\n @property\n @abc.abstractmethod\n def concurrency_type(self) -> TaskConcurrencyType:\n pass # noqa\n\n @property\n def name(self):\n return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n def duplicate(self):\n \"\"\"\n Return a new task runner instance with the same options.\n \"\"\"\n # The base class returns `NotImplemented` to indicate that this is not yet\n # implemented by a given task runner.\n return NotImplemented\n\n def __eq__(self, other: object) -> bool:\n \"\"\"\n Returns true if the task runners use the same options.\n \"\"\"\n if type(other) == type(self) and (\n # Compare public attributes for naive equality check\n # Subclasses should implement this method with a check init option equality\n {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n ):\n return True\n else:\n return NotImplemented\n\n @abc.abstractmethod\n async def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n ) -> None:\n \"\"\"\n Submit a call for execution and return a `PrefectFuture` that can be used to\n get the call result.\n\n Args:\n task_run: The task run being submitted.\n task_key: A unique key for this orchestration run of the task. Can be used\n for caching.\n call: The function to be executed\n run_kwargs: A dict of keyword arguments to pass to `call`\n\n Returns:\n A future representing the result of `call` execution\n \"\"\"\n raise NotImplementedError()\n\n @abc.abstractmethod\n async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n \"\"\"\n Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n If it is not finished after the timeout expires, `None` should be returned.\n\n Implementers should be careful to ensure that this function never returns or\n raises an exception.\n \"\"\"\n raise NotImplementedError()\n\n @asynccontextmanager\n async def start(\n self: T,\n ) -> AsyncIterator[T]:\n \"\"\"\n Start the task runner, preparing any resources necessary for task submission.\n\n Children should implement `_start` to prepare and clean up resources.\n\n Yields:\n The prepared task runner\n \"\"\"\n if self._started:\n raise RuntimeError(\"The task runner is already started!\")\n\n async with AsyncExitStack() as exit_stack:\n self.logger.debug(\"Starting task runner...\")\n try:\n await self._start(exit_stack)\n self._started = True\n yield self\n finally:\n self.logger.debug(\"Shutting down task runner...\")\n self._started = False\n\n async def _start(self, exit_stack: AsyncExitStack) -> None:\n \"\"\"\n Create any resources required for this task runner to submit work.\n\n Cleanup of resources should be submitted to the `exit_stack`.\n \"\"\"\n pass # noqa\n\n def __str__(self) -> str:\n return type(self).__name__\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate
","text":"Return a new task runner instance with the same options.
Source code inprefect/task_runners.py
def duplicate(self):\n \"\"\"\n Return a new task runner instance with the same options.\n \"\"\"\n # The base class returns `NotImplemented` to indicate that this is not yet\n # implemented by a given task runner.\n return NotImplemented\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start
async
","text":"Start the task runner, preparing any resources necessary for task submission.
Children should implement _start
to prepare and clean up resources.
Yields:
Type DescriptionAsyncIterator[T]
The prepared task runner
Source code inprefect/task_runners.py
@asynccontextmanager\nasync def start(\n self: T,\n) -> AsyncIterator[T]:\n \"\"\"\n Start the task runner, preparing any resources necessary for task submission.\n\n Children should implement `_start` to prepare and clean up resources.\n\n Yields:\n The prepared task runner\n \"\"\"\n if self._started:\n raise RuntimeError(\"The task runner is already started!\")\n\n async with AsyncExitStack() as exit_stack:\n self.logger.debug(\"Starting task runner...\")\n try:\n await self._start(exit_stack)\n self._started = True\n yield self\n finally:\n self.logger.debug(\"Shutting down task runner...\")\n self._started = False\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit
abstractmethod
async
","text":"Submit a call for execution and return a PrefectFuture
that can be used to get the call result.
Parameters:
Name Type Description Defaulttask_run
The task run being submitted.
requiredtask_key
A unique key for this orchestration run of the task. Can be used for caching.
requiredcall
Callable[..., Awaitable[State[R]]]
The function to be executed
requiredrun_kwargs
A dict of keyword arguments to pass to call
Returns:
Type DescriptionNone
A future representing the result of call
execution
prefect/task_runners.py
@abc.abstractmethod\nasync def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n) -> None:\n \"\"\"\n Submit a call for execution and return a `PrefectFuture` that can be used to\n get the call result.\n\n Args:\n task_run: The task run being submitted.\n task_key: A unique key for this orchestration run of the task. Can be used\n for caching.\n call: The function to be executed\n run_kwargs: A dict of keyword arguments to pass to `call`\n\n Returns:\n A future representing the result of `call` execution\n \"\"\"\n raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait
abstractmethod
async
","text":"Given a PrefectFuture
, wait for its return state up to timeout
seconds. If it is not finished after the timeout expires, None
should be returned.
Implementers should be careful to ensure that this function never returns or raises an exception.
Source code inprefect/task_runners.py
@abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n \"\"\"\n Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n If it is not finished after the timeout expires, `None` should be returned.\n\n Implementers should be careful to ensure that this function never returns or\n raises an exception.\n \"\"\"\n raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner
","text":" Bases: BaseTaskRunner
A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio
.
Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>> ...\n
Source code in prefect/task_runners.py
class ConcurrentTaskRunner(BaseTaskRunner):\n \"\"\"\n A concurrent task runner that allows tasks to switch when blocking on IO.\n Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n Example:\n ```\n Using a thread for concurrency:\n >>> from prefect import flow\n >>> from prefect.task_runners import ConcurrentTaskRunner\n >>> @flow(task_runner=ConcurrentTaskRunner)\n >>> def my_flow():\n >>> ...\n ```\n \"\"\"\n\n def __init__(self):\n # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n # Runtime attributes\n self._task_group: anyio.abc.TaskGroup = None\n self._result_events: Dict[UUID, Event] = {}\n self._results: Dict[UUID, Any] = {}\n self._keys: Set[UUID] = set()\n\n super().__init__()\n\n @property\n def concurrency_type(self) -> TaskConcurrencyType:\n return TaskConcurrencyType.CONCURRENT\n\n def duplicate(self):\n return type(self)()\n\n async def submit(\n self,\n key: UUID,\n call: Callable[[], Awaitable[State[R]]],\n ) -> None:\n if not self._started:\n raise RuntimeError(\n \"The task runner must be started before submitting work.\"\n )\n\n if not self._task_group:\n raise RuntimeError(\n \"The concurrent task runner cannot be used to submit work after \"\n \"serialization.\"\n )\n\n # Create an event to set on completion\n self._result_events[key] = Event()\n\n # Rely on the event loop for concurrency\n self._task_group.start_soon(self._run_and_store_result, key, call)\n\n async def wait(\n self,\n key: UUID,\n timeout: float = None,\n ) -> Optional[State]:\n if not self._task_group:\n raise RuntimeError(\n \"The concurrent task runner cannot be used to wait for work after \"\n \"serialization.\"\n )\n\n return await self._get_run_result(key, timeout)\n\n async def _run_and_store_result(\n self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n ):\n \"\"\"\n Simple utility to store the orchestration result in memory on completion\n\n Since this run is occurring on the main thread, we capture exceptions to prevent\n task crashes from crashing the flow run.\n \"\"\"\n try:\n result = await call()\n except BaseException as exc:\n result = await exception_to_crashed_state(exc)\n\n self._results[key] = result\n self._result_events[key].set()\n\n async def _get_run_result(\n self, key: UUID, timeout: float = None\n ) -> Optional[State]:\n \"\"\"\n Block until the run result has been populated.\n \"\"\"\n result = None # retval on timeout\n\n # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n # stdlib behavior where the wrapped future is cancelled if the parent future is\n # cancelled (as it would be during a timeout here)\n with anyio.move_on_after(timeout):\n await self._result_events[key].wait()\n result = self._results[key]\n\n return result # timeout reached\n\n async def _start(self, exit_stack: AsyncExitStack):\n \"\"\"\n Start the process pool\n \"\"\"\n self._task_group = await exit_stack.enter_async_context(\n anyio.create_task_group()\n )\n\n def __getstate__(self):\n \"\"\"\n Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n \"\"\"\n data = self.__dict__.copy()\n data.update({k: None for k in {\"_task_group\"}})\n return data\n\n def __setstate__(self, data: dict):\n \"\"\"\n When deserialized, we will no longer have a reference to the task group.\n \"\"\"\n self.__dict__.update(data)\n self._task_group = None\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner
","text":" Bases: BaseTaskRunner
A simple task runner that executes calls as they are submitted.
If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group
or asyncio.gather
.
prefect/task_runners.py
class SequentialTaskRunner(BaseTaskRunner):\n \"\"\"\n A simple task runner that executes calls as they are submitted.\n\n If writing synchronous tasks, this runner will always execute tasks sequentially.\n If writing async tasks, this runner will execute tasks sequentially unless grouped\n using `anyio.create_task_group` or `asyncio.gather`.\n \"\"\"\n\n def __init__(self) -> None:\n super().__init__()\n self._results: Dict[str, State] = {}\n\n @property\n def concurrency_type(self) -> TaskConcurrencyType:\n return TaskConcurrencyType.SEQUENTIAL\n\n def duplicate(self):\n return type(self)()\n\n async def submit(\n self,\n key: UUID,\n call: Callable[..., Awaitable[State[R]]],\n ) -> None:\n # Run the function immediately and store the result in memory\n try:\n result = await call()\n except BaseException as exc:\n result = await exception_to_crashed_state(exc)\n\n self._results[key] = result\n\n async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n return self._results[key]\n
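Usage mirrors the example at the top of this module; a minimal sketch:
>>> from prefect import flow\n>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def my_flow():\n...     ...\n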
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks
","text":"Module containing the base workflow task class and decorator - for most use cases, using the @task
decorator is preferred.
Task
","text":" Bases: Generic[P, R]
A Prefect task definition.
Note
We recommend using the @task
decorator for most use-cases.
Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.
To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
Parameters:
Name Type Description Default
fn
Callable[P, R]
The function defining the task.
required
name
str
An optional name for the task; if not provided, the name will be inferred from the given function.
None
description
str
An optional string description for the task.
None
tags
Iterable[str]
An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags
context at task runtime.
None
version
str
An optional string specifying the version of this task definition.
None
cache_key_fn
Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]
An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
None
cache_expiration
timedelta
An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
None
task_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
None
retries
Optional[int]
An optional number of times to retry on task run failure.
None
retry_delay_seconds
Optional[Union[float, int, List[float], Callable[[int], List[float]]]]
Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries
is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
None
retry_jitter_factor
Optional[float]
An optional factor by which a retry delay can be jittered in order to avoid a \"thundering herd\".
None
persist_result
Optional[bool]
An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None
, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
None
result_storage
Optional[ResultStorage]
An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
None
result_storage_key
Optional[str]
An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
None
result_serializer
Optional[ResultSerializer]
An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
None
timeout_seconds
Union[int, float]
An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
None
log_prints
Optional[bool]
If set, print
statements in the task will be redirected to the Prefect logger for the task run. Defaults to None
, which indicates that the value from the flow should be used.
False
refresh_cache
Optional[bool]
If set, cached results for the cache key are not used. Defaults to None
, which indicates that a cached result from a previous execution with matching cache key is used.
None
on_failure
Optional[List[Callable[[Task, TaskRun, State], None]]]
An optional list of callables to run when the task enters a failed state.
None
on_completion
Optional[List[Callable[[Task, TaskRun, State], None]]]
An optional list of callables to run when the task enters a completed state.
None
retry_condition_fn
Optional[Callable[[Task, TaskRun, State], bool]]
An optional callable run when a task run returns a Failed state. Should return True
if the task should continue to its retry policy (e.g. retries=3
), and False
if the task should end as failed. Defaults to None
, indicating the task should always continue to its retry policy.
None
viz_return_value
Optional[Any]
An optional value to return when the task dependency tree is visualized.
None
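To make the options above concrete, here is a sketch that combines several of them via the @task decorator, which mirrors these parameters (the task body and names are illustrative; task_input_hash is Prefect's built-in input-based cache key function):

from datetime import timedelta

from prefect import task
from prefect.tasks import task_input_hash

@task(
    name="transform",
    retries=2,
    retry_delay_seconds=[1, 10],          # one delay per retry attempt
    retry_jitter_factor=0.5,              # jitter the delays to avoid a thundering herd
    cache_key_fn=task_input_hash,         # reuse results for identical inputs
    cache_expiration=timedelta(hours=1),  # cached states expire after an hour
    task_run_name="transform-{x}",        # templated from the task's keyword arguments
)
def transform(x: int) -> int:
    return x + 1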
Source code in prefect/tasks.py
@PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n \"\"\"\n A Prefect task definition.\n\n !!! note\n We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n creates a new task run.\n\n To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n \"Returns\" respectively.\n\n Args:\n fn: The function defining the task.\n name: An optional name for the task; if not provided, the name will be inferred\n from the given function.\n description: An optional string description for the task.\n tags: An optional set of tags to be associated with runs of this task. These\n tags are combined with any tags defined by a `prefect.tags` context at\n task runtime.\n version: An optional string specifying the version of this task definition\n cache_key_fn: An optional callable that, given the task run context and call\n parameters, generates a string key; if the key matches a previous completed\n state, that state result will be restored instead of running the task again.\n cache_expiration: An optional amount of time indicating how long cached states\n for this task should be restorable; if not provided, cached states will\n never expire.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: An optional number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying the\n task after failure. This is only applicable if `retries` is nonzero. This\n setting can either be a number of seconds, a list of retry delays, or a\n callable that, given the total number of retries, generates a list of retry\n delays. If a number of seconds, that delay will be applied to all retries.\n If a list, each retry will wait for the corresponding delay before retrying.\n When passing a callable or a list, the number of configured retry delays\n cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a retry\n can be jittered in order to avoid a \"thundering herd\".\n persist_result: An optional toggle indicating whether the result of this task\n should be persisted to result storage. Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this task.\n Defaults to the value set in the flow the task is called in.\n result_storage_key: An optional key to store the result in storage at when persisted.\n Defaults to a unique identifier.\n result_serializer: An optional serializer to use to serialize the result of this\n task for persistence. Defaults to the value set in the flow the task is\n called in.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the task. If the task exceeds this runtime, it will be marked as failed.\n log_prints: If set, `print` statements in the task will be redirected to the\n Prefect logger for the task run. 
Defaults to `None`, which indicates\n that the value from the flow should be used.\n refresh_cache: If set, cached results for the cache key are not used.\n Defaults to `None`, which indicates that a cached result from a previous\n execution with matching cache key is used.\n on_failure: An optional list of callables to run when the task enters a failed state.\n on_completion: An optional list of callables to run when the task enters a completed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n should end as failed. Defaults to `None`, indicating the task should always continue\n to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n \"\"\"\n\n # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n # exactly in the @task decorator\n def __init__(\n self,\n fn: Callable[P, R],\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n version: str = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n cache_expiration: datetime.timedelta = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: Optional[int] = None,\n retry_delay_seconds: Optional[\n Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ]\n ] = None,\n retry_jitter_factor: Optional[float] = None,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_serializer: Optional[ResultSerializer] = None,\n result_storage_key: Optional[str] = None,\n cache_result_in_memory: bool = True,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = False,\n refresh_cache: Optional[bool] = None,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n ):\n # Validate if hook passed is list and contains callables\n hook_categories = [on_completion, on_failure]\n hook_names = [\"on_completion\", \"on_failure\"]\n for hooks, hook_name in zip(hook_categories, hook_names):\n if hooks is not None:\n if not hooks:\n raise ValueError(f\"Empty list passed for '{hook_name}'\")\n try:\n hooks = list(hooks)\n except TypeError:\n raise TypeError(\n f\"Expected iterable for '{hook_name}'; got\"\n f\" {type(hooks).__name__} instead. Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n for hook in hooks:\n if not callable(hook):\n raise TypeError(\n f\"Expected callables in '{hook_name}'; got\"\n f\" {type(hook).__name__} instead. 
Please provide a list of\"\n f\" hooks to '{hook_name}':\\n\\n\"\n f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n \" my_flow():\\n\\tpass\"\n )\n\n if not callable(fn):\n raise TypeError(\"'fn' must be callable\")\n\n self.description = description or inspect.getdoc(fn)\n update_wrapper(self, fn)\n self.fn = fn\n self.isasync = inspect.iscoroutinefunction(self.fn)\n\n if not name:\n if not hasattr(self.fn, \"__name__\"):\n self.name = type(self.fn).__name__\n else:\n self.name = self.fn.__name__\n else:\n self.name = name\n\n if task_run_name is not None:\n if not isinstance(task_run_name, str) and not callable(task_run_name):\n raise TypeError(\n \"Expected string or callable for 'task_run_name'; got\"\n f\" {type(task_run_name).__name__} instead.\"\n )\n self.task_run_name = task_run_name\n\n self.version = version\n self.log_prints = log_prints\n\n raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n self.tags = set(tags if tags else [])\n\n if not hasattr(self.fn, \"__qualname__\"):\n self.task_key = to_qualified_name(type(self.fn))\n else:\n if self.fn.__module__ == \"__main__\":\n task_definition_path = inspect.getsourcefile(self.fn)\n self.task_key = hash_objects(\n self.name, os.path.abspath(task_definition_path)\n )\n else:\n self.task_key = to_qualified_name(self.fn)\n\n self.cache_key_fn = cache_key_fn\n self.cache_expiration = cache_expiration\n self.refresh_cache = refresh_cache\n\n # TaskRunPolicy settings\n # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n # validate that the user passes positive numbers here\n\n self.retries = (\n retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n )\n if retry_delay_seconds is None:\n retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n if callable(retry_delay_seconds):\n self.retry_delay_seconds = retry_delay_seconds(retries)\n else:\n self.retry_delay_seconds = retry_delay_seconds\n\n if isinstance(self.retry_delay_seconds, list) and (\n len(self.retry_delay_seconds) > 50\n ):\n raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n if retry_jitter_factor is not None and retry_jitter_factor < 0:\n raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n self.retry_jitter_factor = retry_jitter_factor\n self.persist_result = persist_result\n self.result_storage = result_storage\n self.result_serializer = result_serializer\n self.result_storage_key = result_storage_key\n self.cache_result_in_memory = cache_result_in_memory\n self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n # Warn if this task's `name` conflicts with another task while having a\n # different function. This is to detect the case where two or more tasks\n # share a name or are lambdas, which should result in a warning, and to\n # differentiate it from the case where the task was 'copied' via\n # `with_options`, which should not result in a warning.\n registry = PrefectObjectRegistry.get()\n\n if registry and any(\n other\n for other in registry.get_instances(Task)\n if other.name == self.name and id(other.fn) != id(self.fn)\n ):\n try:\n file = inspect.getsourcefile(self.fn)\n line_number = inspect.getsourcelines(self.fn)[1]\n except TypeError:\n file = \"unknown\"\n line_number = \"unknown\"\n\n warnings.warn(\n f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n \"conflicts with another task. 
Consider specifying a unique `name` \"\n \"parameter in the task definition:\\n\\n \"\n \"`@task(name='my_unique_name', ...)`\"\n )\n self.on_completion = on_completion\n self.on_failure = on_failure\n\n # retry_condition_fn must be a callable or None. If it is neither, raise a TypeError\n if retry_condition_fn is not None and not (callable(retry_condition_fn)):\n raise TypeError(\n \"Expected `retry_condition_fn` to be callable, got\"\n f\" {type(retry_condition_fn).__name__} instead.\"\n )\n\n self.retry_condition_fn = retry_condition_fn\n self.viz_return_value = viz_return_value\n\n def with_options(\n self,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n cache_expiration: datetime.timedelta = None,\n retries: Optional[int] = NotSet,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = NotSet,\n retry_jitter_factor: Optional[float] = NotSet,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n result_storage_key: Optional[str] = NotSet,\n cache_result_in_memory: Optional[bool] = None,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = NotSet,\n refresh_cache: Optional[bool] = NotSet,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n ):\n \"\"\"\n Create a new task from the current object, updating provided options.\n\n Args:\n name: A new name for the task.\n description: A new description for the task.\n tags: A new set of tags for the task. If given, existing tags are ignored,\n not merged.\n cache_key_fn: A new cache key function for the task.\n cache_expiration: A new cache expiration time for the task.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: A new number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying\n the task after failure. This is only applicable if `retries` is nonzero.\n This setting can either be a number of seconds, a list of retry delays,\n or a callable that, given the total number of retries, generates a list\n of retry delays. If a number of seconds, that delay will be applied to\n all retries. If a list, each retry will wait for the corresponding delay\n before retrying. 
When passing a callable or a list, the number of\n configured retry delays cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a\n retry can be jittered in order to avoid a \"thundering herd\".\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n result_storage_key: A new key for the persisted result to be stored at.\n timeout_seconds: A new maximum time for the task to complete in seconds.\n log_prints: A new option for enabling or disabling redirection of `print` statements.\n refresh_cache: A new option for enabling or disabling cache refresh.\n on_completion: A new list of callables to run when the task enters a completed state.\n on_failure: A new list of callables to run when the task enters a failed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state.\n Should return `True` if the task should continue to its retry policy, and `False`\n if the task should end as failed. Defaults to `None`, indicating the task should\n always continue to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A new `Task` instance.\n\n Examples:\n\n Create a new task from an existing task and update the name\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> new_task = my_task.with_options(name=\"My new task\")\n\n Create a new task from an existing task and update the retry settings\n\n >>> from random import randint\n >>>\n >>> @task(retries=1, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n >>>\n >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n Use a task with updated options within a flow\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> @flow\n >>> my_flow():\n >>> new_task = my_task.with_options(name=\"My new task\")\n >>> new_task()\n \"\"\"\n return Task(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n tags=tags or copy(self.tags),\n cache_key_fn=cache_key_fn or self.cache_key_fn,\n cache_expiration=cache_expiration or self.cache_expiration,\n task_run_name=task_run_name,\n retries=retries if retries is not NotSet else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not NotSet\n else self.retry_delay_seconds\n ),\n retry_jitter_factor=(\n retry_jitter_factor\n if retry_jitter_factor is not NotSet\n else self.retry_jitter_factor\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_storage_key=(\n result_storage_key\n if result_storage_key is not NotSet\n else self.result_storage_key\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n refresh_cache=(\n refresh_cache if refresh_cache is not 
NotSet else self.refresh_cache\n ),\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n viz_return_value=viz_return_value or self.viz_return_value,\n )\n\n @overload\n def __call__(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> None:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def __call__(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> T:\n ...\n\n @overload\n def __call__(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def __call__(\n self,\n *args: P.args,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: P.kwargs,\n ):\n \"\"\"\n Run the task and return the result. If `return_state` is True returns\n the result is wrapped in a Prefect State which provides error handling.\n \"\"\"\n from prefect.engine import enter_task_run_engine\n from prefect.task_engine import submit_autonomous_task_run_to_engine\n from prefect.task_runners import SequentialTaskRunner\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return_type = \"state\" if return_state else \"result\"\n\n task_run_tracker = get_task_viz_tracker()\n if task_run_tracker:\n return track_viz_task(\n self.isasync, self.name, parameters, self.viz_return_value\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n from prefect import get_client\n\n return submit_autonomous_task_run_to_engine(\n task=self,\n task_run=None,\n task_runner=SequentialTaskRunner(),\n parameters=parameters,\n return_type=return_type,\n client=get_client(),\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n task_runner=SequentialTaskRunner(),\n return_type=return_type,\n mapped=False,\n )\n\n @overload\n def _run(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[None, Sync]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def _run(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[State[T]]:\n ...\n\n @overload\n def _run(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n def _run(\n self,\n *args: P.args,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: P.kwargs,\n ) -> Union[State, Awaitable[State]]:\n \"\"\"\n Run the task and return the final state.\n \"\"\"\n from prefect.engine import enter_task_run_engine\n from prefect.task_runners import SequentialTaskRunner\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=\"state\",\n task_runner=SequentialTaskRunner(),\n mapped=False,\n )\n\n @overload\n def submit(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[None, Sync]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def submit(\n 
self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[PrefectFuture[T, Async]]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> PrefectFuture[T, Sync]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> State[T]:\n ...\n\n @overload\n def submit(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> TaskRun:\n ...\n\n @overload\n def submit(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[TaskRun]:\n ...\n\n def submit(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n ) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n \"\"\"\n Submit a run of the task to the engine.\n\n If writing an async task, this call must be awaited.\n\n If called from within a flow function,\n\n Will create a new task run in the backing API and submit the task to the flow's\n task runner. This call only blocks execution while the task is being submitted,\n once it is submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n and they are fully resolved on submission.\n\n Args:\n *args: Arguments to run the task with\n return_state: Return the result of the flow run wrapped in a\n Prefect State.\n wait_for: Upstream task futures to wait for before starting the task\n **kwargs: Keyword arguments to run the task with\n\n Returns:\n If `return_state` is False a future allowing asynchronous access to\n the state of the task\n If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n the state of the task\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task():\n >>> return \"hello\"\n\n Run a task in a flow\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit()\n\n Wait for a task to finish\n\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit().wait()\n\n Use the result from a task in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> print(my_task.submit().result())\n >>>\n >>> my_flow()\n hello\n\n Run an async task in an async flow\n\n >>> @task\n >>> async def my_async_task():\n >>> pass\n >>>\n >>> @flow\n >>> async def my_flow():\n >>> await my_async_task.submit()\n\n Run a sync task in an async flow\n\n >>> @flow\n >>> async def my_flow():\n >>> my_task.submit()\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1():\n >>> pass\n >>>\n >>> @task\n >>> def task_2():\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.submit(wait_for=[x])\n\n \"\"\"\n\n from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.submit()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n create_autonomous_task_run_call = create_call(\n 
create_autonomous_task_run, task=self, parameters=parameters\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n else:\n return from_sync.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None, # Use the flow's task runner\n mapped=False,\n )\n\n @overload\n def map(\n self: \"Task[P, NoReturn]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> List[PrefectFuture[None, Sync]]:\n # `NoReturn` matches if a type can't be inferred for the function which stops a\n # sync function from matching the `Coroutine` overload\n ...\n\n @overload\n def map(\n self: \"Task[P, Coroutine[Any, Any, T]]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n ...\n\n @overload\n def map(\n self: \"Task[P, T]\",\n *args: P.args,\n **kwargs: P.kwargs,\n ) -> List[PrefectFuture[T, Sync]]:\n ...\n\n @overload\n def map(\n self: \"Task[P, T]\",\n *args: P.args,\n return_state: Literal[True],\n **kwargs: P.kwargs,\n ) -> List[State[T]]:\n ...\n\n def map(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"\n Submit a mapped run of the task to a worker.\n\n Must be called within a flow function. If writing an async task, this\n call must be awaited.\n\n Must be called with at least one iterable and all iterables must be\n the same length. Any arguments that are not iterable will be treated as\n a static value and each task run will receive the same value.\n\n Will create as many task runs as the length of the iterable(s) in the\n backing API and submit the task runs to the flow's task runner. This\n call blocks if given a future as input while the future is resolved. It\n also blocks while the tasks are being submitted, once they are\n submitted, the flow function will continue executing. 
However, note\n that the `SequentialTaskRunner` does not implement parallel execution\n for sync tasks and they are fully resolved on submission.\n\n Args:\n *args: Iterable and static arguments to run the tasks with\n return_state: Return a list of Prefect States that wrap the results\n of each task run.\n wait_for: Upstream task futures to wait for before starting the\n task\n **kwargs: Keyword iterable arguments to run the task with\n\n Returns:\n A list of futures allowing asynchronous access to the state of the\n tasks\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task(x):\n >>> return x + 1\n\n Create mapped tasks\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.map([1, 2, 3])\n\n Wait for all mapped tasks to finish\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> future.wait()\n >>> # Now all of the mapped tasks have finished\n >>> my_task(10)\n\n Use the result from mapped tasks in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> print(future.result())\n >>> my_flow()\n 2\n 3\n 4\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1(x):\n >>> pass\n >>>\n >>> @task\n >>> def task_2(y):\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.map([1, 2, 3], wait_for=[x])\n\n Use a non-iterable input as a constant across mapped tasks\n >>> @task\n >>> def display(prefix, item):\n >>> print(prefix, item)\n >>>\n >>> @flow\n >>> def my_flow():\n >>> display.map(\"Check it out: \", [1, 2, 3])\n >>>\n >>> my_flow()\n Check it out: 1\n Check it out: 2\n Check it out: 3\n\n Use `unmapped` to treat an iterable argument as a constant\n >>> from prefect import unmapped\n >>>\n >>> @task\n >>> def add_n_to_items(items, n):\n >>> return [item + n for item in items]\n >>>\n >>> @flow\n >>> def my_flow():\n >>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n >>>\n >>> my_flow()\n [[11, 21], [12, 22], [13, 23]]\n \"\"\"\n\n from prefect.engine import begin_task_map, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict; do not apply defaults\n # since they should not be mapped over\n parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.map()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n map_call = create_call(\n begin_task_map,\n task=self,\n parameters=parameters,\n flow_run_context=None,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n autonomous=True,\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(map_call)\n else:\n return from_sync.wait_for_call_in_loop_thread(map_call)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n mapped=True,\n )\n\n def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n \"\"\"Serve the task using the provided task runner. 
This method is used to\n establish a websocket connection with the Prefect server and listen for\n submitted task runs to execute.\n\n Args:\n task_runner: The task runner to use for serving the task. If not provided,\n the default ConcurrentTaskRunner will be used.\n\n Examples:\n Serve a task using the default task runner\n >>> @task\n >>> def my_task():\n >>> return 1\n\n >>> my_task.serve()\n \"\"\"\n\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n raise ValueError(\n \"Task's `serve` method is an experimental feature and must be enabled with \"\n \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n )\n\n from prefect.task_server import serve\n\n serve(self, task_runner=task_runner)\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map
","text":"Submit a mapped run of the task to a worker.
Must be called within a flow function. If writing an async task, this call must be awaited.
Must be called with at least one iterable, and all iterables must be the same length. Any arguments that are not iterable will be treated as static values, and each task run will receive the same value.
Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks while a future given as input is resolved and while the task runs are being submitted; once they are submitted, the flow function will continue executing. Note, however, that the SequentialTaskRunner does not implement parallel execution for sync tasks; they are fully resolved on submission.
Parameters:
Name Type Description Default
*args
Any
Iterable and static arguments to run the tasks with
()
return_state
bool
Return a list of Prefect States that wrap the results of each task run.
False
wait_for
Optional[Iterable[PrefectFuture]]
Upstream task futures to wait for before starting the task
None
**kwargs
Any
Keyword iterable arguments to run the task with
{}
Returns:
Type Description
Any
A list of futures allowing asynchronous access to the state of the tasks
Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>> return x + 1\n\nCreate mapped tasks\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>> my_task.map([1, 2, 3])\n\nWait for all mapped tasks to finish\n\n>>> @flow\n>>> def my_flow():\n>>> futures = my_task.map([1, 2, 3])\n>>> for future in futures:\n>>> future.wait()\n>>> # Now all of the mapped tasks have finished\n>>> my_task(10)\n\nUse the result from mapped tasks in a flow\n\n>>> @flow\n>>> def my_flow():\n>>> futures = my_task.map([1, 2, 3])\n>>> for future in futures:\n>>> print(future.result())\n>>> my_flow()\n2\n3\n4\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1(x):\n>>> pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>> pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>> x = task_1.submit()\n>>>\n>>> # task 2 will wait for task_1 to complete\n>>> y = task_2.map([1, 2, 3], wait_for=[x])\n\nUse a non-iterable input as a constant across mapped tasks\n>>> @task\n>>> def display(prefix, item):\n>>> print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>> display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n\nUse `unmapped` to treat an iterable argument as a constant\n>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>> return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
Source code in prefect/tasks.py
def map(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n) -> Any:\n \"\"\"\n Submit a mapped run of the task to a worker.\n\n Must be called within a flow function. If writing an async task, this\n call must be awaited.\n\n Must be called with at least one iterable and all iterables must be\n the same length. Any arguments that are not iterable will be treated as\n a static value and each task run will receive the same value.\n\n Will create as many task runs as the length of the iterable(s) in the\n backing API and submit the task runs to the flow's task runner. This\n call blocks if given a future as input while the future is resolved. It\n also blocks while the tasks are being submitted, once they are\n submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution\n for sync tasks and they are fully resolved on submission.\n\n Args:\n *args: Iterable and static arguments to run the tasks with\n return_state: Return a list of Prefect States that wrap the results\n of each task run.\n wait_for: Upstream task futures to wait for before starting the\n task\n **kwargs: Keyword iterable arguments to run the task with\n\n Returns:\n A list of futures allowing asynchronous access to the state of the\n tasks\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task(x):\n >>> return x + 1\n\n Create mapped tasks\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.map([1, 2, 3])\n\n Wait for all mapped tasks to finish\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> future.wait()\n >>> # Now all of the mapped tasks have finished\n >>> my_task(10)\n\n Use the result from mapped tasks in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> futures = my_task.map([1, 2, 3])\n >>> for future in futures:\n >>> print(future.result())\n >>> my_flow()\n 2\n 3\n 4\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1(x):\n >>> pass\n >>>\n >>> @task\n >>> def task_2(y):\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.map([1, 2, 3], wait_for=[x])\n\n Use a non-iterable input as a constant across mapped tasks\n >>> @task\n >>> def display(prefix, item):\n >>> print(prefix, item)\n >>>\n >>> @flow\n >>> def my_flow():\n >>> display.map(\"Check it out: \", [1, 2, 3])\n >>>\n >>> my_flow()\n Check it out: 1\n Check it out: 2\n Check it out: 3\n\n Use `unmapped` to treat an iterable argument as a constant\n >>> from prefect import unmapped\n >>>\n >>> @task\n >>> def add_n_to_items(items, n):\n >>> return [item + n for item in items]\n >>>\n >>> @flow\n >>> def my_flow():\n >>> return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n >>>\n >>> my_flow()\n [[11, 21], [12, 22], [13, 23]]\n \"\"\"\n\n from prefect.engine import begin_task_map, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict; do not apply defaults\n # since they should not be mapped over\n parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.map()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n 
PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n map_call = create_call(\n begin_task_map,\n task=self,\n parameters=parameters,\n flow_run_context=None,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n autonomous=True,\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(map_call)\n else:\n return from_sync.wait_for_call_in_loop_thread(map_call)\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None,\n mapped=True,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.serve","title":"serve
","text":"Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute.
Parameters:
Name Type Description Default
task_runner
Optional[BaseTaskRunner]
The task runner to use for serving the task. If not provided, the default ConcurrentTaskRunner will be used.
None
Examples:
Serve a task using the default task runner
>>> @task\n>>> def my_task():\n>>> return 1\n
>>> my_task.serve()\n
Source code in prefect/tasks.py
def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n \"\"\"Serve the task using the provided task runner. This method is used to\n establish a websocket connection with the Prefect server and listen for\n submitted task runs to execute.\n\n Args:\n task_runner: The task runner to use for serving the task. If not provided,\n the default ConcurrentTaskRunner will be used.\n\n Examples:\n Serve a task using the default task runner\n >>> @task\n >>> def my_task():\n >>> return 1\n\n >>> my_task.serve()\n \"\"\"\n\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n raise ValueError(\n \"Task's `serve` method is an experimental feature and must be enabled with \"\n \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n )\n\n from prefect.task_server import serve\n\n serve(self, task_runner=task_runner)\n
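A slightly fuller sketch of serving a task (the task itself is illustrative; the experimental setting must be enabled as described above):

from prefect import task

@task
def double(x: int) -> int:
    return x * 2

if __name__ == "__main__":
    # Blocks the process: listens over a websocket for task runs submitted
    # to the Prefect server and executes them as they arrive.
    double.serve()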
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit
","text":"Submit a run of the task to the engine.
If writing an async task, this call must be awaited.
When called from within a flow function, this will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. Note, however, that the SequentialTaskRunner does not implement parallel execution for sync tasks; they are fully resolved on submission.
Parameters:
Name Type Description Default
*args
Any
Arguments to run the task with
()
return_state
bool
Return the result of the flow run wrapped in a Prefect State.
False
wait_for
Optional[Iterable[PrefectFuture]]
Upstream task futures to wait for before starting the task
None
**kwargs
Any
Keyword arguments to run the task with
{}
Returns:
Type Description
Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]
If return_state is False, a future allowing asynchronous access to the state of the task
Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]
If return_state is True, a future wrapped in a Prefect State allowing asynchronous access to the state of the task
Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>> return \"hello\"\n\nRun a task in a flow\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>> my_task.submit()\n\nWait for a task to finish\n\n>>> @flow\n>>> def my_flow():\n>>> my_task.submit().wait()\n\nUse the result from a task in a flow\n\n>>> @flow\n>>> def my_flow():\n>>> print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n\nRun an async task in an async flow\n\n>>> @task\n>>> async def my_async_task():\n>>> pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>> await my_async_task.submit()\n\nRun a sync task in an async flow\n\n>>> @flow\n>>> async def my_flow():\n>>> my_task.submit()\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1():\n>>> pass\n>>>\n>>> @task\n>>> def task_2():\n>>> pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>> x = task_1.submit()\n>>>\n>>> # task 2 will wait for task_1 to complete\n>>> y = task_2.submit(wait_for=[x])\n
Source code in prefect/tasks.py
def submit(\n self,\n *args: Any,\n return_state: bool = False,\n wait_for: Optional[Iterable[PrefectFuture]] = None,\n **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n \"\"\"\n Submit a run of the task to the engine.\n\n If writing an async task, this call must be awaited.\n\n If called from within a flow function,\n\n Will create a new task run in the backing API and submit the task to the flow's\n task runner. This call only blocks execution while the task is being submitted,\n once it is submitted, the flow function will continue executing. However, note\n that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n and they are fully resolved on submission.\n\n Args:\n *args: Arguments to run the task with\n return_state: Return the result of the flow run wrapped in a\n Prefect State.\n wait_for: Upstream task futures to wait for before starting the task\n **kwargs: Keyword arguments to run the task with\n\n Returns:\n If `return_state` is False a future allowing asynchronous access to\n the state of the task\n If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n the state of the task\n\n Examples:\n\n Define a task\n\n >>> from prefect import task\n >>> @task\n >>> def my_task():\n >>> return \"hello\"\n\n Run a task in a flow\n\n >>> from prefect import flow\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit()\n\n Wait for a task to finish\n\n >>> @flow\n >>> def my_flow():\n >>> my_task.submit().wait()\n\n Use the result from a task in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> print(my_task.submit().result())\n >>>\n >>> my_flow()\n hello\n\n Run an async task in an async flow\n\n >>> @task\n >>> async def my_async_task():\n >>> pass\n >>>\n >>> @flow\n >>> async def my_flow():\n >>> await my_async_task.submit()\n\n Run a sync task in an async flow\n\n >>> @flow\n >>> async def my_flow():\n >>> my_task.submit()\n\n Enforce ordering between tasks that do not exchange data\n >>> @task\n >>> def task_1():\n >>> pass\n >>>\n >>> @task\n >>> def task_2():\n >>> pass\n >>>\n >>> @flow\n >>> def my_flow():\n >>> x = task_1.submit()\n >>>\n >>> # task 2 will wait for task_1 to complete\n >>> y = task_2.submit(wait_for=[x])\n\n \"\"\"\n\n from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n # Convert the call args/kwargs to a parameter dict\n parameters = get_call_parameters(self.fn, args, kwargs)\n return_type = \"state\" if return_state else \"future\"\n\n task_viz_tracker = get_task_viz_tracker()\n if task_viz_tracker:\n raise VisualizationUnsupportedError(\n \"`task.submit()` is not currently supported by `flow.visualize()`\"\n )\n\n if (\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n and not FlowRunContext.get()\n ):\n create_autonomous_task_run_call = create_call(\n create_autonomous_task_run, task=self, parameters=parameters\n )\n if self.isasync:\n return from_async.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n else:\n return from_sync.wait_for_call_in_loop_thread(\n create_autonomous_task_run_call\n )\n\n return enter_task_run_engine(\n self,\n parameters=parameters,\n wait_for=wait_for,\n return_type=return_type,\n task_runner=None, # Use the flow's task runner\n mapped=False,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options
","text":"Create a new task from the current object, updating provided options.
Parameters:
Name Type Description Default
name
str
A new name for the task.
None
description
str
A new description for the task.
None
tags
Iterable[str]
A new set of tags for the task. If given, existing tags are ignored, not merged.
None
cache_key_fn
Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]
A new cache key function for the task.
None
cache_expiration
timedelta
A new cache expiration time for the task.
None
task_run_name
Optional[Union[Callable[[], str], str]]
An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
None
retries
Optional[int]
A new number of times to retry on task run failure.
NotSet
retry_delay_seconds
Union[float, int, List[float], Callable[[int], List[float]]]
Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries
is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
NotSet
retry_jitter_factor
Optional[float]
An optional factor by which a retry delay can be jittered in order to avoid a \"thundering herd\".
NotSet
persist_result
Optional[bool]
A new option for enabling or disabling result persistence.
NotSet
result_storage
Optional[ResultStorage]
A new storage type to use for results.
NotSet
result_serializer
Optional[ResultSerializer]
A new serializer to use for results.
NotSet
result_storage_key
Optional[str]
A new key for the persisted result to be stored at.
NotSet
timeout_seconds
Union[int, float]
A new maximum time for the task to complete in seconds.
None
log_prints
Optional[bool]
A new option for enabling or disabling redirection of print
statements.
NotSet
refresh_cache
Optional[bool]
A new option for enabling or disabling cache refresh.
NotSet
on_completion
Optional[List[Callable[[Task, TaskRun, State], None]]]
A new list of callables to run when the task enters a completed state.
None
on_failure
Optional[List[Callable[[Task, TaskRun, State], None]]]
A new list of callables to run when the task enters a failed state.
None
retry_condition_fn
Optional[Callable[[Task, TaskRun, State], bool]]
An optional callable run when a task run returns a Failed state. Should return True
if the task should continue to its retry policy, and False
if the task should end as failed. Defaults to None
, indicating the task should always continue to its retry policy. A sketch of such a callable follows the examples below.
None
viz_return_value
Optional[Any]
An optional value to return when the task dependency tree is visualized.
None
Returns:
Type Description
A new Task instance.
Create a new task from an existing task and update the name\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>> return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n\nCreate a new task from an existing task and update the retry settings\n\n>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>> x = randint(0, 5)\n>>> if x >= 3: # Make a task that fails sometimes\n>>> raise ValueError(\"Retry me please!\")\n>>> return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\nUse a task with updated options within a flow\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>> return 1\n>>>\n>>> @flow\n>>> def my_flow():\n>>> new_task = my_task.with_options(name=\"My new task\")\n>>> new_task()\n
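As referenced in the parameter table above, a sketch of a retry_condition_fn (assuming httpx for the illustrative API call): the callable inspects the failed state and asks for a retry only when the failure looks transient.

import httpx

from prefect import task

def retry_on_server_error(task, task_run, state) -> bool:
    """Continue to the retry policy only for HTTP 5xx failures."""
    try:
        state.result()  # re-raises the exception the task failed with
    except httpx.HTTPStatusError as exc:
        return exc.response.status_code >= 500
    except Exception:
        return False
    return False

@task(retries=3, retry_condition_fn=retry_on_server_error)
def call_api(url: str) -> dict:
    response = httpx.get(url)
    response.raise_for_status()
    return response.json()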
Source code in prefect/tasks.py
def with_options(\n self,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n cache_key_fn: Callable[\n [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n ] = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n cache_expiration: datetime.timedelta = None,\n retries: Optional[int] = NotSet,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = NotSet,\n retry_jitter_factor: Optional[float] = NotSet,\n persist_result: Optional[bool] = NotSet,\n result_storage: Optional[ResultStorage] = NotSet,\n result_serializer: Optional[ResultSerializer] = NotSet,\n result_storage_key: Optional[str] = NotSet,\n cache_result_in_memory: Optional[bool] = None,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = NotSet,\n refresh_cache: Optional[bool] = NotSet,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Optional[Any] = None,\n):\n \"\"\"\n Create a new task from the current object, updating provided options.\n\n Args:\n name: A new name for the task.\n description: A new description for the task.\n tags: A new set of tags for the task. If given, existing tags are ignored,\n not merged.\n cache_key_fn: A new cache key function for the task.\n cache_expiration: A new cache expiration time for the task.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: A new number of times to retry on task run failure.\n retry_delay_seconds: Optionally configures how long to wait before retrying\n the task after failure. This is only applicable if `retries` is nonzero.\n This setting can either be a number of seconds, a list of retry delays,\n or a callable that, given the total number of retries, generates a list\n of retry delays. If a number of seconds, that delay will be applied to\n all retries. If a list, each retry will wait for the corresponding delay\n before retrying. When passing a callable or a list, the number of\n configured retry delays cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a\n retry can be jittered in order to avoid a \"thundering herd\".\n persist_result: A new option for enabling or disabling result persistence.\n result_storage: A new storage type to use for results.\n result_serializer: A new serializer to use for results.\n result_storage_key: A new key for the persisted result to be stored at.\n timeout_seconds: A new maximum time for the task to complete in seconds.\n log_prints: A new option for enabling or disabling redirection of `print` statements.\n refresh_cache: A new option for enabling or disabling cache refresh.\n on_completion: A new list of callables to run when the task enters a completed state.\n on_failure: A new list of callables to run when the task enters a failed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state.\n Should return `True` if the task should continue to its retry policy, and `False`\n if the task should end as failed. 
Defaults to `None`, indicating the task should\n always continue to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A new `Task` instance.\n\n Examples:\n\n Create a new task from an existing task and update the name\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> new_task = my_task.with_options(name=\"My new task\")\n\n Create a new task from an existing task and update the retry settings\n\n >>> from random import randint\n >>>\n >>> @task(retries=1, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n >>>\n >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n Use a task with updated options within a flow\n\n >>> @task(name=\"My task\")\n >>> def my_task():\n >>> return 1\n >>>\n >>> @flow\n >>> my_flow():\n >>> new_task = my_task.with_options(name=\"My new task\")\n >>> new_task()\n \"\"\"\n return Task(\n fn=self.fn,\n name=name or self.name,\n description=description or self.description,\n tags=tags or copy(self.tags),\n cache_key_fn=cache_key_fn or self.cache_key_fn,\n cache_expiration=cache_expiration or self.cache_expiration,\n task_run_name=task_run_name,\n retries=retries if retries is not NotSet else self.retries,\n retry_delay_seconds=(\n retry_delay_seconds\n if retry_delay_seconds is not NotSet\n else self.retry_delay_seconds\n ),\n retry_jitter_factor=(\n retry_jitter_factor\n if retry_jitter_factor is not NotSet\n else self.retry_jitter_factor\n ),\n persist_result=(\n persist_result if persist_result is not NotSet else self.persist_result\n ),\n result_storage=(\n result_storage if result_storage is not NotSet else self.result_storage\n ),\n result_storage_key=(\n result_storage_key\n if result_storage_key is not NotSet\n else self.result_storage_key\n ),\n result_serializer=(\n result_serializer\n if result_serializer is not NotSet\n else self.result_serializer\n ),\n cache_result_in_memory=(\n cache_result_in_memory\n if cache_result_in_memory is not None\n else self.cache_result_in_memory\n ),\n timeout_seconds=(\n timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n ),\n log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n refresh_cache=(\n refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n ),\n on_completion=on_completion or self.on_completion,\n on_failure=on_failure or self.on_failure,\n retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n viz_return_value=viz_return_value or self.viz_return_value,\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff
","text":"A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.
Parameters:

- backoff_factor (float, required): the base delay for the first retry; subsequent retries will increase the delay time by powers of 2.

Returns:

- Callable[[int], List[float]]: a callable that can be passed to the task constructor.

Source code in prefect/tasks.py
def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n \"\"\"\n A task retry backoff utility that configures exponential backoff for task retries.\n The exponential backoff design matches the urllib3 implementation.\n\n Arguments:\n backoff_factor: the base delay for the first retry, subsequent retries will\n increase the delay time by powers of 2.\n\n Returns:\n a callable that can be passed to the task constructor\n \"\"\"\n\n def retry_backoff_callable(retries: int) -> List[float]:\n # no more than 50 retry delays can be configured on a task\n retries = min(retries, 50)\n\n return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n return retry_backoff_callable\n
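As a usage sketch (the backoff_factor of 10 and jitter factor of 0.5 are arbitrary choices), a task configured this way waits roughly 10, 20, and 40 seconds between its three retries:

from prefect import task
from prefect.tasks import exponential_backoff

@task(
    retries=3,
    # delays of 10, 20, and 40 seconds for the three retries
    retry_delay_seconds=exponential_backoff(backoff_factor=10),
    # jitter each delay to help avoid a "thundering herd"
    retry_jitter_factor=0.5,
)
def flaky_task():
    ...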
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task
","text":"Decorator to designate a function as a task in a Prefect workflow.
This decorator may be used for asynchronous or synchronous functions.
Parameters:

- name (str, default None): An optional name for the task; if not provided, the name will be inferred from the given function.
- description (str, default None): An optional string description for the task.
- tags (Iterable[str], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
- version (str, default None): An optional string specifying the version of this task definition.
- cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
- cache_expiration (timedelta, default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
- task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
- retries (int, default None): An optional number of times to retry on task run failure.
- retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
- retry_jitter_factor (Optional[float], default None): An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd".
- persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
- result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
- result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
- result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
- timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
- log_prints (Optional[bool], default None): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.
- refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.
- on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.
- on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.
- retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]], default None): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy (e.g. retries=3), and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy.
- viz_return_value (Any, default None): An optional value to return when the task dependency tree is visualized.

Returns:

- A callable Task object which, when called, will submit the task for execution.
Examples:
Define a simple task
>>> @task\n>>> def add(x, y):\n>>> return x + y\n
Define an async task
>>> @task\n>>> async def add(x, y):\n>>> return x + y\n
Define a task with tags and a description
>>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n>>> def my_task():\n>>> pass\n
Define a task with a custom name
>>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>> pass\n
Define a task that retries 3 times with a 5 second delay between attempts
>>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>> x = randint(0, 5)\n>>> if x >= 3: # Make a task that fails sometimes\n>>> raise ValueError(\"Retry me please!\")\n>>> return x\n
Define a task that is cached for a day based on its inputs
>>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>> return \"hello\"\n
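Define a task that retries only on a particular exception type (a sketch built on the retry_condition_fn parameter documented above; retry_on_value_error is a hypothetical helper, not part of Prefect):

>>> def retry_on_value_error(task, task_run, state):
>>>     # Continue to the retry policy only when the failure was a ValueError
>>>     try:
>>>         state.result()
>>>     except ValueError:
>>>         return True
>>>     except Exception:
>>>         return False
>>>     return False
>>>
>>> @task(retries=2, retry_condition_fn=retry_on_value_error)
>>> def flaky():
>>>     raise ValueError("Retry me please!")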
Source code in prefect/tasks.py
def task(\n __fn=None,\n *,\n name: str = None,\n description: str = None,\n tags: Iterable[str] = None,\n version: str = None,\n cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n cache_expiration: datetime.timedelta = None,\n task_run_name: Optional[Union[Callable[[], str], str]] = None,\n retries: int = None,\n retry_delay_seconds: Union[\n float,\n int,\n List[float],\n Callable[[int], List[float]],\n ] = None,\n retry_jitter_factor: Optional[float] = None,\n persist_result: Optional[bool] = None,\n result_storage: Optional[ResultStorage] = None,\n result_storage_key: Optional[str] = None,\n result_serializer: Optional[ResultSerializer] = None,\n cache_result_in_memory: bool = True,\n timeout_seconds: Union[int, float] = None,\n log_prints: Optional[bool] = None,\n refresh_cache: Optional[bool] = None,\n on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n viz_return_value: Any = None,\n):\n \"\"\"\n Decorator to designate a function as a task in a Prefect workflow.\n\n This decorator may be used for asynchronous or synchronous functions.\n\n Args:\n name: An optional name for the task; if not provided, the name will be inferred\n from the given function.\n description: An optional string description for the task.\n tags: An optional set of tags to be associated with runs of this task. These\n tags are combined with any tags defined by a `prefect.tags` context at\n task runtime.\n version: An optional string specifying the version of this task definition\n cache_key_fn: An optional callable that, given the task run context and call\n parameters, generates a string key; if the key matches a previous completed\n state, that state result will be restored instead of running the task again.\n cache_expiration: An optional amount of time indicating how long cached states\n for this task should be restorable; if not provided, cached states will\n never expire.\n task_run_name: An optional name to distinguish runs of this task; this name can be provided\n as a string template with the task's keyword arguments as variables,\n or a function that returns a string.\n retries: An optional number of times to retry on task run failure\n retry_delay_seconds: Optionally configures how long to wait before retrying the\n task after failure. This is only applicable if `retries` is nonzero. This\n setting can either be a number of seconds, a list of retry delays, or a\n callable that, given the total number of retries, generates a list of retry\n delays. If a number of seconds, that delay will be applied to all retries.\n If a list, each retry will wait for the corresponding delay before retrying.\n When passing a callable or a list, the number of configured retry delays\n cannot exceed 50.\n retry_jitter_factor: An optional factor that defines the factor to which a retry\n can be jittered in order to avoid a \"thundering herd\".\n persist_result: An optional toggle indicating whether the result of this task\n should be persisted to result storage. 
Defaults to `None`, which indicates\n that Prefect should choose whether the result should be persisted depending on\n the features being used.\n result_storage: An optional block to use to persist the result of this task.\n Defaults to the value set in the flow the task is called in.\n result_storage_key: An optional key to store the result in storage at when persisted.\n Defaults to a unique identifier.\n result_serializer: An optional serializer to use to serialize the result of this\n task for persistence. Defaults to the value set in the flow the task is\n called in.\n timeout_seconds: An optional number of seconds indicating a maximum runtime for\n the task. If the task exceeds this runtime, it will be marked as failed.\n log_prints: If set, `print` statements in the task will be redirected to the\n Prefect logger for the task run. Defaults to `None`, which indicates\n that the value from the flow should be used.\n refresh_cache: If set, cached results for the cache key are not used.\n Defaults to `None`, which indicates that a cached result from a previous\n execution with matching cache key is used.\n on_failure: An optional list of callables to run when the task enters a failed state.\n on_completion: An optional list of callables to run when the task enters a completed state.\n retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n should end as failed. Defaults to `None`, indicating the task should always continue\n to its retry policy.\n viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n Returns:\n A callable `Task` object which, when called, will submit the task for execution.\n\n Examples:\n Define a simple task\n\n >>> @task\n >>> def add(x, y):\n >>> return x + y\n\n Define an async task\n\n >>> @task\n >>> async def add(x, y):\n >>> return x + y\n\n Define a task with tags and a description\n\n >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n >>> def my_task():\n >>> pass\n\n Define a task with a custom name\n\n >>> @task(name=\"The Ultimate Task\")\n >>> def my_task():\n >>> pass\n\n Define a task that retries 3 times with a 5 second delay between attempts\n\n >>> from random import randint\n >>>\n >>> @task(retries=3, retry_delay_seconds=5)\n >>> def my_task():\n >>> x = randint(0, 5)\n >>> if x >= 3: # Make a task that fails sometimes\n >>> raise ValueError(\"Retry me please!\")\n >>> return x\n\n Define a task that is cached for a day based on its inputs\n\n >>> from prefect.tasks import task_input_hash\n >>> from datetime import timedelta\n >>>\n >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n >>> def my_task():\n >>> return \"hello\"\n \"\"\"\n\n if __fn:\n return cast(\n Task[P, R],\n Task(\n fn=__fn,\n name=name,\n description=description,\n tags=tags,\n version=version,\n cache_key_fn=cache_key_fn,\n cache_expiration=cache_expiration,\n task_run_name=task_run_name,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n retry_jitter_factor=retry_jitter_factor,\n persist_result=persist_result,\n result_storage=result_storage,\n result_storage_key=result_storage_key,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n timeout_seconds=timeout_seconds,\n log_prints=log_prints,\n refresh_cache=refresh_cache,\n on_completion=on_completion,\n on_failure=on_failure,\n 
retry_condition_fn=retry_condition_fn,\n viz_return_value=viz_return_value,\n ),\n )\n else:\n return cast(\n Callable[[Callable[P, R]], Task[P, R]],\n partial(\n task,\n name=name,\n description=description,\n tags=tags,\n version=version,\n cache_key_fn=cache_key_fn,\n cache_expiration=cache_expiration,\n task_run_name=task_run_name,\n retries=retries,\n retry_delay_seconds=retry_delay_seconds,\n retry_jitter_factor=retry_jitter_factor,\n persist_result=persist_result,\n result_storage=result_storage,\n result_storage_key=result_storage_key,\n result_serializer=result_serializer,\n cache_result_in_memory=cache_result_in_memory,\n timeout_seconds=timeout_seconds,\n log_prints=log_prints,\n refresh_cache=refresh_cache,\n on_completion=on_completion,\n on_failure=on_failure,\n retry_condition_fn=retry_condition_fn,\n viz_return_value=viz_return_value,\n ),\n )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash
","text":"A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.
Parameters:

- context (TaskRunContext, required): the active TaskRunContext.
- arguments (Dict[str, Any], required): a dictionary of arguments to be passed to the underlying task.

Returns:

- Optional[str]: a string hash if hashing succeeded, else None.

Source code in prefect/tasks.py
def task_input_hash(\n context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n \"\"\"\n A task cache key implementation which hashes all inputs to the task using a JSON or\n cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n serializer is used as a fallback. If cloudpickle fails, this will return a null key\n indicating that a cache key could not be generated for the given inputs.\n\n Arguments:\n context: the active `TaskRunContext`\n arguments: a dictionary of arguments to be passed to the underlying task\n\n Returns:\n a string hash if hashing succeeded, else `None`\n \"\"\"\n return hash_objects(\n # We use the task key to get the qualified name for the task and include the\n # task functions `co_code` bytes to avoid caching when the underlying function\n # changes\n context.task.task_key,\n context.task.fn.__code__.co_code.hex(),\n arguments,\n )\n
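A custom cache key function with the same (context, arguments) signature can key on a subset of the inputs instead of all of them. A sketch under that assumption (fetch, cache_on_url_only, and the url parameter are hypothetical):

from typing import Any, Dict, Optional

from prefect import task
from prefect.context import TaskRunContext

def cache_on_url_only(
    context: TaskRunContext, arguments: Dict[str, Any]
) -> Optional[str]:
    # Key the cache on the "url" argument alone, ignoring other inputs
    return str(arguments.get("url"))

@task(cache_key_fn=cache_on_url_only)
def fetch(url: str, timeout: int = 10):
    ...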
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/testing/","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/testing/#prefect.testing","title":"prefect.testing
","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/variables/","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables","title":"prefect.variables
","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.get","title":"get
async
","text":"Get a variable by name. If doesn't exist return the default.
from prefect import variables\n\n @flow\n def my_flow():\n var = variables.get(\"my_var\")\n
or

from prefect import variables\n\n @flow\n async def my_flow():\n var = await variables.get(\"my_var\")\n
Source code in prefect/variables.py
@sync_compatible\nasync def get(name: str, default: str = None) -> Optional[str]:\n \"\"\"\n Get a variable by name. If it doesn't exist, return the default.\n ```\n from prefect import variables\n\n @flow\n def my_flow():\n var = variables.get(\"my_var\")\n ```\n or\n ```\n from prefect import variables\n\n @flow\n async def my_flow():\n var = await variables.get(\"my_var\")\n ```\n \"\"\"\n variable = await _get_variable_by_name(name)\n return variable.value if variable else default\n
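The default parameter in the signature above provides a fallback when the variable has not been set; a small sketch (the variable name and fallback value are arbitrary):

from prefect import flow, variables

@flow
def my_flow():
    # Returns "fallback" if "my_var" does not exist
    var = variables.get("my_var", default="fallback")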
","tags":["Python API","variables"]},{"location":"api-ref/prefect/blocks/core/","title":"core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core
","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block
","text":" Bases: BaseModel
, ABC
A base class for implementing a block that wraps an external service.
This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. _block_document_name, _block_document_id, _block_schema_id, and _block_type_id are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.
Instead of the __init__ method, a block implementation allows the definition of a block_initialization method that is called after initialization. A minimal sketch follows.
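A minimal sketch of a Block subclass (the class and field names are hypothetical): arbitrary fields define the block's schema, and block_initialization runs after __init__:

from prefect.blocks.core import Block

class DatabaseConfig(Block):
    # Arbitrary Pydantic fields become the block's schema
    host: str
    port: int = 5432

    def block_initialization(self) -> None:
        # Called automatically after __init__; a natural place for
        # validation or client warm-up logic
        if self.port <= 0:
            raise ValueError("port must be positive")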
Source code in prefect/blocks/core.py
@register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n \"\"\"\n A base class for implementing a block that wraps an external service.\n\n This class can be defined with an arbitrary set of fields and methods, and\n couples business logic with data contained in an block document.\n `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n `_block_type_id` are reserved by Prefect as Block metadata fields, but\n otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n without populating these metadata fields, but can only be used interactively,\n not with the Prefect API.\n\n Instead of the __init__ method, a block implementation allows the\n definition of a `block_initialization` method that is called after\n initialization.\n \"\"\"\n\n class Config:\n extra = \"allow\"\n\n json_encoders = {SecretDict: lambda v: v.dict()}\n\n @staticmethod\n def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.block_initialization()\n\n def __str__(self) -> str:\n return self.__repr__()\n\n def __repr_args__(self):\n repr_args = super().__repr_args__()\n data_keys = self.schema()[\"properties\"].keys()\n return [\n (key, value) for key, value in repr_args if key is None or key in data_keys\n ]\n\n def block_initialization(self) -> None:\n pass\n\n # -- private class variables\n # set by the class itself\n\n # Attribute to customize the name of the block type created\n # when the block is registered with the API. 
If not set, block\n # type name will default to the class name.\n _block_type_name: Optional[str] = None\n _block_type_slug: Optional[str] = None\n\n # Attributes used to set properties on a block type when registered\n # with the API.\n _logo_url: Optional[HttpUrl] = None\n _documentation_url: Optional[HttpUrl] = None\n _description: Optional[str] = None\n _code_example: Optional[str] = None\n\n # -- private instance variables\n # these are set when blocks are loaded from the API\n _block_type_id: Optional[UUID] = None\n _block_schema_id: Optional[UUID] = None\n _block_schema_capabilities: Optional[List[str]] = None\n _block_schema_version: Optional[str] = None\n _block_document_id: Optional[UUID] = None\n _block_document_name: Optional[str] = None\n _is_anonymous: Optional[bool] = None\n\n # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n # decorated directly.\n _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n @classmethod\n def __dispatch_key__(cls):\n if cls.__name__ == \"Block\":\n return None # The base class is abstract\n return block_schema_to_key(cls._to_block_schema())\n\n @classmethod\n def get_block_type_name(cls):\n return cls._block_type_name or cls.__name__\n\n @classmethod\n def get_block_type_slug(cls):\n return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n @classmethod\n def get_block_capabilities(cls) -> FrozenSet[str]:\n \"\"\"\n Returns the block capabilities for this Block. Recursively collects all block\n capabilities of all parent classes into a single frozenset.\n \"\"\"\n return frozenset(\n {\n c\n for base in (cls,) + cls.__mro__\n for c in getattr(base, \"_block_schema_capabilities\", []) or []\n }\n )\n\n @classmethod\n def _get_current_package_version(cls):\n current_module = inspect.getmodule(cls)\n if current_module:\n top_level_module = sys.modules[\n current_module.__name__.split(\".\")[0] or \"__main__\"\n ]\n try:\n version = Version(top_level_module.__version__)\n # Strips off any local version information\n return version.base_version\n except (AttributeError, InvalidVersion):\n # Module does not have a __version__ attribute or is not a parsable format\n pass\n return DEFAULT_BLOCK_SCHEMA_VERSION\n\n @classmethod\n def get_block_schema_version(cls) -> str:\n return cls._block_schema_version or cls._get_current_package_version()\n\n @classmethod\n def _to_block_schema_reference_dict(cls):\n return dict(\n block_type_slug=cls.get_block_type_slug(),\n block_schema_checksum=cls._calculate_schema_checksum(),\n )\n\n @classmethod\n def _calculate_schema_checksum(\n cls, block_schema_fields: Optional[Dict[str, Any]] = None\n ):\n \"\"\"\n Generates a unique hash for the underlying schema of block.\n\n Args:\n block_schema_fields: Dictionary detailing block schema fields to generate a\n checksum for. 
The fields of the current class is used if this parameter\n is not provided.\n\n Returns:\n str: The calculated checksum prefixed with the hashing algorithm used.\n \"\"\"\n block_schema_fields = (\n cls.schema() if block_schema_fields is None else block_schema_fields\n )\n fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n if fields_for_checksum.get(\"definitions\"):\n non_block_definitions = _get_non_block_reference_definitions(\n fields_for_checksum, fields_for_checksum[\"definitions\"]\n )\n if non_block_definitions:\n fields_for_checksum[\"definitions\"] = non_block_definitions\n else:\n # Pop off definitions entirely instead of empty dict for consistency\n # with the OpenAPI specification\n fields_for_checksum.pop(\"definitions\")\n checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n if checksum is None:\n raise ValueError(\"Unable to compute checksum for block schema\")\n else:\n return f\"sha256:{checksum}\"\n\n def _to_block_document(\n self,\n name: Optional[str] = None,\n block_schema_id: Optional[UUID] = None,\n block_type_id: Optional[UUID] = None,\n is_anonymous: Optional[bool] = None,\n ) -> BlockDocument:\n \"\"\"\n Creates the corresponding block document based on the data stored in a block.\n The corresponding block document name, block type ID, and block schema ID must\n either be passed into the method or configured on the block.\n\n Args:\n name: The name of the created block document. Not required if anonymous.\n block_schema_id: UUID of the corresponding block schema.\n block_type_id: UUID of the corresponding block type.\n is_anonymous: if True, an anonymous block is created. Anonymous\n blocks are not displayed in the UI and used primarily for system\n operations and features that need to automatically generate blocks.\n\n Returns:\n BlockDocument: Corresponding block document\n populated with the block's configured data.\n \"\"\"\n if is_anonymous is None:\n is_anonymous = self._is_anonymous or False\n\n # name must be present if not anonymous\n if not is_anonymous and not name and not self._block_document_name:\n raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n if not block_schema_id and not self._block_schema_id:\n raise ValueError(\n \"No block schema ID provided, either as an argument or on the block.\"\n )\n if not block_type_id and not self._block_type_id:\n raise ValueError(\n \"No block type ID provided, either as an argument or on the block.\"\n )\n\n # The keys passed to `include` must NOT be aliases, else some items will be missed\n # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n # `block_document_data`` must return the aliased version for it to show in the UI\n block_document_data = self.dict(by_alias=True, include=data_keys)\n\n # Iterate through and find blocks that already have saved block documents to\n # create references to those saved block documents.\n for key in data_keys:\n field_value = getattr(self, key)\n if (\n isinstance(field_value, Block)\n and field_value._block_document_id is not None\n ):\n block_document_data[key] = {\n \"$ref\": {\"block_document_id\": field_value._block_document_id}\n }\n\n return BlockDocument(\n id=self._block_document_id or uuid4(),\n name=(name or self._block_document_name) if not is_anonymous else None,\n block_schema_id=block_schema_id or self._block_schema_id,\n block_type_id=block_type_id or self._block_type_id,\n data=block_document_data,\n block_schema=self._to_block_schema(\n block_type_id=block_type_id or self._block_type_id,\n ),\n block_type=self._to_block_type(),\n is_anonymous=is_anonymous,\n )\n\n @classmethod\n def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n \"\"\"\n Creates the corresponding block schema of the block.\n The corresponding block_type_id must either be passed into\n the method or configured on the block.\n\n Args:\n block_type_id: UUID of the corresponding block type.\n\n Returns:\n BlockSchema: The corresponding block schema.\n \"\"\"\n fields = cls.schema()\n return BlockSchema(\n id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n checksum=cls._calculate_schema_checksum(),\n fields=fields,\n block_type_id=block_type_id or cls._block_type_id,\n block_type=cls._to_block_type(),\n capabilities=list(cls.get_block_capabilities()),\n version=cls.get_block_schema_version(),\n )\n\n @classmethod\n def _parse_docstring(cls) -> List[DocstringSection]:\n \"\"\"\n Parses the docstring into list of DocstringSection objects.\n Helper method used primarily to suppress irrelevant logs, e.g.\n `<module>:11: No type or annotation for parameter 'write_json'`\n because griffe is unable to parse the types from pydantic.BaseModel.\n \"\"\"\n with disable_logger(\"griffe.docstrings.google\"):\n with disable_logger(\"griffe.agents.nodes\"):\n docstring = Docstring(cls.__doc__)\n parsed = parse(docstring, Parser.google)\n return parsed\n\n @classmethod\n def get_description(cls) -> Optional[str]:\n \"\"\"\n Returns the description for the current block. Attempts to parse\n description from class docstring if an override is not defined.\n \"\"\"\n description = cls._description\n # If no description override has been provided, find the first text section\n # and use that as the description\n if description is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n parsed_description = next(\n (\n section.as_dict().get(\"value\")\n for section in parsed\n if section.kind == DocstringSectionKind.text\n ),\n None,\n )\n if isinstance(parsed_description, str):\n description = parsed_description.strip()\n return description\n\n @classmethod\n def get_code_example(cls) -> Optional[str]:\n \"\"\"\n Returns the code example for the given block. 
Attempts to parse\n code example from the class docstring if an override is not provided.\n \"\"\"\n code_example = (\n dedent(cls._code_example) if cls._code_example is not None else None\n )\n # If no code example override has been provided, attempt to find a examples\n # section or an admonition with the annotation \"example\" and use that as the\n # code example\n if code_example is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n for section in parsed:\n # Section kind will be \"examples\" if Examples section heading is used.\n if section.kind == DocstringSectionKind.examples:\n # Examples sections are made up of smaller sections that need to be\n # joined with newlines. Smaller sections are represented as tuples\n # with shape (DocstringSectionKind, str)\n code_example = \"\\n\".join(\n (part[1] for part in section.as_dict().get(\"value\", []))\n )\n break\n # Section kind will be \"admonition\" if Example section heading is used.\n if section.kind == DocstringSectionKind.admonition:\n value = section.as_dict().get(\"value\", {})\n if value.get(\"annotation\") == \"example\":\n code_example = value.get(\"description\")\n break\n\n if code_example is None:\n # If no code example has been specified or extracted from the class\n # docstring, generate a sensible default\n code_example = cls._generate_code_example()\n\n return code_example\n\n @classmethod\n def _generate_code_example(cls) -> str:\n \"\"\"Generates a default code example for the current class\"\"\"\n qualified_name = to_qualified_name(cls)\n module_str = \".\".join(qualified_name.split(\".\")[:-1])\n class_name = cls.__name__\n block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n return dedent(\n f\"\"\"\\\n ```python\n from {module_str} import {class_name}\n\n {block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n ```\"\"\"\n )\n\n @classmethod\n def _to_block_type(cls) -> BlockType:\n \"\"\"\n Creates the corresponding block type of the block.\n\n Returns:\n BlockType: The corresponding block type.\n \"\"\"\n return BlockType(\n id=cls._block_type_id or uuid4(),\n slug=cls.get_block_type_slug(),\n name=cls.get_block_type_name(),\n logo_url=cls._logo_url,\n documentation_url=cls._documentation_url,\n description=cls.get_description(),\n code_example=cls.get_code_example(),\n )\n\n @classmethod\n def _from_block_document(cls, block_document: BlockDocument):\n \"\"\"\n Instantiates a block from a given block document. 
The corresponding block class\n will be looked up in the block registry based on the corresponding block schema\n of the provided block document.\n\n Args:\n block_document: The block document used to instantiate a block.\n\n Raises:\n ValueError: If the provided block document doesn't have a corresponding block\n schema.\n\n Returns:\n Block: Hydrated block with data from block document.\n \"\"\"\n if block_document.block_schema is None:\n raise ValueError(\n \"Unable to determine block schema for provided block document\"\n )\n\n block_cls = (\n cls\n if cls.__name__ != \"Block\"\n # Look up the block class by dispatch\n else cls.get_block_class_from_schema(block_document.block_schema)\n )\n\n block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n block = block_cls.parse_obj(block_document.data)\n block._block_document_id = block_document.id\n block.__class__._block_schema_id = block_document.block_schema_id\n block.__class__._block_type_id = block_document.block_type_id\n block._block_document_name = block_document.name\n block._is_anonymous = block_document.is_anonymous\n block._define_metadata_on_nested_blocks(\n block_document.block_document_references\n )\n\n # Due to the way blocks are loaded we can't directly instrument the\n # `load` method and have the data be about the block document. Instead\n # this will emit a proxy event for the load method so that block\n # document data can be included instead of the event being about an\n # 'anonymous' block.\n\n emit_instance_method_called_event(block, \"load\", successful=True)\n\n return block\n\n def _event_kind(self) -> str:\n return f\"prefect.block.{self.get_block_type_slug()}\"\n\n def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n if not (self._block_document_id and self._block_document_name):\n return None\n\n return (\n {\n \"prefect.resource.id\": (\n f\"prefect.block-document.{self._block_document_id}\"\n ),\n \"prefect.resource.name\": self._block_document_name,\n },\n [\n {\n \"prefect.resource.id\": (\n f\"prefect.block-type.{self.get_block_type_slug()}\"\n ),\n \"prefect.resource.role\": \"block-type\",\n }\n ],\n )\n\n @classmethod\n def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a schema.\n \"\"\"\n return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n @classmethod\n def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a key.\n \"\"\"\n # Ensure collections are imported and have the opportunity to register types\n # before looking up the block class\n prefect.plugins.load_prefect_collections()\n\n return lookup_type(cls, key)\n\n def _define_metadata_on_nested_blocks(\n self, block_document_references: Dict[str, Dict[str, Any]]\n ):\n \"\"\"\n Recursively populates metadata fields on nested blocks based on the\n provided block document references.\n \"\"\"\n for item in block_document_references.items():\n field_name, block_document_reference = item\n nested_block = getattr(self, field_name)\n if isinstance(nested_block, Block):\n nested_block_document_info = block_document_reference.get(\n \"block_document\", {}\n )\n nested_block._define_metadata_on_nested_blocks(\n nested_block_document_info.get(\"block_document_references\", {})\n )\n nested_block_document_id = nested_block_document_info.get(\"id\")\n nested_block._block_document_id = (\n UUID(nested_block_document_id) if nested_block_document_id 
else None\n )\n nested_block._block_document_name = nested_block_document_info.get(\n \"name\"\n )\n nested_block._is_anonymous = nested_block_document_info.get(\n \"is_anonymous\"\n )\n\n @classmethod\n @inject_client\n async def _get_block_document(\n cls,\n name: str,\n client: \"PrefectClient\" = None,\n ):\n if cls.__name__ == \"Block\":\n block_type_slug, block_document_name = name.split(\"/\", 1)\n else:\n block_type_slug = cls.get_block_type_slug()\n block_document_name = name\n\n try:\n block_document = await client.read_block_document_by_name(\n name=block_document_name, block_type_slug=block_type_slug\n )\n except prefect.exceptions.ObjectNotFound as e:\n raise ValueError(\n f\"Unable to find block document named {block_document_name} for block\"\n f\" type {block_type_slug}\"\n ) from e\n\n return block_document, block_document_name\n\n @classmethod\n @sync_compatible\n @inject_client\n async def load(\n cls,\n name: str,\n validate: bool = True,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Retrieves data from the block document with the given name for the block type\n that corresponds with the current class and returns an instantiated version of\n the current class with the data stored in the block document.\n\n If a block document for a given block type is saved with a different schema\n than the current class calling `load`, a warning will be raised.\n\n If the current class schema is a subset of the block document schema, the block\n can be loaded as normal using the default `validate = True`.\n\n If the current class schema is a superset of the block document schema, `load`\n must be called with `validate` set to False to prevent a validation error. In\n this case, the block attributes will default to `None` and must be set manually\n and saved to a new block document before the block can be used as expected.\n\n Args:\n name: The name or slug of the block document. A block document slug is a\n string with the format <block_type_slug>/<block_document_name>\n validate: If False, the block document will be loaded without Pydantic\n validating the block schema. This is useful if the block schema has\n changed client-side since the block document referred to by `name` was saved.\n client: The client to use to load the block document. 
If not provided, the\n default client will be injected.\n\n Raises:\n ValueError: If the requested block document is not found.\n\n Returns:\n An instance of the current class hydrated with the data stored in the\n block document with the specified name.\n\n Examples:\n Load from a Block subclass with a block document name:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Custom.load(\"my-custom-message\")\n ```\n\n Load from Block with a block document slug:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Block.load(\"custom/my-custom-message\")\n ```\n\n Migrate a block document to a new schema:\n ```python\n # original class\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n # Updated class with new required field\n class Custom(Block):\n message: str\n number_of_ducks: int\n\n loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n # Prints UserWarning about schema mismatch\n\n loaded_block.number_of_ducks = 42\n\n loaded_block.save(\"my-custom-message\", overwrite=True)\n ```\n \"\"\"\n block_document, block_document_name = await cls._get_block_document(name)\n\n try:\n return cls._from_block_document(block_document)\n except ValidationError as e:\n if not validate:\n missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n missing_block_data = {field: None for field in missing_fields}\n warnings.warn(\n f\"Could not fully load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} - this is likely because one or more\"\n \" required fields were added to the schema for\"\n f\" {cls.__name__!r} that did not exist on the class when this block\"\n \" was last saved. Please specify values for new field(s):\"\n f\" {listrepr(missing_fields)}, then run\"\n f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n \" and load this block again before attempting to use it.\"\n )\n return cls.construct(**block_document.data, **missing_block_data)\n raise RuntimeError(\n f\"Unable to load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n \" validation, try loading again with `validate=False`.\"\n ) from e\n\n @staticmethod\n def is_block_class(block) -> bool:\n return _is_subclass(block, Block)\n\n @classmethod\n @sync_compatible\n @inject_client\n async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n \"\"\"\n Makes block available for configuration with current Prefect API.\n Recursively registers all nested blocks. Registration is idempotent.\n\n Args:\n client: Optional client to use for registering type and schema with the\n Prefect API. 
A new client will be created and used if one is not\n provided.\n \"\"\"\n if cls.__name__ == \"Block\":\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on the Block class directly.\"\n )\n if ABC in getattr(cls, \"__bases__\", []):\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on a Block interface class directly.\"\n )\n\n for field in cls.__fields__.values():\n if Block.is_block_class(field.type_):\n await field.type_.register_type_and_schema(client=client)\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n await type_.register_type_and_schema(client=client)\n\n try:\n block_type = await client.read_block_type_by_slug(\n slug=cls.get_block_type_slug()\n )\n cls._block_type_id = block_type.id\n local_block_type = cls._to_block_type()\n if _should_update_block_type(\n local_block_type=local_block_type, server_block_type=block_type\n ):\n await client.update_block_type(\n block_type_id=block_type.id, block_type=local_block_type\n )\n except prefect.exceptions.ObjectNotFound:\n block_type = await client.create_block_type(block_type=cls._to_block_type())\n cls._block_type_id = block_type.id\n\n try:\n block_schema = await client.read_block_schema_by_checksum(\n checksum=cls._calculate_schema_checksum(),\n version=cls.get_block_schema_version(),\n )\n except prefect.exceptions.ObjectNotFound:\n block_schema = await client.create_block_schema(\n block_schema=cls._to_block_schema(block_type_id=block_type.id)\n )\n\n cls._block_schema_id = block_schema.id\n\n @inject_client\n async def _save(\n self,\n name: Optional[str] = None,\n is_anonymous: bool = False,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Saves the values of a block as a block document with an option to save as an\n anonymous block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n is_anonymous: Boolean value specifying whether the block document is anonymous. Anonymous\n blocks are intended for system use and are not shown in the UI. 
Anonymous blocks do not\n require a user-supplied name.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n Raises:\n ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n `is_anonymous` is `True`.\n \"\"\"\n if name is None and not is_anonymous:\n if self._block_document_name is None:\n raise ValueError(\n \"You're attempting to save a block document without a name.\"\n \" Please either call `save` with a `name` or pass\"\n \" `is_anonymous=True` to save an anonymous block.\"\n )\n else:\n name = self._block_document_name\n\n self._is_anonymous = is_anonymous\n\n # Ensure block type and schema are registered before saving block document.\n await self.register_type_and_schema(client=client)\n\n try:\n block_document = await client.create_block_document(\n block_document=self._to_block_document(name=name)\n )\n except prefect.exceptions.ObjectAlreadyExists as err:\n if overwrite:\n block_document_id = self._block_document_id\n if block_document_id is None:\n existing_block_document = await client.read_block_document_by_name(\n name=name, block_type_slug=self.get_block_type_slug()\n )\n block_document_id = existing_block_document.id\n await client.update_block_document(\n block_document_id=block_document_id,\n block_document=self._to_block_document(name=name),\n )\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n else:\n raise ValueError(\n \"You are attempting to save values with a name that is already in\"\n \" use for this block type. If you would like to overwrite the\"\n \" values that are saved, then save with `overwrite=True`.\"\n ) from err\n\n # Update metadata on block instance for later use.\n self._block_document_name = block_document.name\n self._block_document_id = block_document.id\n return self._block_document_id\n\n @sync_compatible\n @instrument_instance_method_call()\n async def save(\n self,\n name: Optional[str] = None,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n ):\n \"\"\"\n Saves the values of a block as a block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n \"\"\"\n document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n return document_id\n\n @classmethod\n @sync_compatible\n @inject_client\n async def delete(\n cls,\n name: str,\n client: \"PrefectClient\" = None,\n ):\n block_document, block_document_name = await cls._get_block_document(name)\n\n await client.delete_block_document(block_document.id)\n\n def _iter(self, *, include=None, exclude=None, **kwargs):\n # Injects the `block_type_slug` into serialized payloads for dispatch\n for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n yield key_value\n\n # Respect inclusion and exclusion still\n if include and \"block_type_slug\" not in include:\n return\n if exclude and \"block_type_slug\" in exclude:\n return\n\n yield \"block_type_slug\", self.get_block_type_slug()\n\n def __new__(cls: Type[Self], **kwargs) -> Self:\n \"\"\"\n Create an instance of the Block subclass type if a `block_type_slug` is\n present in the data payload.\n \"\"\"\n block_type_slug = kwargs.pop(\"block_type_slug\", None)\n if block_type_slug:\n subcls = lookup_type(cls, 
dispatch_key=block_type_slug)\n m = super().__new__(subcls)\n # NOTE: This is a workaround for an obscure issue where copied models were\n # missing attributes. This pattern is from Pydantic's\n # `BaseModel._copy_and_set_values`.\n # The issue this fixes could not be reproduced in unit tests that\n # directly targeted dispatch handling and was only observed when\n # copying then saving infrastructure blocks on deployment models.\n object.__setattr__(m, \"__dict__\", kwargs)\n object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n return m\n else:\n m = super().__new__(cls)\n object.__setattr__(m, \"__dict__\", kwargs)\n object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n return m\n\n def get_block_placeholder(self) -> str:\n \"\"\"\n Returns the block placeholder for the current block which can be used for\n templating.\n\n Returns:\n str: The block placeholder for the current block in the format\n `prefect.blocks.{block_type_name}.{block_document_name}`\n\n Raises:\n BlockNotSavedError: Raised if the block has not been saved.\n\n If a block has not been saved, the return value will be `None`.\n \"\"\"\n block_document_name = self._block_document_name\n if not block_document_name:\n raise BlockNotSavedError(\n \"Could not generate block placeholder for unsaved block.\"\n )\n\n return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config
","text":"Source code in prefect/blocks/core.py
class Config:\n extra = \"allow\"\n\n json_encoders = {SecretDict: lambda v: v.dict()}\n\n @staticmethod\n def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra
staticmethod
","text":"Customizes Pydantic's schema generation feature to add blocks related information.
Source code in prefect/blocks/core.py
@staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n \"\"\"\n Customizes Pydantic's schema generation feature to add blocks related information.\n \"\"\"\n schema[\"block_type_slug\"] = model.get_block_type_slug()\n # Ensures args and code examples aren't included in the schema\n description = model.get_description()\n if description:\n schema[\"description\"] = description\n else:\n # Prevent the description of the base class from being included in the schema\n schema.pop(\"description\", None)\n\n # create a list of secret field names\n # secret fields include both top-level keys and dot-delimited nested secret keys\n # A wildcard (*) means that all fields under a given key are secret.\n # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n # nested under the \"child\" key are all secret. There is no limit to nesting.\n secrets = schema[\"secret_fields\"] = []\n for field in model.__fields__.values():\n _collect_secret_fields(field.name, field.type_, secrets)\n\n # create block schema references\n refs = schema[\"block_schema_references\"] = {}\n for field in model.__fields__.values():\n if Block.is_block_class(field.type_):\n refs[field.name] = field.type_._to_block_schema_reference_dict()\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n if isinstance(refs.get(field.name), list):\n refs[field.name].append(\n type_._to_block_schema_reference_dict()\n )\n elif isinstance(refs.get(field.name), dict):\n refs[field.name] = [\n refs[field.name],\n type_._to_block_schema_reference_dict(),\n ]\n else:\n refs[\n field.name\n ] = type_._to_block_schema_reference_dict()\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities
classmethod
","text":"Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n \"\"\"\n Returns the block capabilities for this Block. Recursively collects all block\n capabilities of all parent classes into a single frozenset.\n \"\"\"\n return frozenset(\n {\n c\n for base in (cls,) + cls.__mro__\n for c in getattr(base, \"_block_schema_capabilities\", []) or []\n }\n )\n
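Capabilities are declared through the _block_schema_capabilities class attribute referenced in the source above; a hedged sketch (hypothetical class names) of how they accumulate across a class hierarchy:

from abc import ABC

from prefect.blocks.core import Block

class ReadableBlock(Block, ABC):
    _block_schema_capabilities = ["read"]

class ReadWriteBlock(ReadableBlock, ABC):
    _block_schema_capabilities = ["write"]

# get_block_capabilities walks the MRO and merges the declarations:
# ReadWriteBlock.get_block_capabilities() == frozenset({"read", "write"})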
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key
classmethod
","text":"Retrieve the block class implementation given a key.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a key.\n \"\"\"\n # Ensure collections are imported and have the opportunity to register types\n # before looking up the block class\n prefect.plugins.load_prefect_collections()\n\n return lookup_type(cls, key)\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema
classmethod
","text":"Retrieve the block class implementation given a schema.
Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n \"\"\"\n Retrieve the block class implementation given a schema.\n \"\"\"\n return cls.get_block_class_from_key(block_schema_to_key(schema))\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_placeholder","title":"get_block_placeholder
","text":"Returns the block placeholder for the current block which can be used for templating.
Returns:

- str: the block placeholder for the current block, in the format prefect.blocks.{block_type_name}.{block_document_name}.

Raises:

- BlockNotSavedError: raised if the block has not been saved.

Source code in prefect/blocks/core.py
def get_block_placeholder(self) -> str:\n \"\"\"\n Returns the block placeholder for the current block which can be used for\n templating.\n\n Returns:\n str: The block placeholder for the current block in the format\n `prefect.blocks.{block_type_name}.{block_document_name}`\n\n Raises:\n BlockNotSavedError: Raised if the block has not been saved.\n\n If a block has not been saved, the return value will be `None`.\n \"\"\"\n block_document_name = self._block_document_name\n if not block_document_name:\n raise BlockNotSavedError(\n \"Could not generate block placeholder for unsaved block.\"\n )\n\n return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
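Reusing the Custom block from the load examples below, a usage sketch:

from prefect.blocks.core import Block

class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

block = Custom.load("my-custom-message")
print(block.get_block_placeholder())
# prefect.blocks.custom.my-custom-message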
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example
classmethod
","text":"Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.
Source code in prefect/blocks/core.py
@classmethod\ndef get_code_example(cls) -> Optional[str]:\n \"\"\"\n Returns the code example for the given block. Attempts to parse\n code example from the class docstring if an override is not provided.\n \"\"\"\n code_example = (\n dedent(cls._code_example) if cls._code_example is not None else None\n )\n # If no code example override has been provided, attempt to find a examples\n # section or an admonition with the annotation \"example\" and use that as the\n # code example\n if code_example is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n for section in parsed:\n # Section kind will be \"examples\" if Examples section heading is used.\n if section.kind == DocstringSectionKind.examples:\n # Examples sections are made up of smaller sections that need to be\n # joined with newlines. Smaller sections are represented as tuples\n # with shape (DocstringSectionKind, str)\n code_example = \"\\n\".join(\n (part[1] for part in section.as_dict().get(\"value\", []))\n )\n break\n # Section kind will be \"admonition\" if Example section heading is used.\n if section.kind == DocstringSectionKind.admonition:\n value = section.as_dict().get(\"value\", {})\n if value.get(\"annotation\") == \"example\":\n code_example = value.get(\"description\")\n break\n\n if code_example is None:\n # If no code example has been specified or extracted from the class\n # docstring, generate a sensible default\n code_example = cls._generate_code_example()\n\n return code_example\n
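A hedged sketch of supplying the code example through a docstring (WeatherStation is hypothetical; an Example admonition is one of the two docstring sections the parser above looks for):

from prefect.blocks.core import Block

class WeatherStation(Block):
    """
    Reads measurements from a weather station.

    Example:
        Load a saved station:
        ```python
        station = WeatherStation.load("BLOCK_NAME")
        ```
    """

    station_id: str

# get_description() returns the first text section of the docstring;
# get_code_example() should extract the snippet from the Example section.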
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description
classmethod
","text":"Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.
Source code inprefect/blocks/core.py
@classmethod\ndef get_description(cls) -> Optional[str]:\n \"\"\"\n Returns the description for the current block. Attempts to parse\n description from class docstring if an override is not defined.\n \"\"\"\n description = cls._description\n # If no description override has been provided, find the first text section\n # and use that as the description\n if description is None and cls.__doc__ is not None:\n parsed = cls._parse_docstring()\n parsed_description = next(\n (\n section.as_dict().get(\"value\")\n for section in parsed\n if section.kind == DocstringSectionKind.text\n ),\n None,\n )\n if isinstance(parsed_description, str):\n description = parsed_description.strip()\n return description\n
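Example
A minimal sketch (the Demo class is hypothetical):
from prefect.blocks.core import Block\n\nclass Demo(Block):\n    \"\"\"A demo block used for illustration.\"\"\"\n\n    message: str\n\n# Prints: A demo block used for illustration.\nprint(Demo.get_description())\n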
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load
async
classmethod
","text":"Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.
If a block document for a given block type is saved with a different schema than the current class calling load
, a warning will be raised.
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True
.
If the current class schema is a superset of the block document schema, load
must be called with validate
set to False to prevent a validation error. In this case, the block attributes will default to None
and must be set manually and saved to a new block document before the block can be used as expected.
Parameters:
Name Type Description Defaultname
str
The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name> required validate
bool
If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name
was saved.
True
client
PrefectClient
The client to use to load the block document. If not provided, the default client will be injected.
None
Raises:
Type DescriptionValueError
If the requested block document is not found.
Returns:
Type DescriptionAn instance of the current class hydrated with the data stored in the
block document with the specified name.
Examples:
Load from a Block subclass with a block document name:
class Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n
Load from Block with a block document slug:
class Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n
Migrate a block document to a new schema:
# original class\nclass Custom(Block):\n message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n message: str\n number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n
Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def load(\n cls,\n name: str,\n validate: bool = True,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Retrieves data from the block document with the given name for the block type\n that corresponds with the current class and returns an instantiated version of\n the current class with the data stored in the block document.\n\n If a block document for a given block type is saved with a different schema\n than the current class calling `load`, a warning will be raised.\n\n If the current class schema is a subset of the block document schema, the block\n can be loaded as normal using the default `validate = True`.\n\n If the current class schema is a superset of the block document schema, `load`\n must be called with `validate` set to False to prevent a validation error. In\n this case, the block attributes will default to `None` and must be set manually\n and saved to a new block document before the block can be used as expected.\n\n Args:\n name: The name or slug of the block document. A block document slug is a\n string with the format <block_type_slug>/<block_document_name>\n validate: If False, the block document will be loaded without Pydantic\n validating the block schema. This is useful if the block schema has\n changed client-side since the block document referred to by `name` was saved.\n client: The client to use to load the block document. If not provided, the\n default client will be injected.\n\n Raises:\n ValueError: If the requested block document is not found.\n\n Returns:\n An instance of the current class hydrated with the data stored in the\n block document with the specified name.\n\n Examples:\n Load from a Block subclass with a block document name:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Custom.load(\"my-custom-message\")\n ```\n\n Load from Block with a block document slug:\n ```python\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n loaded_block = Block.load(\"custom/my-custom-message\")\n ```\n\n Migrate a block document to a new schema:\n ```python\n # original class\n class Custom(Block):\n message: str\n\n Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n # Updated class with new required field\n class Custom(Block):\n message: str\n number_of_ducks: int\n\n loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n # Prints UserWarning about schema mismatch\n\n loaded_block.number_of_ducks = 42\n\n loaded_block.save(\"my-custom-message\", overwrite=True)\n ```\n \"\"\"\n block_document, block_document_name = await cls._get_block_document(name)\n\n try:\n return cls._from_block_document(block_document)\n except ValidationError as e:\n if not validate:\n missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n missing_block_data = {field: None for field in missing_fields}\n warnings.warn(\n f\"Could not fully load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} - this is likely because one or more\"\n \" required fields were added to the schema for\"\n f\" {cls.__name__!r} that did not exist on the class when this block\"\n \" was last saved. 
Please specify values for new field(s):\"\n f\" {listrepr(missing_fields)}, then run\"\n f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n \" and load this block again before attempting to use it.\"\n )\n return cls.construct(**block_document.data, **missing_block_data)\n raise RuntimeError(\n f\"Unable to load {block_document_name!r} of block type\"\n f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n \" validation, try loading again with `validate=False`.\"\n ) from e\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema
async
classmethod
","text":"Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
Parameters:
Name Type Description Defaultclient
PrefectClient
Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.
None
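Example
A minimal sketch of registering a custom block type (the Demo class is hypothetical; registration must be called on a subclass, not on Block itself):
from prefect.blocks.core import Block\n\nclass Demo(Block):\n    message: str\n\n# Idempotent: safe to call repeatedly; registers the type and schema for Demo\nDemo.register_type_and_schema()\n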
Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n \"\"\"\n Makes block available for configuration with current Prefect API.\n Recursively registers all nested blocks. Registration is idempotent.\n\n Args:\n client: Optional client to use for registering type and schema with the\n Prefect API. A new client will be created and used if one is not\n provided.\n \"\"\"\n if cls.__name__ == \"Block\":\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on the Block class directly.\"\n )\n if ABC in getattr(cls, \"__bases__\", []):\n raise InvalidBlockRegistration(\n \"`register_type_and_schema` should be called on a Block \"\n \"subclass and not on a Block interface class directly.\"\n )\n\n for field in cls.__fields__.values():\n if Block.is_block_class(field.type_):\n await field.type_.register_type_and_schema(client=client)\n if get_origin(field.type_) is Union:\n for type_ in get_args(field.type_):\n if Block.is_block_class(type_):\n await type_.register_type_and_schema(client=client)\n\n try:\n block_type = await client.read_block_type_by_slug(\n slug=cls.get_block_type_slug()\n )\n cls._block_type_id = block_type.id\n local_block_type = cls._to_block_type()\n if _should_update_block_type(\n local_block_type=local_block_type, server_block_type=block_type\n ):\n await client.update_block_type(\n block_type_id=block_type.id, block_type=local_block_type\n )\n except prefect.exceptions.ObjectNotFound:\n block_type = await client.create_block_type(block_type=cls._to_block_type())\n cls._block_type_id = block_type.id\n\n try:\n block_schema = await client.read_block_schema_by_checksum(\n checksum=cls._calculate_schema_checksum(),\n version=cls.get_block_schema_version(),\n )\n except prefect.exceptions.ObjectNotFound:\n block_schema = await client.create_block_schema(\n block_schema=cls._to_block_schema(block_type_id=block_type.id)\n )\n\n cls._block_schema_id = block_schema.id\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save
async
","text":"Saves the values of a block as a block document.
Parameters:
Name Type Description Defaultname
Optional[str]
User specified name to give saved block document which can later be used to load the block document.
None
overwrite
bool
Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
False
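Example
A minimal sketch (the document name greeting is hypothetical):
from prefect.blocks.system import String\n\nString(value=\"hello\").save(\"greeting\")\n\n# A second save to the same name requires overwrite=True\nString(value=\"hello again\").save(\"greeting\", overwrite=True)\n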
Source code in prefect/blocks/core.py
@sync_compatible\n@instrument_instance_method_call()\nasync def save(\n self,\n name: Optional[str] = None,\n overwrite: bool = False,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Saves the values of a block as a block document.\n\n Args:\n name: User specified name to give saved block document which can later be used to load the\n block document.\n overwrite: Boolean value specifying if values should be overwritten if a block document with\n the specified name already exists.\n\n \"\"\"\n document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n return document_id\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.BlockNotSavedError","title":"BlockNotSavedError
","text":" Bases: RuntimeError
Raised when a given block is not saved and an operation that requires the block to be saved is attempted.
Source code inprefect/blocks/core.py
class BlockNotSavedError(RuntimeError):\n \"\"\"\n Raised when a given block is not saved and an operation that requires\n the block to be saved is attempted.\n \"\"\"\n\n pass\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration
","text":" Bases: Exception
Raised on attempted registration of the base Block class or a Block interface class
Source code inprefect/blocks/core.py
class InvalidBlockRegistration(Exception):\n \"\"\"\n Raised on attempted registration of the base Block\n class or a Block interface class\n \"\"\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key
","text":"Defines the unique key used to lookup the Block class for a given schema.
Source code inprefect/blocks/core.py
def block_schema_to_key(schema: BlockSchema) -> str:\n \"\"\"\n Defines the unique key used to lookup the Block class for a given schema.\n \"\"\"\n return f\"{schema.block_type.slug}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/fields/","title":"fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/fields/#prefect.blocks.fields","title":"prefect.blocks.fields
","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes
","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig
","text":" Bases: Block
Stores configuration for interaction with Kubernetes clusters.
See from_file
for creation.
Attributes:
Name Type Descriptionconfig
Dict
The entire loaded YAML contents of a kubectl config file
context_name
str
The name of the kubectl context to use
ExampleLoad a saved Kubernetes cluster config:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n \"\"\"\n Stores configuration for interaction with Kubernetes clusters.\n\n See `from_file` for creation.\n\n Attributes:\n config: The entire loaded YAML contents of a kubectl config file\n context_name: The name of the kubectl context to use\n\n Example:\n Load a saved Kubernetes cluster config:\n ```python\n from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Kubernetes Cluster Config\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n config: Dict = Field(\n default=..., description=\"The entire contents of a kubectl config file.\"\n )\n context_name: str = Field(\n default=..., description=\"The name of the kubectl context to use.\"\n )\n\n @validator(\"config\", pre=True)\n def parse_yaml_config(cls, value):\n if isinstance(value, str):\n return yaml.safe_load(value)\n return value\n\n @classmethod\n def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n\n def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n\n def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client
","text":"Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
Source code inprefect/blocks/kubernetes.py
def configure_client(self) -> None:\n \"\"\"\n Activates this cluster configuration by loading the configuration into the\n Kubernetes Python client. After calling this, Kubernetes API clients can use\n this config's context.\n \"\"\"\n kubernetes.config.kube_config.load_kube_config_from_dict(\n config_dict=self.config, context=self.context_name\n )\n
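Example
An illustrative sketch, assuming a saved block named dev-cluster and a reachable cluster:
from kubernetes import client\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"dev-cluster\")\ncluster_config.configure_client()\n\n# The global Kubernetes client configuration now uses this config's context\nv1 = client.CoreV1Api()\nprint(v1.list_namespaced_pod(namespace=\"default\"))\n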
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file
classmethod
","text":"Create a cluster config from the a Kubernetes config file.
By default, the current context in the default Kubernetes config file will be used.
An alternative file or context may be specified.
The entire config file will be loaded and stored.
Source code inprefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n \"\"\"\n Create a cluster config from the a Kubernetes config file.\n\n By default, the current context in the default Kubernetes config file will be\n used.\n\n An alternative file or context may be specified.\n\n The entire config file will be loaded and stored.\n \"\"\"\n kube_config = kubernetes.config.kube_config\n\n path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n path = path.expanduser().resolve()\n\n # Determine the context\n existing_contexts, current_context = kube_config.list_kube_config_contexts(\n config_file=str(path)\n )\n context_names = {ctx[\"name\"] for ctx in existing_contexts}\n if context_name:\n if context_name not in context_names:\n raise ValueError(\n f\"Context {context_name!r} not found. \"\n f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n )\n else:\n context_name = current_context[\"name\"]\n\n # Load the entire config file\n config_file_contents = path.read_text()\n config_dict = yaml.safe_load(config_file_contents)\n\n return cls(config=config_dict, context_name=context_name)\n
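Example
A minimal sketch (the context name dev and block name dev-cluster are hypothetical):
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Use the current context from the default kubeconfig location\ncluster_config = KubernetesClusterConfig.from_file()\n\n# Or pick a specific context, then save the block for later use\ncluster_config = KubernetesClusterConfig.from_file(context_name=\"dev\")\ncluster_config.save(\"dev-cluster\")\n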
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client
","text":"Returns a Kubernetes API client for this cluster config.
Source code inprefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n \"\"\"\n Returns a Kubernetes API client for this cluster config.\n \"\"\"\n return kubernetes.config.kube_config.new_client_from_config_dict(\n config_dict=self.config, context=self.context_name\n )\n
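Example
For illustration, assuming a saved block named dev-cluster:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"dev-cluster\")\n\n# An ApiClient scoped to this block's config, without touching global state\napi_client = cluster_config.get_api_client()\n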
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications
","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock
","text":" Bases: NotificationBlock
, ABC
An abstract class for sending notifications using Apprise.
Source code inprefect/blocks/notifications.py
class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n \"\"\"\n An abstract class for sending notifications using Apprise.\n \"\"\"\n\n notify_type: Literal[\n \"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"\n ] = Field(\n default=PREFECT_NOTIFY_TYPE_DEFAULT,\n description=(\n \"The type of notification being performed; the prefect_default \"\n \"is a plain notification that does not attach an image.\"\n ),\n )\n\n def __init__(self, *args, **kwargs):\n import apprise\n\n if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n super().__init__(*args, **kwargs)\n\n def _start_apprise_client(self, url: SecretStr):\n from apprise import Apprise, AppriseAsset\n\n # A custom `AppriseAsset` that ensures Prefect Notifications\n # appear correctly across multiple messaging platforms\n prefect_app_data = AppriseAsset(\n app_id=\"Prefect Notifications\",\n app_desc=\"Prefect Notifications\",\n app_url=\"https://prefect.io\",\n )\n\n self._apprise_client = Apprise(asset=prefect_app_data)\n self._apprise_client.add(url.get_secret_value())\n\n def block_initialization(self) -> None:\n self._start_apprise_client(self.url)\n\n @sync_compatible\n @instrument_instance_method_call()\n async def notify(self, body: str, subject: Optional[str] = None):\n await self._apprise_client.async_notify(\n body=body, title=subject, notify_type=self.notify_type\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock
","text":" Bases: AbstractAppriseNotificationBlock
, ABC
A base class for sending notifications using Apprise, through webhook URLs.
Source code inprefect/blocks/notifications.py
class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n \"\"\"\n A base class for sending notifications using Apprise, through webhook URLs.\n \"\"\"\n\n _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"Incoming webhook URL used to send notifications.\",\n example=\"https://hooks.example.com/XXX\",\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock
","text":" Bases: NotificationBlock
Enables sending notifications via any custom webhook.
All nested string param contains {{key}}
will be substituted with value from context/secrets.
Context values include: subject
, body
and name
.
Examples:
Load a saved custom webhook and send a message:
from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n
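Create and configure the templated fields before saving (a sketch; all names and values below are hypothetical):
from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n# {{subject}}, {{body}} and {{name}} come from the notification context;\n# {{myToken}} is substituted from the block's secrets\ncustom_webhook_block = CustomWebhookNotificationBlock(\n    name=\"my-webhook\",\n    url=\"https://hooks.example.com/XXX\",\n    json_data={\"text\": \"{{subject}}: {{body}}\", \"token\": \"{{myToken}}\"},\n    secrets={\"myToken\": \"SomeSecretToken\"},\n)\n\ncustom_webhook_block.save(\"my-custom-webhook\")\n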
Source code in prefect/blocks/notifications.py
class CustomWebhookNotificationBlock(NotificationBlock):\n \"\"\"\n Enables sending notifications via any custom webhook.\n\n All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n Context values include: `subject`, `body` and `name`.\n\n Examples:\n Load a saved custom webhook and send a message:\n ```python\n from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n custom_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Custom Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n url: str = Field(\n title=\"Webhook URL\",\n description=\"The webhook URL.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n\n method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n )\n\n params: Optional[Dict[str, str]] = Field(\n default=None, title=\"Query Params\", description=\"Custom query params.\"\n )\n json_data: Optional[dict] = Field(\n default=None,\n title=\"JSON Data\",\n description=\"Send json data as payload.\",\n example=(\n '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n ' \"{{tokenFromSecrets}}\"}'\n ),\n )\n form_data: Optional[Dict[str, str]] = Field(\n default=None,\n title=\"Form Data\",\n description=(\n \"Send form data as payload. Should not be used together with _JSON Data_.\"\n ),\n example=(\n '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n ' \"{{tokenFromSecrets}}\"}'\n ),\n )\n\n headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n timeout: float = Field(\n default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n )\n\n secrets: SecretDict = Field(\n default_factory=lambda: SecretDict(dict()),\n title=\"Custom Secret Values\",\n description=\"A dictionary of secret values to be substituted in other configs.\",\n example='{\"tokenFromSecrets\":\"SomeSecretToken\"}',\n )\n\n def _build_request_args(self, body: str, subject: Optional[str]):\n \"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n # prepare values\n values = self.secrets.get_secret_value()\n # use 'null' when subject is None\n values.update(\n {\n \"subject\": \"null\" if subject is None else subject,\n \"body\": body,\n \"name\": self.name,\n }\n )\n # do substution\n return apply_values(\n {\n \"method\": self.method,\n \"url\": self.url,\n \"params\": self.params,\n \"data\": self.form_data,\n \"json\": self.json_data,\n \"headers\": self.headers,\n \"cookies\": self.cookies,\n \"timeout\": self.timeout,\n },\n values,\n )\n\n def block_initialization(self) -> None:\n # check form_data and json_data\n if self.form_data is not None and self.json_data is not None:\n raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n self.secrets.get_secret_value().keys()\n )\n # test template to raise a error early\n for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n template = getattr(self, name)\n if template is None:\n continue\n # check for placeholders not in predefined keys and secrets\n placeholders = find_placeholders(template)\n for placeholder in placeholders:\n if placeholder.name not in allowed_keys:\n raise KeyError(f\"{name}/{placeholder}\")\n\n @sync_compatible\n @instrument_instance_method_call()\n async def notify(self, body: str, subject: Optional[str] = None):\n import httpx\n\n request_args = self._build_request_args(body, subject)\n cookies = request_args.pop(\"cookies\", None)\n # make request with httpx\n client = httpx.AsyncClient(\n headers={\"user-agent\": \"Prefect Notifications\"}, cookies=cookies\n )\n async with client:\n resp = await client.request(**request_args)\n resp.raise_for_status()\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook","title":"DiscordWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Discord webhook. See Apprise notify_Discord docs # noqa
Examples:
Load a saved Discord webhook and send a message:
from prefect.blocks.notifications import DiscordWebhook\n\ndiscord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\ndiscord_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class DiscordWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Discord webhook.\n See [Apprise notify_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord) # noqa\n\n Examples:\n Load a saved Discord webhook and send a message:\n ```python\n from prefect.blocks.notifications import DiscordWebhook\n\n discord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\n discord_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Discord webhook.\"\n _block_type_name = \"Discord Webhook\"\n _block_type_slug = \"discord-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/9e94976c80ef925b66d24e5d14f0d47baa6b8f88-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook\"\n\n webhook_id: SecretStr = Field(\n default=...,\n description=(\n \"The first part of 2 tokens provided to you after creating a\"\n \" incoming-webhook.\"\n ),\n )\n\n webhook_token: SecretStr = Field(\n default=...,\n description=(\n \"The second part of 2 tokens provided to you after creating a\"\n \" incoming-webhook.\"\n ),\n )\n\n botname: Optional[str] = Field(\n title=\"Bot name\",\n default=None,\n description=(\n \"Identify the name of the bot that should issue the message. If one isn't\"\n \" specified then the default is to just use your account (associated with\"\n \" the incoming-webhook).\"\n ),\n )\n\n tts: bool = Field(\n default=False,\n description=\"Whether to enable Text-To-Speech.\",\n )\n\n include_image: bool = Field(\n default=False,\n description=(\n \"Whether to include an image in-line with the message describing the\"\n \" notification type.\"\n ),\n )\n\n avatar: bool = Field(\n default=False,\n description=\"Whether to override the default discord avatar icon.\",\n )\n\n avatar_url: Optional[str] = Field(\n title=\"Avatar URL\",\n default=False,\n description=(\n \"Over-ride the default discord avatar icon URL. By default this is not set\"\n \" and Apprise chooses the URL dynamically based on the type of message\"\n \" (info, success, warning, or error).\"\n ),\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyDiscord import NotifyDiscord\n\n url = SecretStr(\n NotifyDiscord(\n webhook_id=self.webhook_id.get_secret_value(),\n webhook_token=self.webhook_token.get_secret_value(),\n botname=self.botname,\n tts=self.tts,\n include_image=self.include_image,\n avatar=self.avatar,\n avatar_url=self.avatar_url,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Mattermost webhook. See Apprise notify_Mattermost docs # noqa
Examples:
Load a saved Mattermost webhook and send a message:
from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class MattermostWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Mattermost webhook.\n See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n Examples:\n Load a saved Mattermost webhook and send a message:\n ```python\n from prefect.blocks.notifications import MattermostWebhook\n\n mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n mattermost_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n _block_type_name = \"Mattermost Webhook\"\n _block_type_slug = \"mattermost-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/1350a147130bf82cbc799a5f868d2c0116207736-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n hostname: str = Field(\n default=...,\n description=\"The hostname of your Mattermost server.\",\n example=\"Mattermost.example.com\",\n )\n\n token: SecretStr = Field(\n default=...,\n description=\"The token associated with your Mattermost webhook.\",\n )\n\n botname: Optional[str] = Field(\n title=\"Bot name\",\n default=None,\n description=\"The name of the bot that will send the message.\",\n )\n\n channels: Optional[List[str]] = Field(\n default=None,\n description=\"The channel(s) you wish to notify.\",\n )\n\n include_image: bool = Field(\n default=False,\n description=\"Whether to include the Apprise status image in the message.\",\n )\n\n path: Optional[str] = Field(\n default=None,\n description=\"An optional sub-path specification to append to the hostname.\",\n )\n\n port: int = Field(\n default=8065,\n description=\"The port of your Mattermost server.\",\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n url = SecretStr(\n NotifyMattermost(\n token=self.token.get_secret_value(),\n fullpath=self.path,\n host=self.hostname,\n botname=self.botname,\n channels=self.channels,\n include_image=self.include_image,\n port=self.port,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook
","text":" Bases: AppriseNotificationBlock
Enables sending notifications via a provided Microsoft Teams webhook.
Examples:
Load a saved Teams webhook and send a message:
from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Microsoft Teams webhook.\n\n Examples:\n Load a saved Teams webhook and send a message:\n ```python\n from prefect.blocks.notifications import MicrosoftTeamsWebhook\n teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n teams_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Microsoft Teams Webhook\"\n _block_type_slug = \"ms-teams-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/817efe008a57f0a24f3587414714b563e5e23658-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n url: SecretStr = Field(\n ...,\n title=\"Webhook URL\",\n description=\"The Teams incoming webhook URL used to send notifications.\",\n example=(\n \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n ),\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.
Examples:
Load a saved Opsgenie webhook and send a message:
from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Opsgenie webhook.\n See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n for more info on formatting the URL.\n\n Examples:\n Load a saved Opsgenie webhook and send a message:\n ```python\n from prefect.blocks.notifications import OpsgenieWebhook\n opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n _block_type_name = \"Opsgenie Webhook\"\n _block_type_slug = \"opsgenie-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d8b5bc6244ae6cd83b62ec42f10d96e14d6e9113-280x280.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n apikey: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=\"The API Key associated with your Opsgenie account.\",\n )\n\n target_user: Optional[List] = Field(\n default=None, description=\"The user(s) you wish to notify.\"\n )\n\n target_team: Optional[List] = Field(\n default=None, description=\"The team(s) you wish to notify.\"\n )\n\n target_schedule: Optional[List] = Field(\n default=None, description=\"The schedule(s) you wish to notify.\"\n )\n\n target_escalation: Optional[List] = Field(\n default=None, description=\"The escalation(s) you wish to notify.\"\n )\n\n region_name: Literal[\"us\", \"eu\"] = Field(\n default=\"us\", description=\"The 2-character region code.\"\n )\n\n batch: bool = Field(\n default=False,\n description=\"Notify all targets in batches (instead of individually).\",\n )\n\n tags: Optional[List] = Field(\n default=None,\n description=(\n \"A comma-separated list of tags you can associate with your Opsgenie\"\n \" message.\"\n ),\n example='[\"tag1\", \"tag2\"]',\n )\n\n priority: Optional[str] = Field(\n default=3,\n description=(\n \"The priority to associate with the message. It is on a scale between 1\"\n \" (LOW) and 5 (EMERGENCY).\"\n ),\n )\n\n alias: Optional[str] = Field(\n default=None, description=\"The alias to associate with the message.\"\n )\n\n entity: Optional[str] = Field(\n default=None, description=\"The entity to associate with the message.\"\n )\n\n details: Optional[Dict[str, str]] = Field(\n default=None,\n description=\"Additional details composed of key/values pairs.\",\n example='{\"key1\": \"value1\", \"key2\": \"value2\"}',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n targets = []\n if self.target_user:\n [targets.append(f\"@{x}\") for x in self.target_user]\n if self.target_team:\n [targets.append(f\"#{x}\") for x in self.target_team]\n if self.target_schedule:\n [targets.append(f\"*{x}\") for x in self.target_schedule]\n if self.target_escalation:\n [targets.append(f\"^{x}\") for x in self.target_escalation]\n url = SecretStr(\n NotifyOpsgenie(\n apikey=self.apikey.get_secret_value(),\n targets=targets,\n region_name=self.region_name,\n details=self.details,\n priority=self.priority,\n alias=self.alias,\n entity=self.entity,\n batch=self.batch,\n tags=self.tags,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.
Examples:
Load a saved PagerDuty webhook and send a message:
from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided PagerDuty webhook.\n See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n for more info on formatting the URL.\n\n Examples:\n Load a saved PagerDuty webhook and send a message:\n ```python\n from prefect.blocks.notifications import PagerDutyWebHook\n pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n _block_type_name = \"Pager Duty Webhook\"\n _block_type_slug = \"pager-duty-webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8dbf37d17089c1ce531708eac2e510801f7b3aee-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n # The default cannot be prefect_default because NotifyPagerDuty's\n # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n default=\"info\", description=\"The severity of the notification.\"\n )\n\n integration_key: SecretStr = Field(\n default=...,\n description=(\n \"This can be found on the Events API V2 \"\n \"integration's detail page, and is also referred to as a Routing Key. \"\n \"This must be provided alongside `api_key`, but will error if provided \"\n \"alongside `url`.\"\n ),\n )\n\n api_key: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=(\n \"This can be found under Integrations. \"\n \"This must be provided alongside `integration_key`, but will error if \"\n \"provided alongside `url`.\"\n ),\n )\n\n source: Optional[str] = Field(\n default=\"Prefect\", description=\"The source string as part of the payload.\"\n )\n\n component: str = Field(\n default=\"Notification\",\n description=\"The component string as part of the payload.\",\n )\n\n group: Optional[str] = Field(\n default=None, description=\"The group string as part of the payload.\"\n )\n\n class_id: Optional[str] = Field(\n default=None,\n title=\"Class ID\",\n description=\"The class string as part of the payload.\",\n )\n\n region_name: Literal[\"us\", \"eu\"] = Field(\n default=\"us\", description=\"The region name.\"\n )\n\n clickable_url: Optional[AnyHttpUrl] = Field(\n default=None,\n title=\"Clickable URL\",\n description=\"A clickable URL to associate with the notice.\",\n )\n\n include_image: bool = Field(\n default=True,\n description=\"Associate the notification status via a represented icon.\",\n )\n\n custom_details: Optional[Dict[str, str]] = Field(\n default=None,\n description=\"Additional details to include as part of the payload.\",\n example='{\"disk_space_left\": \"145GB\"}',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n url = SecretStr(\n NotifyPagerDuty(\n apikey=self.api_key.get_secret_value(),\n integrationkey=self.integration_key.get_secret_value(),\n source=self.source,\n component=self.component,\n group=self.group,\n class_id=self.class_id,\n region_name=self.region_name,\n click=self.clickable_url,\n include_image=self.include_image,\n details=self.custom_details,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via any sendgrid account. See Apprise Notify_sendgrid docs
Examples:
Load a saved Sendgrid and send a email message: ```python from prefect.blocks.notifications import SendgridEmail
sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")
sendgrid_block.notify(\"Hello from Prefect!\")
Source code inprefect/blocks/notifications.py
class SendgridEmail(AbstractAppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via any sendgrid account.\n See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n Examples:\n Load a saved Sendgrid and send a email message:\n ```python\n from prefect.blocks.notifications import SendgridEmail\n\n sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n sendgrid_block.notify(\"Hello from Prefect!\")\n \"\"\"\n\n _description = \"Enables sending notifications via Sendgrid email service.\"\n _block_type_name = \"Sendgrid Email\"\n _block_type_slug = \"sendgrid-email\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n api_key: SecretStr = Field(\n default=...,\n title=\"API Key\",\n description=\"The API Key associated with your sendgrid account.\",\n )\n\n sender_email: str = Field(\n title=\"Sender email id\",\n description=\"The sender email id.\",\n example=\"test-support@gmail.com\",\n )\n\n to_emails: List[str] = Field(\n default=...,\n title=\"Recipient emails\",\n description=\"Email ids of all recipients.\",\n example='\"recipient1@gmail.com\"',\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n url = SecretStr(\n NotifySendGrid(\n apikey=self.api_key.get_secret_value(),\n from_email=self.sender_email,\n targets=self.to_emails,\n ).url()\n )\n\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook
","text":" Bases: AppriseNotificationBlock
Enables sending notifications via a provided Slack webhook.
Examples:
Load a saved Slack webhook and send a message:
from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class SlackWebhook(AppriseNotificationBlock):\n \"\"\"\n Enables sending notifications via a provided Slack webhook.\n\n Examples:\n Load a saved Slack webhook and send a message:\n ```python\n from prefect.blocks.notifications import SlackWebhook\n\n slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n slack_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _block_type_name = \"Slack Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"Slack incoming webhook URL used to send notifications.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS
","text":" Bases: AbstractAppriseNotificationBlock
Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.
Examples:
Load a saved TwilioSMS
block and send a message:
from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n
Source code in prefect/blocks/notifications.py
class TwilioSMS(AbstractAppriseNotificationBlock):\n \"\"\"Enables sending notifications via Twilio SMS.\n Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n Examples:\n Load a saved `TwilioSMS` block and send a message:\n ```python\n from prefect.blocks.notifications import TwilioSMS\n twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n twilio_webhook_block.notify(\"Hello from Prefect!\")\n ```\n \"\"\"\n\n _description = \"Enables sending notifications via Twilio SMS.\"\n _block_type_name = \"Twilio SMS\"\n _block_type_slug = \"twilio-sms\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8bd8777999f82112c09b9c8d57083ac75a4a0d65-250x250.png\" # noqa\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n account_sid: str = Field(\n default=...,\n description=(\n \"The Twilio Account SID - it can be found on the homepage \"\n \"of the Twilio console.\"\n ),\n )\n\n auth_token: SecretStr = Field(\n default=...,\n description=(\n \"The Twilio Authentication Token - \"\n \"it can be found on the homepage of the Twilio console.\"\n ),\n )\n\n from_phone_number: str = Field(\n default=...,\n description=\"The valid Twilio phone number to send the message from.\",\n example=\"18001234567\",\n )\n\n to_phone_numbers: List[str] = Field(\n default=...,\n description=\"A list of valid Twilio phone number(s) to send the message to.\",\n # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n example=\"18004242424\",\n )\n\n def block_initialization(self) -> None:\n from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n url = SecretStr(\n NotifyTwilio(\n account_sid=self.account_sid,\n auth_token=self.auth_token.get_secret_value(),\n source=self.from_phone_number,\n targets=self.to_phone_numbers,\n ).url()\n )\n self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system
","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime
","text":" Bases: Block
A block that represents a datetime
Attributes:
Name Type Descriptionvalue
DateTime
An ISO 8601-compatible datetime value.
ExampleLoad a stored DateTime value:
from prefect.blocks.system import DateTime\n\ndate_time_block = DateTime.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class DateTime(Block):\n \"\"\"\n A block that represents a datetime\n\n Attributes:\n value: An ISO 8601-compatible datetime value.\n\n Example:\n Load a stored JSON value:\n ```python\n from prefect.blocks.system import DateTime\n\n data_time_block = DateTime.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _block_type_name = \"Date Time\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8b3da9a6621e92108b8e6a75b82e15374e170ff7-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n value: pendulum.DateTime = Field(\n default=...,\n description=\"An ISO 8601-compatible datetime value.\",\n )\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON
","text":" Bases: Block
A block that represents JSON
Attributes:
Name Type Descriptionvalue
Any
A JSON-compatible value.
ExampleLoad a stored JSON value:
from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class JSON(Block):\n \"\"\"\n A block that represents JSON\n\n Attributes:\n value: A JSON-compatible value.\n\n Example:\n Load a stored JSON value:\n ```python\n from prefect.blocks.system import JSON\n\n json_block = JSON.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/4fcef2294b6eeb423b1332d1ece5156bf296ff96-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret
","text":" Bases: Block
A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.
Attributes:
Name Type Descriptionvalue
SecretStr
A string value that should be kept secret.
Examplefrom prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
Source code in prefect/blocks/system.py
class Secret(Block):\n \"\"\"\n A block that represents a secret value. The value stored in this block will be obfuscated when\n this block is logged or shown in the UI.\n\n Attributes:\n value: A string value that should be kept secret.\n\n Example:\n ```python\n from prefect.blocks.system import Secret\n\n secret_block = Secret.load(\"BLOCK_NAME\")\n\n # Access the stored secret\n secret_block.get()\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c6f20e556dd16effda9df16551feecfb5822092b-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n value: SecretStr = Field(\n default=..., description=\"A string value that should be kept secret.\"\n )\n\n def get(self):\n return self.value.get_secret_value()\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String
","text":" Bases: Block
A block that represents a string
Attributes:
Name Type Descriptionvalue
str
A string value.
ExampleLoad a stored string value:
from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n
Source code in prefect/blocks/system.py
class String(Block):\n \"\"\"\n A block that represents a string\n\n Attributes:\n value: A string value.\n\n Example:\n Load a stored string value:\n ```python\n from prefect.blocks.system import String\n\n string_block = String.load(\"BLOCK_NAME\")\n ```\n \"\"\"\n\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c262ea2c80a2c043564e8763f3370c3db5a6b3e6-48x48.png\"\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n value: str = Field(default=..., description=\"A string value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook
","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook
","text":" Bases: Block
Block that enables calling webhooks.
Source code inprefect/blocks/webhook.py
class Webhook(Block):\n \"\"\"\n Block that enables calling webhooks.\n \"\"\"\n\n _block_type_name = \"Webhook\"\n _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\" # type: ignore\n _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n )\n\n url: SecretStr = Field(\n default=...,\n title=\"Webhook URL\",\n description=\"The webhook URL.\",\n example=\"https://hooks.slack.com/XXX\",\n )\n\n headers: SecretDict = Field(\n default_factory=lambda: SecretDict(dict()),\n title=\"Webhook Headers\",\n description=\"A dictionary of headers to send with the webhook request.\",\n )\n\n def block_initialization(self):\n self._client = AsyncClient(transport=_http_transport)\n\n async def call(self, payload: Optional[dict] = None) -> Response:\n \"\"\"\n Call the webhook.\n\n Args:\n payload: an optional payload to send when calling the webhook.\n \"\"\"\n async with self._client:\n return await self._client.request(\n method=self.method,\n url=self.url.get_secret_value(),\n headers=self.headers.get_secret_value(),\n json=payload,\n )\n
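Example
A sketch of calling a webhook (the URL is hypothetical); note that call is async and must be awaited:
import asyncio\n\nfrom prefect.blocks.webhook import Webhook\n\nasync def main():\n    webhook_block = Webhook(url=\"https://hooks.example.com/XXX\")\n    response = await webhook_block.call(payload={\"message\": \"Hello from Prefect!\"})\n    print(response.status_code)\n\nasyncio.run(main())\n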
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call
async
","text":"Call the webhook.
Parameters:
Name Type Description Defaultpayload
Optional[dict]
an optional payload to send when calling the webhook.
None
Source code in prefect/blocks/webhook.py
async def call(self, payload: Optional[dict] = None) -> Response:\n \"\"\"\n Call the webhook.\n\n Args:\n payload: an optional payload to send when calling the webhook.\n \"\"\"\n async with self._client:\n return await self._client.request(\n method=self.method,\n url=self.url.get_secret_value(),\n headers=self.headers.get_secret_value(),\n json=payload,\n )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent
","text":"Command line interface for working with agent services
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start
async
","text":"Start an agent process to poll one or more work queues for flow runs.
Source code in prefect/cli/agent.py
@agent_app.command()\nasync def start(\n # deprecated main argument\n work_queue: str = typer.Argument(\n None,\n show_default=False,\n help=\"DEPRECATED: A work queue name or ID\",\n ),\n work_queues: List[str] = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the agent to pull from.\",\n ),\n work_queue_prefix: List[str] = typer.Option(\n None,\n \"-m\",\n \"--match\",\n help=(\n \"Dynamically matches work queue names with the specified prefix for the\"\n \" agent to pull from,for example `dev-` will match all work queues with a\"\n \" name that starts with `dev-`\"\n ),\n ),\n work_pool_name: str = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"A work pool name for the agent to pull from.\",\n ),\n hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n api: str = SettingsOption(PREFECT_API_URL),\n run_once: bool = typer.Option(\n False, help=\"Run the agent loop once, instead of forever.\"\n ),\n prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n # deprecated tags\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"DEPRECATED: One or more optional tags that will be used to create a work\"\n \" queue. This option will be removed on 2023-02-23.\"\n ),\n ),\n limit: int = typer.Option(\n None,\n \"-l\",\n \"--limit\",\n help=\"Maximum number of flow runs to start simultaneously.\",\n ),\n):\n \"\"\"\n Start an agent process to poll one or more work queues for flow runs.\n \"\"\"\n work_queues = work_queues or []\n\n if work_queue is not None:\n # try to treat the work_queue as a UUID\n try:\n async with get_client() as client:\n q = await client.read_work_queue(UUID(work_queue))\n work_queue = q.name\n # otherwise treat it as a string name\n except (TypeError, ValueError):\n pass\n work_queues.append(work_queue)\n app.console.print(\n (\n \"Agents now support multiple work queues. Instead of passing a single\"\n \" argument, provide work queue names with the `-q` or `--work-queue`\"\n f\" flag: `prefect agent start -q {work_queue}`\\n\"\n ),\n style=\"blue\",\n )\n\n if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n exit_with_error(\"No work queues provided!\", style=\"red\")\n elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n exit_with_error(\n \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n style=\"red\",\n )\n if work_pool_name and tags:\n exit_with_error(\n \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n )\n\n if tags:\n work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n app.console.print(\n (\n \"`tags` are deprecated. For backwards-compatibility with old versions\"\n \" of Prefect, this agent will create a work queue named\"\n f\" `{work_queue_name}` that uses legacy tag-based matching. 
This option\"\n \" will be removed on 2023-02-23.\"\n ),\n style=\"red\",\n )\n\n async with get_client() as client:\n try:\n work_queue = await client.read_work_queue_by_name(work_queue_name)\n if work_queue.filter is None:\n # ensure the work queue has legacy (deprecated) tag-based behavior\n await client.update_work_queue(filter=dict(tags=tags))\n except ObjectNotFound:\n # if the work queue doesn't already exist, we create it with tags\n # to enable legacy (deprecated) tag-matching behavior\n await client.create_work_queue(name=work_queue_name, tags=tags)\n\n work_queues = [work_queue_name]\n\n if not hide_welcome:\n if api:\n app.console.print(\n f\"Starting v{prefect.__version__} agent connected to {api}...\"\n )\n else:\n app.console.print(\n f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n )\n\n agent_process_id = os.getpid()\n setup_signal_handlers_agent(\n agent_process_id, \"the Prefect agent\", app.console.print\n )\n\n async with PrefectAgent(\n work_queues=work_queues,\n work_queue_prefix=work_queue_prefix,\n work_pool_name=work_pool_name,\n prefetch_seconds=prefetch_seconds,\n limit=limit,\n ) as agent:\n if not hide_welcome:\n app.console.print(ascii_name)\n if work_pool_name:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"work pool '{work_pool_name}'...\"\n )\n elif work_queue_prefix:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n )\n else:\n app.console.print(\n \"Agent started! Looking for work from \"\n f\"queue(s): {', '.join(work_queues)}...\"\n )\n\n async with anyio.create_task_group() as tg:\n tg.start_soon(\n partial(\n critical_service_loop,\n agent.get_and_submit_flow_runs,\n PREFECT_AGENT_QUERY_INTERVAL.value(),\n printer=app.console.print,\n run_once=run_once,\n jitter_range=0.3,\n backoff=4, # Up to ~1 minute interval during backoff\n )\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n agent.check_for_cancelled_flow_runs,\n PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n printer=app.console.print,\n run_once=run_once,\n jitter_range=0.3,\n backoff=4,\n )\n )\n\n app.console.print(\"Agent stopped!\")\n
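For example, to poll a single work queue, or to pull from a work pool while capping simultaneous flow runs (names illustrative):
$ prefect agent start -q my-queue
$ prefect agent start -p my-pool --limit 5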
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/artifact/","title":"artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact","title":"prefect.cli.artifact
","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.delete","title":"delete
async
","text":"Delete an artifact.
Parameters:
Name Type Description Default
key Optional[str] the key of the artifact to delete Argument(None, help='The key of the artifact to delete.')
Examples:
$ prefect artifact delete \"my-artifact\"
Source code in prefect/cli/artifact.py
@artifact_app.command(\"delete\")\nasync def delete(\n key: Optional[str] = typer.Argument(\n None, help=\"The key of the artifact to delete.\"\n ),\n artifact_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"The ID of the artifact to delete.\"\n ),\n):\n \"\"\"\n Delete an artifact.\n\n Arguments:\n key: the key of the artifact to delete\n\n Examples:\n $ prefect artifact delete \"my-artifact\"\n \"\"\"\n if key and artifact_id:\n exit_with_error(\"Please provide either a key or an artifact_id but not both.\")\n\n async with get_client() as client:\n if artifact_id is not None:\n try:\n confirm_delete = typer.confirm(\n (\n \"Are you sure you want to delete artifact with id\"\n f\" {artifact_id!r}?\"\n ),\n default=False,\n )\n if not confirm_delete:\n exit_with_error(\"Deletion aborted.\")\n\n await client.delete_artifact(artifact_id)\n exit_with_success(f\"Deleted artifact with id {artifact_id!r}.\")\n except ObjectNotFound:\n exit_with_error(f\"Artifact with id {artifact_id!r} not found!\")\n\n elif key is not None:\n artifacts = await client.read_artifacts(\n artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n )\n if not artifacts:\n exit_with_error(\n f\"Artifact with key {key!r} not found. You can also specify an\"\n \" artifact id with the --id flag.\"\n )\n\n confirm_delete = typer.confirm(\n (\n f\"Are you sure you want to delete {len(artifacts)} artifact(s) with\"\n f\" key {key!r}?\"\n ),\n default=False,\n )\n if not confirm_delete:\n exit_with_error(\"Deletion aborted.\")\n\n for a in artifacts:\n await client.delete_artifact(a.id)\n\n exit_with_success(f\"Deleted {len(artifacts)} artifact(s) with key {key!r}.\")\n\n else:\n exit_with_error(\"Please provide a key or an artifact_id.\")\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.inspect","title":"inspect
async
","text":"View details about an artifact.\n\nArguments:\n key: the key of the artifact to inspect\n\nExamples:\n $ prefect artifact inspect \"my-artifact\"\n [\n {\n 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n 'created': '2023-03-21T21:40:09.895910+00:00',\n 'updated': '2023-03-21T21:40:09.895910+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': None,\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n 'task_run_id': None\n },\n {\n 'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n 'created': '2023-03-27T23:16:15.536434+00:00',\n 'updated': '2023-03-27T23:16:15.536434+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': 'my-artifact-description',\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n 'task_run_id': None\n }\n
]
Source code in prefect/cli/artifact.py
@artifact_app.command(\"inspect\")\nasync def inspect(\n key: str,\n limit: int = typer.Option(\n 10,\n \"--limit\",\n help=\"The maximum number of artifacts to return.\",\n ),\n):\n \"\"\"\n View details about an artifact.\n\n Arguments:\n key: the key of the artifact to inspect\n\n Examples:\n $ prefect artifact inspect \"my-artifact\"\n [\n {\n 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n 'created': '2023-03-21T21:40:09.895910+00:00',\n 'updated': '2023-03-21T21:40:09.895910+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': None,\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n 'task_run_id': None\n },\n {\n 'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n 'created': '2023-03-27T23:16:15.536434+00:00',\n 'updated': '2023-03-27T23:16:15.536434+00:00',\n 'key': 'my-artifact',\n 'type': 'markdown',\n 'description': 'my-artifact-description',\n 'data': 'my markdown',\n 'metadata_': None,\n 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n 'task_run_id': None\n }\n ]\n \"\"\"\n\n async with get_client() as client:\n artifacts = await client.read_artifacts(\n limit=limit,\n sort=ArtifactSort.UPDATED_DESC,\n artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n )\n if not artifacts:\n exit_with_error(f\"Artifact {key!r} not found.\")\n\n artifacts = [a.dict(json_compatible=True) for a in artifacts]\n\n app.console.print(Pretty(artifacts))\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.list_artifacts","title":"list_artifacts
async
","text":"List artifacts.
Source code in prefect/cli/artifact.py
@artifact_app.command(\"ls\")\nasync def list_artifacts(\n limit: int = typer.Option(\n 100,\n \"--limit\",\n help=\"The maximum number of artifacts to return.\",\n ),\n all: bool = typer.Option(\n False,\n \"--all\",\n \"-a\",\n help=\"Whether or not to only return the latest version of each artifact.\",\n ),\n):\n \"\"\"\n List artifacts.\n \"\"\"\n table = Table(\n title=\"Artifacts\",\n caption=\"List Artifacts using `prefect artifact ls`\",\n show_header=True,\n )\n\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Key\", style=\"blue\", no_wrap=True)\n table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n async with get_client() as client:\n if all:\n artifacts = await client.read_artifacts(\n sort=ArtifactSort.KEY_ASC,\n limit=limit,\n )\n\n for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n table.add_row(\n str(artifact.id),\n artifact.key,\n artifact.type,\n pendulum.instance(artifact.updated).diff_for_humans(),\n )\n\n else:\n artifacts = await client.read_latest_artifacts(\n sort=ArtifactCollectionSort.KEY_ASC,\n limit=limit,\n )\n\n for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n table.add_row(\n str(artifact.latest_id),\n artifact.key,\n artifact.type,\n pendulum.instance(artifact.updated).diff_for_humans(),\n )\n\n app.console.print(table)\n
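For example, to list every version of up to 50 artifacts instead of only the latest version of each key:
$ prefect artifact ls --limit 50 --all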
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/block/","title":"block","text":"","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block","title":"prefect.cli.block
","text":"Command line interface for working with blocks.
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_create","title":"block_create
async
","text":"Generate a link to the Prefect UI to create a block.
Source code in prefect/cli/block.py
@blocks_app.command(\"create\")\nasync def block_create(\n block_type_slug: str = typer.Argument(\n ...,\n help=\"A block type slug. View available types with: prefect block type ls\",\n show_default=False,\n ),\n):\n \"\"\"\n Generate a link to the Prefect UI to create a block.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(block_type_slug)\n except ObjectNotFound:\n app.console.print(f\"[red]Block type {block_type_slug!r} not found![/red]\")\n block_types = await client.read_block_types()\n slugs = {block_type.slug for block_type in block_types}\n app.console.print(f\"Available block types: {', '.join(slugs)}\")\n raise typer.Exit(1)\n\n if not PREFECT_UI_URL:\n exit_with_error(\n \"Prefect must be configured to use a hosted Prefect server or \"\n \"Prefect Cloud to display the Prefect UI\"\n )\n\n block_link = f\"{PREFECT_UI_URL.value()}/blocks/catalog/{block_type.slug}/create\"\n app.console.print(\n f\"Create a {block_type_slug} block: {block_link}\",\n )\n
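For example, to generate a creation link for the built-in json block type (list available slugs with prefect block type ls):
$ prefect block create json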
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_delete","title":"block_delete
async
","text":"Delete a configured block.
Source code in prefect/cli/block.py
@blocks_app.command(\"delete\")\nasync def block_delete(\n slug: Optional[str] = typer.Argument(\n None, help=\"A block slug. Formatted as '<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'\"\n ),\n block_id: Optional[str] = typer.Option(None, \"--id\", help=\"A block id.\"),\n):\n \"\"\"\n Delete a configured block.\n \"\"\"\n async with get_client() as client:\n if slug is None and block_id is not None:\n try:\n await client.delete_block_document(block_id)\n exit_with_success(f\"Deleted Block '{block_id}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {block_id!r} not found!\")\n elif slug is not None:\n block_type_slug, block_document_name = slug.split(\"/\")\n try:\n block_document = await client.read_block_document_by_name(\n block_document_name, block_type_slug, include_secrets=False\n )\n await client.delete_block_document(block_document.id)\n exit_with_success(f\"Deleted Block '{slug}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Block {slug!r} not found!\")\n else:\n exit_with_error(\"Must provide a block slug or id\")\n
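For example, deleting by slug or by id (values illustrative):
$ prefect block delete json/my-config
$ prefect block delete --id <block-id>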
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_inspect","title":"block_inspect
async
","text":"Displays details about a configured block.
Source code in prefect/cli/block.py
@blocks_app.command(\"inspect\")\nasync def block_inspect(\n slug: Optional[str] = typer.Argument(\n None, help=\"A Block slug: <BLOCK_TYPE_SLUG>/<BLOCK_NAME>\"\n ),\n block_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"A Block id to search for if no slug is given\"\n ),\n):\n \"\"\"\n Displays details about a configured block.\n \"\"\"\n async with get_client() as client:\n if slug is None and block_id is not None:\n try:\n block_document = await client.read_block_document(\n block_id, include_secrets=False\n )\n except ObjectNotFound:\n exit_with_error(f\"Deployment {block_id!r} not found!\")\n elif slug is not None:\n block_type_slug, block_document_name = slug.split(\"/\")\n try:\n block_document = await client.read_block_document_by_name(\n block_document_name, block_type_slug, include_secrets=False\n )\n except ObjectNotFound:\n exit_with_error(f\"Block {slug!r} not found!\")\n else:\n exit_with_error(\"Must provide a block slug or id\")\n app.console.print(display_block(block_document))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_ls","title":"block_ls
async
","text":"View all configured blocks.
Source code in prefect/cli/block.py
@blocks_app.command(\"ls\")\nasync def block_ls():\n \"\"\"\n View all configured blocks.\n \"\"\"\n async with get_client() as client:\n blocks = await client.read_block_documents()\n\n table = Table(\n title=\"Blocks\", caption=\"List Block Types using `prefect block type ls`\"\n )\n table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(\"Slug\", style=\"blue\", no_wrap=True)\n\n for block in sorted(blocks, key=lambda x: f\"{x.block_type.slug}/{x.name}\"):\n table.add_row(\n str(block.id),\n block.block_type.name,\n str(block.name),\n f\"{block.block_type.slug}/{block.name}\",\n )\n\n app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_delete","title":"blocktype_delete
async
","text":"Delete an unprotected Block Type.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"delete\")\nasync def blocktype_delete(\n slug: str = typer.Argument(..., help=\"A Block type slug\"),\n):\n \"\"\"\n Delete an unprotected Block Type.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(slug)\n await client.delete_block_type(block_type.id)\n exit_with_success(f\"Deleted Block Type '{slug}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Block Type {slug!r} not found!\")\n except ProtectedBlockError:\n exit_with_error(f\"Block Type {slug!r} is a protected block!\")\n except PrefectHTTPStatusError:\n exit_with_error(f\"Cannot delete Block Type {slug!r}!\")\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_inspect","title":"blocktype_inspect
async
","text":"Display details about a block type.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"inspect\")\nasync def blocktype_inspect(\n slug: str = typer.Argument(..., help=\"A block type slug\"),\n):\n \"\"\"\n Display details about a block type.\n \"\"\"\n async with get_client() as client:\n try:\n block_type = await client.read_block_type_by_slug(slug)\n except ObjectNotFound:\n exit_with_error(f\"Block type {slug!r} not found!\")\n\n app.console.print(display_block_type(block_type))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.list_types","title":"list_types
async
","text":"List all block types.
Source code in prefect/cli/block.py
@blocktypes_app.command(\"ls\")\nasync def list_types():\n \"\"\"\n List all block types.\n \"\"\"\n async with get_client() as client:\n block_types = await client.read_block_types()\n\n table = Table(\n title=\"Block Types\",\n show_lines=True,\n )\n\n table.add_column(\"Block Type Slug\", style=\"italic cyan\", no_wrap=True)\n table.add_column(\"Description\", style=\"blue\", no_wrap=False, justify=\"left\")\n table.add_column(\n \"Generate creation link\", style=\"italic cyan\", no_wrap=False, justify=\"left\"\n )\n\n for blocktype in sorted(block_types, key=lambda x: x.name):\n table.add_row(\n str(blocktype.slug),\n (\n str(blocktype.description.splitlines()[0].partition(\".\")[0])\n if blocktype.description is not None\n else \"\"\n ),\n f\"prefect block create {blocktype.slug}\",\n )\n\n app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.register","title":"register
async
","text":"Register blocks types within a module or file.
This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition.
Examples:
Register block types in a Python module: $ prefect block register -m prefect_aws.credentials
Register block types in a .py file: $ prefect block register -f my_blocks.py
Source code in prefect/cli/block.py
@blocks_app.command()\nasync def register(\n module_name: Optional[str] = typer.Option(\n None,\n \"--module\",\n \"-m\",\n help=\"Python module containing block types to be registered\",\n ),\n file_path: Optional[Path] = typer.Option(\n None,\n \"--file\",\n \"-f\",\n help=\"Path to .py file containing block types to be registered\",\n ),\n):\n \"\"\"\n Register blocks types within a module or file.\n\n This makes the blocks available for configuration via the UI.\n If a block type has already been registered, its registration will be updated to\n match the block's current definition.\n\n \\b\n Examples:\n \\b\n Register block types in a Python module:\n $ prefect block register -m prefect_aws.credentials\n \\b\n Register block types in a .py file:\n $ prefect block register -f my_blocks.py\n \"\"\"\n # Handles if both options are specified or if neither are specified\n if not (bool(file_path) ^ bool(module_name)):\n exit_with_error(\n \"Please specify either a module or a file containing blocks to be\"\n \" registered, but not both.\"\n )\n\n if module_name:\n try:\n imported_module = import_module(name=module_name)\n except ModuleNotFoundError:\n exit_with_error(\n f\"Unable to load {module_name}. Please make sure the module is \"\n \"installed in your current environment.\"\n )\n\n if file_path:\n if file_path.suffix != \".py\":\n exit_with_error(\n f\"{file_path} is not a .py file. Please specify a \"\n \".py that contains blocks to be registered.\"\n )\n try:\n imported_module = await run_sync_in_worker_thread(\n load_script_as_module, str(file_path)\n )\n except ScriptError as exc:\n app.console.print(exc)\n app.console.print(exception_traceback(exc.user_exc))\n exit_with_error(\n f\"Unable to load file at {file_path}. Please make sure the file path \"\n \"is correct and the file contains valid Python.\"\n )\n\n registered_blocks = await _register_blocks_in_module(imported_module)\n number_of_registered_blocks = len(registered_blocks)\n block_text = \"block\" if 0 < number_of_registered_blocks < 2 else \"blocks\"\n app.console.print(\n f\"[green]Successfully registered {number_of_registered_blocks} {block_text}\\n\"\n )\n app.console.print(_build_registered_blocks_table(registered_blocks))\n msg = (\n \"\\n To configure the newly registered blocks, \"\n \"go to the Blocks page in the Prefect UI.\\n\"\n )\n\n ui_url = PREFECT_UI_URL.value()\n if ui_url is not None:\n block_catalog_url = f\"{ui_url}/blocks/catalog\"\n msg = f\"{msg.rstrip().rstrip('.')}: {block_catalog_url}\\n\"\n\n app.console.print(msg)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"Cloud webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook
","text":"Command line interface for working with webhooks
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create
async
","text":"Create a new Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def create(\n webhook_name: str,\n description: str = typer.Option(\n \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n ),\n template: str = typer.Option(\n None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n ),\n):\n \"\"\"\n Create a new Cloud webhook\n \"\"\"\n if not template:\n exit_with_error(\n \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n ' should define (at minimum) the following attributes: \\n{ \"event\":'\n ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n ' \"your.resource.id\" } }'\n \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n )\n\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\n \"POST\",\n \"/webhooks/\",\n json={\n \"name\": webhook_name,\n \"description\": description,\n \"template\": template,\n },\n )\n app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
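For example, creating a webhook with the minimal template attributes described above (names illustrative):
$ prefect cloud webhook create my-webhook -t '{\"event\": \"your.event.name\", \"resource\": {\"prefect.resource.id\": \"your.resource.id\"}}'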
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete
async
","text":"Delete an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def delete(webhook_id: UUID):\n \"\"\"\n Delete an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n confirm_delete = typer.confirm(\n \"Are you sure you want to delete it? This cannot be undone.\"\n )\n\n if not confirm_delete:\n return\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get
async
","text":"Retrieve a webhook by ID.
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def get(webhook_id: UUID):\n \"\"\"\n Retrieve a webhook by ID.\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n display_table = _render_webhooks_into_table([webhook])\n app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls
async
","text":"Fetch and list all webhooks in your workspace
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def ls():\n \"\"\"\n Fetch and list all webhooks in your workspace\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n display_table = _render_webhooks_into_table(retrieved_webhooks)\n app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate
async
","text":"Rotate url for an existing Cloud webhook, in case it has been compromised
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def rotate(webhook_id: UUID):\n \"\"\"\n Rotate url for an existing Cloud webhook, in case it has been compromised\n \"\"\"\n confirm_logged_in()\n\n confirm_rotate = typer.confirm(\n \"Are you sure you want to rotate? This will invalidate the old URL.\"\n )\n\n if not confirm_rotate:\n return\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle
async
","text":"Toggle the enabled status of an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def toggle(\n webhook_id: UUID,\n):\n \"\"\"\n Toggle the enabled status of an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n current_status = response[\"enabled\"]\n new_status = not current_status\n\n await client.request(\n \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n )\n app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update
async
","text":"Partially update an existing Cloud webhook
Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def update(\n webhook_id: UUID,\n webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n description: str = typer.Option(\n None, \"--description\", \"-d\", help=\"Description of the webhook\"\n ),\n template: str = typer.Option(\n None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n ),\n):\n \"\"\"\n Partially update an existing Cloud webhook\n \"\"\"\n confirm_logged_in()\n\n # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n update_payload = {\n \"name\": webhook_name or response[\"name\"],\n \"description\": description or response[\"description\"],\n \"template\": template or response[\"template\"],\n }\n\n await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud
","text":"Command line interface for interacting with Prefect Cloud
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan)
module-attribute
","text":"This small API server is used for data transmission for browser-based log in.
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login
async
","text":"Attempt to use a key to see if it is valid
Source code in prefect/cli/cloud/__init__.py
async def check_key_is_valid_for_login(key: str):\n \"\"\"\n Attempt to use a key to see if it is valid\n \"\"\"\n async with get_cloud_client(api_key=key) as client:\n try:\n await client.read_workspaces()\n return True\n except CloudUnauthorizedError:\n return False\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login
async
","text":"Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def login(\n key: Optional[str] = typer.Option(\n None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n ),\n workspace_handle: Optional[str] = typer.Option(\n None,\n \"--workspace\",\n \"-w\",\n help=(\n \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n ),\n ),\n):\n \"\"\"\n Log in to Prefect Cloud.\n Creates a new profile configured to use the specified PREFECT_API_KEY.\n Uses a previously configured profile if it exists.\n \"\"\"\n if not is_interactive() and (not key or not workspace_handle):\n exit_with_error(\n \"When not using an interactive terminal, you must supply a `--key` and\"\n \" `--workspace`.\"\n )\n\n profiles = load_profiles()\n current_profile = get_settings_context().profile\n env_var_api_key = PREFECT_API_KEY.value()\n\n if env_var_api_key and key and env_var_api_key != key:\n exit_with_error(\n \"Cannot log in with a key when a different PREFECT_API_KEY is present as an\"\n \" environment variable that will override it.\"\n )\n\n if env_var_api_key and env_var_api_key == key:\n is_valid_key = await check_key_is_valid_for_login(key)\n is_correct_key_format = key.startswith(\"pnu_\") or key.startswith(\"pnb_\")\n if not is_valid_key:\n help_message = \"Please ensure your credentials are correct and unexpired.\"\n if not is_correct_key_format:\n help_message = \"Your key is not in our expected format.\"\n exit_with_error(\n f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n )\n\n already_logged_in_profiles = []\n for name, profile in profiles.items():\n profile_key = profile.settings.get(PREFECT_API_KEY)\n if (\n # If a key is provided, only show profiles with the same key\n (key and profile_key == key)\n # Otherwise, show all profiles with a key set\n or (not key and profile_key is not None)\n # Check that the key is usable to avoid suggesting unauthenticated profiles\n and await check_key_is_valid_for_login(profile_key)\n ):\n already_logged_in_profiles.append(name)\n\n current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n if current_profile_is_logged_in:\n app.console.print(\"It looks like you're already authenticated on this profile.\")\n should_reauth = typer.confirm(\n \"? Would you like to reauthenticate?\", default=False\n )\n if not should_reauth:\n app.console.print(\"Using the existing authentication on this profile.\")\n key = PREFECT_API_KEY.value()\n\n elif already_logged_in_profiles:\n app.console.print(\n \"It looks like you're already authenticated with another profile.\"\n )\n if typer.confirm(\n \"? 
Would you like to switch profiles?\",\n default=True,\n ):\n profile_name = prompt_select_from_list(\n app.console,\n \"Which authenticated profile would you like to switch to?\",\n already_logged_in_profiles,\n )\n\n profiles.set_active(profile_name)\n save_profiles(profiles)\n exit_with_success(f\"Switched to authenticated profile {profile_name!r}.\")\n\n if not key:\n choice = prompt_select_from_list(\n app.console,\n \"How would you like to authenticate?\",\n [\n (\"browser\", \"Log in with a web browser\"),\n (\"key\", \"Paste an API key\"),\n ],\n )\n\n if choice == \"key\":\n key = typer.prompt(\"Paste your API key\", hide_input=True)\n elif choice == \"browser\":\n key = await login_with_browser()\n\n async with get_cloud_client(api_key=key) as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n if key.startswith(\"pcu\"):\n help_message = (\n \"It looks like you're using API key from Cloud 1\"\n \" (https://cloud.prefect.io). Make sure that you generate API key\"\n \" using Cloud 2 (https://app.prefect.cloud)\"\n )\n elif not key.startswith(\"pnu_\") and not key.startswith(\"pnb_\"):\n help_message = (\n \"Your key is not in our expected format: 'pnu_' or 'pnb_'.\"\n )\n else:\n help_message = (\n \"Please ensure your credentials are correct and unexpired.\"\n )\n exit_with_error(\n f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n )\n except httpx.HTTPStatusError as exc:\n exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n if workspace_handle:\n # Search for the given workspace\n for workspace in workspaces:\n if workspace.handle == workspace_handle:\n break\n else:\n if workspaces:\n hint = (\n \" Available workspaces:\"\n f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n )\n else:\n hint = \"\"\n\n exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n else:\n # Prompt a switch if the number of workspaces is greater than one\n prompt_switch_workspace = len(workspaces) > 1\n\n current_workspace = get_current_workspace(workspaces)\n\n # Confirm that we want to switch if the current profile is already logged in\n if (\n current_profile_is_logged_in and current_workspace is not None\n ) and prompt_switch_workspace:\n app.console.print(\n f\"You are currently using workspace {current_workspace.handle!r}.\"\n )\n prompt_switch_workspace = typer.confirm(\n \"? Would you like to switch workspaces?\", default=False\n )\n\n if prompt_switch_workspace:\n workspace = prompt_select_from_list(\n app.console,\n \"Which workspace would you like to use?\",\n [(workspace, workspace.handle) for workspace in workspaces],\n )\n else:\n if current_workspace:\n workspace = current_workspace\n elif len(workspaces) > 0:\n workspace = workspaces[0]\n else:\n exit_with_error(\n \"No workspaces found! Create a workspace at\"\n f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n )\n\n update_current_profile(\n {\n PREFECT_API_KEY: key,\n PREFECT_API_URL: workspace.api_url(),\n }\n )\n\n exit_with_success(\n f\"Authenticated with Prefect Cloud! Using workspace {workspace.handle!r}.\"\n )\n
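For non-interactive environments, both flags must be supplied; a sketch with placeholder values:
$ prefect cloud login --key pnu_XXXXXXXX --workspace my-account/my-workspace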
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser
async
","text":"Perform login using the browser.
On failure, this function will exit the process. On success, it will return an API key.
Source code in prefect/cli/cloud/__init__.py
async def login_with_browser() -> str:\n \"\"\"\n Perform login using the browser.\n\n On failure, this function will exit the process.\n On success, it will return an API key.\n \"\"\"\n\n # Set up an event that the login API will toggle on startup\n ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n # Set up an event that the login API will set when a response comes from the UI\n result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n timeout_scope = None\n async with anyio.create_task_group() as tg:\n # Run a server in the background to get payload from the browser\n server = await tg.start(serve_login_api, tg.cancel_scope)\n\n # Wait for the login server to be ready\n with anyio.fail_after(10):\n await ready_event.wait()\n\n # The server may not actually be serving as the lifespan is started first\n while not server.started:\n await anyio.sleep(0)\n\n # Get the port the server is using\n server_port = server.servers[0].sockets[0].getsockname()[1]\n callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n ui_login_url = (\n PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n )\n\n # Then open the authorization page in a new browser tab\n app.console.print(\"Opening browser...\")\n await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n # Wait for the response from the browser,\n with anyio.move_on_after(120) as timeout_scope:\n app.console.print(\"Waiting for response...\")\n await result_event.wait()\n\n # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n # login API\n raise_signal(signal.SIGINT)\n\n result = login_api.extra.get(\"result\")\n if not result:\n if timeout_scope and timeout_scope.cancel_called:\n exit_with_error(\"Timed out while waiting for authorization.\")\n else:\n exit_with_error(\"Aborted.\")\n\n if result.type == \"success\":\n return result.content.api_key\n elif result.type == \"failure\":\n exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout
async
","text":"Logout the current workspace. Reset PREFECT_API_KEY and PREFECT_API_URL to default.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def logout():\n \"\"\"\n Logout the current workspace.\n Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n \"\"\"\n current_profile = prefect.context.get_settings_context().profile\n if current_profile is None:\n exit_with_error(\"There is no current profile set.\")\n\n if current_profile.settings.get(PREFECT_API_KEY) is None:\n exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n update_current_profile(\n {\n PREFECT_API_URL: None,\n PREFECT_API_KEY: None,\n },\n )\n\n exit_with_success(\"Logged out from Prefect Cloud.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls
async
","text":"List available workspaces.
Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def ls():\n \"\"\"List available workspaces.\"\"\"\n\n confirm_logged_in()\n\n async with get_cloud_client() as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n exit_with_error(\n \"Unable to authenticate. Please ensure your credentials are correct.\"\n )\n\n current_workspace = get_current_workspace(workspaces)\n\n table = Table(caption=\"* active workspace\")\n table.add_column(\n \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n )\n\n for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n if workspace_handle == current_workspace.handle:\n table.add_row(f\"[green]* {workspace_handle}[/green]\")\n else:\n table.add_row(f\" {workspace_handle}\")\n\n app.console.print(table)\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.open","title":"open
async
","text":"Open the Prefect Cloud UI in the browser.
Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def open():\n \"\"\"\n Open the Prefect Cloud UI in the browser.\n \"\"\"\n confirm_logged_in()\n\n current_profile = prefect.context.get_settings_context().profile\n if current_profile is None:\n exit_with_error(\n \"There is no current profile set - set one with `prefect profile create\"\n \" <name>` and `prefect profile use <name>`.\"\n )\n\n current_workspace = get_current_workspace(\n await prefect.get_cloud_client().read_workspaces()\n )\n if current_workspace is None:\n exit_with_error(\n \"There is no current workspace set - set one with `prefect cloud workspace\"\n \" set --workspace <workspace>`.\"\n )\n\n ui_url = current_workspace.ui_url()\n\n await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_url)\n\n exit_with_success(f\"Opened {current_workspace.handle!r} in browser.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list
","text":"Given a list of options, display the values to user in a table and prompt them to select one.
Parameters:
Name Type Description Default
options Union[List[str], List[Tuple[Hashable, str]]] A list of options to present to the user. A list of tuples can be passed as key value pairs. If a value is chosen, the key will be returned. required
Returns:
Name Type Description
str str the selected option
Source code in prefect/cli/cloud/__init__.py
def prompt_select_from_list(\n console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n \"\"\"\n Given a list of options, display the values to user in a table and prompt them\n to select one.\n\n Args:\n options: A list of options to present to the user.\n A list of tuples can be passed as key value pairs. If a value is chosen, the\n key will be returned.\n\n Returns:\n str: the selected option\n \"\"\"\n\n current_idx = 0\n selected_option = None\n\n def build_table() -> Table:\n \"\"\"\n Generate a table of options. The `current_idx` will be highlighted.\n \"\"\"\n\n table = Table(box=False, header_style=None, padding=(0, 0))\n table.add_column(\n f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n justify=\"left\",\n no_wrap=True,\n )\n\n for i, option in enumerate(options):\n if isinstance(option, tuple):\n option = option[1]\n\n if i == current_idx:\n # Use blue for selected options\n table.add_row(\"[bold][blue]> \" + option)\n else:\n table.add_row(\" \" + option)\n return table\n\n with Live(build_table(), auto_refresh=False, console=console) as live:\n while selected_option is None:\n key = readchar.readkey()\n\n if key == readchar.key.UP:\n current_idx = current_idx - 1\n # wrap to bottom if at the top\n if current_idx < 0:\n current_idx = len(options) - 1\n elif key == readchar.key.DOWN:\n current_idx = current_idx + 1\n # wrap to top if at the bottom\n if current_idx >= len(options):\n current_idx = 0\n elif key == readchar.key.CTRL_C:\n # gracefully exit with no message\n exit_with_error(\"\")\n elif key == readchar.key.ENTER or key == readchar.key.CR:\n selected_option = options[current_idx]\n if isinstance(selected_option, tuple):\n selected_option = selected_option[0]\n\n live.update(build_table(), refresh=True)\n\n return selected_option\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set
async
","text":"Set current workspace. Shows a workspace picker if no workspace is specified.
Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def set(\n workspace_handle: str = typer.Option(\n None,\n \"--workspace\",\n \"-w\",\n help=(\n \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n ),\n ),\n):\n \"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n confirm_logged_in()\n\n async with get_cloud_client() as client:\n try:\n workspaces = await client.read_workspaces()\n except CloudUnauthorizedError:\n exit_with_error(\n \"Unable to authenticate. Please ensure your credentials are correct.\"\n )\n\n if workspace_handle:\n # Search for the given workspace\n for workspace in workspaces:\n if workspace.handle == workspace_handle:\n break\n else:\n exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n else:\n workspace = prompt_select_from_list(\n app.console,\n \"Which workspace would you like to use?\",\n [(workspace, workspace.handle) for workspace in workspaces],\n )\n\n profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n\n exit_with_success(\n f\"Successfully set workspace to {workspace.handle!r} in profile\"\n f\" {profile.name!r}.\"\n )\n
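For example (workspace handle illustrative):
$ prefect cloud workspace set --workspace \"my-account/my-workspace\"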
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit
","text":"Command line interface for working with concurrency limits.
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create
async
","text":"Create a concurrency limit against a tag.
This limit controls how many task runs with that tag may simultaneously be in a Running state.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n \"\"\"\n Create a concurrency limit against a tag.\n\n This limit controls how many task runs with that tag may simultaneously be in a\n Running state.\n \"\"\"\n\n async with get_client() as client:\n await client.create_concurrency_limit(\n tag=tag, concurrency_limit=concurrency_limit\n )\n await client.read_concurrency_limit_by_tag(tag)\n\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n Created concurrency limit with properties:\n tag - {tag!r}\n concurrency_limit - {concurrency_limit}\n\n Delete the concurrency limit:\n prefect concurrency-limit delete {tag!r}\n\n Inspect the concurrency limit:\n prefect concurrency-limit inspect {tag!r}\n \"\"\"\n )\n )\n
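For example, to cap task runs tagged database at five concurrent Running states (tag name illustrative), create the limit and apply the tag to a task:
$ prefect concurrency-limit create database 5
from prefect import task\n\n@task(tags=[\"database\"])  # task runs carrying this tag respect the limit\ndef query_table():\n    ...\n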
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete
async
","text":"Delete the concurrency limit set on the specified tag.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def delete(tag: str):\n \"\"\"\n Delete the concurrency limit set on the specified tag.\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.delete_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect
async
","text":"View details about a concurrency limit. active_slots
shows a list of TaskRun IDs which are currently using a concurrency slot.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def inspect(tag: str):\n \"\"\"\n View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n which are currently using a concurrency slot.\n \"\"\"\n\n async with get_client() as client:\n try:\n result = await client.read_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n trid_table = Table()\n trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n for trid in sorted(result.active_slots):\n trid_table.add_row(str(trid))\n\n cl_table.add_row(\n str(result.tag),\n str(result.concurrency_limit),\n Pretty(pendulum.instance(result.created).diff_for_humans()),\n Pretty(pendulum.instance(result.updated).diff_for_humans()),\n )\n\n group = Group(\n cl_table,\n trid_table,\n )\n app.console.print(Panel(group, expand=False))\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls
async
","text":"View all concurrency limits.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n \"\"\"\n View all concurrency limits.\n \"\"\"\n table = Table(\n title=\"Concurrency Limits\",\n caption=\"inspect a concurrency limit to show active task run IDs\",\n )\n table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n concurrency_limits = await client.read_concurrency_limits(\n limit=limit, offset=offset\n )\n\n for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n table.add_row(\n str(cl.tag),\n str(cl.id),\n str(cl.concurrency_limit),\n str(len(cl.active_slots)),\n )\n\n app.console.print(table)\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset
async
","text":"Resets the concurrency limit slots set on the specified tag.
Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def reset(tag: str):\n \"\"\"\n Resets the concurrency limit slots set on the specified tag.\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.reset_concurrency_limit_by_tag(tag=tag)\n except ObjectNotFound:\n exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config
","text":"Command line interface for working with profiles
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_
","text":"Change the value for a setting by setting the value in the current profile.
Source code in prefect/cli/config.py
@config_app.command(\"set\")\ndef set_(settings: List[str]):\n \"\"\"\n Change the value for a setting by setting the value in the current profile.\n \"\"\"\n parsed_settings = {}\n for item in settings:\n try:\n setting, value = item.split(\"=\", maxsplit=1)\n except ValueError:\n exit_with_error(\n f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n )\n\n if setting not in prefect.settings.SETTING_VARIABLES:\n exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n # Guard against changing settings that tweak config locations\n if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n exit_with_error(\n f\"Setting {setting!r} cannot be changed with this command. \"\n \"Use an environment variable instead.\"\n )\n\n parsed_settings[setting] = value\n\n try:\n new_profile = prefect.settings.update_current_profile(parsed_settings)\n except pydantic.ValidationError as exc:\n for error in exc.errors():\n setting = error[\"loc\"][0]\n message = error[\"msg\"]\n app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n exit_with_error(\"Invalid setting value.\")\n\n for setting, value in parsed_settings.items():\n app.console.print(f\"Set {setting!r} to {value!r}.\")\n if setting in os.environ:\n app.console.print(\n f\"[yellow]{setting} is also set by an environment variable which will \"\n f\"override your config value. Run `unset {setting}` to clear it.\"\n )\n\n if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n app.console.print(\n f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n )\n\n exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
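Settings are passed in VAR=VAL form; for example, to point the client at a local API server:
$ prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api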
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset
","text":"Restore the default value for a setting.
Removes the setting from the current profile.
Source code in prefect/cli/config.py
@config_app.command()\ndef unset(settings: List[str]):\n \"\"\"\n Restore the default value for a setting.\n\n Removes the setting from the current profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n profile = profiles[prefect.context.get_settings_context().profile.name]\n parsed = set()\n\n for setting in settings:\n if setting not in prefect.settings.SETTING_VARIABLES:\n exit_with_error(f\"Unknown setting name {setting!r}.\")\n # Cast to settings objects\n parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n for setting in parsed:\n if setting not in profile.settings:\n exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n profiles.update_profile(\n name=profile.name, settings={setting: None for setting in parsed}\n )\n\n for setting in settings:\n app.console.print(f\"Unset {setting!r}.\")\n\n if setting in os.environ:\n app.console.print(\n f\"[yellow]{setting!r} is also set by an environment variable. \"\n f\"Use `unset {setting}` to clear it.\"\n )\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"Updated profile {profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate
","text":"Read and validate the current profile.
Deprecated settings will be automatically converted to new names unless both are set.
Source code in prefect/cli/config.py
@config_app.command()\ndef validate():\n \"\"\"\n Read and validate the current profile.\n\n Deprecated settings will be automatically converted to new names unless both are\n set.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n profile = profiles[prefect.context.get_settings_context().profile.name]\n changed = profile.convert_deprecated_renamed_settings()\n for old, new in changed:\n app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n for setting in profile.settings.keys():\n if setting.deprecated:\n app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n profile.validate_settings()\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(\"Configuration valid!\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view
","text":"Display the current settings.
Source code in prefect/cli/config.py
@config_app.command()\ndef view(\n show_defaults: Optional[bool] = typer.Option(\n False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n ),\n show_sources: Optional[bool] = typer.Option(\n True,\n \"--show-sources/--hide-sources\",\n help=(show_sources_help),\n ),\n show_secrets: Optional[bool] = typer.Option(\n False,\n \"--show-secrets/--hide-secrets\",\n help=\"Toggle display of secrets setting values.\",\n ),\n):\n \"\"\"\n Display the current settings.\n \"\"\"\n context = prefect.context.get_settings_context()\n\n # Get settings at each level, converted to a flat dictionary for easy comparison\n default_settings = prefect.settings.get_default_settings()\n env_settings = prefect.settings.get_settings_from_env()\n current_profile_settings = context.settings\n\n # Obfuscate secrets\n if not show_secrets:\n default_settings = default_settings.with_obfuscated_secrets()\n env_settings = env_settings.with_obfuscated_secrets()\n current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n # Display the profile first\n app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n settings_output = []\n\n # The combination of environment variables and profile settings that are in use\n profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n # Used to see which settings in current_profile_settings came from env vars\n env_overrides = env_settings.dict(exclude_unset=True)\n\n for key, value in profile_overrides.items():\n source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n source_blurb = f\" (from {source})\" if show_sources else \"\"\n settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n if show_defaults:\n for key, value in default_settings.dict().items():\n if key not in profile_overrides:\n source_blurb = \" (from defaults)\" if show_sources else \"\"\n settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n app.console.print(\"\\n\".join(sorted(settings_output)))\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deploy/","title":"deploy","text":"","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy","title":"prefect.cli.deploy
","text":"Module containing implementation for deploying projects.
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.deploy","title":"deploy
async
","text":"Deploy a flow from this project by creating a deployment.
Should be run from a project root directory.
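For example, a minimal invocation (the entrypoint, deployment name, and work pool name are illustrative):
$ prefect deploy ./flows/hello.py:hello_flow -n my-deployment -p my-work-pool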
Source code in prefect/cli/deploy.py
@app.command()\nasync def deploy(\n entrypoint: str = typer.Argument(\n None,\n help=(\n \"The path to a flow entrypoint within a project, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n flow_name: str = typer.Option(\n None,\n \"--flow\",\n \"-f\",\n help=\"DEPRECATED: The name of a registered flow to create a deployment for.\",\n ),\n names: List[str] = typer.Option(\n None,\n \"--name\",\n \"-n\",\n help=(\n \"The name to give the deployment. Can be a pattern. Examples:\"\n \" 'my-deployment', 'my-flow/my-deployment', 'my-deployment-*',\"\n \" '*-flow-name/deployment*'\"\n ),\n ),\n description: str = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the deployment. If not provided, the description\"\n \" will be populated from the flow's description.\"\n ),\n ),\n version: str = typer.Option(\n None, \"--version\", help=\"A version to give the deployment.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"One or more optional tags to apply to the deployment. Note: tags are used\"\n \" only for organizational purposes. For delegating work to agents, use the\"\n \" --work-queue flag.\"\n ),\n ),\n work_pool_name: str = SettingsOption(\n PREFECT_DEFAULT_WORK_POOL_NAME,\n \"-p\",\n \"--pool\",\n help=\"The work pool that will handle this deployment's runs.\",\n ),\n work_queue_name: str = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"The work queue that will handle this deployment's runs. \"\n \"It will be created if it doesn't already exist. Defaults to `None`.\"\n ),\n ),\n variables: List[str] = typer.Option(\n None,\n \"-v\",\n \"--variable\",\n help=(\n \"One or more job variable overrides for the work pool provided in the\"\n \" format of key=value string or a JSON object\"\n ),\n ),\n cron: List[str] = typer.Option(\n None,\n \"--cron\",\n help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n ),\n interval: List[int] = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) that will be used to set an\"\n \" IntervalSchedule on the deployment.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The anchor date for all interval schedules\"\n ),\n rrule: List[str] = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n ),\n timezone: str = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n trigger: List[str] = typer.Option(\n None,\n \"--trigger\",\n help=(\n \"Specifies a trigger for the deployment. The value can be a\"\n \" json string or path to `.yaml`/`.json` file. This flag can be used\"\n \" multiple times.\"\n ),\n ),\n param: List[str] = typer.Option(\n None,\n \"--param\",\n help=(\n \"An optional parameter override, values are parsed as JSON strings e.g.\"\n \" --param question=ultimate --param answer=42\"\n ),\n ),\n params: str = typer.Option(\n None,\n \"--params\",\n help=(\n \"An optional parameter override in a JSON string format e.g.\"\n ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n ),\n ),\n enforce_parameter_schema: bool = typer.Option(\n False,\n \"--enforce-parameter-schema\",\n help=(\n \"Whether to enforce the parameter schema on this deployment. 
If set to\"\n \" True, any parameters passed to this deployment must match the signature\"\n \" of the flow.\"\n ),\n ),\n deploy_all: bool = typer.Option(\n False,\n \"--all\",\n help=(\n \"Deploy all flows in the project. If a flow name or entrypoint is also\"\n \" provided, this flag will be ignored.\"\n ),\n ),\n prefect_file: Path = typer.Option(\n Path(\"prefect.yaml\"),\n \"--prefect-file\",\n help=\"Specify a custom path to a prefect.yaml file\",\n ),\n ci: bool = typer.Option(\n False,\n \"--ci\",\n help=(\n \"DEPRECATED: Please use the global '--no-prompt' flag instead: 'prefect\"\n \" --no-prompt deploy'.\\n\\nRun this command in CI mode. This will disable\"\n \" interactive prompts and will error if any required arguments are not\"\n \" provided.\"\n ),\n ),\n):\n \"\"\"\n Deploy a flow from this project by creating a deployment.\n\n Should be run from a project root directory.\n \"\"\"\n if ci:\n app.console.print(\n generate_deprecation_message(\n name=\"The `--ci` flag\",\n start_date=\"Jun 2023\",\n help=(\n \"Please use the global `--no-prompt` flag instead: `prefect\"\n \" --no-prompt deploy`.\"\n ),\n ),\n style=\"yellow\",\n )\n\n options = {\n \"entrypoint\": entrypoint,\n \"flow_name\": flow_name,\n \"description\": description,\n \"version\": version,\n \"tags\": tags,\n \"work_pool_name\": work_pool_name,\n \"work_queue_name\": work_queue_name,\n \"variables\": variables,\n \"cron\": cron,\n \"interval\": interval,\n \"anchor_date\": interval_anchor,\n \"rrule\": rrule,\n \"timezone\": timezone,\n \"triggers\": trigger,\n \"param\": param,\n \"params\": params,\n \"enforce_parameter_schema\": enforce_parameter_schema,\n }\n try:\n deploy_configs, actions = _load_deploy_configs_and_actions(\n prefect_file=prefect_file, ci=ci\n )\n parsed_names = []\n for name in names:\n if \"*\" in name:\n parsed_names.extend(_parse_name_from_pattern(deploy_configs, name))\n else:\n parsed_names.append(name)\n deploy_configs = _pick_deploy_configs(\n deploy_configs, parsed_names, deploy_all, ci\n )\n\n if len(deploy_configs) > 1:\n if any(options.values()):\n app.console.print(\n (\n \"You have passed options to the deploy command, but you are\"\n \" creating or updating multiple deployments. These options\"\n \" will be ignored.\"\n ),\n style=\"yellow\",\n )\n await _run_multi_deploy(\n deploy_configs=deploy_configs,\n actions=actions,\n deploy_all=deploy_all,\n ci=ci,\n prefect_file=prefect_file,\n )\n else:\n # Accommodate passing in -n flow-name/deployment-name as well as -n deployment-name\n options[\"names\"] = [\n name.split(\"/\", 1)[-1] if \"/\" in name else name for name in parsed_names\n ]\n\n await _run_single_deploy(\n deploy_config=deploy_configs[0] if deploy_configs else {},\n actions=actions,\n options=options,\n ci=ci,\n prefect_file=prefect_file,\n )\n except ValueError as exc:\n exit_with_error(str(exc))\n
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deployment/","title":"deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment
","text":"Command line interface for working with deployments.
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply
async
","text":"Create or update a deployment from a YAML file.
Source code in prefect/cli/deployment.py
@deployment_app.command(\n deprecated=True,\n deprecated_start_date=\"Mar 2024\",\n deprecated_name=\"deployment apply\",\n deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def apply(\n paths: List[str] = typer.Argument(\n ...,\n help=\"One or more paths to deployment YAML files.\",\n ),\n upload: bool = typer.Option(\n False,\n \"--upload\",\n help=(\n \"A flag that, when provided, uploads this deployment's files to remote\"\n \" storage.\"\n ),\n ),\n work_queue_concurrency: int = typer.Option(\n None,\n \"--limit\",\n \"-l\",\n help=(\n \"Sets the concurrency limit on the work queue that handles this\"\n \" deployment's runs\"\n ),\n ),\n):\n \"\"\"\n Create or update a deployment from a YAML file.\n \"\"\"\n deployment = None\n async with get_client() as client:\n for path in paths:\n try:\n deployment = await Deployment.load_from_yaml(path)\n app.console.print(\n f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n )\n except Exception as exc:\n exit_with_error(\n f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n )\n\n assert deployment\n\n await create_work_queue_and_set_concurrency_limit(\n deployment.work_queue_name,\n deployment.work_pool_name,\n work_queue_concurrency,\n )\n\n if upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n file_count = await deployment.upload_to_storage()\n if file_count:\n app.console.print(\n (\n f\"Successfully uploaded {file_count} files to\"\n f\" {deployment.location}\"\n ),\n style=\"green\",\n )\n else:\n app.console.print(\n (\n f\"Deployment storage {deployment.storage} does not have\"\n \" upload capabilities; no files uploaded.\"\n ),\n style=\"red\",\n )\n await check_work_pool_exists(\n work_pool_name=deployment.work_pool_name, client=client\n )\n\n if client.server_type != ServerType.CLOUD and deployment.triggers:\n app.console.print(\n (\n \"Deployment triggers are only supported on \"\n f\"Prefect Cloud. Triggers defined in {path!r} will be \"\n \"ignored.\"\n ),\n style=\"red\",\n )\n\n deployment_id = await deployment.apply()\n app.console.print(\n (\n f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n f\" successfully created with id '{deployment_id}'.\"\n ),\n style=\"green\",\n )\n\n if PREFECT_UI_URL:\n app.console.print(\n \"View Deployment in UI:\"\n f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n )\n\n if deployment.work_pool_name is not None:\n await _print_deployment_work_pool_instructions(\n work_pool_name=deployment.work_pool_name, client=client\n )\n elif deployment.work_queue_name is not None:\n app.console.print(\n \"\\nTo execute flow runs from this deployment, start an agent that\"\n f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n )\n app.console.print(\n f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n style=\"blue\",\n )\n else:\n app.console.print(\n (\n \"\\nThis deployment does not specify a work queue name, which\"\n \" means agents will not be able to pick up its runs. To add a\"\n \" work queue, edit the deployment spec and re-run this command,\"\n \" or visit the deployment in the UI.\"\n ),\n style=\"red\",\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build
async
","text":"Generate a deployment YAML from /path/to/file.py:flow_function
Source code in prefect/cli/deployment.py
@deployment_app.command(\n deprecated=True,\n deprecated_start_date=\"Mar 2024\",\n deprecated_name=\"deployment build\",\n deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def build(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a flow entrypoint, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n name: str = typer.Option(\n None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n ),\n description: str = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the deployment. If not provided, the description\"\n \" will be populated from the flow's description.\"\n ),\n ),\n version: str = typer.Option(\n None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"One or more optional tags to apply to the deployment. Note: tags are used\"\n \" only for organizational purposes. For delegating work to agents, use the\"\n \" --work-queue flag.\"\n ),\n ),\n work_queue_name: str = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"The work queue that will handle this deployment's runs. \"\n \"It will be created if it doesn't already exist. Defaults to `None`. \"\n \"Note that if a work queue is not set, work will not be scheduled.\"\n ),\n ),\n work_pool_name: str = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The work pool that will handle this deployment's runs.\",\n ),\n work_queue_concurrency: int = typer.Option(\n None,\n \"--limit\",\n \"-l\",\n help=(\n \"Sets the concurrency limit on the work queue that handles this\"\n \" deployment's runs\"\n ),\n ),\n infra_type: str = typer.Option(\n None,\n \"--infra\",\n \"-i\",\n help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n + listrepr(builtin_infrastructure_types, sep=\", \"),\n ),\n infra_block: str = typer.Option(\n None,\n \"--infra-block\",\n \"-ib\",\n help=\"The slug of the infrastructure block to use as a template.\",\n ),\n overrides: List[str] = typer.Option(\n None,\n \"--override\",\n help=(\n \"One or more optional infrastructure overrides provided as a dot delimited\"\n \" path, e.g., `env.env_key=env_value`\"\n ),\n ),\n storage_block: str = typer.Option(\n None,\n \"--storage-block\",\n \"-sb\",\n help=(\n \"The slug of a remote storage block. 
Use the syntax:\"\n \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n \" implements the WritableDeploymentStorage interface such as\"\n \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n \" 'gcs-bucket'\"\n ),\n ),\n skip_upload: bool = typer.Option(\n False,\n \"--skip-upload\",\n help=(\n \"A flag that, when provided, skips uploading this deployment's files to\"\n \" remote storage.\"\n ),\n ),\n cron: str = typer.Option(\n None,\n \"--cron\",\n help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n ),\n interval: int = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) that will be used to set an\"\n \" IntervalSchedule on the deployment.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n ),\n rrule: str = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n ),\n timezone: str = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n path: str = typer.Option(\n None,\n \"--path\",\n help=(\n \"An optional path to specify a subdirectory of remote storage to upload to,\"\n \" or to point to a subdirectory of a locally stored flow.\"\n ),\n ),\n output: str = typer.Option(\n None,\n \"--output\",\n \"-o\",\n help=\"An optional filename to write the deployment file to.\",\n ),\n _apply: bool = typer.Option(\n False,\n \"--apply\",\n \"-a\",\n help=(\n \"An optional flag to automatically register the resulting deployment with\"\n \" the API.\"\n ),\n ),\n param: List[str] = typer.Option(\n None,\n \"--param\",\n help=(\n \"An optional parameter override, values are parsed as JSON strings e.g.\"\n \" --param question=ultimate --param answer=42\"\n ),\n ),\n params: str = typer.Option(\n None,\n \"--params\",\n help=(\n \"An optional parameter override in a JSON string format e.g.\"\n ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n ),\n ),\n no_schedule: bool = typer.Option(\n False,\n \"--no-schedule\",\n help=\"An optional flag to disable scheduling for this deployment.\",\n ),\n):\n \"\"\"\n Generate a deployment YAML from /path/to/file.py:flow_function\n \"\"\"\n # validate inputs\n if not name:\n exit_with_error(\n \"A name for this deployment must be provided with the '--name' flag.\"\n )\n\n if (\n len([value for value in (cron, rrule, interval) if value is not None])\n + (1 if no_schedule else 0)\n > 1\n ):\n exit_with_error(\"Only one schedule type can be provided.\")\n\n if infra_block and infra_type:\n exit_with_error(\n \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n )\n\n output_file = None\n if output:\n output_file = Path(output)\n if output_file.suffix and output_file.suffix != \".yaml\":\n exit_with_error(\"Output file must be a '.yaml' file.\")\n else:\n output_file = output_file.with_suffix(\".yaml\")\n\n # validate flow\n try:\n fpath, obj_name = entrypoint.rsplit(\":\", 1)\n except ValueError as exc:\n if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n missing_flow_name_msg = (\n \"Your flow entrypoint must include the name of the function that is\"\n f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n )\n exit_with_error(missing_flow_name_msg)\n else:\n raise exc\n try:\n flow = await 
run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n except Exception as exc:\n exit_with_error(exc)\n app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n infra_overrides = {}\n for override in overrides or []:\n key, value = override.split(\"=\", 1)\n infra_overrides[key] = value\n\n if infra_block:\n infrastructure = await Block.load(infra_block)\n elif infra_type:\n # Create an instance of the given type\n infrastructure = Block.get_block_class_from_key(infra_type)()\n else:\n # will reset to a default of Process is no infra is present on the\n # server-side definition of this deployment\n infrastructure = None\n\n if interval_anchor and not interval:\n exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n schedule = None\n if cron:\n cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n schedule = CronSchedule(\n **{k: v for k, v in cron_kwargs.items() if v is not None}\n )\n elif interval:\n interval_kwargs = {\n \"interval\": timedelta(seconds=interval),\n \"anchor_date\": interval_anchor,\n \"timezone\": timezone,\n }\n schedule = IntervalSchedule(\n **{k: v for k, v in interval_kwargs.items() if v is not None}\n )\n elif rrule:\n try:\n schedule = RRuleSchedule(**json.loads(rrule))\n if timezone:\n # override timezone if specified via CLI argument\n schedule.timezone = timezone\n except json.JSONDecodeError:\n schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n # parse storage_block\n if storage_block:\n block_type, block_name, *block_path = storage_block.split(\"/\")\n if block_path and path:\n exit_with_error(\n \"Must provide a `path` explicitly or provide one on the storage block\"\n \" specification, but not both.\"\n )\n elif not path:\n path = \"/\".join(block_path)\n storage_block = f\"{block_type}/{block_name}\"\n storage = await Block.load(storage_block)\n else:\n storage = None\n\n if create_default_ignore_file(path=\".\"):\n app.console.print(\n (\n \"Default '.prefectignore' file written to\"\n f\" {(Path('.') / '.prefectignore').absolute()}\"\n ),\n style=\"green\",\n )\n\n if param and (params is not None):\n exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n parameters = dict()\n\n if param:\n for p in param or []:\n k, unparsed_value = p.split(\"=\", 1)\n try:\n v = json.loads(unparsed_value)\n app.console.print(\n f\"The parameter value {unparsed_value} is parsed as a JSON string\"\n )\n except json.JSONDecodeError:\n v = unparsed_value\n parameters[k] = v\n\n if params is not None:\n parameters = json.loads(params)\n\n # set up deployment object\n entrypoint = (\n f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n )\n\n init_kwargs = dict(\n path=path,\n entrypoint=entrypoint,\n version=version,\n storage=storage,\n infra_overrides=infra_overrides or {},\n )\n\n if parameters:\n init_kwargs[\"parameters\"] = parameters\n\n if description:\n init_kwargs[\"description\"] = description\n\n # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n # we let `build_from_flow` load them from the server\n if schedule or no_schedule:\n init_kwargs.update(schedule=schedule)\n if tags:\n init_kwargs.update(tags=tags)\n if infrastructure:\n init_kwargs.update(infrastructure=infrastructure)\n if work_queue_name:\n init_kwargs.update(work_queue_name=work_queue_name)\n if work_pool_name:\n init_kwargs.update(work_pool_name=work_pool_name)\n\n deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n deployment = await 
Deployment.build_from_flow(\n flow=flow,\n name=name,\n output=deployment_loc,\n skip_upload=skip_upload,\n apply=False,\n **init_kwargs,\n )\n app.console.print(\n f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n style=\"green\",\n )\n\n await create_work_queue_and_set_concurrency_limit(\n deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n )\n\n # we process these separately for informative output\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n file_count = await deployment.upload_to_storage()\n if file_count:\n app.console.print(\n (\n f\"Successfully uploaded {file_count} files to\"\n f\" {deployment.location}\"\n ),\n style=\"green\",\n )\n else:\n app.console.print(\n (\n f\"Deployment storage {deployment.storage} does not have upload\"\n \" capabilities; no files uploaded. Pass --skip-upload to suppress\"\n \" this warning.\"\n ),\n style=\"green\",\n )\n\n if _apply:\n async with get_client() as client:\n await check_work_pool_exists(\n work_pool_name=deployment.work_pool_name, client=client\n )\n deployment_id = await deployment.apply()\n app.console.print(\n (\n f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n f\" successfully created with id '{deployment_id}'.\"\n ),\n style=\"green\",\n )\n if deployment.work_pool_name is not None:\n await _print_deployment_work_pool_instructions(\n work_pool_name=deployment.work_pool_name, client=client\n )\n\n elif deployment.work_queue_name is not None:\n app.console.print(\n \"\\nTo execute flow runs from this deployment, start an agent that\"\n f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n )\n app.console.print(\n f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n style=\"blue\",\n )\n else:\n app.console.print(\n (\n \"\\nThis deployment does not specify a work queue name, which\"\n \" means agents will not be able to pick up its runs. To add a\"\n \" work queue, edit the deployment spec and re-run this command,\"\n \" or visit the deployment in the UI.\"\n ),\n style=\"red\",\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.clear_schedules","title":"clear_schedules
async
","text":"Clear all schedules for a deployment.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"clear\")\nasync def clear_schedules(\n deployment_name: str,\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Clear all schedules for a deployment.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n await client.read_flow(deployment.flow_id)\n\n # Get input from user: confirm removal of all schedules\n if not assume_yes and not typer.confirm(\n \"Are you sure you want to clear all schedules for this deployment?\",\n ):\n exit_with_error(\"Clearing schedules cancelled.\")\n\n for schedule in deployment.schedules:\n try:\n await client.delete_deployment_schedule(deployment.id, schedule.id)\n except ObjectNotFound:\n pass\n\n exit_with_success(f\"Cleared all schedules for deployment {deployment_name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.create_schedule","title":"create_schedule
async
","text":"Create a schedule for a given deployment.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"create\")\nasync def create_schedule(\n name: str,\n interval: Optional[float] = typer.Option(\n None,\n \"--interval\",\n help=\"An interval to schedule on, specified in seconds\",\n min=0.0001,\n ),\n interval_anchor: Optional[str] = typer.Option(\n None,\n \"--anchor-date\",\n help=\"The anchor date for an interval schedule\",\n ),\n rrule_string: Optional[str] = typer.Option(\n None, \"--rrule\", help=\"Deployment schedule rrule string\"\n ),\n cron_string: Optional[str] = typer.Option(\n None, \"--cron\", help=\"Deployment schedule cron string\"\n ),\n cron_day_or: Optional[str] = typer.Option(\n None,\n \"--day_or\",\n help=\"Control how croniter handles `day` and `day_of_week` entries\",\n ),\n timezone: Optional[str] = typer.Option(\n None,\n \"--timezone\",\n help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n ),\n active: Optional[bool] = typer.Option(\n True,\n \"--active\",\n help=\"Whether the schedule is active. Defaults to True.\",\n ),\n replace: Optional[bool] = typer.Option(\n False,\n \"--replace\",\n help=\"Replace the deployment's current schedule(s) with this new schedule.\",\n ),\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Create a schedule for a given deployment.\n \"\"\"\n assert_deployment_name_format(name)\n\n if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n exit_with_error(\n \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n )\n\n schedule = None\n\n if interval_anchor and not interval:\n exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n if interval is not None:\n if interval_anchor:\n try:\n pendulum.parse(interval_anchor)\n except ValueError:\n return exit_with_error(\"The anchor date must be a valid date string.\")\n interval_schedule = {\n \"interval\": interval,\n \"anchor_date\": interval_anchor,\n \"timezone\": timezone,\n }\n schedule = IntervalSchedule(\n **{k: v for k, v in interval_schedule.items() if v is not None}\n )\n\n if cron_string is not None:\n cron_schedule = {\n \"cron\": cron_string,\n \"day_or\": cron_day_or,\n \"timezone\": timezone,\n }\n schedule = CronSchedule(\n **{k: v for k, v in cron_schedule.items() if v is not None}\n )\n\n if rrule_string is not None:\n # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n if \"TZID\" in rrule_string and not timezone:\n exit_with_error(\n \"You can provide a timezone by providing a dict with a `timezone` key\"\n \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n \" timezone by passing in a --timezone argument.\"\n )\n try:\n schedule = RRuleSchedule(**json.loads(rrule_string))\n if timezone:\n # override timezone if specified via CLI argument\n schedule.timezone = timezone\n except json.JSONDecodeError:\n schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n if schedule is None:\n return exit_with_success(\n \"Could not create a valid schedule from the provided options.\"\n )\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {name!r} not found!\")\n\n num_schedules = len(deployment.schedules)\n noun = \"schedule\" if num_schedules == 1 else \"schedules\"\n\n if replace and num_schedules > 0:\n if not assume_yes and not typer.confirm(\n f\"Are you sure you want to replace {num_schedules} {noun} for {name}?\"\n ):\n return exit_with_error(\"Schedule replacement cancelled.\")\n\n for existing_schedule in deployment.schedules:\n try:\n await client.delete_deployment_schedule(\n deployment.id, existing_schedule.id\n )\n except ObjectNotFound:\n pass\n\n await client.create_deployment_schedules(deployment.id, [(schedule, active)])\n\n if replace and num_schedules > 0:\n exit_with_success(f\"Replaced existing deployment {noun} with new schedule!\")\n else:\n exit_with_success(\"Created deployment schedule!\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete
async
","text":"Delete a deployment.
Examples:
$ prefect deployment delete test_flow/test_deployment
$ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6
Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def delete(\n name: Optional[str] = typer.Argument(\n None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n ),\n deployment_id: Optional[str] = typer.Option(\n None, \"--id\", help=\"A deployment id to search for if no name is given\"\n ),\n):\n \"\"\"\n Delete a deployment.\n\n \\b\n Examples:\n \\b\n $ prefect deployment delete test_flow/test_deployment\n $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n \"\"\"\n async with get_client() as client:\n if name is None and deployment_id is not None:\n try:\n await client.delete_deployment(deployment_id)\n exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n elif name is not None:\n try:\n deployment = await client.read_deployment_by_name(name)\n await client.delete_deployment(deployment.id)\n exit_with_success(f\"Deleted deployment '{name}'.\")\n except ObjectNotFound:\n exit_with_error(f\"Deployment {name!r} not found!\")\n else:\n exit_with_error(\"Must provide a deployment name or id\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete_schedule","title":"delete_schedule
async
","text":"Delete a deployment schedule.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"delete\")\nasync def delete_schedule(\n deployment_name: str,\n schedule_id: UUID,\n assume_yes: Optional[bool] = typer.Option(\n False,\n \"--accept-yes\",\n \"-y\",\n help=\"Accept the confirmation prompt without prompting\",\n ),\n):\n \"\"\"\n Delete a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if not assume_yes and not typer.confirm(\n f\"Are you sure you want to delete this schedule: {schedule.schedule}\",\n ):\n return exit_with_error(\"Deletion cancelled.\")\n\n try:\n await client.delete_deployment_schedule(deployment.id, schedule_id)\n except ObjectNotFound:\n exit_with_error(\"Deployment schedule not found!\")\n\n exit_with_success(f\"Deleted deployment schedule {schedule_id}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect
async
","text":"View details about a deployment.
Example:
$ prefect deployment inspect \"hello-world/my-deployment\"
{ 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } }
Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def inspect(name: str):\n \"\"\"\n View details about a deployment.\n\n \\b\n Example:\n \\b\n $ prefect deployment inspect \"hello-world/my-deployment\"\n {\n 'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n 'created': '2022-08-01T18:36:25.192102+00:00',\n 'updated': '2022-08-01T18:36:25.188166+00:00',\n 'name': 'my-deployment',\n 'description': None,\n 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n 'schedules': None,\n 'parameters': {'name': 'Marvin'},\n 'tags': ['test'],\n 'parameter_openapi_schema': {\n 'title': 'Parameters',\n 'type': 'object',\n 'properties': {\n 'name': {\n 'title': 'name',\n 'type': 'string'\n }\n },\n 'required': ['name']\n },\n 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n 'infrastructure': {\n 'type': 'process',\n 'env': {},\n 'labels': {},\n 'name': None,\n 'command': ['python', '-m', 'prefect.engine'],\n 'stream_output': True\n }\n }\n\n \"\"\"\n assert_deployment_name_format(name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(name)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {name!r} not found!\")\n\n deployment_json = deployment.dict(json_compatible=True)\n\n if deployment.infrastructure_document_id:\n deployment_json[\"infrastructure\"] = Block._from_block_document(\n await client.read_block_document(deployment.infrastructure_document_id)\n ).dict(\n exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n )\n\n if client.server_type == ServerType.CLOUD:\n deployment_json[\"automations\"] = [\n a.dict()\n for a in await client.read_resource_related_automations(\n f\"prefect.deployment.{deployment.id}\"\n )\n ]\n\n app.console.print(Pretty(deployment_json))\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.list_schedules","title":"list_schedules
async
","text":"View all schedules for a deployment.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"ls\")\nasync def list_schedules(deployment_name: str):\n \"\"\"\n View all schedules for a deployment.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n def sort_by_created_key(schedule: DeploymentSchedule): # noqa\n return pendulum.now(\"utc\") - schedule.created\n\n def schedule_details(schedule: DeploymentSchedule):\n if isinstance(schedule.schedule, IntervalSchedule):\n return f\"interval: {schedule.schedule.interval}s\"\n elif isinstance(schedule.schedule, CronSchedule):\n return f\"cron: {schedule.schedule.cron}\"\n elif isinstance(schedule.schedule, RRuleSchedule):\n return f\"rrule: {schedule.schedule.rrule}\"\n else:\n return \"unknown\"\n\n table = Table(\n title=\"Deployment Schedules\",\n )\n table.add_column(\"ID\", style=\"blue\", no_wrap=True)\n table.add_column(\"Schedule\", style=\"cyan\", no_wrap=False)\n table.add_column(\"Active\", style=\"purple\", no_wrap=True)\n\n for schedule in sorted(deployment.schedules, key=sort_by_created_key):\n table.add_row(\n str(schedule.id),\n schedule_details(schedule),\n str(schedule.active),\n )\n\n app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls
async
","text":"View all deployments or deployments for specific flows.
Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n \"\"\"\n View all deployments or deployments for specific flows.\n \"\"\"\n async with get_client() as client:\n deployments = await client.read_deployments(\n flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n )\n flows = {\n flow.id: flow\n for flow in await client.read_flows(\n flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n )\n }\n\n def sort_by_name_keys(d):\n return flows[d.flow_id].name, d.name\n\n def sort_by_created_key(d):\n return pendulum.now(\"utc\") - d.created\n\n table = Table(\n title=\"Deployments\",\n )\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n for deployment in sorted(\n deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n ):\n table.add_row(\n f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n str(deployment.id),\n )\n\n app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule
async
","text":"Pause a deployment schedule.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"pause\")\nasync def pause_schedule(deployment_name: str, schedule_id: UUID):\n \"\"\"\n Pause a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if not schedule.active:\n return exit_with_error(\n f\"Deployment schedule {schedule_id} is already inactive\"\n )\n\n await client.update_deployment_schedule(\n deployment.id, schedule_id, active=False\n )\n exit_with_success(\n f\"Paused schedule {schedule.schedule} for deployment {deployment_name}\"\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule
async
","text":"Resume a deployment schedule.
Source code in prefect/cli/deployment.py
@schedule_app.command(\"resume\")\nasync def resume_schedule(deployment_name: str, schedule_id: UUID):\n \"\"\"\n Resume a deployment schedule.\n \"\"\"\n assert_deployment_name_format(deployment_name)\n\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n try:\n schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n except IndexError:\n return exit_with_error(\"Deployment schedule not found!\")\n\n if schedule.active:\n return exit_with_error(\n f\"Deployment schedule {schedule_id} is already active\"\n )\n\n await client.update_deployment_schedule(deployment.id, schedule_id, active=True)\n exit_with_success(\n f\"Resumed schedule {schedule.schedule} for deployment {deployment_name}\"\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run
async
","text":"Create a flow run for the given flow and deployment.
The flow run will be scheduled to run immediately unless --start-in or --start-at is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the --watch flag.
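For example, passing a parameter and watching the run to completion (the names and values are illustrative):
$ prefect deployment run my-flow/my-deployment --param answer=42 --watch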
prefect/cli/deployment.py
@deployment_app.command()\nasync def run(\n name: Optional[str] = typer.Argument(\n None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n ),\n deployment_id: Optional[str] = typer.Option(\n None,\n \"--id\",\n help=(\"A deployment id to search for if no name is given\"),\n ),\n job_variables: List[str] = typer.Option(\n None,\n \"-jv\",\n \"--job-variable\",\n help=(\n \"A key, value pair (key=value) specifying a flow run job variable. The value will\"\n \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n \" job variable values.\"\n ),\n ),\n params: List[str] = typer.Option(\n None,\n \"-p\",\n \"--param\",\n help=(\n \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n \" parameter values.\"\n ),\n ),\n multiparams: Optional[str] = typer.Option(\n None,\n \"--params\",\n help=(\n \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n \"parameters passed with `--param` will take precedence over these values.\"\n ),\n ),\n start_in: Optional[str] = typer.Option(\n None,\n \"--start-in\",\n help=(\n \"A human-readable string specifying a time interval to wait before starting\"\n \" the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'.\"\n ),\n ),\n start_at: Optional[str] = typer.Option(\n None,\n \"--start-at\",\n help=(\n \"A human-readable string specifying a time to start the flow run. E.g.\"\n \" 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'.\"\n ),\n ),\n tags: List[str] = typer.Option(\n [],\n \"--tag\",\n help=(\"Tag(s) to be applied to flow run.\"),\n ),\n watch: bool = typer.Option(\n False,\n \"--watch\",\n help=(\"Whether to poll the flow run until a terminal state is reached.\"),\n ),\n watch_interval: Optional[int] = typer.Option(\n None,\n \"--watch-interval\",\n help=(\"How often to poll the flow run for state changes (in seconds).\"),\n ),\n watch_timeout: Optional[int] = typer.Option(\n None,\n \"--watch-timeout\",\n help=(\"Timeout for --watch.\"),\n ),\n):\n \"\"\"\n Create a flow run for the given flow and deployment.\n\n The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n The flow run will not execute until a worker starts.\n To watch the flow run until it reaches a terminal state, use the `--watch` flag.\n \"\"\"\n import dateparser\n\n now = pendulum.now(\"UTC\")\n\n multi_params = {}\n if multiparams:\n if multiparams == \"-\":\n multiparams = sys.stdin.read()\n if not multiparams:\n exit_with_error(\"No data passed to stdin\")\n\n try:\n multi_params = json.loads(multiparams)\n except ValueError as exc:\n exit_with_error(f\"Failed to parse JSON: {exc}\")\n if watch_interval and not watch:\n exit_with_error(\n \"`--watch-interval` can only be used with `--watch`.\",\n )\n cli_params = _load_json_key_values(params, \"parameter\")\n conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n if conflicting_keys:\n app.console.print(\n \"The following parameters were specified by `--param` and `--params`, the \"\n f\"`--param` value will be used: {conflicting_keys}\"\n )\n parameters = {**multi_params, **cli_params}\n\n job_vars = _load_json_key_values(job_variables, \"job variable\")\n if start_in and start_at:\n exit_with_error(\n \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n )\n\n elif start_in is None and start_at is None:\n scheduled_start_time = now\n human_dt_diff = \" (now)\"\n 
else:\n if start_in:\n start_time_raw = \"in \" + start_in\n else:\n start_time_raw = \"at \" + start_at\n with warnings.catch_warnings():\n # PyTZ throws a warning based on dateparser usage of the library\n # See https://github.com/scrapinghub/dateparser/issues/1089\n warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n try:\n start_time_parsed = dateparser.parse(\n start_time_raw,\n settings={\n \"TO_TIMEZONE\": \"UTC\",\n \"RETURN_AS_TIMEZONE_AWARE\": False,\n \"PREFER_DATES_FROM\": \"future\",\n \"RELATIVE_BASE\": datetime.fromtimestamp(\n now.timestamp(), tz=timezone.utc\n ),\n },\n )\n\n except Exception as exc:\n exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n if start_time_parsed is None:\n exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n scheduled_start_time = pendulum.instance(start_time_parsed)\n human_dt_diff = (\n \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n )\n\n async with get_client() as client:\n deployment = await get_deployment(client, name, deployment_id)\n flow = await client.read_flow(deployment.flow_id)\n\n deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n if unknown_keys:\n available_parameters = (\n (\n \"The following parameters are available on the deployment: \"\n + listrepr(deployment_parameters, sep=\", \")\n )\n if deployment_parameters\n else \"This deployment does not accept parameters.\"\n )\n\n exit_with_error(\n \"The following parameters were specified but not found on the \"\n f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n f\"\\n{available_parameters}\"\n )\n\n app.console.print(\n f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n )\n\n try:\n flow_run = await client.create_flow_run_from_deployment(\n deployment.id,\n parameters=parameters,\n state=Scheduled(scheduled_time=scheduled_start_time),\n tags=tags,\n job_variables=job_vars,\n )\n except PrefectHTTPStatusError as exc:\n detail = exc.response.json().get(\"detail\")\n if detail:\n exit_with_error(\n exc.response.json()[\"detail\"],\n )\n else:\n raise\n\n if PREFECT_UI_URL:\n run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n else:\n run_url = \"<no dashboard available>\"\n\n datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n scheduled_display = (\n datetime_local_tz.to_datetime_string()\n + \" \"\n + datetime_local_tz.tzname()\n + human_dt_diff\n )\n\n app.console.print(f\"Created flow run {flow_run.name!r}.\")\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n \u2514\u2500\u2500 UUID: {flow_run.id}\n \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n \u2514\u2500\u2500 Job Variables: {flow_run.job_variables}\n \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n \u2514\u2500\u2500 URL: {run_url}\n \"\"\"\n ).strip(),\n soft_wrap=True,\n )\n if watch:\n watch_interval = 5 if watch_interval is None else watch_interval\n app.console.print(f\"Watching flow run '{flow_run.name!r}'...\")\n finished_flow_run = await wait_for_flow_run(\n flow_run.id,\n timeout=watch_timeout,\n poll_interval=watch_interval,\n log_states=True,\n )\n finished_flow_run_state = finished_flow_run.state\n if finished_flow_run_state.is_completed():\n exit_with_success(\n f\"Flow run finished successfully in {finished_flow_run_state.name!r}.\"\n )\n exit_with_error(\n f\"Flow run finished in state 
{finished_flow_run_state.name!r}.\",\n code=1,\n )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter
","text":"configures yaml for dumping multiline strings Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data
Source code in prefect/cli/deployment.py
def str_presenter(dumper, data):\n \"\"\"\n configures yaml for dumping multiline strings\n Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n \"\"\"\n if len(data.splitlines()) > 1: # check for multiline string\n return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev
","text":"Command line interface for working with Prefect Server
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent
async
","text":"Starts a hot-reloading development agent process.
Source code in prefect/cli/dev.py
@dev_app.command()\nasync def agent(\n api_url: str = SettingsOption(PREFECT_API_URL),\n work_queues: List[str] = typer.Option(\n [\"default\"],\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the agent to pull from.\",\n ),\n):\n \"\"\"\n Starts a hot-reloading development agent process.\n \"\"\"\n # Delayed import since this is only a 'dev' dependency\n import watchfiles\n\n app.console.print(\"Creating hot-reloading agent process...\")\n\n try:\n await watchfiles.arun_process(\n prefect.__module_path__,\n target=agent_process_entrypoint,\n kwargs=dict(api=api_url, work_queues=work_queues),\n )\n except RuntimeError as err:\n # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n if str(err).strip() != \"Already borrowed\":\n raise\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint
","text":"An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start
.
prefect/cli/dev.py
def agent_process_entrypoint(**kwargs):\n \"\"\"\n An entrypoint for starting an agent in a subprocess. Adds a Rich console\n to the Typer app, processes Typer default parameters, then starts an agent.\n All kwargs are forwarded to `prefect.cli.agent.start`.\n \"\"\"\n import inspect\n\n import rich\n\n # import locally so only the `dev` command breaks if Typer internals change\n from typer.models import ParameterInfo\n\n # Typer does not process default parameters when calling a function\n # directly, so we must set `start_agent`'s default parameters manually.\n # get the signature of the `start_agent` function\n start_agent_signature = inspect.signature(start_agent)\n\n # for any arguments not present in kwargs, use the default value.\n for name, param in start_agent_signature.parameters.items():\n if name not in kwargs:\n # All `param.default` values for start_agent are Typer params that store the\n # actual default value in their `default` attribute and we must call\n # `param.default.default` to get the actual default value. We should also\n # ensure we extract the right default if non-Typer defaults are added\n # to `start_agent` in the future.\n if isinstance(param.default, ParameterInfo):\n default = param.default.default\n else:\n default = param.default\n\n # Some defaults are Prefect `SettingsOption.value` methods\n # that must be called to get the actual value.\n kwargs[name] = default() if callable(default) else default\n\n # add a console, because calling the agent start function directly\n # instead of via CLI call means `app` has no `console` attached.\n app.console = (\n rich.console.Console(\n highlight=False,\n color_system=\"auto\" if PREFECT_CLI_COLORS else None,\n soft_wrap=not PREFECT_CLI_WRAP_LINES.value(),\n )\n if not getattr(app, \"console\", None)\n else app.console\n )\n\n try:\n start_agent(**kwargs) # type: ignore\n except KeyboardInterrupt:\n # expected when watchfiles kills the process\n pass\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api
async
","text":"Starts a hot-reloading development API.
Source code in prefect/cli/dev.py
@dev_app.command()\nasync def api(\n host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n log_level: str = \"DEBUG\",\n services: bool = True,\n):\n \"\"\"\n Starts a hot-reloading development API.\n \"\"\"\n import watchfiles\n\n server_env = os.environ.copy()\n server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n command = [\n sys.executable,\n \"-m\",\n \"uvicorn\",\n \"--factory\",\n \"prefect.server.api.server:create_app\",\n \"--host\",\n str(host),\n \"--port\",\n str(port),\n \"--log-level\",\n log_level.lower(),\n ]\n\n app.console.print(f\"Running: {' '.join(command)}\")\n import signal\n\n stop_event = anyio.Event()\n start_command = partial(\n run_process, command=command, env=server_env, stream_output=True\n )\n\n async with anyio.create_task_group() as tg:\n try:\n server_pid = await tg.start(start_command)\n async for _ in watchfiles.awatch(\n prefect.__module_path__,\n stop_event=stop_event, # type: ignore\n ):\n # when any watched files change, restart the server\n app.console.print(\"Restarting Prefect Server...\")\n os.kill(server_pid, signal.SIGTERM) # type: ignore\n # start a new server\n server_pid = await tg.start(start_command)\n except RuntimeError as err:\n # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n if str(err).strip() != \"Already borrowed\":\n raise\n except KeyboardInterrupt:\n # exit cleanly on ctrl-c by killing the server process if it's\n # still running\n try:\n os.kill(server_pid, signal.SIGTERM) # type: ignore\n except ProcessLookupError:\n # process already exited\n pass\n\n stop_event.set()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs
","text":"Builds REST API reference documentation for static display.
Source code in prefect/cli/dev.py
@dev_app.command()\ndef build_docs(\n schema_path: str = None,\n):\n \"\"\"\n Builds REST API reference documentation for static display.\n \"\"\"\n exit_with_error_if_not_editable_install()\n\n from prefect.server.api.server import create_app\n\n schema = create_app(ephemeral=True).openapi()\n\n if not schema_path:\n schema_path = (\n prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n ).absolute()\n # overwrite info for display purposes\n schema[\"info\"] = {}\n with open(schema_path, \"w\") as f:\n json.dump(schema, f)\n app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image
","text":"Build a docker image for development.
Source code in prefect/cli/dev.py
@dev_app.command()\ndef build_image(\n arch: str = typer.Option(\n None,\n help=(\n \"The architecture to build the container for. \"\n \"Defaults to the architecture of the host Python. \"\n f\"[default: {platform.machine()}]\"\n ),\n ),\n python_version: str = typer.Option(\n None,\n help=(\n \"The Python version to build the container for. \"\n \"Defaults to the version of the host Python. \"\n f\"[default: {python_version_minor()}]\"\n ),\n ),\n flavor: str = typer.Option(\n None,\n help=(\n \"An alternative flavor to build, for example 'conda'. \"\n \"Defaults to the standard Python base image\"\n ),\n ),\n dry_run: bool = False,\n):\n \"\"\"\n Build a docker image for development.\n \"\"\"\n exit_with_error_if_not_editable_install()\n # TODO: Once https://github.com/tiangolo/typer/issues/354 is addressed, the\n # default can be set in the function signature\n arch = arch or platform.machine()\n python_version = python_version or python_version_minor()\n\n tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n # Here we use a subprocess instead of the docker-py client to easily stream output\n # as it comes\n command = [\n \"docker\",\n \"build\",\n str(prefect.__development_base_path__),\n \"--tag\",\n tag,\n \"--platform\",\n f\"linux/{arch}\",\n \"--build-arg\",\n \"PREFECT_EXTRAS=[dev]\",\n \"--build-arg\",\n f\"PYTHON_VERSION={python_version}\",\n ]\n\n if flavor:\n command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n if dry_run:\n print(\" \".join(command))\n return\n\n try:\n subprocess.check_call(command, shell=sys.platform == \"win32\")\n except subprocess.CalledProcessError:\n exit_with_error(\"Failed to build image!\")\n else:\n exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container
","text":"Run a docker container with local code mounted and installed.
Source code in prefect/cli/dev.py
@dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n \"\"\"\n Run a docker container with local code mounted and installed.\n \"\"\"\n exit_with_error_if_not_editable_install()\n import docker\n from docker.models.containers import Container\n\n client = docker.from_env()\n\n containers = client.containers.list()\n container_names = {container.name for container in containers}\n if name in container_names:\n exit_with_error(\n f\"Container {name!r} already exists. Specify a different name or stop \"\n \"the existing container.\"\n )\n\n blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n tag = tag or get_prefect_image_name()\n\n container: Container = client.containers.create(\n image=tag,\n command=[\n \"/bin/bash\",\n \"-c\",\n ( # noqa\n \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n f\" {blocking_cmd}\"\n ),\n ],\n name=name,\n auto_remove=True,\n working_dir=\"/opt/prefect/repo\",\n volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n shm_size=\"4G\",\n )\n\n print(f\"Starting container for image {tag!r}...\")\n container.start()\n\n print(\"Waiting for installation to complete\", end=\"\", flush=True)\n try:\n ready = False\n while not ready:\n print(\".\", end=\"\", flush=True)\n result = container.exec_run(\"test -f /READY\")\n ready = result.exit_code == 0\n if not ready:\n time.sleep(3)\n except BaseException:\n print(\"\\nInterrupted. Stopping container...\")\n container.stop()\n raise\n\n print(\n textwrap.dedent(\n f\"\"\"\n Container {container.name!r} is ready! To connect to the container, run:\n\n docker exec -it {container.name} /bin/bash\n \"\"\"\n )\n )\n\n if bg:\n print(\n textwrap.dedent(\n f\"\"\"\n The container will run forever. Stop the container with:\n\n docker stop {container.name}\n \"\"\"\n )\n )\n # Exit without stopping\n return\n\n try:\n print(\"Send a keyboard interrupt to exit...\")\n container.wait()\n except KeyboardInterrupt:\n pass # Avoid showing \"Abort\"\n finally:\n print(\"\\nStopping container...\")\n container.stop()\n
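For example, to leave the container running in the background under a custom name (flags mirror the parameters above; the name is illustrative):
$ prefect dev container --bg --name my-prefect-dev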
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest
","text":"Generates a Kubernetes manifest for development.
Example: $ prefect dev kubernetes-manifest | kubectl apply -f -
Source code in prefect/cli/dev.py
@dev_app.command()\ndef kubernetes_manifest():\n \"\"\"\n Generates a Kubernetes manifest for development.\n\n Example:\n $ prefect dev kubernetes-manifest | kubectl apply -f -\n \"\"\"\n exit_with_error_if_not_editable_install()\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"prefect_root_directory\": prefect.__development_base_path__,\n \"image_name\": get_prefect_image_name(),\n }\n )\n print(manifest)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start
async
","text":"Starts a hot-reloading development server with API, UI, and agent processes.
Each service also has an individual command if you wish to start it separately, and each can be excluded here with the corresponding --no-api, --no-ui, or --no-agent flag.
Source code in prefect/cli/dev.py
@dev_app.command()\nasync def start(\n exclude_api: bool = typer.Option(False, \"--no-api\"),\n exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n work_queues: List[str] = typer.Option(\n [\"default\"],\n \"-q\",\n \"--work-queue\",\n help=\"One or more work queue names for the dev agent to pull from.\",\n ),\n):\n \"\"\"\n Starts a hot-reloading development server with API, UI, and agent processes.\n\n Each service has an individual command if you wish to start them separately.\n Each service can be excluded here as well.\n \"\"\"\n async with anyio.create_task_group() as tg:\n if not exclude_api:\n tg.start_soon(\n partial(\n api,\n host=PREFECT_SERVER_API_HOST.value(),\n port=PREFECT_SERVER_API_PORT.value(),\n )\n )\n if not exclude_ui:\n tg.start_soon(ui)\n if not exclude_agent:\n # Hook the agent to the hosted API if running\n if not exclude_api:\n host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\" # noqa\n else:\n host = PREFECT_API_URL.value()\n tg.start_soon(agent, host, work_queues)\n
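For example, to run the API and an agent polling a specific work queue while skipping the UI (flags as defined above; the queue name is illustrative):
$ prefect dev start --no-ui -q my-queue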
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui
async
","text":"Starts a hot-reloading development UI.
Source code in prefect/cli/dev.py
@dev_app.command()\nasync def ui():\n \"\"\"\n Starts a hot-reloading development UI.\n \"\"\"\n exit_with_error_if_not_editable_install()\n with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n app.console.print(\"Installing npm packages...\")\n await run_process([\"npm\", \"install\"], stream_output=True)\n\n app.console.print(\"Starting UI development server...\")\n await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/flow/","title":"flow","text":"","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow","title":"prefect.cli.flow
","text":"Command line interface for working with flows.
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.ls","title":"ls
async
","text":"View flows.
Source code in prefect/cli/flow.py
@flow_app.command()\nasync def ls(\n limit: int = 15,\n):\n \"\"\"\n View flows.\n \"\"\"\n async with get_client() as client:\n flows = await client.read_flows(\n limit=limit,\n sort=FlowSort.CREATED_DESC,\n )\n\n table = Table(title=\"Flows\")\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Created\", no_wrap=True)\n\n for flow in flows:\n table.add_row(\n str(flow.id),\n str(flow.name),\n str(flow.created),\n )\n\n app.console.print(table)\n
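For example, to show only the five most recently created flows (--limit mirrors the parameter above):
$ prefect flow ls --limit 5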
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.serve","title":"serve
async
","text":"Serve a flow via an entrypoint.
Source code in prefect/cli/flow.py
@flow_app.command()\nasync def serve(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a file containing a flow and the name of the flow function in\"\n \" the format `./path/to/file.py:flow_func_name`.\"\n ),\n ),\n name: str = typer.Option(\n ...,\n \"--name\",\n \"-n\",\n help=\"The name to give the deployment created for the flow.\",\n ),\n description: Optional[str] = typer.Option(\n None,\n \"--description\",\n \"-d\",\n help=(\n \"The description to give the created deployment. If not provided, the\"\n \" description will be populated from the flow's description.\"\n ),\n ),\n version: Optional[str] = typer.Option(\n None, \"-v\", \"--version\", help=\"A version to give the created deployment.\"\n ),\n tags: Optional[List[str]] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=\"One or more optional tags to apply to the created deployment.\",\n ),\n cron: Optional[str] = typer.Option(\n None,\n \"--cron\",\n help=(\n \"A cron string that will be used to set a schedule for the created\"\n \" deployment.\"\n ),\n ),\n interval: Optional[int] = typer.Option(\n None,\n \"--interval\",\n help=(\n \"An integer specifying an interval (in seconds) between scheduled runs of\"\n \" the flow.\"\n ),\n ),\n interval_anchor: Optional[str] = typer.Option(\n None, \"--anchor-date\", help=\"The start date for an interval schedule.\"\n ),\n rrule: Optional[str] = typer.Option(\n None,\n \"--rrule\",\n help=\"An RRule that will be used to set a schedule for the created deployment.\",\n ),\n timezone: Optional[str] = typer.Option(\n None,\n \"--timezone\",\n help=\"Timezone to used scheduling flow runs e.g. 'America/New_York'\",\n ),\n pause_on_shutdown: bool = typer.Option(\n True,\n help=(\n \"If set, provided schedule will be paused when the serve command is\"\n \" stopped. If not set, the schedules will continue running.\"\n ),\n ),\n):\n \"\"\"\n Serve a flow via an entrypoint.\n \"\"\"\n runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown)\n try:\n schedules = []\n if interval or cron or rrule:\n schedule = construct_schedule(\n interval=interval,\n cron=cron,\n rrule=rrule,\n timezone=timezone,\n anchor_date=interval_anchor,\n )\n schedules = [MinimalDeploymentSchedule(schedule=schedule, active=True)]\n\n runner_deployment = RunnerDeployment.from_entrypoint(\n entrypoint=entrypoint,\n name=name,\n schedules=schedules,\n description=description,\n tags=tags or [],\n version=version,\n )\n except (MissingFlowError, ValueError) as exc:\n exit_with_error(str(exc))\n deployment_id = await runner.add_deployment(runner_deployment)\n\n help_message = (\n f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n f\" '{runner_deployment.flow_name}/{name}'\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message += (\n \"\\nYou can also run your flow via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n )\n\n app.console.print(help_message, soft_wrap=True)\n await runner.start()\n
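For example, to serve a flow on a daily schedule (the entrypoint path and names are illustrative; the flags are defined above):
$ prefect flow serve ./flows/etl.py:my_flow --name my-deployment --cron '0 12 * * *'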
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow_run/","title":"flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run
","text":"Command line interface for working with flow runs
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel
async
","text":"Cancel a flow run by ID.
Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def cancel(id: UUID):\n \"\"\"Cancel a flow run by ID.\"\"\"\n async with get_client() as client:\n cancelling_state = State(type=StateType.CANCELLING)\n try:\n result = await client.set_flow_run_state(\n flow_run_id=id, state=cancelling_state\n )\n except ObjectNotFound:\n exit_with_error(f\"Flow run '{id}' not found!\")\n\n if result.status == SetStateStatus.ABORT:\n exit_with_error(\n f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n f\" '{result.details.reason}'\"\n )\n\n exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
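For example (using a placeholder for the flow run ID):
$ prefect flow-run cancel <flow-run-id>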
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete
async
","text":"Delete a flow run by ID.
Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def delete(id: UUID):\n \"\"\"\n Delete a flow run by ID.\n \"\"\"\n async with get_client() as client:\n try:\n await client.delete_flow_run(id)\n except ObjectNotFound:\n exit_with_error(f\"Flow run '{id}' not found!\")\n\n exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect
async
","text":"View details about a flow run.
Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def inspect(id: UUID):\n \"\"\"\n View details about a flow run.\n \"\"\"\n async with get_client() as client:\n try:\n flow_run = await client.read_flow_run(id)\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n exit_with_error(f\"Flow run {id!r} not found!\")\n else:\n raise\n\n app.console.print(Pretty(flow_run))\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs
async
","text":"View logs for a flow run.
Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def logs(\n id: UUID,\n head: bool = typer.Option(\n False,\n \"--head\",\n \"-h\",\n help=(\n f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n \" all logs.\"\n ),\n ),\n num_logs: int = typer.Option(\n None,\n \"--num-logs\",\n \"-n\",\n help=(\n \"Number of logs to show when using the --head or --tail flag. If None,\"\n f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n ),\n min=1,\n ),\n reverse: bool = typer.Option(\n False,\n \"--reverse\",\n \"-r\",\n help=\"Reverse the logs order to print the most recent logs first\",\n ),\n tail: bool = typer.Option(\n False,\n \"--tail\",\n \"-t\",\n help=(\n f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n \" all logs.\"\n ),\n ),\n):\n \"\"\"\n View logs for a flow run.\n \"\"\"\n # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n offset = 0\n more_logs = True\n num_logs_returned = 0\n\n # if head and tail flags are being used together\n if head and tail:\n exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n user_specified_num_logs = (\n num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n if head or tail or num_logs\n else None\n )\n\n # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n if tail:\n offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n async with get_client() as client:\n # Get the flow run\n try:\n flow_run = await client.read_flow_run(id)\n except ObjectNotFound:\n exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n while more_logs:\n num_logs_to_return_from_page = (\n LOGS_DEFAULT_PAGE_SIZE\n if user_specified_num_logs is None\n else min(\n LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n )\n )\n\n # Get the next page of logs\n page_logs = await client.read_logs(\n log_filter=log_filter,\n limit=num_logs_to_return_from_page,\n offset=offset,\n sort=(\n LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n ),\n )\n\n for log in reversed(page_logs) if tail and not reverse else page_logs:\n app.console.print(\n # Print following the flow run format (declared in logging.yml)\n (\n f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n f\" {logging.getLevelName(log.level):7s} | Flow run\"\n f\" {flow_run.name!r} - {log.message}\"\n ),\n soft_wrap=True,\n )\n\n # Update the number of logs retrieved\n num_logs_returned += num_logs_to_return_from_page\n\n if tail:\n # If the current offset is not 0, update the offset for the next page\n if offset != 0:\n offset = (\n 0\n # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n if offset < LOGS_DEFAULT_PAGE_SIZE\n else offset - LOGS_DEFAULT_PAGE_SIZE\n )\n else:\n more_logs = False\n else:\n if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n offset += LOGS_DEFAULT_PAGE_SIZE\n else:\n # No more logs to show, exit\n more_logs = False\n
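For example, to show the 50 most recent log lines (flags as defined above; <flow-run-id> is a placeholder):
$ prefect flow-run logs <flow-run-id> --tail --num-logs 50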
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls
async
","text":"View recent flow runs or flow runs for specific flows
Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def ls(\n flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n state_type: List[StateType] = typer.Option(\n None, help=\"Type of the flow run's state\"\n ),\n):\n \"\"\"\n View recent flow runs or flow runs for specific flows\n \"\"\"\n\n state_filter = {}\n if state:\n state_filter[\"name\"] = {\"any_\": state}\n if state_type:\n state_filter[\"type\"] = {\"any_\": state_type}\n\n async with get_client() as client:\n flow_runs = await client.read_flow_runs(\n flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n limit=limit,\n sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n )\n flows_by_id = {\n flow.id: flow\n for flow in await client.read_flows(\n flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n )\n }\n\n table = Table(title=\"Flow Runs\")\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"State\", no_wrap=True)\n table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n flow = flows_by_id[flow_run.flow_id]\n timestamp = (\n flow_run.state.state_details.scheduled_time\n if flow_run.state.is_scheduled()\n else flow_run.state.timestamp\n )\n table.add_row(\n str(flow_run.id),\n str(flow.name),\n str(flow_run.name),\n str(flow_run.state.type.value),\n pendulum.instance(timestamp).diff_for_humans(),\n )\n\n app.console.print(table)\n
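For example, to list the ten most recent failed runs (options as defined above):
$ prefect flow-run ls --limit 10 --state-type FAILED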
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/kubernetes/","title":"kubernetes","text":"","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes","title":"prefect.cli.kubernetes
","text":"Command line interface for working with Prefect on Kubernetes
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_agent","title":"manifest_agent
","text":"Generates a manifest for deploying Agent on Kubernetes.
Example: $ prefect kubernetes manifest agent | kubectl apply -f -
Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"agent\")\ndef manifest_agent(\n api_url: str = SettingsOption(PREFECT_API_URL),\n api_key: str = SettingsOption(PREFECT_API_KEY),\n image_tag: str = typer.Option(\n get_prefect_image_name(),\n \"-i\",\n \"--image-tag\",\n help=\"The tag of a Docker image to use for the Agent.\",\n ),\n namespace: str = typer.Option(\n \"default\",\n \"-n\",\n \"--namespace\",\n help=\"A Kubernetes namespace to create agent in.\",\n ),\n work_queue: str = typer.Option(\n \"kubernetes\",\n \"-q\",\n \"--work-queue\",\n help=\"A work queue name for the agent to pull from.\",\n ),\n):\n \"\"\"\n Generates a manifest for deploying Agent on Kubernetes.\n\n Example:\n $ prefect kubernetes manifest agent | kubectl apply -f -\n \"\"\"\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-agent.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"api_url\": api_url,\n \"api_key\": api_key,\n \"image_name\": image_tag,\n \"namespace\": namespace,\n \"work_queue\": work_queue,\n }\n )\n print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_flow_run_job","title":"manifest_flow_run_job
async
","text":"Prints the default KubernetesJob Job manifest.
Use this file to fully customize your KubernetesJob
deployments.
Example: $ prefect kubernetes manifest flow-run-job
Output, a YAML file: apiVersion: batch/v1 kind: Job ...
Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"flow-run-job\")\nasync def manifest_flow_run_job():\n \"\"\"\n Prints the default KubernetesJob Job manifest.\n\n Use this file to fully customize your `KubernetesJob` deployments.\n\n \\b\n Example:\n \\b\n $ prefect kubernetes manifest flow-run-job\n\n \\b\n Output, a YAML file:\n \\b\n apiVersion: batch/v1\n kind: Job\n ...\n \"\"\"\n\n KubernetesJob.base_job_manifest()\n\n output = yaml.dump(KubernetesJob.base_job_manifest())\n\n # add some commentary where appropriate\n output = output.replace(\n \"metadata:\\n labels:\",\n \"metadata:\\n # labels are required, even if empty\\n labels:\",\n )\n output = output.replace(\n \"containers:\\n\",\n \"containers: # the first container is required\\n\",\n )\n output = output.replace(\n \"env: []\\n\",\n \"env: [] # env is required, even if empty\\n\",\n )\n\n print(output)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_server","title":"manifest_server
","text":"Generates a manifest for deploying Prefect on Kubernetes.
Example: $ prefect kubernetes manifest server | kubectl apply -f -
Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"server\")\ndef manifest_server(\n image_tag: str = typer.Option(\n get_prefect_image_name(),\n \"-i\",\n \"--image-tag\",\n help=\"The tag of a Docker image to use for the server.\",\n ),\n namespace: str = typer.Option(\n \"default\",\n \"-n\",\n \"--namespace\",\n help=\"A Kubernetes namespace to create the server in.\",\n ),\n log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n):\n \"\"\"\n Generates a manifest for deploying Prefect on Kubernetes.\n\n Example:\n $ prefect kubernetes manifest server | kubectl apply -f -\n \"\"\"\n\n template = Template(\n (\n prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-server.yaml\"\n ).read_text()\n )\n manifest = template.substitute(\n {\n \"image_name\": image_tag,\n \"namespace\": namespace,\n \"log_level\": log_level,\n }\n )\n print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/profile/","title":"profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile
","text":"Command line interface for working with profiles.
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create
","text":"Create a new profile.
Source code in prefect/cli/profile.py
@profile_app.command()\ndef create(\n name: str,\n from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n \"\"\"\n Create a new profile.\n \"\"\"\n\n profiles = prefect.settings.load_profiles()\n if name in profiles:\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n [red]Profile {name!r} already exists.[/red]\n To create a new profile, remove the existing profile first:\n\n prefect profile delete {name!r}\n \"\"\"\n ).strip()\n )\n raise typer.Exit(1)\n\n if from_name:\n if from_name not in profiles:\n exit_with_error(f\"Profile {from_name!r} not found.\")\n\n # Create a copy of the profile with a new name and add to the collection\n profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n else:\n profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n prefect.settings.save_profiles(profiles)\n\n app.console.print(\n textwrap.dedent(\n f\"\"\"\n Created profile with properties:\n name - {name!r}\n from name - {from_name or None}\n\n Use created profile for future, subsequent commands:\n prefect profile use {name!r}\n\n Use created profile temporarily for a single command:\n prefect -p {name!r} config view\n \"\"\"\n )\n )\n
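For example, to create a new profile by copying an existing one (profile names are illustrative; --from is defined above):
$ prefect profile create staging --from default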
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete
","text":"Delete the given profile.
Source code in prefect/cli/profile.py
@profile_app.command()\ndef delete(name: str):\n \"\"\"\n Delete the given profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n current_profile = prefect.context.get_settings_context().profile\n if current_profile.name == name:\n exit_with_error(\n f\"Profile {name!r} is the active profile. You must switch profiles before\"\n \" it can be deleted.\"\n )\n\n profiles.remove_profile(name)\n\n verb = \"Removed\"\n if name == \"default\":\n verb = \"Reset\"\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"{verb} profile {name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect
","text":"Display settings from a given profile; defaults to active.
Source code in prefect/cli/profile.py
@profile_app.command()\ndef inspect(\n name: Optional[str] = typer.Argument(\n None, help=\"Name of profile to inspect; defaults to active profile.\"\n ),\n):\n \"\"\"\n Display settings from a given profile; defaults to active.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name is None:\n current_profile = prefect.context.get_settings_context().profile\n if not current_profile:\n exit_with_error(\"No active profile set - please provide a name to inspect.\")\n name = current_profile.name\n print(f\"No name provided, defaulting to {name!r}\")\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n if not profiles[name].settings:\n # TODO: Consider instructing on how to add settings.\n print(f\"Profile {name!r} is empty.\")\n\n for setting, value in profiles[name].settings.items():\n app.console.print(f\"{setting.name}='{value}'\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls
","text":"List profile names.
Source code in prefect/cli/profile.py
@profile_app.command()\ndef ls():\n \"\"\"\n List profile names.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n current_profile = prefect.context.get_settings_context().profile\n current_name = current_profile.name if current_profile is not None else None\n\n table = Table(caption=\"* active profile\")\n table.add_column(\n \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n )\n\n for name in profiles:\n if name == current_name:\n table.add_row(f\"[green] * {name}[/green]\")\n else:\n table.add_row(f\" {name}\")\n app.console.print(table)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename
","text":"Change the name of a profile.
Source code in prefect/cli/profile.py
@profile_app.command()\ndef rename(name: str, new_name: str):\n \"\"\"\n Change the name of a profile.\n \"\"\"\n profiles = prefect.settings.load_profiles()\n if name not in profiles:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n if new_name in profiles:\n exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n profiles.remove_profile(name)\n\n # If the active profile was renamed switch the active profile to the new name.\n prefect.context.get_settings_context().profile\n if profiles.active_name == name:\n profiles.set_active(new_name)\n if os.environ.get(\"PREFECT_PROFILE\") == name:\n app.console.print(\n f\"You have set your current profile to {name!r} with the \"\n \"PREFECT_PROFILE environment variable. You must update this variable to \"\n f\"{new_name!r} to continue using the profile.\"\n )\n\n prefect.settings.save_profiles(profiles)\n exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use
async
","text":"Set the given profile to active.
Source code in prefect/cli/profile.py
@profile_app.command()\nasync def use(name: str):\n \"\"\"\n Set the given profile to active.\n \"\"\"\n status_messages = {\n ConnectionStatus.CLOUD_CONNECTED: (\n exit_with_success,\n f\"Connected to Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.CLOUD_ERROR: (\n exit_with_error,\n f\"Error connecting to Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.CLOUD_UNAUTHORIZED: (\n exit_with_error,\n f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n ),\n ConnectionStatus.ORION_CONNECTED: (\n exit_with_success,\n f\"Connected to Prefect server using profile {name!r}\",\n ),\n ConnectionStatus.ORION_ERROR: (\n exit_with_error,\n f\"Error connecting to Prefect server using profile {name!r}\",\n ),\n ConnectionStatus.EPHEMERAL: (\n exit_with_success,\n (\n f\"No Prefect server specified using profile {name!r} - the API will run\"\n \" in ephemeral mode.\"\n ),\n ),\n ConnectionStatus.INVALID_API: (\n exit_with_error,\n \"Error connecting to Prefect API URL\",\n ),\n }\n\n profiles = prefect.settings.load_profiles()\n if name not in profiles.names:\n exit_with_error(f\"Profile {name!r} not found.\")\n\n profiles.set_active(name)\n prefect.settings.save_profiles(profiles)\n\n with Progress(\n SpinnerColumn(),\n TextColumn(\"[progress.description]{task.description}\"),\n transient=False,\n ) as progress:\n progress.add_task(\n description=\"Checking API connectivity...\",\n total=None,\n )\n\n with use_profile(name, include_current_context=False):\n connection_status = await check_orion_connection()\n\n exit_method, msg = status_messages[connection_status]\n\n exit_method(msg)\n
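For example (the profile name is illustrative):
$ prefect profile use staging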
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/project/","title":"project","text":"","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project","title":"prefect.cli.project
","text":"Deprecated - Command line interface for working with projects.
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.clone","title":"clone
async
","text":"Clone an existing project for a given deployment.
Source code in prefect/cli/project.py
@project_app.command()\nasync def clone(\n deployment_name: str = typer.Option(\n None,\n \"--deployment\",\n \"-d\",\n help=\"The name of the deployment to clone a project for.\",\n ),\n deployment_id: str = typer.Option(\n None,\n \"--id\",\n \"-i\",\n help=\"The id of the deployment to clone a project for.\",\n ),\n):\n \"\"\"\n Clone an existing project for a given deployment.\n \"\"\"\n app.console.print(\n generate_deprecation_message(\n \"The `prefect project clone` command\",\n start_date=\"Jun 2023\",\n )\n )\n if deployment_name and deployment_id:\n exit_with_error(\n \"Can only pass one of deployment name or deployment ID options.\"\n )\n\n if not deployment_name and not deployment_id:\n exit_with_error(\"Must pass either a deployment name or deployment ID.\")\n\n if deployment_name:\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(deployment_name)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n else:\n async with get_client() as client:\n try:\n deployment = await client.read_deployment(deployment_id)\n except ObjectNotFound:\n exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n\n if deployment.pull_steps:\n output = await run_steps(deployment.pull_steps)\n app.console.out(output[\"directory\"])\n else:\n exit_with_error(\"No pull steps found, exiting early.\")\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.init","title":"init
async
","text":"Initialize a new project.
Source code in prefect/cli/project.py
@project_app.command()\n@app.command()\nasync def init(\n name: str = None,\n recipe: str = None,\n fields: List[str] = typer.Option(\n None,\n \"-f\",\n \"--field\",\n help=(\n \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n \" of key=value.\"\n ),\n ),\n):\n \"\"\"\n Initialize a new project.\n \"\"\"\n inputs = {}\n fields = fields or []\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n for field in fields:\n key, value = field.split(\"=\")\n inputs[key] = value\n\n if not recipe and is_interactive():\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n recipes = []\n\n for r in recipe_paths.iterdir():\n if r.is_dir() and (r / \"prefect.yaml\").exists():\n with open(r / \"prefect.yaml\") as f:\n recipe_data = yaml.safe_load(f)\n recipe_name = r.name\n recipe_description = recipe_data.get(\n \"description\", \"(no description available)\"\n )\n recipe_dict = {\n \"name\": recipe_name,\n \"description\": recipe_description,\n }\n recipes.append(recipe_dict)\n\n selected_recipe = prompt_select_from_table(\n app.console,\n \"Would you like to initialize your deployment configuration with a recipe?\",\n columns=[\n {\"header\": \"Name\", \"key\": \"name\"},\n {\"header\": \"Description\", \"key\": \"description\"},\n ],\n data=recipes,\n opt_out_message=\"No, I'll use the default deployment configuration.\",\n opt_out_response={},\n )\n if selected_recipe != {}:\n recipe = selected_recipe[\"name\"]\n\n if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n if recipe_inputs:\n if set(recipe_inputs.keys()) < set(inputs.keys()):\n # message to user about extra fields\n app.console.print(\n (\n f\"Warning: extra fields provided for {recipe!r} recipe:\"\n f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n ),\n style=\"red\",\n )\n elif set(recipe_inputs.keys()) > set(inputs.keys()):\n table = Table(\n title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n )\n table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n table.add_column(\n \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n )\n for field, description in recipe_inputs.items():\n if field not in inputs:\n table.add_row(field, description)\n\n app.console.print(table)\n\n for key, description in recipe_inputs.items():\n if key not in inputs:\n inputs[key] = typer.prompt(key)\n\n app.console.print(\"-\" * 15)\n\n try:\n files = [\n f\"[green]{fname}[/green]\"\n for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n ]\n except ValueError as exc:\n if \"Unknown recipe\" in str(exc):\n exit_with_error(\n f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n \"`[/yellow] to see all available recipes.\"\n )\n else:\n raise\n\n files = \"\\n\".join(files)\n empty_msg = (\n f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n \" created.\"\n )\n file_msg = (\n f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n f\" new files:\\n{files}\"\n )\n app.console.print(file_msg if files else empty_msg)\n
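For example, to initialize non-interactively with a recipe and a required field supplied up front (the recipe and field values are illustrative; --recipe and --field are defined above):
$ prefect init --recipe docker --field image_name=my-registry/my-image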
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.ls","title":"ls
async
","text":"List available recipes.
Source code in prefect/cli/project.py
@recipe_app.command()\nasync def ls():\n \"\"\"\n List available recipes.\n \"\"\"\n\n recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n recipes = {}\n\n for recipe in recipe_paths.iterdir():\n if recipe.is_dir() and (recipe / \"prefect.yaml\").exists():\n with open(recipe / \"prefect.yaml\") as f:\n recipes[recipe.name] = yaml.safe_load(f).get(\n \"description\", \"(no description available)\"\n )\n\n table = Table(\n title=\"Available project recipes\",\n caption=(\n \"Run `prefect project init --recipe <recipe>` to initialize a project with\"\n \" a recipe.\"\n ),\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Description\", justify=\"left\", style=\"white\", no_wrap=False)\n for name, description in sorted(recipes.items(), key=lambda x: x[0]):\n table.add_row(name, description)\n\n app.console.print(table)\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.register_flow","title":"register_flow
async
","text":"Register a flow with this project.
Source code in prefect/cli/project.py
@project_app.command()\nasync def register_flow(\n entrypoint: str = typer.Argument(\n ...,\n help=(\n \"The path to a flow entrypoint, in the form of\"\n \" `./path/to/file.py:flow_func_name`\"\n ),\n ),\n force: bool = typer.Option(\n False,\n \"--force\",\n \"-f\",\n help=(\n \"An optional flag to force register this flow and overwrite any existing\"\n \" entry\"\n ),\n ),\n):\n \"\"\"\n Register a flow with this project.\n \"\"\"\n try:\n flow = await register(entrypoint, force=force)\n except Exception as exc:\n exit_with_error(exc)\n\n app.console.print(\n (\n f\"Registered flow {flow.name!r} in\"\n f\" {(find_prefect_directory()/'flows.json').resolve()!s}\"\n ),\n style=\"green\",\n )\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/root/","title":"root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root
","text":"Base prefect
command-line application
version
async
","text":"Get the current Prefect version.
Source code in prefect/cli/root.py
@app.command()\nasync def version():\n \"\"\"Get the current Prefect version.\"\"\"\n import sqlite3\n\n from prefect.server.utilities.database import get_dialect\n from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n version_info = {\n \"Version\": prefect.__version__,\n \"API version\": SERVER_API_VERSION,\n \"Python version\": platform.python_version(),\n \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n \"Built\": pendulum.parse(\n prefect.__version_info__[\"date\"]\n ).to_day_datetime_string(),\n \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n \"Profile\": prefect.context.get_settings_context().profile.name,\n }\n\n server_type: str\n\n try:\n # We do not context manage the client because when using an ephemeral app we do not\n # want to create the database or run migrations\n client = prefect.get_client()\n server_type = client.server_type.value\n except Exception:\n server_type = \"<client error>\"\n\n version_info[\"Server type\"] = server_type.lower()\n\n # TODO: Consider adding an API route to retrieve this information?\n if server_type == ServerType.EPHEMERAL.value:\n database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n version_info[\"Server\"] = {\"Database\": database}\n if database == \"sqlite\":\n version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n def display(object: dict, nesting: int = 0):\n # Recursive display of a dictionary with nesting\n for key, value in object.items():\n key += \":\"\n if isinstance(value, dict):\n app.console.print(key)\n return display(value, nesting + 2)\n prefix = \" \" * nesting\n app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n display(version_info)\n
","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server
","text":"Command line interface for working with Prefect
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade
async
","text":"Downgrade the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def downgrade(\n yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n revision: str = typer.Option(\n \"-1\",\n \"-r\",\n help=(\n \"The revision to pass to `alembic downgrade`. If not provided, \"\n \"downgrades to the most recent revision. Use 'base' to run all \"\n \"migrations.\"\n ),\n ),\n dry_run: bool = typer.Option(\n False,\n help=(\n \"Flag to show what migrations would be made without applying them. Will\"\n \" emit sql statements to stdout.\"\n ),\n ),\n):\n \"\"\"Downgrade the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_downgrade\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n\n engine = await db.engine()\n\n if not yes:\n confirm = typer.confirm(\n \"Are you sure you want to downgrade the Prefect \"\n f\"database at {engine.url!r}?\"\n )\n if not confirm:\n exit_with_error(\"Database downgrade aborted!\")\n\n app.console.print(\"Running downgrade migrations ...\")\n await run_sync_in_worker_thread(\n alembic_downgrade, revision=revision, dry_run=dry_run\n )\n app.console.print(\"Migrations succeeded!\")\n exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
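For example, to preview the SQL for a one-revision downgrade without applying it (flags as defined above; -y skips the confirmation prompt):
$ prefect server database downgrade -y --dry-run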
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset
async
","text":"Drop and recreate all Prefect database tables
Source code in prefect/cli/server.py
@database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n \"\"\"Drop and recreate all Prefect database tables\"\"\"\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n engine = await db.engine()\n if not yes:\n confirm = typer.confirm(\n \"Are you sure you want to reset the Prefect database located \"\n f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n )\n if not confirm:\n exit_with_error(\"Database reset aborted\")\n app.console.print(\"Downgrading database...\")\n await db.drop_db()\n app.console.print(\"Upgrading database...\")\n await db.create_db()\n exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision
async
","text":"Create a new migration for the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def revision(\n message: str = typer.Option(\n None,\n \"--message\",\n \"-m\",\n help=\"A message to describe the migration.\",\n ),\n autogenerate: bool = False,\n):\n \"\"\"Create a new migration for the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_revision\n\n app.console.print(\"Running migration file creation ...\")\n await run_sync_in_worker_thread(\n alembic_revision,\n message=message,\n autogenerate=autogenerate,\n )\n exit_with_success(\"Creating new migration file succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp
async
","text":"Stamp the revision table with the given revision; don't run any migrations
Source code in prefect/cli/server.py
@database_app.command()\nasync def stamp(revision: str):\n \"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n from prefect.server.database.alembic_commands import alembic_stamp\n\n app.console.print(\"Stamping database with revision ...\")\n await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n exit_with_success(\"Stamping database with revision succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start
async
","text":"Start a Prefect server instance
Source code in prefect/cli/server.py
@server_app.command()\nasync def start(\n host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n analytics: bool = SettingsOption(\n PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n ),\n late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n \"\"\"Start a Prefect server instance\"\"\"\n\n server_env = os.environ.copy()\n server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n base_url = f\"http://{host}:{port}\"\n\n async with anyio.create_task_group() as tg:\n app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n app.console.print(\"\\n\")\n\n server_process_id = await tg.start(\n partial(\n run_process,\n command=[\n get_sys_executable(),\n \"-m\",\n \"uvicorn\",\n \"--app-dir\",\n # quote wrapping needed for windows paths with spaces\n f'\"{prefect.__module_path__.parent}\"',\n \"--factory\",\n \"prefect.server.api.server:create_app\",\n \"--host\",\n str(host),\n \"--port\",\n str(port),\n \"--timeout-keep-alive\",\n str(keep_alive_timeout),\n ],\n env=server_env,\n stream_output=True,\n )\n )\n\n # Explicitly handle the interrupt signal here, as it will allow us to\n # cleanly stop the uvicorn server. Failing to do that may cause a\n # large amount of anyio error traces on the terminal, because the\n # SIGINT is handled by Typer/Click in this process (the parent process)\n # and will start shutting down subprocesses:\n # https://github.com/PrefectHQ/server/issues/2475\n\n setup_signal_handlers_server(\n server_process_id, \"the Prefect server\", app.console.print\n )\n\n app.console.print(\"Server stopped!\")\n
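For example, to bind the server to an explicit address and port (the flags mirror the settings options above; the values are illustrative):
$ prefect server start --host 127.0.0.1 --port 4200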
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade
async
","text":"Upgrade the Prefect database
Source code in prefect/cli/server.py
@database_app.command()\nasync def upgrade(\n yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n revision: str = typer.Option(\n \"head\",\n \"-r\",\n help=(\n \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n \" migrations.\"\n ),\n ),\n dry_run: bool = typer.Option(\n False,\n help=(\n \"Flag to show what migrations would be made without applying them. Will\"\n \" emit sql statements to stdout.\"\n ),\n ),\n):\n \"\"\"Upgrade the Prefect database\"\"\"\n from prefect.server.database.alembic_commands import alembic_upgrade\n from prefect.server.database.dependencies import provide_database_interface\n\n db = provide_database_interface()\n engine = await db.engine()\n\n if not yes:\n confirm = typer.confirm(\n f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n )\n if not confirm:\n exit_with_error(\"Database upgrade aborted!\")\n\n app.console.print(\"Running upgrade migrations ...\")\n await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n app.console.print(\"Migrations succeeded!\")\n exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
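For example, to run all pending migrations without the confirmation prompt (flags as defined above):
$ prefect server database upgrade -y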
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/variable/","title":"variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable","title":"prefect.cli.variable
","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.delete","title":"delete
async
","text":"Delete a variable.
Parameters:
name (str, required): the name of the variable to delete
Source code in prefect/cli/variable.py
@variable_app.command(\"delete\")\nasync def delete(\n name: str,\n):\n \"\"\"\n Delete a variable.\n\n Arguments:\n name: the name of the variable to delete\n \"\"\"\n\n async with get_client() as client:\n try:\n await client.delete_variable_by_name(\n name=name,\n )\n except ObjectNotFound:\n exit_with_error(f\"Variable {name!r} not found.\")\n\n exit_with_success(f\"Deleted variable {name!r}.\")\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.inspect","title":"inspect
async
","text":"View details about a variable.
Parameters:
name (str, required): the name of the variable to inspect
Source code in prefect/cli/variable.py
@variable_app.command(\"inspect\")\nasync def inspect(\n name: str,\n):\n \"\"\"\n View details about a variable.\n\n Arguments:\n name: the name of the variable to inspect\n \"\"\"\n\n async with get_client() as client:\n variable = await client.read_variable_by_name(\n name=name,\n )\n if not variable:\n exit_with_error(f\"Variable {name!r} not found.\")\n\n app.console.print(Pretty(variable))\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.list_variables","title":"list_variables
async
","text":"List variables.
Source code in prefect/cli/variable.py
@variable_app.command(\"ls\")\nasync def list_variables(\n limit: int = typer.Option(\n 100,\n \"--limit\",\n help=\"The maximum number of variables to return.\",\n ),\n):\n \"\"\"\n List variables.\n \"\"\"\n async with get_client() as client:\n variables = await client.read_variables(\n limit=limit,\n )\n\n table = Table(\n title=\"Variables\",\n caption=\"List Variables using `prefect variable ls`\",\n show_header=True,\n )\n\n table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n # values can be up 5000 characters so truncate early\n table.add_column(\"Value\", style=\"blue\", no_wrap=True, max_width=50)\n table.add_column(\"Created\", style=\"blue\", no_wrap=True)\n table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n for variable in sorted(variables, key=lambda x: f\"{x.name}\"):\n table.add_row(\n variable.name,\n variable.value,\n pendulum.instance(variable.created).diff_for_humans(),\n pendulum.instance(variable.updated).diff_for_humans(),\n )\n\n app.console.print(table)\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/work_pool/","title":"work_pool","text":"","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool","title":"prefect.cli.work_pool
","text":"Command line interface for working with work queues.
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.clear_concurrency_limit","title":"clear_concurrency_limit
async
","text":"Clear the concurrency limit for a work pool.
Examples: $ prefect work-pool clear-concurrency-limit \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def clear_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n):\n \"\"\"\n Clear the concurrency limit for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool clear-concurrency-limit \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n concurrency_limit=None,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Cleared concurrency limit for work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.create","title":"create
async
","text":"Create a new work pool.
Examples:
Create a Kubernetes work pool in a paused state:
$ prefect work-pool create \"my-pool\" --type kubernetes --paused
Create a Docker work pool with a custom base job template:
$ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def create(\n name: str = typer.Argument(..., help=\"The name of the work pool.\"),\n base_job_template: typer.FileText = typer.Option(\n None,\n \"--base-job-template\",\n help=(\n \"The path to a JSON file containing the base job template to use. If\"\n \" unspecified, Prefect will use the default base job template for the given\"\n \" worker type.\"\n ),\n ),\n paused: bool = typer.Option(\n False,\n \"--paused\",\n help=\"Whether or not to create the work pool in a paused state.\",\n ),\n type: str = typer.Option(\n None, \"-t\", \"--type\", help=\"The type of work pool to create.\"\n ),\n set_as_default: bool = typer.Option(\n False,\n \"--set-as-default\",\n help=(\n \"Whether or not to use the created work pool as the local default for\"\n \" deployment.\"\n ),\n ),\n provision_infrastructure: bool = typer.Option(\n False,\n \"--provision-infrastructure\",\n \"--provision-infra\",\n help=(\n \"Whether or not to provision infrastructure for the work pool if supported\"\n \" for the given work pool type.\"\n ),\n ),\n):\n \"\"\"\n Create a new work pool.\n\n \\b\n Examples:\n \\b\n Create a Kubernetes work pool in a paused state:\n \\b\n $ prefect work-pool create \"my-pool\" --type kubernetes --paused\n \\b\n Create a Docker work pool with a custom base job template:\n \\b\n $ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json\n\n \"\"\"\n if not name.lower().strip(\"'\\\" \"):\n exit_with_error(\"Work pool name cannot be empty.\")\n async with get_client() as client:\n try:\n await client.read_work_pool(work_pool_name=name)\n except ObjectNotFound:\n pass\n else:\n exit_with_error(\n f\"Work pool named {name!r} already exists. Please try creating your\"\n \" work pool again with a different name.\"\n )\n\n if type is None:\n async with get_collections_metadata_client() as collections_client:\n if not is_interactive():\n exit_with_error(\n \"When not using an interactive terminal, you must supply a\"\n \" `--type` value.\"\n )\n worker_metadata = await collections_client.read_worker_metadata()\n\n # Retrieve only push pools if provisioning infrastructure\n data = [\n worker\n for collection in worker_metadata.values()\n for worker in collection.values()\n if provision_infrastructure\n and has_provisioner_for_type(worker[\"type\"])\n or not provision_infrastructure\n ]\n worker = prompt_select_from_table(\n app.console,\n \"What type of work pool infrastructure would you like to use?\",\n columns=[\n {\"header\": \"Infrastructure Type\", \"key\": \"display_name\"},\n {\"header\": \"Description\", \"key\": \"description\"},\n ],\n data=data,\n table_kwargs={\"show_lines\": True},\n )\n type = worker[\"type\"]\n\n available_work_pool_types = await get_available_work_pool_types()\n if type not in available_work_pool_types:\n exit_with_error(\n f\"Unknown work pool type {type!r}. 
\"\n \"Please choose from\"\n f\" {', '.join(available_work_pool_types)}.\"\n )\n\n if base_job_template is None:\n template_contents = (\n await get_default_base_job_template_for_infrastructure_type(type)\n )\n else:\n template_contents = json.load(base_job_template)\n\n if provision_infrastructure:\n try:\n provisioner = get_infrastructure_provisioner_for_work_pool_type(type)\n provisioner.console = app.console\n template_contents = await provisioner.provision(\n work_pool_name=name, base_job_template=template_contents\n )\n except ValueError as exc:\n print(exc)\n app.console.print(\n (\n \"Automatic infrastructure provisioning is not supported for\"\n f\" {type!r} work pools.\"\n ),\n style=\"yellow\",\n )\n except RuntimeError as exc:\n exit_with_error(f\"Failed to provision infrastructure: {exc}\")\n\n try:\n wp = WorkPoolCreate(\n name=name,\n type=type,\n base_job_template=template_contents,\n is_paused=paused,\n )\n work_pool = await client.create_work_pool(work_pool=wp)\n app.console.print(f\"Created work pool {work_pool.name!r}!\\n\", style=\"green\")\n if (\n not work_pool.is_paused\n and not work_pool.is_managed_pool\n and not work_pool.is_push_pool\n ):\n app.console.print(\"To start a worker for this work pool, run:\\n\")\n app.console.print(\n f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\"\n )\n if set_as_default:\n set_work_pool_as_default(work_pool.name)\n exit_with_success(\"\")\n except ObjectAlreadyExists:\n exit_with_error(\n f\"Work pool named {name!r} already exists. Please try creating your\"\n \" work pool again with a different name.\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.delete","title":"delete
async
","text":"Delete a work pool.
Examples: $ prefect work-pool delete \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def delete(\n name: str = typer.Argument(..., help=\"The name of the work pool to delete.\"),\n):\n \"\"\"\n Delete a work pool.\n\n \\b\n Examples:\n $ prefect work-pool delete \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.delete_work_pool(work_pool_name=name)\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Deleted work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.get_default_base_job_template","title":"get_default_base_job_template
async
","text":"Get the default base job template for a given work pool type.
Examples: $ prefect work-pool get-default-base-job-template --type kubernetes
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def get_default_base_job_template(\n type: str = typer.Option(\n None,\n \"-t\",\n \"--type\",\n help=\"The type of work pool for which to get the default base job template.\",\n ),\n file: str = typer.Option(\n None, \"-f\", \"--file\", help=\"If set, write the output to a file.\"\n ),\n):\n \"\"\"\n Get the default base job template for a given work pool type.\n\n \\b\n Examples:\n $ prefect work-pool get-default-base-job-template --type kubernetes\n \"\"\"\n base_job_template = await get_default_base_job_template_for_infrastructure_type(\n type\n )\n if base_job_template is None:\n exit_with_error(\n f\"Unknown work pool type {type!r}. \"\n \"Please choose from\"\n f\" {', '.join(await get_available_work_pool_types())}.\"\n )\n\n if file is None:\n print(json.dumps(base_job_template, indent=2))\n else:\n with open(file, mode=\"w\") as f:\n json.dump(base_job_template, fp=f, indent=2)\n
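For example, the default template can be written to a file, edited, and then supplied to prefect work-pool create via --base-job-template (the filename is illustrative):
$ prefect work-pool get-default-base-job-template --type docker --file base-job-template.json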
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.has_provisioner_for_type","title":"has_provisioner_for_type
","text":"Check if there is a provisioner for the given work pool type.
Parameters: work_pool_type (str, required): The type of the work pool.
Returns: bool: True if a provisioner exists for the given type, False otherwise.
Source code in prefect/cli/work_pool.py
def has_provisioner_for_type(work_pool_type: str) -> bool:\n \"\"\"\n Check if there is a provisioner for the given work pool type.\n\n Args:\n work_pool_type (str): The type of the work pool.\n\n Returns:\n bool: True if a provisioner exists for the given type, False otherwise.\n \"\"\"\n return work_pool_type in _provisioners\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.inspect","title":"inspect
async
","text":"Inspect a work pool.
Examples: $ prefect work-pool inspect \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def inspect(\n name: str = typer.Argument(..., help=\"The name of the work pool to inspect.\"),\n):\n \"\"\"\n Inspect a work pool.\n\n \\b\n Examples:\n $ prefect work-pool inspect \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n pool = await client.read_work_pool(work_pool_name=name)\n app.console.print(Pretty(pool))\n except ObjectNotFound:\n exit_with_error(f\"Work pool {name!r} not found!\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.ls","title":"ls
async
","text":"List work pools.
Examples: $ prefect work-pool ls
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def ls(\n verbose: bool = typer.Option(\n False,\n \"--verbose\",\n \"-v\",\n help=\"Show additional information about work pools.\",\n ),\n):\n \"\"\"\n List work pools.\n\n \\b\n Examples:\n $ prefect work-pool ls\n \"\"\"\n table = Table(\n title=\"Work Pools\", caption=\"(**) denotes a paused pool\", caption_style=\"red\"\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Type\", style=\"magenta\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Base Job Template\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n pools = await client.read_work_pools()\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for pool in sorted(pools, key=sort_by_created_key):\n row = [\n f\"{pool.name} [red](**)\" if pool.is_paused else pool.name,\n str(pool.type),\n str(pool.id),\n (\n f\"[red]{pool.concurrency_limit}\"\n if pool.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose:\n row.append(str(pool.base_job_template))\n table.add_row(*row)\n\n app.console.print(table)\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.pause","title":"pause
async
","text":"Pause a work pool.
Examples: $ prefect work-pool pause \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def pause(\n name: str = typer.Argument(..., help=\"The name of the work pool to pause.\"),\n):\n \"\"\"\n Pause a work pool.\n\n \\b\n Examples:\n $ prefect work-pool pause \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n is_paused=True,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Paused work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.preview","title":"preview
async
","text":"Preview the work pool's scheduled work for all queues.
Examples: $ prefect work-pool preview \"my-pool\" --hours 24
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def preview(\n name: str = typer.Argument(None, help=\"The name or ID of the work pool to preview\"),\n hours: int = typer.Option(\n None,\n \"-h\",\n \"--hours\",\n help=\"The number of hours to look ahead; defaults to 1 hour\",\n ),\n):\n \"\"\"\n Preview the work pool's scheduled work for all queues.\n\n \\b\n Examples:\n $ prefect work-pool preview \"my-pool\" --hours 24\n\n \"\"\"\n if hours is None:\n hours = 1\n\n async with get_client() as client:\n try:\n responses = await client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=name,\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n runs = [response.flow_run for response in responses]\n table = Table(caption=\"(**) denotes a late run\", caption_style=\"red\")\n\n table.add_column(\n \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n )\n table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n pendulum.now(\"utc\").add(hours=hours or 1)\n\n now = pendulum.now(\"utc\")\n\n def sort_by_created_key(r):\n return now - r.created\n\n for run in sorted(runs, key=sort_by_created_key):\n table.add_row(\n (\n f\"{run.expected_start_time} [red](**)\"\n if run.expected_start_time < now\n else f\"{run.expected_start_time}\"\n ),\n str(run.id),\n run.name,\n str(run.deployment_id),\n )\n\n if runs:\n app.console.print(table)\n else:\n app.console.print(\n (\n \"No runs found - try increasing how far into the future you preview\"\n \" with the --hours flag\"\n ),\n style=\"yellow\",\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.provision_infrastructure","title":"provision_infrastructure
async
","text":"Provision infrastructure for a work pool.
Examples: $ prefect work-pool provision-infrastructure \"my-pool\"
$ prefect work-pool provision-infra \"my-pool\"\n
Source code in prefect/cli/work_pool.py
@work_pool_app.command(aliases=[\"provision-infra\"])\nasync def provision_infrastructure(\n name: str = typer.Argument(\n ..., help=\"The name of the work pool to provision infrastructure for.\"\n ),\n):\n \"\"\"\n Provision infrastructure for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool provision-infrastructure \"my-pool\"\n\n $ prefect work-pool provision-infra \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n work_pool = await client.read_work_pool(work_pool_name=name)\n if not work_pool.is_push_pool:\n exit_with_error(\n f\"Work pool {name!r} is not a push pool type. \"\n \"Please try provisioning infrastructure for a push pool.\"\n )\n except ObjectNotFound:\n exit_with_error(f\"Work pool {name!r} does not exist.\")\n except Exception as exc:\n exit_with_error(f\"Failed to read work pool {name!r}: {exc}\")\n\n try:\n provisioner = get_infrastructure_provisioner_for_work_pool_type(\n work_pool.type\n )\n provisioner.console = app.console\n new_base_job_template = await provisioner.provision(\n work_pool_name=name, base_job_template=work_pool.base_job_template\n )\n\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n base_job_template=new_base_job_template,\n ),\n )\n\n except ValueError as exc:\n app.console.print(f\"Error: {exc}\")\n app.console.print(\n (\n \"Automatic infrastructure provisioning is not supported for\"\n f\" {work_pool.type!r} work pools.\"\n ),\n style=\"yellow\",\n )\n except RuntimeError as exc:\n exit_with_error(\n f\"Failed to provision infrastructure for '{name}' work pool: {exc}\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.resume","title":"resume
async
","text":"Resume a work pool.
Examples: $ prefect work-pool resume \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def resume(\n name: str = typer.Argument(..., help=\"The name of the work pool to resume.\"),\n):\n \"\"\"\n Resume a work pool.\n\n \\b\n Examples:\n $ prefect work-pool resume \"my-pool\"\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n is_paused=False,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(f\"Resumed work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.set_concurrency_limit","title":"set_concurrency_limit
async
","text":"Set the concurrency limit for a work pool.
Examples: $ prefect work-pool set-concurrency-limit \"my-pool\" 10
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def set_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n concurrency_limit: int = typer.Argument(\n ..., help=\"The new concurrency limit for the work pool.\"\n ),\n):\n \"\"\"\n Set the concurrency limit for a work pool.\n\n \\b\n Examples:\n $ prefect work-pool set-concurrency-limit \"my-pool\" 10\n\n \"\"\"\n async with get_client() as client:\n try:\n await client.update_work_pool(\n work_pool_name=name,\n work_pool=WorkPoolUpdate(\n concurrency_limit=concurrency_limit,\n ),\n )\n except ObjectNotFound as exc:\n exit_with_error(exc)\n\n exit_with_success(\n f\"Set concurrency limit for work pool {name!r} to {concurrency_limit}\"\n )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.update","title":"update
async
","text":"Update a work pool.
Examples: $ prefect work-pool update \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def update(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type. If None, the base job template will not be modified.\"\n        ),\n    ),\n    concurrency_limit: int = typer.Option(\n        None,\n        \"--concurrency-limit\",\n        help=(\n            \"The concurrency limit for the work pool. If None, the concurrency limit\"\n            \" will not be modified.\"\n        ),\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        help=(\n            \"The description for the work pool. If None, the description will not be\"\n            \" modified.\"\n        ),\n    ),\n):\n    \"\"\"\n    Update a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool update \"my-pool\"\n\n    \"\"\"\n    wp = WorkPoolUpdate()\n    if base_job_template:\n        wp.base_job_template = json.load(base_job_template)\n    if concurrency_limit:\n        wp.concurrency_limit = concurrency_limit\n    if description:\n        wp.description = description\n\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=wp,\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool named {name!r} does not exist.\")\n\n    exit_with_success(f\"Updated work pool {name!r}\")\n
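For example, to change only the concurrency limit and description of an existing pool (values illustrative):
$ prefect work-pool update \"my-pool\" --concurrency-limit 10 --description \"ETL pool\"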
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_queue/","title":"work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue
","text":"Command line interface for working with work queues.
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit
async
","text":"Clear any concurrency limits from a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Clear any concurrency limits from a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n concurrency_limit=None,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = (\n f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n )\n else:\n success_message = f\"Concurrency limits removed on work queue {name!r}\"\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create
async
","text":"Create a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n limit: int = typer.Option(\n None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n ),\n tags: List[str] = typer.Option(\n None,\n \"-t\",\n \"--tag\",\n help=(\n \"DEPRECATED: One or more optional tags. This option will be removed on\"\n \" 2023-02-23.\"\n ),\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool to create the work queue in.\",\n ),\n priority: Optional[int] = typer.Option(\n None,\n \"-q\",\n \"--priority\",\n help=\"The associated priority for the created work queue\",\n ),\n):\n \"\"\"\n Create a work queue.\n \"\"\"\n if tags:\n app.console.print(\n (\n \"Supplying `tags` for work queues is deprecated. This work \"\n \"queue will use legacy tag-matching behavior. \"\n \"This option will be removed on 2023-02-23.\"\n ),\n style=\"red\",\n )\n\n if pool and tags:\n exit_with_error(\n \"Work queues created with tags cannot specify work pools or set priorities.\"\n )\n\n async with get_client() as client:\n try:\n result = await client.create_work_queue(\n name=name, tags=tags or None, work_pool_name=pool, priority=priority\n )\n if limit is not None:\n await client.update_work_queue(\n id=result.id,\n concurrency_limit=limit,\n )\n except ObjectAlreadyExists:\n exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n except ObjectNotFound:\n exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n if tags:\n tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n output_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n id - {result.id}\n concurrency limit - {limit}\n {tags_message}\n Start an agent to pick up flow runs from the work queue:\n prefect agent start -q '{result.name}'\n\n Inspect the work queue:\n prefect work-queue inspect '{result.name}'\n \"\"\"\n )\n else:\n if not pool:\n # specify the default work pool name after work queue creation to allow the server\n # to handle a bunch of logic associated with agents without work pools\n pool = DEFAULT_AGENT_WORK_POOL_NAME\n output_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n work pool - {pool!r}\n id - {result.id}\n concurrency limit - {limit}\n Start an agent to pick up flow runs from the work queue:\n prefect agent start -q '{result.name} -p {pool}'\n\n Inspect the work queue:\n prefect work-queue inspect '{result.name}'\n \"\"\"\n )\n exit_with_success(output_msg)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete
async
","text":"Delete a work queue by ID.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queue to delete.\",\n ),\n):\n \"\"\"\n Delete a work queue by ID.\n \"\"\"\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n await client.delete_work_queue_by_id(id=queue_id)\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n if pool:\n success_message = (\n f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n )\n else:\n success_message = f\"Successfully deleted work queue {name!r}\"\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect
async
","text":"Inspect a work queue by ID.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n name: str = typer.Argument(\n None, help=\"The name or ID of the work queue to inspect\"\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Inspect a work queue by ID.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n result = await client.read_work_queue(id=queue_id)\n app.console.print(Pretty(result))\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n try:\n status = await client.read_work_queue_status(id=queue_id)\n app.console.print(Pretty(status))\n except ObjectNotFound:\n pass\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls
async
","text":"View all work queues.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n verbose: bool = typer.Option(\n False, \"--verbose\", \"-v\", help=\"Display more information.\"\n ),\n work_queue_prefix: str = typer.Option(\n None,\n \"--match\",\n \"-m\",\n help=(\n \"Will match work queues with names that start with the specified prefix\"\n \" string\"\n ),\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queues to list.\",\n ),\n):\n \"\"\"\n View all work queues.\n \"\"\"\n if not pool and not experiment_enabled(\"work_pools\"):\n table = Table(\n title=\"Work Queues\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n if work_queue_prefix is not None:\n queues = await client.match_work_queues([work_queue_prefix])\n else:\n queues = await client.read_work_queues()\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n str(queue.id),\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose and queue.filter is not None:\n row.append(queue.filter.json())\n table.add_row(*row)\n elif not pool:\n table = Table(\n title=\"Work Queues\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n async with get_client() as client:\n if work_queue_prefix is not None:\n queues = await client.match_work_queues([work_queue_prefix])\n else:\n queues = await client.read_work_queues()\n\n pool_ids = [q.work_pool_id for q in queues]\n wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n pools = await client.read_work_pools(work_pool_filter=wp_filter)\n pool_id_name_map = {p.id: p.name for p in pools}\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n pool_id_name_map[queue.work_pool_id],\n str(queue.id),\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose and queue.filter is not None:\n row.append(queue.filter.json())\n table.add_row(*row)\n\n else:\n table = Table(\n title=f\"Work Queues in Work Pool {pool!r}\",\n caption=\"(**) denotes a paused queue\",\n caption_style=\"red\",\n )\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n if verbose:\n table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n async with get_client() as client:\n try:\n queues = 
await client.read_work_queues(work_pool_name=pool)\n except ObjectNotFound:\n exit_with_error(f\"No work pool found: {pool!r}\")\n\n def sort_by_created_key(q):\n return pendulum.now(\"utc\") - q.created\n\n for queue in sorted(queues, key=sort_by_created_key):\n row = [\n f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n f\"{queue.priority}\",\n (\n f\"[red]{queue.concurrency_limit}\"\n if queue.concurrency_limit\n else \"[blue]None\"\n ),\n ]\n if verbose:\n row.append(queue.description)\n table.add_row(*row)\n\n app.console.print(table)\n
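For example, to list only the queues in one pool whose names start with a given prefix (values illustrative):
$ prefect work-queue ls --pool \"my-pool\" --match \"etl-\"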
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause
async
","text":"Pause a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Pause a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n is_paused=True,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n else:\n success_message = f\"Work queue {name!r} paused\"\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview
async
","text":"Preview a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n name: str = typer.Argument(\n None, help=\"The name or ID of the work queue to preview\"\n ),\n hours: int = typer.Option(\n None,\n \"-h\",\n \"--hours\",\n help=\"The number of hours to look ahead; defaults to 1 hour\",\n ),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Preview a work queue.\n \"\"\"\n if pool:\n title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n else:\n title = f\"Preview of Work Queue {name!r}\"\n\n table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n table.add_column(\n \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n )\n table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n table.add_column(\"Name\", style=\"green\", no_wrap=True)\n table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name, work_pool_name=pool\n )\n async with get_client() as client:\n if pool:\n try:\n responses = await client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=pool,\n work_queue_names=[name],\n )\n runs = [response.flow_run for response in responses]\n except ObjectNotFound:\n exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n else:\n try:\n runs = await client.get_runs_in_work_queue(\n queue_id,\n limit=10,\n scheduled_before=window,\n )\n except ObjectNotFound:\n exit_with_error(f\"No work queue found: {name!r}\")\n now = pendulum.now(\"utc\")\n\n def sort_by_created_key(r):\n return now - r.created\n\n for run in sorted(runs, key=sort_by_created_key):\n table.add_row(\n (\n f\"{run.expected_start_time} [red](**)\"\n if run.expected_start_time < now\n else f\"{run.expected_start_time}\"\n ),\n str(run.id),\n run.name,\n str(run.deployment_id),\n )\n\n if runs:\n app.console.print(table)\n else:\n app.console.print(\n (\n \"No runs found - try increasing how far into the future you preview\"\n \" with the --hours flag\"\n ),\n style=\"yellow\",\n )\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.read_wq_runs","title":"read_wq_runs
async
","text":"Get runs in a work queue. Note that this will trigger an artificial poll of the work queue.
Source code in prefect/cli/work_queue.py
@work_app.command(\"read-runs\")\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def read_wq_runs(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to poll\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool containing the work queue to poll.\",\n ),\n):\n \"\"\"\n Get runs in a work queue. Note that this will trigger an artificial poll of\n the work queue.\n \"\"\"\n\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n async with get_client() as client:\n try:\n runs = await client.get_runs_in_work_queue(id=queue_id)\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n success_message = (\n f\"Read {len(runs)} runs for work queue {name!r} in work pool {pool}: {runs}\"\n )\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume
async
","text":"Resume a paused work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Resume a paused work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n is_paused=False,\n )\n except ObjectNotFound:\n if pool:\n error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n else:\n error_message = f\"No work queue found: {name!r}\"\n exit_with_error(error_message)\n\n if pool:\n success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n else:\n success_message = f\"Work queue {name!r} resumed\"\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit
async
","text":"Set a concurrency limit on a work queue.
Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n pool: Optional[str] = typer.Option(\n None,\n \"-p\",\n \"--pool\",\n help=\"The name of the work pool that the work queue belongs to.\",\n ),\n):\n \"\"\"\n Set a concurrency limit on a work queue.\n \"\"\"\n queue_id = await _get_work_queue_id_from_name_or_id(\n name_or_id=name,\n work_pool_name=pool,\n )\n\n async with get_client() as client:\n try:\n await client.update_work_queue(\n id=queue_id,\n concurrency_limit=limit,\n )\n except ObjectNotFound:\n if pool:\n error_message = (\n f\"No work queue named {name!r} found in work pool {pool!r}.\"\n )\n else:\n error_message = f\"No work queue named {name!r} found.\"\n exit_with_error(error_message)\n\n if pool:\n success_message = (\n f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n f\" {pool!r}\"\n )\n else:\n success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/worker/","title":"worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker","title":"prefect.cli.worker
","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker.start","title":"start
async
","text":"Start a worker process to poll a work pool for flow runs.
Source code in prefect/cli/worker.py
@worker_app.command()\nasync def start(\n worker_name: str = typer.Option(\n None,\n \"-n\",\n \"--name\",\n help=(\n \"The name to give to the started worker. If not provided, a unique name\"\n \" will be generated.\"\n ),\n ),\n work_pool_name: str = typer.Option(\n ...,\n \"-p\",\n \"--pool\",\n help=\"The work pool the started worker should poll.\",\n prompt=True,\n ),\n work_queues: List[str] = typer.Option(\n None,\n \"-q\",\n \"--work-queue\",\n help=(\n \"One or more work queue names for the worker to pull from. If not provided,\"\n \" the worker will pull from all work queues in the work pool.\"\n ),\n ),\n worker_type: Optional[str] = typer.Option(\n None,\n \"-t\",\n \"--type\",\n help=(\n \"The type of worker to start. If not provided, the worker type will be\"\n \" inferred from the work pool.\"\n ),\n ),\n prefetch_seconds: int = SettingsOption(\n PREFECT_WORKER_PREFETCH_SECONDS,\n help=\"Number of seconds to look into the future for scheduled flow runs.\",\n ),\n run_once: bool = typer.Option(\n False, help=\"Only run worker polling once. By default, the worker runs forever.\"\n ),\n limit: int = typer.Option(\n None,\n \"-l\",\n \"--limit\",\n help=\"Maximum number of flow runs to start simultaneously.\",\n ),\n with_healthcheck: bool = typer.Option(\n False, help=\"Start a healthcheck server for the worker.\"\n ),\n install_policy: InstallPolicy = typer.Option(\n InstallPolicy.PROMPT.value,\n \"--install-policy\",\n help=\"Install policy to use workers from Prefect integration packages.\",\n case_sensitive=False,\n ),\n base_job_template: typer.FileText = typer.Option(\n None,\n \"--base-job-template\",\n help=(\n \"The path to a JSON file containing the base job template to use. If\"\n \" unspecified, Prefect will use the default base job template for the given\"\n \" worker type. If the work pool already exists, this will be ignored.\"\n ),\n ),\n):\n \"\"\"\n Start a worker process to poll a work pool for flow runs.\n \"\"\"\n\n is_paused = await _check_work_pool_paused(work_pool_name)\n if is_paused:\n app.console.print(\n (\n f\"The work pool {work_pool_name!r} is currently paused. This worker\"\n \" will not execute any flow runs until the work pool is unpaused.\"\n ),\n style=\"yellow\",\n )\n\n worker_cls = await _get_worker_class(worker_type, work_pool_name, install_policy)\n\n if worker_cls is None:\n exit_with_error(\n \"Unable to start worker. 
Please ensure you have the necessary dependencies\"\n \" installed to run your desired worker type.\"\n )\n\n worker_process_id = os.getpid()\n setup_signal_handlers_worker(\n worker_process_id, f\"the {worker_type} worker\", app.console.print\n )\n\n template_contents = None\n if base_job_template is not None:\n template_contents = json.load(fp=base_job_template)\n\n async with worker_cls(\n name=worker_name,\n work_pool_name=work_pool_name,\n work_queues=work_queues,\n limit=limit,\n prefetch_seconds=prefetch_seconds,\n heartbeat_interval_seconds=PREFECT_WORKER_HEARTBEAT_SECONDS.value(),\n base_job_template=template_contents,\n ) as worker:\n app.console.print(f\"Worker {worker.name!r} started!\", style=\"green\")\n async with anyio.create_task_group() as tg:\n # wait for an initial heartbeat to configure the worker\n await worker.sync_with_backend()\n # schedule the scheduled flow run polling loop\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.get_and_submit_flow_runs,\n interval=PREFECT_WORKER_QUERY_SECONDS.value(),\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4, # Up to ~1 minute interval during backoff\n )\n )\n # schedule the sync loop\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.sync_with_backend,\n interval=worker.heartbeat_interval_seconds,\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=worker.check_for_cancelled_flow_runs,\n interval=PREFECT_WORKER_QUERY_SECONDS.value() * 2,\n run_once=run_once,\n printer=app.console.print,\n jitter_range=0.3,\n backoff=4,\n )\n )\n\n started_event = await worker._emit_worker_started_event()\n\n # if --with-healthcheck was passed, start the healthcheck server\n if with_healthcheck:\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"healthcheck-server-thread\",\n target=partial(\n start_healthcheck_server,\n worker=worker,\n query_interval_seconds=PREFECT_WORKER_QUERY_SECONDS.value(),\n ),\n daemon=True,\n )\n server_thread.start()\n\n await worker._emit_worker_stopped_event(started_event)\n app.console.print(f\"Worker {worker.name!r} stopped!\")\n
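For example, to start a worker that polls a single queue, runs at most five flows at once, and serves a healthcheck endpoint (names and values illustrative):
$ prefect worker start --pool \"my-pool\" --work-queue \"high-priority\" --limit 5 --with-healthcheck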
","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/client/base/","title":"base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base
","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse
","text":" Bases: Response
A Prefect wrapper for the httpx.Response class.
Provides more informative error messages.
Source code in prefect/client/base.py
class PrefectResponse(httpx.Response):\n    \"\"\"\n    A Prefect wrapper for the `httpx.Response` class.\n\n    Provides more informative error messages.\n    \"\"\"\n\n    def raise_for_status(self) -> None:\n        \"\"\"\n        Raise an exception if the response contains an HTTPStatusError.\n\n        The `PrefectHTTPStatusError` contains useful additional information that\n        is not contained in the `HTTPStatusError`.\n        \"\"\"\n        try:\n            return super().raise_for_status()\n        except HTTPStatusError as exc:\n            raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n    @classmethod\n    def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n        \"\"\"\n        Create a `PrefectResponse` from an `httpx.Response`.\n\n        By changing the `__class__` attribute of the Response, we change the method\n        resolution order to look for methods defined in PrefectResponse, while leaving\n        everything else about the original Response instance intact.\n        \"\"\"\n        new_response = copy.copy(response)\n        new_response.__class__ = cls\n        return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status
","text":"Raise an exception if the response contains an HTTPStatusError.
The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.
Source code in prefect/client/base.py
def raise_for_status(self) -> None:\n \"\"\"\n Raise an exception if the response contains an HTTPStatusError.\n\n The `PrefectHTTPStatusError` contains useful additional information that\n is not contained in the `HTTPStatusError`.\n \"\"\"\n try:\n return super().raise_for_status()\n except HTTPStatusError as exc:\n raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response
classmethod
","text":"Create a PrefectReponse
from an httpx.Response
.
By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.
Source code in prefect/client/base.py
@classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n    \"\"\"\n    Create a `PrefectResponse` from an `httpx.Response`.\n\n    By changing the `__class__` attribute of the Response, we change the method\n    resolution order to look for methods defined in PrefectResponse, while leaving\n    everything else about the original Response instance intact.\n    \"\"\"\n    new_response = copy.copy(response)\n    new_response.__class__ = cls\n    return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient
","text":" Bases: AsyncClient
A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503).
Additionally, this client will always call raise_for_status on responses.
For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting
Source code in prefect/client/base.py
class PrefectHttpxClient(httpx.AsyncClient):\n \"\"\"\n A Prefect wrapper for the async httpx client with support for retry-after headers\n for the provided status codes (typically 429, 502 and 503).\n\n Additionally, this client will always call `raise_for_status` on responses.\n\n For more details on rate limit headers, see:\n [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n \"\"\"\n\n async def _send_with_retry(\n self,\n request: Callable,\n retry_codes: Set[int] = set(),\n retry_exceptions: Tuple[Exception, ...] = tuple(),\n ):\n \"\"\"\n Send a request and retry it if it fails.\n\n Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n if the request either raises an exception listed in `retry_exceptions` or\n receives a response with a status code listed in `retry_codes`.\n\n Retries will be delayed based on either the retry header (preferred) or\n exponential backoff if a retry header is not provided.\n \"\"\"\n try_count = 0\n response = None\n\n while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n try_count += 1\n retry_seconds = None\n exc_info = None\n\n try:\n response = await request()\n except retry_exceptions: # type: ignore\n if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n raise\n # Otherwise, we will ignore this error but capture the info for logging\n exc_info = sys.exc_info()\n else:\n # We got a response; return immediately if it is not retryable\n if response.status_code not in retry_codes:\n return response\n\n if \"Retry-After\" in response.headers:\n retry_seconds = float(response.headers[\"Retry-After\"])\n\n # Use an exponential back-off if not set in a header\n if retry_seconds is None:\n retry_seconds = 2**try_count\n\n # Add jitter\n jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n if retry_seconds > 0 and jitter_factor > 0:\n if response is not None and \"Retry-After\" in response.headers:\n # Always wait for _at least_ retry seconds if requested by the API\n retry_seconds = bounded_poisson_interval(\n retry_seconds, retry_seconds * (1 + jitter_factor)\n )\n else:\n # Otherwise, use a symmetrical jitter\n retry_seconds = clamped_poisson_interval(\n retry_seconds, jitter_factor\n )\n\n logger.debug(\n (\n \"Encountered retryable exception during request. \"\n if exc_info\n else (\n \"Received response with retryable status code\"\n f\" {response.status_code}. \"\n )\n )\n + f\"Another attempt will be made in {retry_seconds}s. 
\"\n \"This is attempt\"\n f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n exc_info=exc_info,\n )\n await anyio.sleep(retry_seconds)\n\n assert (\n response is not None\n ), \"Retry handling ended without response or exception\"\n\n # We ran out of retries, return the failed response\n return response\n\n async def send(self, *args, **kwargs) -> Response:\n \"\"\"\n Send a request with automatic retry behavior for the following status codes:\n\n - 429 CloudFlare-style rate limiting\n - 502 Bad Gateway\n - 503 Service unavailable\n \"\"\"\n\n api_request = partial(super().send, *args, **kwargs)\n\n response = await self._send_with_retry(\n request=api_request,\n retry_codes={\n status.HTTP_429_TOO_MANY_REQUESTS,\n status.HTTP_503_SERVICE_UNAVAILABLE,\n status.HTTP_502_BAD_GATEWAY,\n status.HTTP_408_REQUEST_TIMEOUT,\n *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n },\n retry_exceptions=(\n httpx.ReadTimeout,\n httpx.PoolTimeout,\n httpx.ConnectTimeout,\n # `ConnectionResetError` when reading socket raises as a `ReadError`\n httpx.ReadError,\n # Sockets can be closed during writes resulting in a `WriteError`\n httpx.WriteError,\n # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n httpx.RemoteProtocolError,\n # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n httpx.LocalProtocolError,\n ),\n )\n\n # Convert to a Prefect response to add nicer errors messages\n response = PrefectResponse.from_httpx_response(response)\n\n # Always raise bad responses\n # NOTE: We may want to remove this and handle responses per route in the\n # `PrefectClient`\n response.raise_for_status()\n\n return response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient.send","title":"send
async
","text":"Send a request with automatic retry behavior for the following status codes:
prefect/client/base.py
async def send(self, *args, **kwargs) -> Response:\n \"\"\"\n Send a request with automatic retry behavior for the following status codes:\n\n - 429 CloudFlare-style rate limiting\n - 502 Bad Gateway\n - 503 Service unavailable\n \"\"\"\n\n api_request = partial(super().send, *args, **kwargs)\n\n response = await self._send_with_retry(\n request=api_request,\n retry_codes={\n status.HTTP_429_TOO_MANY_REQUESTS,\n status.HTTP_503_SERVICE_UNAVAILABLE,\n status.HTTP_502_BAD_GATEWAY,\n status.HTTP_408_REQUEST_TIMEOUT,\n *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n },\n retry_exceptions=(\n httpx.ReadTimeout,\n httpx.PoolTimeout,\n httpx.ConnectTimeout,\n # `ConnectionResetError` when reading socket raises as a `ReadError`\n httpx.ReadError,\n # Sockets can be closed during writes resulting in a `WriteError`\n httpx.WriteError,\n # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n httpx.RemoteProtocolError,\n # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n httpx.LocalProtocolError,\n ),\n )\n\n # Convert to a Prefect response to add nicer errors messages\n response = PrefectResponse.from_httpx_response(response)\n\n # Always raise bad responses\n # NOTE: We may want to remove this and handle responses per route in the\n # `PrefectClient`\n response.raise_for_status()\n\n return response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context
async
","text":"A context manager that calls startup/shutdown hooks for the given application.
Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.
This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.
A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.
In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context was responsible for closing the lifespan, it would have to wait until all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done.
Source code in prefect/client/base.py
@asynccontextmanager\nasync def app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None]:\n \"\"\"\n A context manager that calls startup/shutdown hooks for the given application.\n\n Lifespan contexts are cached per application to avoid calling the lifespan hooks\n more than once if the context is entered in nested code. A no-op context will be\n returned if the context for the given application is already being managed.\n\n This manager is robust to concurrent access within the event loop. For example,\n if you have concurrent contexts for the same application, it is guaranteed that\n startup hooks will be called before their context starts and shutdown hooks will\n only be called after their context exits.\n\n A reference count is used to support nested use of clients without running\n lifespan hooks excessively. The first client context entered will create and enter\n a lifespan context. Each subsequent client will increment a reference count but will\n not create a new lifespan context. When each client context exits, the reference\n count is decremented. When the last client context exits, the lifespan will be\n closed.\n\n In simple nested cases, the first client context will be the one to exit the\n lifespan. However, if client contexts are entered concurrently they may not exit\n in a consistent order. If the first client context was responsible for closing\n the lifespan, it would have to wait until all other client contexts to exit to\n avoid firing shutdown hooks while the application is in use. Waiting for the other\n clients to exit can introduce deadlocks, so, instead, the first client will exit\n without closing the lifespan context and reference counts will be used to ensure\n the lifespan is closed once all of the clients are done.\n \"\"\"\n # TODO: A deadlock has been observed during multithreaded use of clients while this\n # lifespan context is being used. This has only been reproduced on Python 3.7\n # and while we hope to discourage using multiple event loops in threads, this\n # bug may emerge again.\n # See https://github.com/PrefectHQ/orion/pull/1696\n thread_id = threading.get_ident()\n\n # The id of the application is used instead of the hash so each application instance\n # is managed independently even if they share the same settings. 
We include the\n # thread id since applications are managed separately per thread.\n key = (thread_id, id(app))\n\n # On exception, this will be populated with exception details\n exc_info = (None, None, None)\n\n # Get a lock unique to this thread since anyio locks are not threadsafe\n lock = APP_LIFESPANS_LOCKS[thread_id]\n\n async with lock:\n if key in APP_LIFESPANS:\n # The lifespan is already being managed, just increment the reference count\n APP_LIFESPANS_REF_COUNTS[key] += 1\n else:\n # Create a new lifespan manager\n APP_LIFESPANS[key] = context = LifespanManager(\n app, startup_timeout=30, shutdown_timeout=30\n )\n APP_LIFESPANS_REF_COUNTS[key] = 1\n\n # Ensure we enter the context before releasing the lock so startup hooks\n # are complete before another client can be used\n await context.__aenter__()\n\n try:\n yield\n except BaseException:\n exc_info = sys.exc_info()\n raise\n finally:\n # If we do not shield against anyio cancellation, the lock will return\n # immediately and the code in its context will not run, leaving the lifespan\n # open\n with anyio.CancelScope(shield=True):\n async with lock:\n # After the consumer exits the context, decrement the reference count\n APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n # If this the last context to exit, close the lifespan\n if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n APP_LIFESPANS_REF_COUNTS.pop(key)\n context = APP_LIFESPANS.pop(key)\n await context.__aexit__(*exc_info)\n
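A sketch of the reference-counting behavior described above (app stands for any ASGI application; startup and shutdown hooks each fire exactly once despite the nesting):
from prefect.client.base import app_lifespan_context\n\nasync def use_nested(app):\n    async with app_lifespan_context(app):      # startup hooks run, ref count -> 1\n        async with app_lifespan_context(app):  # no-op context, ref count -> 2\n            ...                                # app is in use\n        # ref count -> 1, lifespan stays open\n    # ref count -> 0, shutdown hooks run\n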
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud
","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError
","text":" Bases: PrefectException
Raised when the CloudClient receives a 401 or 403 from the Cloud API.
Source code in prefect/client/cloud.py
class CloudUnauthorizedError(PrefectException):\n \"\"\"\n Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n \"\"\"\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient
","text":"Source code in prefect/client/cloud.py
class CloudClient:\n def __init__(\n self,\n host: str,\n api_key: str,\n httpx_settings: dict = None,\n ) -> None:\n httpx_settings = httpx_settings or dict()\n httpx_settings.setdefault(\"headers\", dict())\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n httpx_settings.setdefault(\"base_url\", host)\n if not PREFECT_UNIT_TEST_MODE.value():\n httpx_settings.setdefault(\"follow_redirects\", True)\n self._client = PrefectHttpxClient(**httpx_settings)\n\n async def api_healthcheck(self):\n \"\"\"\n Attempts to connect to the Cloud API and raises the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n with anyio.fail_after(10):\n await self.read_workspaces()\n\n async def read_workspaces(self) -> List[Workspace]:\n return pydantic.parse_obj_as(List[Workspace], await self.get(\"/me/workspaces\"))\n\n async def read_worker_metadata(self) -> Dict[str, Any]:\n configured_url = prefect.settings.PREFECT_API_URL.value()\n account_id, workspace_id = re.findall(PARSE_API_URL_REGEX, configured_url)[0]\n return await self.get(\n f\"accounts/{account_id}/workspaces/{workspace_id}/collections/work_pool_types\"\n )\n\n async def __aenter__(self):\n await self._client.__aenter__()\n return self\n\n async def __aexit__(self, *exc_info):\n return await self._client.__aexit__(*exc_info)\n\n def __enter__(self):\n raise RuntimeError(\n \"The `CloudClient` must be entered with an async context. Use 'async \"\n \"with CloudClient(...)' not 'with CloudClient(...)'\"\n )\n\n def __exit__(self, *_):\n assert False, \"This should never be called but must be defined for __enter__\"\n\n async def get(self, route, **kwargs):\n return await self.request(\"GET\", route, **kwargs)\n\n async def request(self, method, route, **kwargs):\n try:\n res = await self._client.request(method, route, **kwargs)\n res.raise_for_status()\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code in (\n status.HTTP_401_UNAUTHORIZED,\n status.HTTP_403_FORBIDDEN,\n ):\n raise CloudUnauthorizedError\n else:\n raise exc\n\n if res.status_code == status.HTTP_204_NO_CONTENT:\n return\n\n return res.json()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck
async
","text":"Attempts to connect to the Cloud API and raises the encountered exception if not successful.
If successful, returns None.
Source code in prefect/client/cloud.py
async def api_healthcheck(self):\n \"\"\"\n Attempts to connect to the Cloud API and raises the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n with anyio.fail_after(10):\n await self.read_workspaces()\n
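A usage sketch (assumes a valid PREFECT_API_KEY is configured; see get_cloud_client below):
import asyncio\nfrom prefect.client.cloud import get_cloud_client\n\nasync def main():\n    async with get_cloud_client() as client:\n        # returns None on success; raises, e.g. CloudUnauthorizedError, on failure\n        await client.api_healthcheck()\n\nasyncio.run(main())\n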
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client
","text":"Needs a docstring.
Source code in prefect/client/cloud.py
def get_cloud_client(\n    host: Optional[str] = None,\n    api_key: Optional[str] = None,\n    httpx_settings: Optional[dict] = None,\n    infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n    \"\"\"\n    Create a `CloudClient` for the given host and API key, falling back to the\n    `PREFECT_CLOUD_API_URL` and `PREFECT_API_KEY` settings. If `infer_cloud_url`\n    is True, the Cloud host is derived from the configured `PREFECT_API_URL`.\n    \"\"\"\n    if httpx_settings is not None:\n        httpx_settings = httpx_settings.copy()\n\n    if infer_cloud_url is False:\n        host = host or PREFECT_CLOUD_API_URL.value()\n    else:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        host = re.sub(PARSE_API_URL_REGEX, \"\", configured_url)\n\n    return CloudClient(\n        host=host,\n        api_key=api_key or PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
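For example, a sketch that reuses the workspace already configured in PREFECT_API_URL rather than passing a host explicitly:
from prefect.client.cloud import get_cloud_client\n\nasync def list_workspaces():\n    async with get_cloud_client(infer_cloud_url=True) as client:\n        return await client.read_workspaces()\n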
","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"orchestration","text":"Asynchronous client implementation for communicating with the Prefect REST API.
Explore the client by communicating with an in-memory webserver \u2014 no setup required:
$ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n... response = await client.hello()\n... print(response.json())\n\ud83d\udc4b\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient
","text":"An asynchronous client for interacting with the Prefect REST API.
Parameters:
Name | Type | Description | Default
api | Union[str, ASGIApp] | the REST API URL or FastAPI application to connect to | required
api_key | str | An optional API key for authentication. | None
api_version | str | The API version this client is compatible with. | None
httpx_settings | dict | An optional dictionary of settings to pass to the underlying httpx.AsyncClient | None
Say hello to a Prefect REST API\n\n```\n>>> async with get_client() as client:\n>>> response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n```\n
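A sketch of constructing the client directly with the parameters above; the server URL and timeout value are illustrative assumptions, and the client should be entered asynchronously before use:
from prefect.client.orchestration import PrefectClient\n\nclient = PrefectClient(\n    api=\"http://127.0.0.1:4200/api\",  # assumed self-hosted server URL\n    httpx_settings={\"timeout\": 30},  # forwarded to the underlying httpx.AsyncClient\n)\n# e.g. `async with client: ...` before making requests\n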
Source code in prefect/client/orchestration.py
class PrefectClient:\n \"\"\"\n An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n Args:\n api: the REST API URL or FastAPI application to connect to\n api_key: An optional API key for authentication.\n api_version: The API version this client is compatible with.\n httpx_settings: An optional dictionary of settings to pass to the underlying\n `httpx.AsyncClient`\n\n Examples:\n\n Say hello to a Prefect REST API\n\n <div class=\"terminal\">\n ```\n >>> async with get_client() as client:\n >>> response = await client.hello()\n >>>\n >>> print(response.json())\n \ud83d\udc4b\n ```\n </div>\n \"\"\"\n\n def __init__(\n self,\n api: Union[str, ASGIApp],\n *,\n api_key: str = None,\n api_version: str = None,\n httpx_settings: dict = None,\n ) -> None:\n httpx_settings = httpx_settings.copy() if httpx_settings else {}\n httpx_settings.setdefault(\"headers\", {})\n\n if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n httpx_settings.setdefault(\"verify\", False)\n\n if api_version is None:\n api_version = SERVER_API_VERSION\n httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n if api_key:\n httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n # Context management\n self._exit_stack = AsyncExitStack()\n self._ephemeral_app: Optional[ASGIApp] = None\n self.manage_lifespan = True\n self.server_type: ServerType\n\n # Only set if this client started the lifespan of the application\n self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n self._closed = False\n self._started = False\n\n # Connect to an external application\n if isinstance(api, str):\n if httpx_settings.get(\"app\"):\n raise ValueError(\n \"Invalid httpx settings: `app` cannot be set when providing an \"\n \"api url. `app` is only for use with ephemeral instances. Provide \"\n \"it as the `api` parameter instead.\"\n )\n httpx_settings.setdefault(\"base_url\", api)\n\n # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n httpx_settings.setdefault(\n \"limits\",\n httpx.Limits(\n # We see instability when allowing the client to open many connections at once.\n # Limiting concurrency results in more stable performance.\n max_connections=16,\n max_keepalive_connections=8,\n # The Prefect Cloud LB will keep connections alive for 30s.\n # Only allow the client to keep connections alive for 25s.\n keepalive_expiry=25,\n ),\n )\n\n # See https://www.python-httpx.org/http2/\n # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n # and responses will be transported over HTTP/2, since both the client and the server\n # need to support HTTP/2. If you connect to a server that only supports HTTP/1.1 the\n # client will use a standard HTTP/1.1 connection instead.\n httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n self.server_type = (\n ServerType.CLOUD\n if api.startswith(PREFECT_CLOUD_API_URL.value())\n else ServerType.SERVER\n )\n\n # Connect to an in-process application\n elif isinstance(api, ASGIApp):\n self._ephemeral_app = api\n self.server_type = ServerType.EPHEMERAL\n\n # When using an ephemeral server, server-side exceptions can be raised\n # client-side breaking all of our response error code handling. 
To work\n # around this, we create an ASGI transport with application exceptions\n # disabled instead of using the application directly.\n # refs:\n # - https://github.com/PrefectHQ/prefect/pull/9637\n # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n httpx_settings.setdefault(\n \"transport\",\n httpx.ASGITransport(\n app=self._ephemeral_app, raise_app_exceptions=False\n ),\n )\n httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n else:\n raise TypeError(\n f\"Unexpected type {type(api).__name__!r} for argument `api`. Expected\"\n \" 'str' or 'ASGIApp/FastAPI'\"\n )\n\n # See https://www.python-httpx.org/advanced/#timeout-configuration\n httpx_settings.setdefault(\n \"timeout\",\n httpx.Timeout(\n connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n read=PREFECT_API_REQUEST_TIMEOUT.value(),\n write=PREFECT_API_REQUEST_TIMEOUT.value(),\n pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n ),\n )\n\n if not PREFECT_UNIT_TEST_MODE:\n httpx_settings.setdefault(\"follow_redirects\", True)\n self._client = PrefectHttpxClient(**httpx_settings)\n self._loop = None\n\n # See https://www.python-httpx.org/advanced/#custom-transports\n #\n # If we're using an HTTP/S client (not the ephemeral client), adjust the\n # transport to add retries _after_ it is instantiated. If we alter the transport\n # before instantiation, the transport will not be aware of proxies unless we\n # reproduce all of the logic to make it so.\n #\n # Only alter the transport to set our default of 3 retries, don't modify any\n # transport a user may have provided via httpx_settings.\n #\n # Making liberal use of getattr and isinstance checks here to avoid any\n # surprises if the internals of httpx or httpcore change on us\n if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n if callable(transport_for_url):\n server_transport = transport_for_url(httpx.URL(api))\n if isinstance(server_transport, httpx.AsyncHTTPTransport):\n pool = getattr(server_transport, \"_pool\", None)\n if isinstance(pool, httpcore.AsyncConnectionPool):\n pool._retries = 3\n\n self.logger = get_logger(\"client\")\n\n @property\n def api_url(self) -> httpx.URL:\n \"\"\"\n Get the base URL for the API.\n \"\"\"\n return self._client.base_url\n\n # API methods ----------------------------------------------------------------------\n\n async def api_healthcheck(self) -> Optional[Exception]:\n \"\"\"\n Attempts to connect to the API and returns the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n try:\n await self._client.get(\"/health\")\n return None\n except Exception as exc:\n return exc\n\n async def hello(self) -> httpx.Response:\n \"\"\"\n Send a GET request to /hello for testing purposes.\n \"\"\"\n return await self._client.get(\"/hello\")\n\n async def create_flow(self, flow: \"FlowObject\") -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow: a [Flow][prefect.flows.Flow] object\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n return await self.create_flow_from_name(flow.name)\n\n async def create_flow_from_name(self, flow_name: str) -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow_name: the name 
of the new flow\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n flow_data = FlowCreate(name=flow_name)\n response = await self._client.post(\n \"/flows/\", json=flow_data.dict(json_compatible=True)\n )\n\n flow_id = response.json().get(\"id\")\n if not flow_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n # Return the id of the created flow\n return UUID(flow_id)\n\n async def read_flow(self, flow_id: UUID) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by id.\n\n Args:\n flow_id: the flow ID of interest\n\n Returns:\n a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n \"\"\"\n response = await self._client.get(f\"/flows/{flow_id}\")\n return Flow.parse_obj(response.json())\n\n async def read_flows(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[Flow]:\n \"\"\"\n Query the Prefect API for flows. Only flows matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flows\n limit: limit for the flow query\n offset: offset for the flow query\n\n Returns:\n a list of Flow model representations of the flows\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flows/filter\", json=body)\n return pydantic.parse_obj_as(List[Flow], response.json())\n\n async def read_flow_by_name(\n self,\n flow_name: str,\n ) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by name.\n\n Args:\n flow_name: the name of a flow\n\n Returns:\n a fully hydrated Flow model\n \"\"\"\n response = await self._client.get(f\"/flows/name/{flow_name}\")\n return Flow.parse_obj(response.json())\n\n async def create_flow_run_from_deployment(\n self,\n deployment_id: UUID,\n *,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n state: prefect.states.State = None,\n name: str = None,\n tags: Iterable[str] = None,\n idempotency_key: str = None,\n parent_task_run_id: UUID = None,\n work_queue_name: str = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment.\n\n Args:\n deployment_id: The deployment ID to create the flow run from\n parameters: Parameter overrides for this flow run. 
Merged with the\n deployment defaults\n context: Optional run context data\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n name: An optional name for the flow run. If not provided, the server will\n generate a name.\n tags: An optional iterable of tags to apply to the flow run; these tags\n are merged with the deployment's tags.\n idempotency_key: Optional idempotency key for creation of the flow run.\n If the key matches the key of an existing flow run, the existing run will\n be returned instead of creating a new one.\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n work_queue_name: An optional work queue name to add this run to. If not provided,\n will default to the deployment's set work queue. If one is provided that does not\n exist, a new work queue will be created within the deployment's work pool.\n job_variables: Optional variables that will be supplied to the flow run job.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n parameters = parameters or {}\n context = context or {}\n state = state or prefect.states.Scheduled()\n tags = tags or []\n\n flow_run_create = DeploymentFlowRunCreate(\n parameters=parameters,\n context=context,\n state=state.to_state_create(),\n tags=tags,\n name=name,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n job_variables=job_variables,\n )\n\n # done separately to avoid including this field in payloads sent to older API versions\n if work_queue_name:\n flow_run_create.work_queue_name = work_queue_name\n\n response = await self._client.post(\n f\"/deployments/{deployment_id}/create_flow_run\",\n json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n )\n return FlowRun.parse_obj(response.json())\n\n async def create_flow_run(\n self,\n flow: \"FlowObject\",\n name: str = None,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n tags: Iterable[str] = None,\n parent_task_run_id: UUID = None,\n state: \"prefect.states.State\" = None,\n ) -> FlowRun:\n \"\"\"\n Create a flow run for a flow.\n\n Args:\n flow: The flow model to create the flow run for\n name: An optional name for the flow run\n parameters: Parameter overrides for this flow run.\n context: Optional run context data\n tags: a list of tags to apply to this flow run\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. 
Should always be a `Scheduled` type.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n parameters = parameters or {}\n context = context or {}\n\n if state is None:\n state = prefect.states.Pending()\n\n # Retrieve the flow id\n flow_id = await self.create_flow(flow)\n\n flow_run_create = FlowRunCreate(\n flow_id=flow_id,\n flow_version=flow.version,\n name=name,\n parameters=parameters,\n context=context,\n tags=list(tags or []),\n parent_task_run_id=parent_task_run_id,\n state=state.to_state_create(),\n empirical_policy=FlowRunPolicy(\n retries=flow.retries,\n retry_delay=flow.retry_delay_seconds,\n ),\n )\n\n flow_run_create_json = flow_run_create.dict(json_compatible=True)\n response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n flow_run = FlowRun.parse_obj(response.json())\n\n # Restore the parameters to the local objects to retain expectations about\n # Python objects\n flow_run.parameters = parameters\n\n return flow_run\n\n async def update_flow_run(\n self,\n flow_run_id: UUID,\n flow_version: Optional[str] = None,\n parameters: Optional[dict] = None,\n name: Optional[str] = None,\n tags: Optional[Iterable[str]] = None,\n empirical_policy: Optional[FlowRunPolicy] = None,\n infrastructure_pid: Optional[str] = None,\n job_variables: Optional[dict] = None,\n ) -> httpx.Response:\n \"\"\"\n Update a flow run's details.\n\n Args:\n flow_run_id: The identifier for the flow run to update.\n flow_version: A new version string for the flow run.\n parameters: A dictionary of parameter values for the flow run. This will not\n be merged with any existing parameters.\n name: A new name for the flow run.\n empirical_policy: A new flow run orchestration policy. This will not be\n merged with any existing policy.\n tags: An iterable of new tags for the flow run. These will not be merged with\n any existing tags.\n infrastructure_pid: The id of flow run as returned by an\n infrastructure block.\n\n Returns:\n an `httpx.Response` object from the PATCH request\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. 
\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n params = {}\n if flow_version is not None:\n params[\"flow_version\"] = flow_version\n if parameters is not None:\n params[\"parameters\"] = parameters\n if name is not None:\n params[\"name\"] = name\n if tags is not None:\n params[\"tags\"] = tags\n if empirical_policy is not None:\n params[\"empirical_policy\"] = empirical_policy\n if infrastructure_pid:\n params[\"infrastructure_pid\"] = infrastructure_pid\n if job_variables is not None:\n params[\"job_variables\"] = job_variables\n\n flow_run_data = FlowRunUpdate(**params)\n\n return await self._client.patch(\n f\"/flow_runs/{flow_run_id}\",\n json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n\n async def delete_flow_run(\n self,\n flow_run_id: UUID,\n ) -> None:\n \"\"\"\n Delete a flow run by UUID.\n\n Args:\n flow_run_id: The flow run UUID of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_concurrency_limit(\n self,\n tag: str,\n concurrency_limit: int,\n ) -> UUID:\n \"\"\"\n Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n running tasks.\n\n Args:\n tag: a tag the concurrency limit is applied to\n concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n Raises:\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the ID of the concurrency limit in the backend\n \"\"\"\n\n concurrency_limit_create = ConcurrencyLimitCreate(\n tag=tag,\n concurrency_limit=concurrency_limit,\n )\n response = await self._client.post(\n \"/concurrency_limits/\",\n json=concurrency_limit_create.dict(json_compatible=True),\n )\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(concurrency_limit_id)\n\n async def read_concurrency_limit_by_tag(\n self,\n tag: str,\n ):\n \"\"\"\n Read the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the concurrency limit set on a specific tag\n \"\"\"\n try:\n response = await self._client.get(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n return concurrency_limit\n\n async def read_concurrency_limits(\n self,\n limit: int,\n offset: int,\n ):\n \"\"\"\n Lists concurrency limits set on task run tags.\n\n Args:\n limit: the maximum number of concurrency limits returned\n offset: the concurrency limit query offset\n\n Returns:\n a list of concurrency limits\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/concurrency_limits/filter\", 
json=body)\n return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n async def reset_concurrency_limit_by_tag(\n self,\n tag: str,\n slot_override: Optional[List[Union[UUID, str]]] = None,\n ):\n \"\"\"\n Resets the concurrency limit slots set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n slot_override: a list of task run IDs that are currently using a\n concurrency slot, please check that any task run IDs included in\n `slot_override` are currently running, otherwise those concurrency\n slots will never be released.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n if slot_override is not None:\n slot_override = [str(slot) for slot in slot_override]\n\n try:\n await self._client.post(\n f\"/concurrency_limits/tag/{tag}/reset\",\n json=dict(slot_override=slot_override),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_concurrency_limit_by_tag(\n self,\n tag: str,\n ):\n \"\"\"\n Delete the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n try:\n await self._client.delete(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_work_queue(\n self,\n name: str,\n tags: Optional[List[str]] = None,\n description: Optional[str] = None,\n is_paused: Optional[bool] = None,\n concurrency_limit: Optional[int] = None,\n priority: Optional[int] = None,\n work_pool_name: Optional[str] = None,\n ) -> WorkQueue:\n \"\"\"\n Create a work queue.\n\n Args:\n name: a unique name for the work queue\n tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n will be included in the queue. This option will be removed on 2023-02-23.\n description: An optional description for the work queue.\n is_paused: Whether or not the work queue is paused.\n concurrency_limit: An optional concurrency limit for the work queue.\n priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n work_pool_name: The name of the work pool to use for this queue.\n\n Raises:\n prefect.exceptions.ObjectAlreadyExists: If request returns 409\n httpx.RequestError: If request fails\n\n Returns:\n The created work queue\n \"\"\"\n if tags:\n warnings.warn(\n (\n \"The use of tags for creating work queue filters is deprecated.\"\n \" This option will be removed on 2023-02-23.\"\n ),\n DeprecationWarning,\n )\n filter = QueueFilter(tags=tags)\n else:\n filter = None\n create_model = WorkQueueCreate(name=name, filter=filter)\n if description is not None:\n create_model.description = description\n if is_paused is not None:\n create_model.is_paused = is_paused\n if concurrency_limit is not None:\n create_model.concurrency_limit = concurrency_limit\n if priority is not None:\n create_model.priority = priority\n\n data = create_model.dict(json_compatible=True)\n try:\n if work_pool_name is not None:\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/queues\", json=data\n )\n else:\n response = await self._client.post(\"/work_queues/\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n\n async def read_work_queue_by_name(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n ) -> WorkQueue:\n \"\"\"\n Read a work queue by name.\n\n Args:\n name (str): a unique name for the work queue\n work_pool_name (str, optional): the name of the work pool\n the queue belongs to.\n\n Raises:\n prefect.exceptions.ObjectNotFound: if no work queue is found\n httpx.HTTPStatusError: other status errors\n\n Returns:\n WorkQueue: a work queue API object\n \"\"\"\n try:\n if work_pool_name is not None:\n response = await self._client.get(\n f\"/work_pools/{work_pool_name}/queues/{name}\"\n )\n else:\n response = await self._client.get(f\"/work_queues/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return WorkQueue.parse_obj(response.json())\n\n async def update_work_queue(self, id: UUID, **kwargs):\n \"\"\"\n Update properties of a work queue.\n\n Args:\n id: the ID of the work queue to update\n **kwargs: the fields to update\n\n Raises:\n ValueError: if no kwargs are provided\n prefect.exceptions.ObjectNotFound: if request returns 404\n httpx.RequestError: if the request fails\n\n \"\"\"\n if not kwargs:\n raise ValueError(\"No fields provided to update.\")\n\n data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n try:\n await self._client.patch(f\"/work_queues/{id}\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def get_runs_in_work_queue(\n self,\n id: UUID,\n limit: int = 10,\n scheduled_before: datetime.datetime = None,\n ) -> List[FlowRun]:\n \"\"\"\n Read flow runs off a work queue.\n\n Args:\n id: the id of the work queue to read from\n limit: a limit on the number of runs to return\n scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n Defaults to now.\n\n Raises:\n 
prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n List[FlowRun]: a list of FlowRun objects read from the queue\n \"\"\"\n if scheduled_before is None:\n scheduled_before = pendulum.now(\"UTC\")\n\n try:\n response = await self._client.post(\n f\"/work_queues/{id}/get_runs\",\n json={\n \"limit\": limit,\n \"scheduled_before\": scheduled_before.isoformat(),\n },\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n async def read_work_queue(\n self,\n id: UUID,\n ) -> WorkQueue:\n \"\"\"\n Read a work queue.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueue: an instantiated WorkQueue object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n\n async def read_work_queue_status(\n self,\n id: UUID,\n ) -> WorkQueueStatusDetail:\n \"\"\"\n Read a work queue status.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueueStatus: an instantiated WorkQueueStatus object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}/status\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueueStatusDetail.parse_obj(response.json())\n\n async def match_work_queues(\n self,\n prefixes: List[str],\n work_pool_name: Optional[str] = None,\n ) -> List[WorkQueue]:\n \"\"\"\n Query the Prefect API for work queues with names with a specific prefix.\n\n Args:\n prefixes: a list of strings used to match work queue name prefixes\n work_pool_name: an optional work pool name to scope the query to\n\n Returns:\n a list of WorkQueue model representations\n of the work queues\n \"\"\"\n page_length = 100\n current_page = 0\n work_queues = []\n\n while True:\n new_queues = await self.read_work_queues(\n work_pool_name=work_pool_name,\n offset=current_page * page_length,\n limit=page_length,\n work_queue_filter=WorkQueueFilter(\n name=WorkQueueFilterName(startswith_=prefixes)\n ),\n )\n if not new_queues:\n break\n work_queues += new_queues\n current_page += 1\n\n return work_queues\n\n async def delete_work_queue_by_id(\n self,\n id: UUID,\n ):\n \"\"\"\n Delete a work queue by its ID.\n\n Args:\n id: the id of the work queue to delete\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(\n f\"/work_queues/{id}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n \"\"\"\n Create a block type in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_types/\",\n json=block_type.dict(\n 
json_compatible=True, exclude_unset=True, exclude={\"id\"}\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n\n async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n \"\"\"\n Create a block schema in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/\",\n json=block_schema.dict(\n json_compatible=True,\n exclude_unset=True,\n exclude={\"id\", \"block_type\", \"checksum\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n\n async def create_block_document(\n self,\n block_document: Union[BlockDocument, BlockDocumentCreate],\n include_secrets: bool = True,\n ) -> BlockDocument:\n \"\"\"\n Create a block document in the Prefect API. This data is used to configure a\n corresponding Block.\n\n Args:\n include_secrets (bool): whether to include secret values\n on the stored Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. Note Blocks may not work as expected if\n this is set to `False`.\n \"\"\"\n if isinstance(block_document, BlockDocument):\n block_document = BlockDocumentCreate.parse_obj(\n block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n\n try:\n response = await self._client.post(\n \"/block_documents/\",\n json=block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def update_block_document(\n self,\n block_document_id: UUID,\n block_document: BlockDocumentUpdate,\n ):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_documents/{block_document_id}\",\n json=block_document.dict(\n json_compatible=True,\n exclude_unset=True,\n include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_block_document(self, block_document_id: UUID):\n \"\"\"\n Delete a block document.\n \"\"\"\n try:\n await self._client.delete(f\"/block_documents/{block_document_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_block_type_by_slug(self, slug: str) -> BlockType:\n \"\"\"\n Read a block type by its slug.\n \"\"\"\n try:\n response = await self._client.get(f\"/block_types/slug/{slug}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n\n async def read_block_schema_by_checksum(\n self, checksum: str, version: 
Optional[str] = None\n ) -> BlockSchema:\n \"\"\"\n Look up a block schema checksum\n \"\"\"\n try:\n url = f\"/block_schemas/checksum/{checksum}\"\n if version is not None:\n url = f\"{url}?version={version}\"\n response = await self._client.get(url)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n\n async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_types/{block_type_id}\",\n json=block_type.dict(\n json_compatible=True,\n exclude_unset=True,\n include=BlockTypeUpdate.updatable_fields(),\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_block_type(self, block_type_id: UUID):\n \"\"\"\n Delete a block type.\n \"\"\"\n try:\n await self._client.delete(f\"/block_types/{block_type_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n elif (\n e.response.status_code == status.HTTP_403_FORBIDDEN\n and e.response.json()[\"detail\"]\n == \"protected block types cannot be deleted.\"\n ):\n raise prefect.exceptions.ProtectedBlockError(\n \"Protected block types cannot be deleted.\"\n ) from e\n else:\n raise\n\n async def read_block_types(self) -> List[BlockType]:\n \"\"\"\n Read all block types\n Raises:\n httpx.RequestError: if the block types were not found\n\n Returns:\n List of BlockTypes.\n \"\"\"\n response = await self._client.post(\"/block_types/filter\", json={})\n return pydantic.parse_obj_as(List[BlockType], response.json())\n\n async def read_block_schemas(self) -> List[BlockSchema]:\n \"\"\"\n Read all block schemas\n Raises:\n httpx.RequestError: if a valid block schema was not found\n\n Returns:\n A BlockSchema.\n \"\"\"\n response = await self._client.post(\"/block_schemas/filter\", json={})\n return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n async def get_most_recent_block_schema_for_block_type(\n self,\n block_type_id: UUID,\n ) -> Optional[BlockSchema]:\n \"\"\"\n Fetches the most recent block schema for a specified block type ID.\n\n Args:\n block_type_id: The ID of the block type.\n\n Raises:\n httpx.RequestError: If the request fails for any reason.\n\n Returns:\n The most recent block schema or None.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/filter\",\n json={\n \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n \"limit\": 1,\n },\n )\n except httpx.HTTPStatusError:\n raise\n return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n\n async def read_block_document(\n self,\n block_document_id: UUID,\n include_secrets: bool = True,\n ):\n \"\"\"\n Read the block document with the specified ID.\n\n Args:\n block_document_id: the block document id\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. 
Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n assert (\n block_document_id is not None\n ), \"Unexpected ID on block document. Was it persisted?\"\n try:\n response = await self._client.get(\n f\"/block_documents/{block_document_id}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def read_block_document_by_name(\n self,\n name: str,\n block_type_slug: str,\n include_secrets: bool = True,\n ) -> BlockDocument:\n \"\"\"\n Read the block document with the specified name that corresponds to a\n specific block type name.\n\n Args:\n name: The block document name.\n block_type_slug: The block type slug.\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n try:\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n\n async def read_block_documents(\n self,\n block_schema_type: Optional[str] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n ):\n \"\"\"\n Read block documents\n\n Args:\n block_schema_type: an optional block schema type\n offset: an offset\n limit: the number of blocks to return\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. 
Note that any business logic on the\n Block may not work if this is `False`.\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.post(\n \"/block_documents/filter\",\n json=dict(\n block_schema_type=block_schema_type,\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n async def read_block_documents_by_type(\n self,\n block_type_slug: str,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n ) -> List[BlockDocument]:\n \"\"\"Retrieve block documents by block type slug.\n\n Args:\n block_type_slug: The block type slug.\n offset: an offset\n limit: the number of blocks to return\n include_secrets: whether to include secret values\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents\",\n params=dict(\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n async def create_deployment(\n self,\n flow_id: UUID,\n name: str,\n version: str = None,\n schedule: SCHEDULE_TYPES = None,\n schedules: List[DeploymentScheduleCreate] = None,\n parameters: Dict[str, Any] = None,\n description: str = None,\n work_queue_name: str = None,\n work_pool_name: str = None,\n tags: List[str] = None,\n storage_document_id: UUID = None,\n manifest_path: str = None,\n path: str = None,\n entrypoint: str = None,\n infrastructure_document_id: UUID = None,\n infra_overrides: Dict[str, Any] = None,\n parameter_openapi_schema: dict = None,\n is_schedule_active: Optional[bool] = None,\n paused: Optional[bool] = None,\n pull_steps: Optional[List[dict]] = None,\n enforce_parameter_schema: Optional[bool] = None,\n ) -> UUID:\n \"\"\"\n Create a deployment.\n\n Args:\n flow_id: the flow ID to create a deployment for\n name: the name of the deployment\n version: an optional version string for the deployment\n schedule: an optional schedule to apply to the deployment\n tags: an optional list of tags to apply to the deployment\n storage_document_id: an reference to the storage block document\n used for the deployed flow\n infrastructure_document_id: an reference to the infrastructure block document\n to use for this deployment\n\n Raises:\n httpx.RequestError: if the deployment was not created for any reason\n\n Returns:\n the ID of the deployment in the backend\n \"\"\"\n\n deployment_create = DeploymentCreate(\n flow_id=flow_id,\n name=name,\n version=version,\n parameters=dict(parameters or {}),\n tags=list(tags or []),\n work_queue_name=work_queue_name,\n description=description,\n storage_document_id=storage_document_id,\n path=path,\n entrypoint=entrypoint,\n manifest_path=manifest_path, # for backwards compat\n infrastructure_document_id=infrastructure_document_id,\n infra_overrides=infra_overrides or {},\n parameter_openapi_schema=parameter_openapi_schema,\n is_schedule_active=is_schedule_active,\n paused=paused,\n schedule=schedule,\n schedules=schedules or [],\n pull_steps=pull_steps,\n enforce_parameter_schema=enforce_parameter_schema,\n )\n\n if work_pool_name is not None:\n deployment_create.work_pool_name = work_pool_name\n\n # Exclude newer fields that are not set to avoid compatibility issues\n exclude = {\n field\n for field in [\"work_pool_name\", \"work_queue_name\"]\n if field not in deployment_create.__fields_set__\n }\n\n if 
deployment_create.is_schedule_active is None:\n exclude.add(\"is_schedule_active\")\n\n if deployment_create.paused is None:\n exclude.add(\"paused\")\n\n if deployment_create.pull_steps is None:\n exclude.add(\"pull_steps\")\n\n if deployment_create.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n json = deployment_create.dict(json_compatible=True, exclude=exclude)\n response = await self._client.post(\n \"/deployments/\",\n json=json,\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n\n async def update_schedule(self, deployment_id: UUID, active: bool = True):\n path = \"set_schedule_active\" if active else \"set_schedule_inactive\"\n await self._client.post(\n f\"/deployments/{deployment_id}/{path}\",\n )\n\n async def set_deployment_paused_state(self, deployment_id: UUID, paused: bool):\n await self._client.patch(\n f\"/deployments/{deployment_id}\", json={\"paused\": paused}\n )\n\n async def update_deployment(\n self,\n deployment: Deployment,\n schedule: SCHEDULE_TYPES = None,\n is_schedule_active: bool = None,\n ):\n deployment_update = DeploymentUpdate(\n version=deployment.version,\n schedule=schedule if schedule is not None else deployment.schedule,\n is_schedule_active=(\n is_schedule_active\n if is_schedule_active is not None\n else deployment.is_schedule_active\n ),\n description=deployment.description,\n work_queue_name=deployment.work_queue_name,\n tags=deployment.tags,\n manifest_path=deployment.manifest_path,\n path=deployment.path,\n entrypoint=deployment.entrypoint,\n parameters=deployment.parameters,\n storage_document_id=deployment.storage_document_id,\n infrastructure_document_id=deployment.infrastructure_document_id,\n infra_overrides=deployment.infra_overrides,\n enforce_parameter_schema=deployment.enforce_parameter_schema,\n )\n\n if getattr(deployment, \"work_pool_name\", None) is not None:\n deployment_update.work_pool_name = deployment.work_pool_name\n\n exclude = set()\n if deployment.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n await self._client.patch(\n f\"/deployments/{deployment.id}\",\n json=deployment_update.dict(json_compatible=True, exclude=exclude),\n )\n\n async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n \"\"\"\n Create a deployment from a prepared `DeploymentCreate` schema.\n \"\"\"\n # TODO: We are likely to remove this method once we have considered the\n # packaging interface for deployments further.\n response = await self._client.post(\n \"/deployments/\", json=schema.dict(json_compatible=True)\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n\n async def read_deployment(\n self,\n deployment_id: UUID,\n ) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by id.\n\n Args:\n deployment_id: the deployment ID of interest\n\n Returns:\n a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return DeploymentResponse.parse_obj(response.json())\n\n async def read_deployment_by_name(\n 
self,\n name: str,\n ) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by name.\n\n Args:\n name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n a Deployment model representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return DeploymentResponse.parse_obj(response.json())\n\n async def read_deployments(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n limit: int = None,\n sort: DeploymentSort = None,\n offset: int = 0,\n ) -> List[DeploymentResponse]:\n \"\"\"\n Query the Prefect API for deployments. Only deployments matching all\n the provided criteria will be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n limit: a limit for the deployment query\n offset: an offset for the deployment query\n\n Returns:\n a list of Deployment model representations\n of the deployments\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/deployments/filter\", json=body)\n return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n async def delete_deployment(\n self,\n deployment_id: UUID,\n ):\n \"\"\"\n Delete deployment by id.\n\n Args:\n deployment_id: The deployment id of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def create_deployment_schedules(\n self,\n deployment_id: UUID,\n schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n ) -> List[DeploymentSchedule]:\n \"\"\"\n Create deployment schedules.\n\n Args:\n deployment_id: the deployment ID\n schedules: a list of tuples containing the schedule to create\n and whether or not it should be active.\n\n Raises:\n httpx.RequestError: if the schedules were not created for any reason\n\n Returns:\n the list of schedules created in the backend\n \"\"\"\n deployment_schedule_create = 
[\n DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n for schedule in schedules\n ]\n\n json = [\n deployment_schedule_create.dict(json_compatible=True)\n for deployment_schedule_create in deployment_schedule_create\n ]\n response = await self._client.post(\n f\"/deployments/{deployment_id}/schedules\", json=json\n )\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n async def read_deployment_schedules(\n self,\n deployment_id: UUID,\n ) -> List[DeploymentSchedule]:\n \"\"\"\n Query the Prefect API for a deployment's schedules.\n\n Args:\n deployment_id: the deployment ID\n\n Returns:\n a list of DeploymentSchedule model representations of the deployment schedules\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n async def update_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n active: Optional[bool] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n ):\n \"\"\"\n Update a deployment schedule by ID.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the deployment schedule ID of interest\n active: whether or not the schedule should be active\n schedule: the cron, rrule, or interval schedule this deployment schedule should use\n \"\"\"\n kwargs = {}\n if active is not None:\n kwargs[\"active\"] = active\n elif schedule is not None:\n kwargs[\"schedule\"] = schedule\n\n deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n try:\n await self._client.patch(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n ) -> None:\n \"\"\"\n Delete a deployment schedule.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the ID of the deployment schedule to delete.\n\n Raises:\n httpx.RequestError: if the schedules were not deleted for any reason\n \"\"\"\n try:\n await self._client.delete(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n \"\"\"\n Query the Prefect API for a flow run by id.\n\n Args:\n flow_run_id: the flow run ID of interest\n\n Returns:\n a Flow Run model representation of the flow run\n \"\"\"\n try:\n response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return FlowRun.parse_obj(response.json())\n\n async def resume_flow_run(\n self, flow_run_id: UUID, run_input: Optional[Dict] = None\n ) -> OrchestrationResult:\n \"\"\"\n Resumes a paused flow run.\n\n Args:\n flow_run_id: the flow run ID of interest\n run_input: the input to resume the flow run with\n\n Returns:\n an OrchestrationResult model representation of state orchestration 
output\n \"\"\"\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n )\n except httpx.HTTPStatusError:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_flow_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowRunSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[FlowRun]:\n \"\"\"\n Query the Prefect API for flow runs. Only flow runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flow runs\n limit: limit for the flow run query\n offset: offset for the flow run query\n\n Returns:\n a list of Flow Run model representations\n of the flow runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flow_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n async def set_flow_run_state(\n self,\n flow_run_id: UUID,\n state: \"prefect.states.State\",\n force: bool = False,\n ) -> OrchestrationResult:\n \"\"\"\n Set the state of a flow run.\n\n Args:\n flow_run_id: the id of the flow run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.flow_run_id = flow_run_id\n state_create.state_details.transition_id = uuid4()\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_flow_run_states(\n self, flow_run_id: UUID\n ) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a flow run\n\n Args:\n flow_run_id: the id of the flow run\n\n Returns:\n a list of State model representations\n of the flow run states\n \"\"\"\n response = await self._client.get(\n \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], 
response.json())\n\n async def set_task_run_name(self, task_run_id: UUID, name: str):\n task_run_data = TaskRunUpdate(name=name)\n return await self._client.patch(\n f\"/task_runs/{task_run_id}\",\n json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n\n async def create_task_run(\n self,\n task: \"TaskObject\",\n flow_run_id: Optional[UUID],\n dynamic_key: str,\n name: str = None,\n extra_tags: Iterable[str] = None,\n state: prefect.states.State = None,\n task_inputs: Dict[\n str,\n List[\n Union[\n TaskRunResult,\n Parameter,\n Constant,\n ]\n ],\n ] = None,\n ) -> TaskRun:\n \"\"\"\n Create a task run\n\n Args:\n task: The Task to run\n flow_run_id: The flow run id with which to associate the task run\n dynamic_key: A key unique to this particular run of a Task within the flow\n name: An optional name for the task run\n extra_tags: an optional list of extra tags to apply to the task run in\n addition to `task.tags`\n state: The initial state for the run. If not provided, defaults to\n `Pending` for now. Should always be a `Scheduled` type.\n task_inputs: the set of inputs passed to the task\n\n Returns:\n The created task run.\n \"\"\"\n tags = set(task.tags).union(extra_tags or [])\n\n if state is None:\n state = prefect.states.Pending()\n\n task_run_data = TaskRunCreate(\n name=name,\n flow_run_id=flow_run_id,\n task_key=task.task_key,\n dynamic_key=dynamic_key,\n tags=list(tags),\n task_version=task.version,\n empirical_policy=TaskRunPolicy(\n retries=task.retries,\n retry_delay=task.retry_delay_seconds,\n retry_jitter_factor=task.retry_jitter_factor,\n ),\n state=state.to_state_create(),\n task_inputs=task_inputs or {},\n )\n\n response = await self._client.post(\n \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n )\n return TaskRun.parse_obj(response.json())\n\n async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n \"\"\"\n Query the Prefect API for a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n\n Returns:\n a Task Run model representation of the task run\n \"\"\"\n response = await self._client.get(f\"/task_runs/{task_run_id}\")\n return TaskRun.parse_obj(response.json())\n\n async def read_task_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n sort: TaskRunSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[TaskRun]:\n \"\"\"\n Query the Prefect API for task runs. 
Only task runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n sort: sort criteria for the task runs\n limit: a limit for the task run query\n offset: an offset for the task run query\n\n Returns:\n a list of Task Run model representations\n of the task runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/task_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n async def delete_task_run(self, task_run_id: UUID) -> None:\n \"\"\"\n Delete a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/task_runs/{task_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def set_task_run_state(\n self,\n task_run_id: UUID,\n state: prefect.states.State,\n force: bool = False,\n ) -> OrchestrationResult:\n \"\"\"\n Set the state of a task run.\n\n Args:\n task_run_id: the id of the task run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.task_run_id = task_run_id\n response = await self._client.post(\n f\"/task_runs/{task_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n return OrchestrationResult.parse_obj(response.json())\n\n async def read_task_run_states(\n self, task_run_id: UUID\n ) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a task run\n\n Args:\n task_run_id: the id of the task run\n\n Returns:\n a list of State model representations of the task run states\n \"\"\"\n response = await self._client.get(\n \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n \"\"\"\n Create logs for a flow or task run\n\n Args:\n logs: An iterable of `LogCreate` objects or already json-compatible dicts\n \"\"\"\n serialized_logs = [\n log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n for log in logs\n ]\n await self._client.post(\"/logs/\", json=serialized_logs)\n\n async def create_flow_run_notification_policy(\n self,\n block_document_id: UUID,\n is_active: bool = True,\n tags: List[str] = None,\n state_names: List[str] = None,\n message_template: Optional[str] = None,\n ) -> UUID:\n \"\"\"\n Create a notification policy for flow runs\n\n Args:\n block_document_id: The block document UUID\n 
is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n \"\"\"\n if tags is None:\n tags = []\n if state_names is None:\n state_names = []\n\n policy = FlowRunNotificationPolicyCreate(\n block_document_id=block_document_id,\n is_active=is_active,\n tags=tags,\n state_names=state_names,\n message_template=message_template,\n )\n response = await self._client.post(\n \"/flow_run_notification_policies/\",\n json=policy.dict(json_compatible=True),\n )\n\n policy_id = response.json().get(\"id\")\n if not policy_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(policy_id)\n\n async def delete_flow_run_notification_policy(\n self,\n id: UUID,\n ) -> None:\n \"\"\"\n Delete a flow run notification policy by id.\n\n Args:\n id: UUID of the flow run notification policy to delete.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def update_flow_run_notification_policy(\n self,\n id: UUID,\n block_document_id: Optional[UUID] = None,\n is_active: Optional[bool] = None,\n tags: Optional[List[str]] = None,\n state_names: Optional[List[str]] = None,\n message_template: Optional[str] = None,\n ) -> None:\n \"\"\"\n Update a notification policy for flow runs\n\n Args:\n id: UUID of the notification policy\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n params = {}\n if block_document_id is not None:\n params[\"block_document_id\"] = block_document_id\n if is_active is not None:\n params[\"is_active\"] = is_active\n if tags is not None:\n params[\"tags\"] = tags\n if state_names is not None:\n params[\"state_names\"] = state_names\n if message_template is not None:\n params[\"message_template\"] = message_template\n\n policy = FlowRunNotificationPolicyUpdate(**params)\n\n try:\n await self._client.patch(\n f\"/flow_run_notification_policies/{id}\",\n json=policy.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_flow_run_notification_policies(\n self,\n flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n limit: Optional[int] = None,\n offset: int = 0,\n ) -> List[FlowRunNotificationPolicy]:\n \"\"\"\n Query the Prefect API for flow run notification policies. 
Only policies matching all criteria will\n be returned.\n\n Args:\n flow_run_notification_policy_filter: filter criteria for notification policies\n limit: a limit for the notification policies query\n offset: an offset for the notification policies query\n\n Returns:\n a list of FlowRunNotificationPolicy model representations\n of the notification policies\n \"\"\"\n body = {\n \"flow_run_notification_policy_filter\": (\n flow_run_notification_policy_filter.dict(json_compatible=True)\n if flow_run_notification_policy_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\n \"/flow_run_notification_policies/filter\", json=body\n )\n return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n async def read_logs(\n self,\n log_filter: LogFilter = None,\n limit: int = None,\n offset: int = None,\n sort: LogSort = LogSort.TIMESTAMP_ASC,\n ) -> List[Log]:\n \"\"\"\n Read flow and task run logs.\n \"\"\"\n body = {\n \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/logs/filter\", json=body)\n return pydantic.parse_obj_as(List[Log], response.json())\n\n async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n \"\"\"\n Recursively decode possibly nested data documents.\n\n \"server\" encoded documents will be retrieved from the server.\n\n Args:\n datadoc: The data document to resolve\n\n Returns:\n a decoded object, the innermost data\n \"\"\"\n if not isinstance(datadoc, DataDocument):\n raise TypeError(\n f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n )\n\n async def resolve_inner(data):\n if isinstance(data, bytes):\n try:\n data = DataDocument.parse_raw(data)\n except pydantic.ValidationError:\n return data\n\n if isinstance(data, DataDocument):\n return await resolve_inner(data.decode())\n\n return data\n\n return await resolve_inner(datadoc)\n\n async def send_worker_heartbeat(\n self,\n work_pool_name: str,\n worker_name: str,\n heartbeat_interval_seconds: Optional[float] = None,\n ):\n \"\"\"\n Sends a worker heartbeat for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool to heartbeat against.\n worker_name: The name of the worker sending the heartbeat.\n \"\"\"\n await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n json={\n \"name\": worker_name,\n \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n },\n )\n\n async def read_workers_for_work_pool(\n self,\n work_pool_name: str,\n worker_filter: Optional[WorkerFilter] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n ) -> List[Worker]:\n \"\"\"\n Reads workers for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool for which to get\n member workers.\n worker_filter: Criteria by which to filter workers.\n limit: Limit for the worker query.\n offset: Limit for the worker query.\n \"\"\"\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/filter\",\n json={\n \"worker_filter\": (\n worker_filter.dict(json_compatible=True, exclude_unset=True)\n if worker_filter\n else None\n ),\n \"offset\": offset,\n \"limit\": limit,\n },\n )\n\n return pydantic.parse_obj_as(List[Worker], response.json())\n\n async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n \"\"\"\n Reads information for a given work pool\n\n Args:\n work_pool_name: The name of the work pool to for which 
to get\n information.\n\n Returns:\n Information about the requested work pool.\n \"\"\"\n try:\n response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n return pydantic.parse_obj_as(WorkPool, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_work_pools(\n self,\n limit: Optional[int] = None,\n offset: int = 0,\n work_pool_filter: Optional[WorkPoolFilter] = None,\n ) -> List[WorkPool]:\n \"\"\"\n Reads work pools.\n\n Args:\n limit: Limit for the work pool query.\n offset: Offset for the work pool query.\n work_pool_filter: Criteria by which to filter work pools.\n\n Returns:\n A list of work pools.\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n }\n response = await self._client.post(\"/work_pools/filter\", json=body)\n return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n async def create_work_pool(\n self,\n work_pool: WorkPoolCreate,\n ) -> WorkPool:\n \"\"\"\n Creates a work pool with the provided configuration.\n\n Args:\n work_pool: Desired configuration for the new work pool.\n\n Returns:\n Information about the newly created work pool.\n \"\"\"\n try:\n response = await self._client.post(\n \"/work_pools/\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n\n return pydantic.parse_obj_as(WorkPool, response.json())\n\n async def update_work_pool(\n self,\n work_pool_name: str,\n work_pool: WorkPoolUpdate,\n ):\n \"\"\"\n Updates a work pool.\n\n Args:\n work_pool_name: Name of the work pool to update.\n work_pool: Fields to update in the work pool.\n \"\"\"\n try:\n await self._client.patch(\n f\"/work_pools/{work_pool_name}\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def delete_work_pool(\n self,\n work_pool_name: str,\n ):\n \"\"\"\n Deletes a work pool.\n\n Args:\n work_pool_name: Name of the work pool to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/work_pools/{work_pool_name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_work_queues(\n self,\n work_pool_name: Optional[str] = None,\n work_queue_filter: Optional[WorkQueueFilter] = None,\n limit: Optional[int] = None,\n offset: Optional[int] = None,\n ) -> List[WorkQueue]:\n \"\"\"\n Retrieves queues for a work pool.\n\n Args:\n work_pool_name: Name of the work pool for which to get queues.\n work_queue_filter: Criteria by which to filter queues.\n limit: Limit for the queue query.\n offset: Limit for the queue query.\n\n Returns:\n List of queues for the specified work pool.\n \"\"\"\n json = {\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n\n if work_pool_name:\n try:\n response = await self._client.post(\n 
f\"/work_pools/{work_pool_name}/queues/filter\",\n json=json,\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n else:\n response = await self._client.post(\"/work_queues/filter\", json=json)\n\n return pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n async def get_scheduled_flow_runs_for_deployments(\n self,\n deployment_ids: List[UUID],\n scheduled_before: Optional[datetime.datetime] = None,\n limit: Optional[int] = None,\n ):\n body: Dict[str, Any] = dict(deployment_ids=[str(id) for id in deployment_ids])\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n if limit:\n body[\"limit\"] = limit\n\n response = await self._client.post(\n \"/deployments/get_scheduled_flow_runs\",\n json=body,\n )\n\n return pydantic.parse_obj_as(List[FlowRunResponse], response.json())\n\n async def get_scheduled_flow_runs_for_work_pool(\n self,\n work_pool_name: str,\n work_queue_names: Optional[List[str]] = None,\n scheduled_before: Optional[datetime.datetime] = None,\n ) -> List[WorkerFlowRunResponse]:\n \"\"\"\n Retrieves scheduled flow runs for the provided set of work pool queues.\n\n Args:\n work_pool_name: The name of the work pool that the work pool\n queues are associated with.\n work_queue_names: The names of the work pool queues from which\n to get scheduled flow runs.\n scheduled_before: Datetime used to filter returned flow runs. Flow runs\n scheduled for after the given datetime string will not be returned.\n\n Returns:\n A list of worker flow run responses containing information about the\n retrieved flow runs.\n \"\"\"\n body: Dict[str, Any] = {}\n if work_queue_names is not None:\n body[\"work_queue_names\"] = list(work_queue_names)\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n json=body,\n )\n return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n async def create_artifact(\n self,\n artifact: ArtifactCreate,\n ) -> Artifact:\n \"\"\"\n Creates an artifact with the provided configuration.\n\n Args:\n artifact: Desired configuration for the new artifact.\n Returns:\n Information about the newly created artifact.\n \"\"\"\n\n response = await self._client.post(\n \"/artifacts/\",\n json=artifact.dict(json_compatible=True, exclude_unset=True),\n )\n\n return pydantic.parse_obj_as(Artifact, response.json())\n\n async def read_artifacts(\n self,\n *,\n artifact_filter: ArtifactFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[Artifact]:\n \"\"\"\n Query the Prefect API for artifacts. 
Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/filter\", json=body)\n return pydantic.parse_obj_as(List[Artifact], response.json())\n\n async def read_latest_artifacts(\n self,\n *,\n artifact_filter: ArtifactCollectionFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactCollectionSort = None,\n limit: int = None,\n offset: int = 0,\n ) -> List[ArtifactCollection]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n async def delete_artifact(self, artifact_id: UUID) -> None:\n \"\"\"\n Deletes an artifact with the provided id.\n\n Args:\n artifact_id: The id of the artifact to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/artifacts/{artifact_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n \"\"\"Reads a variable by name. 
Returns None if no variable is found.\"\"\"\n try:\n response = await self._client.get(f\"/variables/name/{name}\")\n return pydantic.parse_obj_as(Variable, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n return None\n else:\n raise\n\n async def delete_variable_by_name(self, name: str):\n \"\"\"Deletes a variable by name.\"\"\"\n try:\n await self._client.delete(f\"/variables/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_variables(self, limit: int = None) -> List[Variable]:\n \"\"\"Reads all variables.\"\"\"\n response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n return pydantic.parse_obj_as(List[Variable], response.json())\n\n async def read_worker_metadata(self) -> Dict[str, Any]:\n \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n response.raise_for_status()\n return response.json()\n\n async def create_automation(self, automation: Automation) -> UUID:\n \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.post(\n \"/automations/\",\n json=automation.dict(json_compatible=True),\n )\n\n return UUID(response.json()[\"id\"])\n\n async def read_resource_related_automations(\n self, resource_id: str\n ) -> List[ExistingAutomation]:\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n response.raise_for_status()\n return pydantic.parse_obj_as(List[ExistingAutomation], response.json())\n\n async def delete_resource_owned_automations(self, resource_id: str):\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n async def increment_concurrency_slots(\n self, names: List[str], slots: int, mode: str\n ) -> httpx.Response:\n return await self._client.post(\n \"/v2/concurrency_limits/increment\",\n json={\"names\": names, \"slots\": slots, \"mode\": mode},\n )\n\n async def release_concurrency_slots(\n self, names: List[str], slots: int, occupancy_seconds: float\n ) -> httpx.Response:\n return await self._client.post(\n \"/v2/concurrency_limits/decrement\",\n json={\n \"names\": names,\n \"slots\": slots,\n \"occupancy_seconds\": occupancy_seconds,\n },\n )\n\n async def create_global_concurrency_limit(\n self, concurrency_limit: GlobalConcurrencyLimitCreate\n ) -> UUID:\n response = await self._client.post(\n \"/v2/concurrency_limits/\",\n json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n )\n return UUID(response.json()[\"id\"])\n\n async def update_global_concurrency_limit(\n self, name: str, concurrency_limit: GlobalConcurrencyLimitUpdate\n ) -> httpx.Response:\n try:\n response = await self._client.patch(\n f\"/v2/concurrency_limits/{name}\",\n json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n )\n return response\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n 
raise\n\n async def delete_global_concurrency_limit_by_name(\n self, name: str\n ) -> httpx.Response:\n try:\n response = await self._client.delete(f\"/v2/concurrency_limits/{name}\")\n return response\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_global_concurrency_limit_by_name(\n self, name: str\n ) -> Dict[str, object]:\n try:\n response = await self._client.get(f\"/v2/concurrency_limits/{name}\")\n return response.json()\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n async def read_global_concurrency_limits(\n self, limit: int = 10, offset: int = 0\n ) -> List[Dict[str, object]]:\n response = await self._client.post(\n \"/v2/concurrency_limits/filter\",\n json={\n \"limit\": limit,\n \"offset\": offset,\n },\n )\n return response.json()\n\n async def create_flow_run_input(\n self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n ):\n \"\"\"\n Creates a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n value: The input value.\n sender: The sender of the input.\n \"\"\"\n\n # Initialize the input to ensure that the key is valid.\n FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input\",\n json={\"key\": key, \"value\": value, \"sender\": sender},\n )\n response.raise_for_status()\n\n async def filter_flow_run_input(\n self, flow_run_id: UUID, key_prefix: str, limit: int, exclude_keys: Set[str]\n ) -> List[FlowRunInput]:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input/filter\",\n json={\n \"prefix\": key_prefix,\n \"limit\": limit,\n \"exclude_keys\": list(exclude_keys),\n },\n )\n response.raise_for_status()\n return pydantic.parse_obj_as(List[FlowRunInput], response.json())\n\n async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n \"\"\"\n Reads a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n return response.content.decode()\n\n async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n \"\"\"\n Deletes a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n\n async def __aenter__(self):\n \"\"\"\n Start the client.\n\n If the client is already started, this will raise an exception.\n\n If the client is already closed, this will raise an exception. Use a new client\n instance instead.\n \"\"\"\n if self._closed:\n # httpx.AsyncClient does not allow reuse so we will not either.\n raise RuntimeError(\n \"The client cannot be started again after closing. 
\"\n \"Retrieve a new client with `get_client()` instead.\"\n )\n\n if self._started:\n # httpx.AsyncClient does not allow reentrancy so we will not either.\n raise RuntimeError(\"The client cannot be started more than once.\")\n\n self._loop = asyncio.get_running_loop()\n await self._exit_stack.__aenter__()\n\n # Enter a lifespan context if using an ephemeral application.\n # See https://github.com/encode/httpx/issues/350\n if self._ephemeral_app and self.manage_lifespan:\n self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n app_lifespan_context(self._ephemeral_app)\n )\n\n if self._ephemeral_app:\n self.logger.debug(\n \"Using ephemeral application with database at \"\n f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n )\n else:\n self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n # Enter the httpx client's context\n await self._exit_stack.enter_async_context(self._client)\n\n self._started = True\n\n return self\n\n async def __aexit__(self, *exc_info):\n \"\"\"\n Shutdown the client.\n \"\"\"\n self._closed = True\n return await self._exit_stack.__aexit__(*exc_info)\n\n def __enter__(self):\n raise RuntimeError(\n \"The `PrefectClient` must be entered with an async context. Use 'async \"\n \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n )\n\n def __exit__(self, *_):\n assert False, \"This should never be called but must be defined for __enter__\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL
property
","text":"Get the base URL for the API.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck
async
","text":"Attempts to connect to the API and returns the encountered exception if not successful.
If successful, returns None.
Source code in prefect/client/orchestration.py
async def api_healthcheck(self) -> Optional[Exception]:\n \"\"\"\n Attempts to connect to the API and returns the encountered exception if not\n successful.\n\n If successful, returns `None`.\n \"\"\"\n try:\n await self._client.get(\"/health\")\n return None\n except Exception as exc:\n return exc\n
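Note that api_healthcheck returns the exception instead of raising it, so callers branch on the return value. A minimal sketch:

import asyncio

from prefect import get_client


async def check() -> None:
    async with get_client() as client:
        # None means the /health endpoint responded; otherwise the caught exception.
        exc = await client.api_healthcheck()
        print("healthy" if exc is None else f"unreachable: {exc!r}")


asyncio.run(check())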
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello
async
","text":"Send a GET request to /hello for testing purposes.
Source code in prefect/client/orchestration.py
async def hello(self) -> httpx.Response:\n \"\"\"\n Send a GET request to /hello for testing purposes.\n \"\"\"\n return await self._client.get(\"/hello\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow
async
","text":"Create a flow in the Prefect API.
Parameters:
Name Type Description Default
flow
Flow
a Flow object
required
Raises:
Type Description
RequestError
if a flow was not created for any reason
Returns:
Type Description
UUID
the ID of the flow in the backend
Source code in prefect/client/orchestration.py
async def create_flow(self, flow: \"FlowObject\") -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow: a [Flow][prefect.flows.Flow] object\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n return await self.create_flow_from_name(flow.name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name
async
","text":"Create a flow in the Prefect API.
Parameters:
Name Type Description Default
flow_name
str
the name of the new flow
required
Raises:
Type Description
RequestError
if a flow was not created for any reason
Returns:
Type Description
UUID
the ID of the flow in the backend
Source code in prefect/client/orchestration.py
async def create_flow_from_name(self, flow_name: str) -> UUID:\n \"\"\"\n Create a flow in the Prefect API.\n\n Args:\n flow_name: the name of the new flow\n\n Raises:\n httpx.RequestError: if a flow was not created for any reason\n\n Returns:\n the ID of the flow in the backend\n \"\"\"\n flow_data = FlowCreate(name=flow_name)\n response = await self._client.post(\n \"/flows/\", json=flow_data.dict(json_compatible=True)\n )\n\n flow_id = response.json().get(\"id\")\n if not flow_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n # Return the id of the created flow\n return UUID(flow_id)\n
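A minimal sketch of registering a flow by name and capturing the returned ID (the flow name "my-etl" is hypothetical):

import asyncio

from prefect import get_client


async def register() -> None:
    async with get_client() as client:
        flow_id = await client.create_flow_from_name("my-etl")
        print(f"flow id: {flow_id}")


asyncio.run(register())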
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow
async
","text":"Query the Prefect API for a flow by id.
Parameters:
Name Type Description Default
flow_id
UUID
the flow ID of interest
required
Returns:
Type Description
Flow
a Flow model representation of the flow
Source code in prefect/client/orchestration.py
async def read_flow(self, flow_id: UUID) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by id.\n\n Args:\n flow_id: the flow ID of interest\n\n Returns:\n a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n \"\"\"\n response = await self._client.get(f\"/flows/{flow_id}\")\n return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows
async
","text":"Query the Prefect API for flows. Only flows matching all criteria will be returned.
Parameters:
Name Type Description Default
flow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
sort
FlowSort
sort criteria for the flows
None
limit
int
limit for the flow query
None
offset
int
offset for the flow query
0
Returns:
Type Description
List[Flow]
a list of Flow model representations of the flows
Source code in prefect/client/orchestration.py
async def read_flows(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[Flow]:\n \"\"\"\n Query the Prefect API for flows. Only flows matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flows\n limit: limit for the flow query\n offset: offset for the flow query\n\n Returns:\n a list of Flow model representations of the flows\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flows/filter\", json=body)\n return pydantic.parse_obj_as(List[Flow], response.json())\n
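A sketch of combining a filter with a limit; the filter classes live in prefect.client.schemas.filters, and the flow name used here is hypothetical:

import asyncio

from prefect import get_client
from prefect.client.schemas.filters import FlowFilter, FlowFilterName


async def list_flows() -> None:
    async with get_client() as client:
        # Only flows whose name matches the filter are returned.
        flows = await client.read_flows(
            flow_filter=FlowFilter(name=FlowFilterName(any_=["my-etl"])),
            limit=10,
        )
        for f in flows:
            print(f.id, f.name)


asyncio.run(list_flows())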
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name
async
","text":"Query the Prefect API for a flow by name.
Parameters:
Name Type Description Default
flow_name
str
the name of a flow
required
Returns:
Type Description
Flow
a fully hydrated Flow model
Source code in prefect/client/orchestration.py
async def read_flow_by_name(\n self,\n flow_name: str,\n) -> Flow:\n \"\"\"\n Query the Prefect API for a flow by name.\n\n Args:\n flow_name: the name of a flow\n\n Returns:\n a fully hydrated Flow model\n \"\"\"\n response = await self._client.get(f\"/flows/name/{flow_name}\")\n return Flow.parse_obj(response.json())\n
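For example (hypothetical flow name):

import asyncio

from prefect import get_client


async def show_flow() -> None:
    async with get_client() as client:
        flow = await client.read_flow_by_name("my-etl")
        print(flow.id, flow.name)


asyncio.run(show_flow())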
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment
async
","text":"Create a flow run for a deployment.
Parameters:
Name Type Description Default
deployment_id
UUID
The deployment ID to create the flow run from
required
parameters
Dict[str, Any]
Parameter overrides for this flow run. Merged with the deployment defaults
None
context
dict
Optional run context data
None
state
State
The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.
None
name
str
An optional name for the flow run. If not provided, the server will generate a name.
None
tags
Iterable[str]
An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.
None
idempotency_key
str
Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.
None
parent_task_run_id
UUID
if a subflow run is being created, the placeholder task run identifier in the parent flow
None
work_queue_name
str
An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.
None
job_variables
Optional[Dict[str, Any]]
Optional variables that will be supplied to the flow run job.
None
Raises:
Type Description
RequestError
if the Prefect API does not successfully create a run for any reason
Returns:
Type Description
FlowRun
The flow run model
Source code in prefect/client/orchestration.py
async def create_flow_run_from_deployment(\n self,\n deployment_id: UUID,\n *,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n state: prefect.states.State = None,\n name: str = None,\n tags: Iterable[str] = None,\n idempotency_key: str = None,\n parent_task_run_id: UUID = None,\n work_queue_name: str = None,\n job_variables: Optional[Dict[str, Any]] = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment.\n\n Args:\n deployment_id: The deployment ID to create the flow run from\n parameters: Parameter overrides for this flow run. Merged with the\n deployment defaults\n context: Optional run context data\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n name: An optional name for the flow run. If not provided, the server will\n generate a name.\n tags: An optional iterable of tags to apply to the flow run; these tags\n are merged with the deployment's tags.\n idempotency_key: Optional idempotency key for creation of the flow run.\n If the key matches the key of an existing flow run, the existing run will\n be returned instead of creating a new one.\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n work_queue_name: An optional work queue name to add this run to. If not provided,\n will default to the deployment's set work queue. If one is provided that does not\n exist, a new work queue will be created within the deployment's work pool.\n job_variables: Optional variables that will be supplied to the flow run job.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n parameters = parameters or {}\n context = context or {}\n state = state or prefect.states.Scheduled()\n tags = tags or []\n\n flow_run_create = DeploymentFlowRunCreate(\n parameters=parameters,\n context=context,\n state=state.to_state_create(),\n tags=tags,\n name=name,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n job_variables=job_variables,\n )\n\n # done separately to avoid including this field in payloads sent to older API versions\n if work_queue_name:\n flow_run_create.work_queue_name = work_queue_name\n\n response = await self._client.post(\n f\"/deployments/{deployment_id}/create_flow_run\",\n json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n )\n return FlowRun.parse_obj(response.json())\n
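A sketch of triggering an ad-hoc run of an existing deployment; the "flow-name/deployment-name" string and the parameter override are hypothetical:

import asyncio

from prefect import get_client


async def trigger() -> None:
    async with get_client() as client:
        deployment = await client.read_deployment_by_name("my-etl/production")
        flow_run = await client.create_flow_run_from_deployment(
            deployment.id,
            parameters={"limit": 100},
            tags=["ad-hoc"],
        )
        # The run is created in a Scheduled state; a worker picks it up from here.
        print(flow_run.id)


asyncio.run(trigger())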
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run
async
","text":"Create a flow run for a flow.
Parameters:
Name Type Description Default
flow
Flow
The flow model to create the flow run for
required
name
str
An optional name for the flow run
None
parameters
Dict[str, Any]
Parameter overrides for this flow run.
None
context
dict
Optional run context data
None
tags
Iterable[str]
a list of tags to apply to this flow run
None
parent_task_run_id
UUID
if a subflow run is being created, the placeholder task run identifier in the parent flow
None
state
State
The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.
None
Raises:
Type Description
RequestError
if the Prefect API does not successfully create a run for any reason
Returns:
Type Description
FlowRun
The flow run model
Source code in prefect/client/orchestration.py
async def create_flow_run(\n self,\n flow: \"FlowObject\",\n name: str = None,\n parameters: Dict[str, Any] = None,\n context: dict = None,\n tags: Iterable[str] = None,\n parent_task_run_id: UUID = None,\n state: \"prefect.states.State\" = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a flow.\n\n Args:\n flow: The flow model to create the flow run for\n name: An optional name for the flow run\n parameters: Parameter overrides for this flow run.\n context: Optional run context data\n tags: a list of tags to apply to this flow run\n parent_task_run_id: if a subflow run is being created, the placeholder task\n run identifier in the parent flow\n state: The initial state for the run. If not provided, defaults to\n `Scheduled` for now. Should always be a `Scheduled` type.\n\n Raises:\n httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n Returns:\n The flow run model\n \"\"\"\n parameters = parameters or {}\n context = context or {}\n\n if state is None:\n state = prefect.states.Pending()\n\n # Retrieve the flow id\n flow_id = await self.create_flow(flow)\n\n flow_run_create = FlowRunCreate(\n flow_id=flow_id,\n flow_version=flow.version,\n name=name,\n parameters=parameters,\n context=context,\n tags=list(tags or []),\n parent_task_run_id=parent_task_run_id,\n state=state.to_state_create(),\n empirical_policy=FlowRunPolicy(\n retries=flow.retries,\n retry_delay=flow.retry_delay_seconds,\n ),\n )\n\n flow_run_create_json = flow_run_create.dict(json_compatible=True)\n response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n flow_run = FlowRun.parse_obj(response.json())\n\n # Restore the parameters to the local objects to retain expectations about\n # Python objects\n flow_run.parameters = parameters\n\n return flow_run\n
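A sketch that creates a run record for a flow object directly; note that, unlike runs created from a deployment, nothing executes this run until the engine or another process picks it up:

import asyncio

from prefect import flow, get_client


@flow
def my_etl() -> None:  # hypothetical flow
    ...


async def create_run() -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run(my_etl, name="manual-run")
        print(flow_run.id)


asyncio.run(create_run())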
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run
async
","text":"Update a flow run's details.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The identifier for the flow run to update.
required
flow_version
Optional[str]
A new version string for the flow run.
None
parameters
Optional[dict]
A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.
None
name
Optional[str]
A new name for the flow run.
None
empirical_policy
Optional[FlowRunPolicy]
A new flow run orchestration policy. This will not be merged with any existing policy.
None
tags
Optional[Iterable[str]]
An iterable of new tags for the flow run. These will not be merged with any existing tags.
None
infrastructure_pid
Optional[str]
The id of the flow run as returned by an infrastructure block.
None
job_variables
Optional[dict]
Optional variables that will be supplied to the flow run job.
None
Returns:
Type Description
Response
an httpx.Response object from the PATCH request
Source code in prefect/client/orchestration.py
async def update_flow_run(\n self,\n flow_run_id: UUID,\n flow_version: Optional[str] = None,\n parameters: Optional[dict] = None,\n name: Optional[str] = None,\n tags: Optional[Iterable[str]] = None,\n empirical_policy: Optional[FlowRunPolicy] = None,\n infrastructure_pid: Optional[str] = None,\n job_variables: Optional[dict] = None,\n) -> httpx.Response:\n \"\"\"\n Update a flow run's details.\n\n Args:\n flow_run_id: The identifier for the flow run to update.\n flow_version: A new version string for the flow run.\n parameters: A dictionary of parameter values for the flow run. This will not\n be merged with any existing parameters.\n name: A new name for the flow run.\n empirical_policy: A new flow run orchestration policy. This will not be\n merged with any existing policy.\n tags: An iterable of new tags for the flow run. These will not be merged with\n any existing tags.\n infrastructure_pid: The id of flow run as returned by an\n infrastructure block.\n\n Returns:\n an `httpx.Response` object from the PATCH request\n \"\"\"\n if job_variables is not None and experiment_enabled(\"flow_run_infra_overrides\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INFRA_OVERRIDES\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Flow run job variables\",\n group=\"flow_run_infra_overrides\",\n help=\"To use this feature, update your workers to Prefect 2.16.4 or later. \",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n\n params = {}\n if flow_version is not None:\n params[\"flow_version\"] = flow_version\n if parameters is not None:\n params[\"parameters\"] = parameters\n if name is not None:\n params[\"name\"] = name\n if tags is not None:\n params[\"tags\"] = tags\n if empirical_policy is not None:\n params[\"empirical_policy\"] = empirical_policy\n if infrastructure_pid:\n params[\"infrastructure_pid\"] = infrastructure_pid\n if job_variables is not None:\n params[\"job_variables\"] = job_variables\n\n flow_run_data = FlowRunUpdate(**params)\n\n return await self._client.patch(\n f\"/flow_runs/{flow_run_id}\",\n json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n )\n
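Because parameters and tags are replaced rather than merged, read-modify-write when you mean to append. A sketch, given an existing flow run ID:

from uuid import UUID

from prefect import get_client


async def add_tag(flow_run_id: UUID, tag: str) -> None:
    async with get_client() as client:
        flow_run = await client.read_flow_run(flow_run_id)
        # `tags` replaces the existing set, so include the old tags explicitly.
        response = await client.update_flow_run(
            flow_run_id, tags=[*flow_run.tags, tag]
        )
        response.raise_for_status()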
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by UUID.
Parameters:
Name Type Description Default
flow_run_id
UUID
The flow run UUID of interest.
required
Source code in prefect/client/orchestration.py
async def delete_flow_run(\n self,\n flow_run_id: UUID,\n) -> None:\n \"\"\"\n Delete a flow run by UUID.\n\n Args:\n flow_run_id: The flow run UUID of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
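A sketch of the documented error contract, catching the 404 translation:

from uuid import UUID

from prefect import get_client
from prefect.exceptions import ObjectNotFound


async def delete_run(flow_run_id: UUID) -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(flow_run_id)
        except ObjectNotFound:
            # The client converts a 404 response into ObjectNotFound.
            print(f"no flow run with id {flow_run_id}")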
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit
async
","text":"Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
concurrency_limit
int
the maximum number of concurrent task runs for a given tag
required
Raises:
Type Description
RequestError
if the concurrency limit was not created for any reason
Returns:
Type Description
UUID
the ID of the concurrency limit in the backend
Source code in prefect/client/orchestration.py
async def create_concurrency_limit(\n self,\n tag: str,\n concurrency_limit: int,\n) -> UUID:\n \"\"\"\n Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n running tasks.\n\n Args:\n tag: a tag the concurrency limit is applied to\n concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n Raises:\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the ID of the concurrency limit in the backend\n \"\"\"\n\n concurrency_limit_create = ConcurrencyLimitCreate(\n tag=tag,\n concurrency_limit=concurrency_limit,\n )\n response = await self._client.post(\n \"/concurrency_limits/\",\n json=concurrency_limit_create.dict(json_compatible=True),\n )\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(concurrency_limit_id)\n
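A sketch that caps concurrently running task runs sharing a tag; the tag name and limit are illustrative:

import asyncio

from prefect import get_client


async def cap_database_tasks() -> None:
    async with get_client() as client:
        # At most 10 task runs tagged "database" may hold a slot at once.
        limit_id = await client.create_concurrency_limit(
            tag="database", concurrency_limit=10
        )
        print(limit_id)


asyncio.run(cap_database_tasks())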
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag
async
","text":"Read the concurrency limit set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
if the concurrency limit could not be read for any reason
Returns:
Type Description
the concurrency limit set on a specific tag
Source code in prefect/client/orchestration.py
async def read_concurrency_limit_by_tag(\n self,\n tag: str,\n):\n \"\"\"\n Read the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: if the concurrency limit was not created for any reason\n\n Returns:\n the concurrency limit set on a specific tag\n \"\"\"\n try:\n response = await self._client.get(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n concurrency_limit_id = response.json().get(\"id\")\n\n if not concurrency_limit_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n return concurrency_limit\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits
async
","text":"Lists concurrency limits set on task run tags.
Parameters:
Name Type Description Default
limit
int
the maximum number of concurrency limits returned
required
offset
int
the concurrency limit query offset
required
Returns:
Type Description
a list of concurrency limits
Source code in prefect/client/orchestration.py
async def read_concurrency_limits(\n self,\n limit: int,\n offset: int,\n):\n \"\"\"\n Lists concurrency limits set on task run tags.\n\n Args:\n limit: the maximum number of concurrency limits returned\n offset: the concurrency limit query offset\n\n Returns:\n a list of concurrency limits\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag
async
","text":"Resets the concurrency limit slots set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
slot_override
Optional[List[Union[UUID, str]]]
a list of task run IDs that are currently using a concurrency slot. Check that any task run IDs included in slot_override are currently running; otherwise, those concurrency slots will never be released.
None
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Source code in prefect/client/orchestration.py
async def reset_concurrency_limit_by_tag(\n self,\n tag: str,\n slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n \"\"\"\n Resets the concurrency limit slots set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n slot_override: a list of task run IDs that are currently using a\n concurrency slot, please check that any task run IDs included in\n `slot_override` are currently running, otherwise those concurrency\n slots will never be released.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n if slot_override is not None:\n slot_override = [str(slot) for slot in slot_override]\n\n try:\n await self._client.post(\n f\"/concurrency_limits/tag/{tag}/reset\",\n json=dict(slot_override=slot_override),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag
async
","text":"Delete the concurrency limit set on a specific tag.
Parameters:
Name Type Description Default
tag
str
a tag the concurrency limit is applied to
required
Raises:
Type Description
ObjectNotFound
If request returns 404
RequestError
If request fails
Source code in prefect/client/orchestration.py
async def delete_concurrency_limit_by_tag(\n self,\n tag: str,\n):\n \"\"\"\n Delete the concurrency limit set on a specific tag.\n\n Args:\n tag: a tag the concurrency limit is applied to\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n \"\"\"\n try:\n await self._client.delete(\n f\"/concurrency_limits/tag/{tag}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
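A sketch of the delete call with the documented 404 handling (the tag is illustrative):

import asyncio

from prefect import get_client
from prefect.exceptions import ObjectNotFound


async def clear_limit(tag: str) -> None:
    async with get_client() as client:
        try:
            await client.delete_concurrency_limit_by_tag(tag)
        except ObjectNotFound:
            print(f"no concurrency limit set on tag {tag!r}")


asyncio.run(clear_limit("database"))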
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue
async
","text":"Create a work queue.
Parameters:
Name Type Description Default
name
str
a unique name for the work queue
required
tags
Optional[List[str]]
DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.
None
description
Optional[str]
An optional description for the work queue.
None
is_paused
Optional[bool]
Whether or not the work queue is paused.
None
concurrency_limit
Optional[int]
An optional concurrency limit for the work queue.
None
priority
Optional[int]
The queue's priority. Lower values are higher priority (1 is the highest).
None
work_pool_name
Optional[str]
The name of the work pool to use for this queue.
None
Raises:
Type Description
ObjectAlreadyExists
If request returns 409
RequestError
If request fails
Returns:
Type DescriptionWorkQueue
The created work queue
Source code inprefect/client/orchestration.py
async def create_work_queue(\n self,\n name: str,\n tags: Optional[List[str]] = None,\n description: Optional[str] = None,\n is_paused: Optional[bool] = None,\n concurrency_limit: Optional[int] = None,\n priority: Optional[int] = None,\n work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n \"\"\"\n Create a work queue.\n\n Args:\n name: a unique name for the work queue\n tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n will be included in the queue. This option will be removed on 2023-02-23.\n description: An optional description for the work queue.\n is_paused: Whether or not the work queue is paused.\n concurrency_limit: An optional concurrency limit for the work queue.\n priority: The queue's priority. Lower values are higher priority (1 is the highest).\n work_pool_name: The name of the work pool to use for this queue.\n\n Raises:\n prefect.exceptions.ObjectAlreadyExists: If request returns 409\n httpx.RequestError: If request fails\n\n Returns:\n The created work queue\n \"\"\"\n if tags:\n warnings.warn(\n (\n \"The use of tags for creating work queue filters is deprecated.\"\n \" This option will be removed on 2023-02-23.\"\n ),\n DeprecationWarning,\n )\n filter = QueueFilter(tags=tags)\n else:\n filter = None\n create_model = WorkQueueCreate(name=name, filter=filter)\n if description is not None:\n create_model.description = description\n if is_paused is not None:\n create_model.is_paused = is_paused\n if concurrency_limit is not None:\n create_model.concurrency_limit = concurrency_limit\n if priority is not None:\n create_model.priority = priority\n\n data = create_model.dict(json_compatible=True)\n try:\n if work_pool_name is not None:\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/queues\", json=data\n )\n else:\n response = await self._client.post(\"/work_queues/\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n
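Example (a minimal sketch assuming the standard get_client factory; the queue and pool names are illustrative):
import asyncio

from prefect import get_client


async def make_queue():
    async with get_client() as client:
        # Create a queue inside an existing work pool with a concurrency cap
        queue = await client.create_work_queue(
            name="etl-queue",
            work_pool_name="my-pool",
            concurrency_limit=5,
        )
        print(queue.id)


asyncio.run(make_queue())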
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name
async
","text":"Read a work queue by name.
Parameters:
Name Type Description Defaultname
str
a unique name for the work queue
requiredwork_pool_name
str
the name of the work pool the queue belongs to.
None
Raises:
Type DescriptionObjectNotFound
if no work queue is found
HTTPStatusError
other status errors
Returns:
Name Type DescriptionWorkQueue
WorkQueue
a work queue API object
Source code inprefect/client/orchestration.py
async def read_work_queue_by_name(\n self,\n name: str,\n work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n \"\"\"\n Read a work queue by name.\n\n Args:\n name (str): a unique name for the work queue\n work_pool_name (str, optional): the name of the work pool\n the queue belongs to.\n\n Raises:\n prefect.exceptions.ObjectNotFound: if no work queue is found\n httpx.HTTPStatusError: other status errors\n\n Returns:\n WorkQueue: a work queue API object\n \"\"\"\n try:\n if work_pool_name is not None:\n response = await self._client.get(\n f\"/work_pools/{work_pool_name}/queues/{name}\"\n )\n else:\n response = await self._client.get(f\"/work_queues/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return WorkQueue.parse_obj(response.json())\n
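Example (a sketch assuming the queue and pool names above exist; ObjectNotFound is handled explicitly):
import asyncio

from prefect import get_client
from prefect.exceptions import ObjectNotFound


async def lookup_queue():
    async with get_client() as client:
        try:
            # Scope the lookup to a work pool when the queue lives inside one
            queue = await client.read_work_queue_by_name(
                name="etl-queue", work_pool_name="my-pool"
            )
            print(queue.concurrency_limit)
        except ObjectNotFound:
            print("no such work queue")


asyncio.run(lookup_queue())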
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue
async
","text":"Update properties of a work queue.
Parameters:
Name Type Description Defaultid
UUID
the ID of the work queue to update
required**kwargs
the fields to update
{}
Raises:
Type DescriptionValueError
if no kwargs are provided
ObjectNotFound
if request returns 404
RequestError
if the request fails
Source code inprefect/client/orchestration.py
async def update_work_queue(self, id: UUID, **kwargs):\n \"\"\"\n Update properties of a work queue.\n\n Args:\n id: the ID of the work queue to update\n **kwargs: the fields to update\n\n Raises:\n ValueError: if no kwargs are provided\n prefect.exceptions.ObjectNotFound: if request returns 404\n httpx.RequestError: if the request fails\n\n \"\"\"\n if not kwargs:\n raise ValueError(\"No fields provided to update.\")\n\n data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n try:\n await self._client.patch(f\"/work_queues/{id}\", json=data)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue
async
","text":"Read flow runs off a work queue.
Parameters:
Name Type Description Defaultid
UUID
the id of the work queue to read from
requiredlimit
int
a limit on the number of runs to return
10
scheduled_before
datetime
a timestamp; only runs scheduled before this time will be returned. Defaults to now.
None
Raises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Type DescriptionList[FlowRun]
List[FlowRun]: a list of FlowRun objects read from the queue
Source code inprefect/client/orchestration.py
async def get_runs_in_work_queue(\n self,\n id: UUID,\n limit: int = 10,\n scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n \"\"\"\n Read flow runs off a work queue.\n\n Args:\n id: the id of the work queue to read from\n limit: a limit on the number of runs to return\n scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n Defaults to now.\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n List[FlowRun]: a list of FlowRun objects read from the queue\n \"\"\"\n if scheduled_before is None:\n scheduled_before = pendulum.now(\"UTC\")\n\n try:\n response = await self._client.post(\n f\"/work_queues/{id}/get_runs\",\n json={\n \"limit\": limit,\n \"scheduled_before\": scheduled_before.isoformat(),\n },\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n
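Example (a sketch; the UUID is a placeholder for a real work queue ID):
import asyncio
from uuid import UUID

from prefect import get_client


async def poll_queue(queue_id: UUID):
    async with get_client() as client:
        # Fetch up to five runs scheduled before now (the default window)
        runs = await client.get_runs_in_work_queue(id=queue_id, limit=5)
        for run in runs:
            print(run.id, run.name)


asyncio.run(poll_queue(UUID("00000000-0000-0000-0000-000000000000")))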
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue
async
","text":"Read a work queue.
Parameters:
Name Type Description Defaultid
UUID
the id of the work queue to load
requiredRaises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Name Type DescriptionWorkQueue
WorkQueue
an instantiated WorkQueue object
Source code inprefect/client/orchestration.py
async def read_work_queue(\n self,\n id: UUID,\n) -> WorkQueue:\n \"\"\"\n Read a work queue.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueue: an instantiated WorkQueue object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_status","title":"read_work_queue_status
async
","text":"Read a work queue status.
Parameters:
Name Type Description Defaultid
UUID
the id of the work queue to load
requiredRaises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Name Type DescriptionWorkQueueStatus
WorkQueueStatusDetail
an instantiated WorkQueueStatus object
Source code inprefect/client/orchestration.py
async def read_work_queue_status(\n self,\n id: UUID,\n) -> WorkQueueStatusDetail:\n \"\"\"\n Read a work queue status.\n\n Args:\n id: the id of the work queue to load\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n WorkQueueStatus: an instantiated WorkQueueStatus object\n \"\"\"\n try:\n response = await self._client.get(f\"/work_queues/{id}/status\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return WorkQueueStatusDetail.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues
async
","text":"Query the Prefect API for work queues with names with a specific prefix.
Parameters:
Name Type Description Defaultprefixes
List[str]
a list of strings used to match work queue name prefixes
requiredwork_pool_name
Optional[str]
an optional work pool name to scope the query to
None
Returns:
Type DescriptionList[WorkQueue]
a list of WorkQueue model representations of the work queues
Source code inprefect/client/orchestration.py
async def match_work_queues(\n self,\n prefixes: List[str],\n work_pool_name: Optional[str] = None,\n) -> List[WorkQueue]:\n \"\"\"\n Query the Prefect API for work queues with names with a specific prefix.\n\n Args:\n prefixes: a list of strings used to match work queue name prefixes\n work_pool_name: an optional work pool name to scope the query to\n\n Returns:\n a list of WorkQueue model representations\n of the work queues\n \"\"\"\n page_length = 100\n current_page = 0\n work_queues = []\n\n while True:\n new_queues = await self.read_work_queues(\n work_pool_name=work_pool_name,\n offset=current_page * page_length,\n limit=page_length,\n work_queue_filter=WorkQueueFilter(\n name=WorkQueueFilterName(startswith_=prefixes)\n ),\n )\n if not new_queues:\n break\n work_queues += new_queues\n current_page += 1\n\n return work_queues\n
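Example (a sketch; the prefixes are illustrative):
import asyncio

from prefect import get_client


async def find_queues():
    async with get_client() as client:
        # Collect every queue whose name starts with "etl-" or "ml-"
        queues = await client.match_work_queues(["etl-", "ml-"])
        print([q.name for q in queues])


asyncio.run(find_queues())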
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id
async
","text":"Delete a work queue by its ID.
Parameters:
Name Type Description Defaultid
UUID
the id of the work queue to delete
requiredRaises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Source code inprefect/client/orchestration.py
async def delete_work_queue_by_id(\n self,\n id: UUID,\n):\n \"\"\"\n Delete a work queue by its ID.\n\n Args:\n id: the id of the work queue to delete\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(\n f\"/work_queues/{id}\",\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type
async
","text":"Create a block type in the Prefect API.
Source code inprefect/client/orchestration.py
async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n \"\"\"\n Create a block type in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_types/\",\n json=block_type.dict(\n json_compatible=True, exclude_unset=True, exclude={\"id\"}\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema
async
","text":"Create a block schema in the Prefect API.
Source code inprefect/client/orchestration.py
async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n \"\"\"\n Create a block schema in the Prefect API.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/\",\n json=block_schema.dict(\n json_compatible=True,\n exclude_unset=True,\n exclude={\"id\", \"block_type\", \"checksum\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document
async
","text":"Create a block document in the Prefect API. This data is used to configure a corresponding Block.
Parameters:
Name Type Description Defaultinclude_secrets
bool
whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. Note that Blocks may not work as expected if this is set to False
.
True
Source code in prefect/client/orchestration.py
async def create_block_document(\n self,\n block_document: Union[BlockDocument, BlockDocumentCreate],\n include_secrets: bool = True,\n) -> BlockDocument:\n \"\"\"\n Create a block document in the Prefect API. This data is used to configure a\n corresponding Block.\n\n Args:\n include_secrets (bool): whether to include secret values\n on the stored Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. Note Blocks may not work as expected if\n this is set to `False`.\n \"\"\"\n if isinstance(block_document, BlockDocument):\n block_document = BlockDocumentCreate.parse_obj(\n block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n\n try:\n response = await self._client.post(\n \"/block_documents/\",\n json=block_document.dict(\n json_compatible=True,\n include_secrets=include_secrets,\n exclude_unset=True,\n exclude={\"id\", \"block_schema\", \"block_type\"},\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document
async
","text":"Update a block document in the Prefect API.
Source code inprefect/client/orchestration.py
async def update_block_document(\n self,\n block_document_id: UUID,\n block_document: BlockDocumentUpdate,\n):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_documents/{block_document_id}\",\n json=block_document.dict(\n json_compatible=True,\n exclude_unset=True,\n include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document
async
","text":"Delete a block document.
Source code inprefect/client/orchestration.py
async def delete_block_document(self, block_document_id: UUID):\n \"\"\"\n Delete a block document.\n \"\"\"\n try:\n await self._client.delete(f\"/block_documents/{block_document_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug
async
","text":"Read a block type by its slug.
Source code inprefect/client/orchestration.py
async def read_block_type_by_slug(self, slug: str) -> BlockType:\n \"\"\"\n Read a block type by its slug.\n \"\"\"\n try:\n response = await self._client.get(f\"/block_types/slug/{slug}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum
async
","text":"Look up a block schema checksum
Source code inprefect/client/orchestration.py
async def read_block_schema_by_checksum(\n self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n \"\"\"\n Look up a block schema checksum\n \"\"\"\n try:\n url = f\"/block_schemas/checksum/{checksum}\"\n if version is not None:\n url = f\"{url}?version={version}\"\n response = await self._client.get(url)\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type
async
","text":"Update a block document in the Prefect API.
Source code inprefect/client/orchestration.py
async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n \"\"\"\n Update a block document in the Prefect API.\n \"\"\"\n try:\n await self._client.patch(\n f\"/block_types/{block_type_id}\",\n json=block_type.dict(\n json_compatible=True,\n exclude_unset=True,\n include=BlockTypeUpdate.updatable_fields(),\n include_secrets=True,\n ),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type
async
","text":"Delete a block type.
Source code inprefect/client/orchestration.py
async def delete_block_type(self, block_type_id: UUID):\n \"\"\"\n Delete a block type.\n \"\"\"\n try:\n await self._client.delete(f\"/block_types/{block_type_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n elif (\n e.response.status_code == status.HTTP_403_FORBIDDEN\n and e.response.json()[\"detail\"]\n == \"protected block types cannot be deleted.\"\n ):\n raise prefect.exceptions.ProtectedBlockError(\n \"Protected block types cannot be deleted.\"\n ) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types
async
","text":"Read all block types Raises: httpx.RequestError: if the block types were not found
Returns:
Type DescriptionList[BlockType]
List of BlockTypes.
Source code inprefect/client/orchestration.py
async def read_block_types(self) -> List[BlockType]:\n \"\"\"\n Read all block types\n Raises:\n httpx.RequestError: if the block types were not found\n\n Returns:\n List of BlockTypes.\n \"\"\"\n response = await self._client.post(\"/block_types/filter\", json={})\n return pydantic.parse_obj_as(List[BlockType], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas
async
","text":"Read all block schemas Raises: httpx.RequestError: if a valid block schema was not found
Returns:
Type DescriptionList[BlockSchema]
A list of BlockSchemas.
Source code inprefect/client/orchestration.py
async def read_block_schemas(self) -> List[BlockSchema]:\n \"\"\"\n Read all block schemas\n Raises:\n httpx.RequestError: if a valid block schema was not found\n\n Returns:\n A BlockSchema.\n \"\"\"\n response = await self._client.post(\"/block_schemas/filter\", json={})\n return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_most_recent_block_schema_for_block_type","title":"get_most_recent_block_schema_for_block_type
async
","text":"Fetches the most recent block schema for a specified block type ID.
Parameters:
Name Type Description Defaultblock_type_id
UUID
The ID of the block type.
requiredRaises:
Type DescriptionRequestError
If the request fails for any reason.
Returns:
Type DescriptionOptional[BlockSchema]
The most recent block schema or None.
Source code inprefect/client/orchestration.py
async def get_most_recent_block_schema_for_block_type(\n self,\n block_type_id: UUID,\n) -> Optional[BlockSchema]:\n \"\"\"\n Fetches the most recent block schema for a specified block type ID.\n\n Args:\n block_type_id: The ID of the block type.\n\n Raises:\n httpx.RequestError: If the request fails for any reason.\n\n Returns:\n The most recent block schema or None.\n \"\"\"\n try:\n response = await self._client.post(\n \"/block_schemas/filter\",\n json={\n \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n \"limit\": 1,\n },\n )\n except httpx.HTTPStatusError:\n raise\n return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document
async
","text":"Read the block document with the specified ID.
Parameters:
Name Type Description Defaultblock_document_id
UUID
the block document id
requiredinclude_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Raises:
Type DescriptionRequestError
if the block document was not found for any reason
Returns:
Type DescriptionA block document or None.
Source code inprefect/client/orchestration.py
async def read_block_document(\n self,\n block_document_id: UUID,\n include_secrets: bool = True,\n):\n \"\"\"\n Read the block document with the specified ID.\n\n Args:\n block_document_id: the block document id\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n assert (\n block_document_id is not None\n ), \"Unexpected ID on block document. Was it persisted?\"\n try:\n response = await self._client.get(\n f\"/block_documents/{block_document_id}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name
async
","text":"Read the block document with the specified name that corresponds to a specific block type name.
Parameters:
Name Type Description Defaultname
str
The block document name.
requiredblock_type_slug
str
The block type slug.
requiredinclude_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Raises:
Type DescriptionRequestError
if the block document was not found for any reason
Returns:
Type DescriptionBlockDocument
A block document or None.
Source code inprefect/client/orchestration.py
async def read_block_document_by_name(\n self,\n name: str,\n block_type_slug: str,\n include_secrets: bool = True,\n) -> BlockDocument:\n \"\"\"\n Read the block document with the specified name that corresponds to a\n specific block type name.\n\n Args:\n name: The block document name.\n block_type_slug: The block type slug.\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Raises:\n httpx.RequestError: if the block document was not found for any reason\n\n Returns:\n A block document or None.\n \"\"\"\n try:\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n params=dict(include_secrets=include_secrets),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return BlockDocument.parse_obj(response.json())\n
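Example (a sketch; the block type slug and document name are illustrative):
import asyncio

from prefect import get_client


async def load_block_document():
    async with get_client() as client:
        # Secrets are included by default (include_secrets=True)
        doc = await client.read_block_document_by_name(
            name="prod-credentials", block_type_slug="secret"
        )
        print(doc.data)


asyncio.run(load_block_document())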
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents
async
","text":"Read block documents
Parameters:
Name Type Description Defaultblock_schema_type
Optional[str]
an optional block schema type
None
offset
Optional[int]
an offset
None
limit
Optional[int]
the number of blocks to return
None
include_secrets
bool
whether to include secret values on the Block, corresponding to Pydantic's SecretStr
and SecretBytes
fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False
.
True
Returns:
Type DescriptionA list of block documents
Source code inprefect/client/orchestration.py
async def read_block_documents(\n self,\n block_schema_type: Optional[str] = None,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n):\n \"\"\"\n Read block documents\n\n Args:\n block_schema_type: an optional block schema type\n offset: an offset\n limit: the number of blocks to return\n include_secrets (bool): whether to include secret values\n on the Block, corresponding to Pydantic's `SecretStr` and\n `SecretBytes` fields. These fields are automatically obfuscated\n by Pydantic, but users can additionally choose not to receive\n their values from the API. Note that any business logic on the\n Block may not work if this is `False`.\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.post(\n \"/block_documents/filter\",\n json=dict(\n block_schema_type=block_schema_type,\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents_by_type","title":"read_block_documents_by_type
async
","text":"Retrieve block documents by block type slug.
Parameters:
Name Type Description Defaultblock_type_slug
str
The block type slug.
requiredoffset
Optional[int]
an offset
None
limit
Optional[int]
the number of blocks to return
None
include_secrets
bool
whether to include secret values
True
Returns:
Type DescriptionList[BlockDocument]
A list of block documents
Source code inprefect/client/orchestration.py
async def read_block_documents_by_type(\n self,\n block_type_slug: str,\n offset: Optional[int] = None,\n limit: Optional[int] = None,\n include_secrets: bool = True,\n) -> List[BlockDocument]:\n \"\"\"Retrieve block documents by block type slug.\n\n Args:\n block_type_slug: The block type slug.\n offset: an offset\n limit: the number of blocks to return\n include_secrets: whether to include secret values\n\n Returns:\n A list of block documents\n \"\"\"\n response = await self._client.get(\n f\"/block_types/slug/{block_type_slug}/block_documents\",\n params=dict(\n offset=offset,\n limit=limit,\n include_secrets=include_secrets,\n ),\n )\n\n return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment
async
","text":"Create a deployment.
Parameters:
Name Type Description Defaultflow_id
UUID
the flow ID to create a deployment for
requiredname
str
the name of the deployment
requiredversion
str
an optional version string for the deployment
None
schedule
SCHEDULE_TYPES
an optional schedule to apply to the deployment
None
tags
List[str]
an optional list of tags to apply to the deployment
None
storage_document_id
UUID
a reference to the storage block document used for the deployed flow
None
infrastructure_document_id
UUID
a reference to the infrastructure block document to use for this deployment
None
Raises:
Type DescriptionRequestError
if the deployment was not created for any reason
Returns:
Type DescriptionUUID
the ID of the deployment in the backend
Source code inprefect/client/orchestration.py
async def create_deployment(\n self,\n flow_id: UUID,\n name: str,\n version: str = None,\n schedule: SCHEDULE_TYPES = None,\n schedules: List[DeploymentScheduleCreate] = None,\n parameters: Dict[str, Any] = None,\n description: str = None,\n work_queue_name: str = None,\n work_pool_name: str = None,\n tags: List[str] = None,\n storage_document_id: UUID = None,\n manifest_path: str = None,\n path: str = None,\n entrypoint: str = None,\n infrastructure_document_id: UUID = None,\n infra_overrides: Dict[str, Any] = None,\n parameter_openapi_schema: dict = None,\n is_schedule_active: Optional[bool] = None,\n paused: Optional[bool] = None,\n pull_steps: Optional[List[dict]] = None,\n enforce_parameter_schema: Optional[bool] = None,\n) -> UUID:\n \"\"\"\n Create a deployment.\n\n Args:\n flow_id: the flow ID to create a deployment for\n name: the name of the deployment\n version: an optional version string for the deployment\n schedule: an optional schedule to apply to the deployment\n tags: an optional list of tags to apply to the deployment\n storage_document_id: an reference to the storage block document\n used for the deployed flow\n infrastructure_document_id: an reference to the infrastructure block document\n to use for this deployment\n\n Raises:\n httpx.RequestError: if the deployment was not created for any reason\n\n Returns:\n the ID of the deployment in the backend\n \"\"\"\n\n deployment_create = DeploymentCreate(\n flow_id=flow_id,\n name=name,\n version=version,\n parameters=dict(parameters or {}),\n tags=list(tags or []),\n work_queue_name=work_queue_name,\n description=description,\n storage_document_id=storage_document_id,\n path=path,\n entrypoint=entrypoint,\n manifest_path=manifest_path, # for backwards compat\n infrastructure_document_id=infrastructure_document_id,\n infra_overrides=infra_overrides or {},\n parameter_openapi_schema=parameter_openapi_schema,\n is_schedule_active=is_schedule_active,\n paused=paused,\n schedule=schedule,\n schedules=schedules or [],\n pull_steps=pull_steps,\n enforce_parameter_schema=enforce_parameter_schema,\n )\n\n if work_pool_name is not None:\n deployment_create.work_pool_name = work_pool_name\n\n # Exclude newer fields that are not set to avoid compatibility issues\n exclude = {\n field\n for field in [\"work_pool_name\", \"work_queue_name\"]\n if field not in deployment_create.__fields_set__\n }\n\n if deployment_create.is_schedule_active is None:\n exclude.add(\"is_schedule_active\")\n\n if deployment_create.paused is None:\n exclude.add(\"paused\")\n\n if deployment_create.pull_steps is None:\n exclude.add(\"pull_steps\")\n\n if deployment_create.enforce_parameter_schema is None:\n exclude.add(\"enforce_parameter_schema\")\n\n json = deployment_create.dict(json_compatible=True, exclude=exclude)\n response = await self._client.post(\n \"/deployments/\",\n json=json,\n )\n deployment_id = response.json().get(\"id\")\n if not deployment_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(deployment_id)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment
async
","text":"Query the Prefect API for a deployment by id.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID of interest
requiredReturns:
Type DescriptionDeploymentResponse
a Deployment model representation of the deployment
Source code inprefect/client/orchestration.py
async def read_deployment(\n self,\n deployment_id: UUID,\n) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by id.\n\n Args:\n deployment_id: the deployment ID of interest\n\n Returns:\n a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return DeploymentResponse.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Query the Prefect API for a deployment by name.
Parameters:
Name Type Description Defaultname
str
A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME> required
Raises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Returns:
Type DescriptionDeploymentResponse
a Deployment model representation of the deployment
Source code inprefect/client/orchestration.py
async def read_deployment_by_name(\n self,\n name: str,\n) -> DeploymentResponse:\n \"\"\"\n Query the Prefect API for a deployment by name.\n\n Args:\n name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If request fails\n\n Returns:\n a Deployment model representation of the deployment\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return DeploymentResponse.parse_obj(response.json())\n
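Example (a sketch; the "flow/deployment" name is illustrative):
import asyncio

from prefect import get_client


async def show_deployment():
    async with get_client() as client:
        # The name combines the flow name and the deployment name
        deployment = await client.read_deployment_by_name("my-flow/my-deployment")
        print(deployment.id, deployment.work_pool_name)


asyncio.run(show_deployment())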
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments
async
","text":"Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
limit
int
a limit for the deployment query
None
offset
int
an offset for the deployment query
0
Returns:
Type DescriptionList[DeploymentResponse]
a list of Deployment model representations of the deployments
Source code inprefect/client/orchestration.py
async def read_deployments(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n limit: int = None,\n sort: DeploymentSort = None,\n offset: int = 0,\n) -> List[DeploymentResponse]:\n \"\"\"\n Query the Prefect API for deployments. Only deployments matching all\n the provided criteria will be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n limit: a limit for the deployment query\n offset: an offset for the deployment query\n\n Returns:\n a list of Deployment model representations\n of the deployments\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/deployments/filter\", json=body)\n return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
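Example (a sketch; the flow name is illustrative):
import asyncio

from prefect import get_client
from prefect.client.schemas.filters import FlowFilter, FlowFilterName


async def list_deployments():
    async with get_client() as client:
        # Restrict the query to deployments of a single flow
        deployments = await client.read_deployments(
            flow_filter=FlowFilter(name=FlowFilterName(any_=["my-flow"])),
            limit=20,
        )
        for d in deployments:
            print(d.name)


asyncio.run(list_deployments())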
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment
async
","text":"Delete deployment by id.
Parameters:
Name Type Description Defaultdeployment_id
UUID
The deployment id of interest.
requiredRaises:
Type DescriptionObjectNotFound
If request returns 404
RequestError
If request fails
Source code inprefect/client/orchestration.py
async def delete_deployment(\n self,\n deployment_id: UUID,\n):\n \"\"\"\n Delete deployment by id.\n\n Args:\n deployment_id: The deployment id of interest.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/deployments/{deployment_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment_schedules","title":"create_deployment_schedules
async
","text":"Create deployment schedules.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedules
List[Tuple[SCHEDULE_TYPES, bool]]
a list of tuples containing the schedule to create and whether or not it should be active.
requiredRaises:
Type DescriptionRequestError
if the schedules were not created for any reason
Returns:
Type DescriptionList[DeploymentSchedule]
the list of schedules created in the backend
Source code inprefect/client/orchestration.py
async def create_deployment_schedules(\n self,\n deployment_id: UUID,\n schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n) -> List[DeploymentSchedule]:\n \"\"\"\n Create deployment schedules.\n\n Args:\n deployment_id: the deployment ID\n schedules: a list of tuples containing the schedule to create\n and whether or not it should be active.\n\n Raises:\n httpx.RequestError: if the schedules were not created for any reason\n\n Returns:\n the list of schedules created in the backend\n \"\"\"\n deployment_schedule_create = [\n DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n for schedule in schedules\n ]\n\n json = [\n deployment_schedule_create.dict(json_compatible=True)\n for deployment_schedule_create in deployment_schedule_create\n ]\n response = await self._client.post(\n f\"/deployments/{deployment_id}/schedules\", json=json\n )\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
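Example (a sketch; the UUID is a placeholder and the cron string is illustrative):
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.client.schemas.schedules import CronSchedule


async def add_schedules(deployment_id: UUID):
    async with get_client() as client:
        # Each tuple pairs a schedule with whether it starts out active
        schedules = await client.create_deployment_schedules(
            deployment_id,
            [(CronSchedule(cron="0 9 * * *"), True)],
        )
        print([s.id for s in schedules])


asyncio.run(add_schedules(UUID("00000000-0000-0000-0000-000000000000")))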
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_schedules","title":"read_deployment_schedules
async
","text":"Query the Prefect API for a deployment's schedules.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredReturns:
Type DescriptionList[DeploymentSchedule]
a list of DeploymentSchedule model representations of the deployment schedules
Source code inprefect/client/orchestration.py
async def read_deployment_schedules(\n self,\n deployment_id: UUID,\n) -> List[DeploymentSchedule]:\n \"\"\"\n Query the Prefect API for a deployment's schedules.\n\n Args:\n deployment_id: the deployment ID\n\n Returns:\n a list of DeploymentSchedule model representations of the deployment schedules\n \"\"\"\n try:\n response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_deployment_schedule","title":"update_deployment_schedule
async
","text":"Update a deployment schedule by ID.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedule_id
UUID
the deployment schedule ID of interest
requiredactive
Optional[bool]
whether or not the schedule should be active
None
schedule
Optional[SCHEDULE_TYPES]
the cron, rrule, or interval schedule this deployment schedule should use
None
Source code in prefect/client/orchestration.py
async def update_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n active: Optional[bool] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n):\n \"\"\"\n Update a deployment schedule by ID.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the deployment schedule ID of interest\n active: whether or not the schedule should be active\n schedule: the cron, rrule, or interval schedule this deployment schedule should use\n \"\"\"\n kwargs = {}\n if active is not None:\n kwargs[\"active\"] = active\n elif schedule is not None:\n kwargs[\"schedule\"] = schedule\n\n deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n try:\n await self._client.patch(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
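Note that in the source above, active and schedule are applied via if/elif, so a single call updates one or the other, with active taking precedence. Example (a sketch; the UUIDs are placeholders):
import asyncio
from uuid import UUID

from prefect import get_client


async def pause_schedule(deployment_id: UUID, schedule_id: UUID):
    async with get_client() as client:
        # Deactivate one schedule without changing its cadence
        await client.update_deployment_schedule(
            deployment_id, schedule_id, active=False
        )


asyncio.run(
    pause_schedule(
        UUID("00000000-0000-0000-0000-000000000000"),
        UUID("00000000-0000-0000-0000-000000000000"),
    )
)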
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment_schedule","title":"delete_deployment_schedule
async
","text":"Delete a deployment schedule.
Parameters:
Name Type Description Defaultdeployment_id
UUID
the deployment ID
requiredschedule_id
UUID
the ID of the deployment schedule to delete.
requiredRaises:
Type DescriptionRequestError
if the schedules were not deleted for any reason
Source code inprefect/client/orchestration.py
async def delete_deployment_schedule(\n self,\n deployment_id: UUID,\n schedule_id: UUID,\n) -> None:\n \"\"\"\n Delete a deployment schedule.\n\n Args:\n deployment_id: the deployment ID\n schedule_id: the ID of the deployment schedule to delete.\n\n Raises:\n httpx.RequestError: if the schedules were not deleted for any reason\n \"\"\"\n try:\n await self._client.delete(\n f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run
async
","text":"Query the Prefect API for a flow run by id.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the flow run ID of interest
requiredReturns:
Type DescriptionFlowRun
a Flow Run model representation of the flow run
Source code inprefect/client/orchestration.py
async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n \"\"\"\n Query the Prefect API for a flow run by id.\n\n Args:\n flow_run_id: the flow run ID of interest\n\n Returns:\n a Flow Run model representation of the flow run\n \"\"\"\n try:\n response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n return FlowRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run
async
","text":"Resumes a paused flow run.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the flow run ID of interest
requiredrun_input
Optional[Dict]
the input to resume the flow run with
None
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def resume_flow_run(\n self, flow_run_id: UUID, run_input: Optional[Dict] = None\n) -> OrchestrationResult:\n \"\"\"\n Resumes a paused flow run.\n\n Args:\n flow_run_id: the flow run ID of interest\n run_input: the input to resume the flow run with\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n )\n except httpx.HTTPStatusError:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs
async
","text":"Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
work_pool_filter
WorkPoolFilter
filter criteria for work pools
None
work_queue_filter
WorkQueueFilter
filter criteria for work pool queues
None
sort
FlowRunSort
sort criteria for the flow runs
None
limit
int
limit for the flow run query
None
offset
int
offset for the flow run query
0
Returns:
Type DescriptionList[FlowRun]
a list of Flow Run model representations of the flow runs
Source code inprefect/client/orchestration.py
async def read_flow_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n work_pool_filter: WorkPoolFilter = None,\n work_queue_filter: WorkQueueFilter = None,\n sort: FlowRunSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[FlowRun]:\n \"\"\"\n Query the Prefect API for flow runs. Only flow runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n work_pool_filter: filter criteria for work pools\n work_queue_filter: filter criteria for work pool queues\n sort: sort criteria for the flow runs\n limit: limit for the flow run query\n offset: offset for the flow run query\n\n Returns:\n a list of Flow Run model representations\n of the flow runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n \"work_pool_queues\": (\n work_queue_filter.dict(json_compatible=True)\n if work_queue_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n\n response = await self._client.post(\"/flow_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[FlowRun], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state
async
","text":"Set the state of a flow run.
Parameters:
Name Type Description Defaultflow_run_id
UUID
the id of the flow run
requiredstate
State
the state to set
requiredforce
bool
if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state
False
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def set_flow_run_state(\n self,\n flow_run_id: UUID,\n state: \"prefect.states.State\",\n force: bool = False,\n) -> OrchestrationResult:\n \"\"\"\n Set the state of a flow run.\n\n Args:\n flow_run_id: the id of the flow run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.flow_run_id = flow_run_id\n state_create.state_details.transition_id = uuid4()\n try:\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n\n return OrchestrationResult.parse_obj(response.json())\n
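Example (a sketch; the UUID is a placeholder for a real flow run ID):
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.states import Cancelled


async def cancel_run(flow_run_id: UUID):
    async with get_client() as client:
        # Propose a Cancelled state; orchestration rules may still reject it
        result = await client.set_flow_run_state(
            flow_run_id=flow_run_id, state=Cancelled()
        )
        print(result.status)


asyncio.run(cancel_run(UUID("00000000-0000-0000-0000-000000000000")))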
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states
async
","text":"Query for the states of a flow run
Parameters:
Name Type Description Defaultflow_run_id
UUID
the id of the flow run
requiredReturns:
Type DescriptionList[State]
a list of State model representations of the flow run states
Source code inprefect/client/orchestration.py
async def read_flow_run_states(\n self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a flow run\n\n Args:\n flow_run_id: the id of the flow run\n\n Returns:\n a list of State model representations\n of the flow run states\n \"\"\"\n response = await self._client.get(\n \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run
async
","text":"Create a task run
Parameters:
Name Type Description Defaulttask
Task
The Task to run
requiredflow_run_id
Optional[UUID]
The flow run id with which to associate the task run
requireddynamic_key
str
A key unique to this particular run of a Task within the flow
requiredname
str
An optional name for the task run
None
extra_tags
Iterable[str]
an optional list of extra tags to apply to the task run in addition to task.tags
None
state
State
The initial state for the run. If not provided, defaults to Pending
for now. Should always be a Scheduled
type.
None
task_inputs
Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]
the set of inputs passed to the task
None
Returns:
Type DescriptionTaskRun
The created task run.
Source code inprefect/client/orchestration.py
async def create_task_run(\n self,\n task: \"TaskObject\",\n flow_run_id: Optional[UUID],\n dynamic_key: str,\n name: str = None,\n extra_tags: Iterable[str] = None,\n state: prefect.states.State = None,\n task_inputs: Dict[\n str,\n List[\n Union[\n TaskRunResult,\n Parameter,\n Constant,\n ]\n ],\n ] = None,\n) -> TaskRun:\n \"\"\"\n Create a task run\n\n Args:\n task: The Task to run\n flow_run_id: The flow run id with which to associate the task run\n dynamic_key: A key unique to this particular run of a Task within the flow\n name: An optional name for the task run\n extra_tags: an optional list of extra tags to apply to the task run in\n addition to `task.tags`\n state: The initial state for the run. If not provided, defaults to\n `Pending` for now. Should always be a `Scheduled` type.\n task_inputs: the set of inputs passed to the task\n\n Returns:\n The created task run.\n \"\"\"\n tags = set(task.tags).union(extra_tags or [])\n\n if state is None:\n state = prefect.states.Pending()\n\n task_run_data = TaskRunCreate(\n name=name,\n flow_run_id=flow_run_id,\n task_key=task.task_key,\n dynamic_key=dynamic_key,\n tags=list(tags),\n task_version=task.version,\n empirical_policy=TaskRunPolicy(\n retries=task.retries,\n retry_delay=task.retry_delay_seconds,\n retry_jitter_factor=task.retry_jitter_factor,\n ),\n state=state.to_state_create(),\n task_inputs=task_inputs or {},\n )\n\n response = await self._client.post(\n \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n )\n return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run
async
","text":"Query the Prefect API for a task run by id.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the task run ID of interest
requiredReturns:
Type DescriptionTaskRun
a Task Run model representation of the task run
Source code inprefect/client/orchestration.py
async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n \"\"\"\n Query the Prefect API for a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n\n Returns:\n a Task Run model representation of the task run\n \"\"\"\n response = await self._client.get(f\"/task_runs/{task_run_id}\")\n return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs
async
","text":"Query the Prefect API for task runs. Only task runs matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_filter
FlowFilter
filter criteria for flows
None
flow_run_filter
FlowRunFilter
filter criteria for flow runs
None
task_run_filter
TaskRunFilter
filter criteria for task runs
None
deployment_filter
DeploymentFilter
filter criteria for deployments
None
sort
TaskRunSort
sort criteria for the task runs
None
limit
int
a limit for the task run query
None
offset
int
an offset for the task run query
0
Returns:
Type DescriptionList[TaskRun]
a list of Task Run model representations of the task runs
Source code inprefect/client/orchestration.py
async def read_task_runs(\n self,\n *,\n flow_filter: FlowFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n deployment_filter: DeploymentFilter = None,\n sort: TaskRunSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[TaskRun]:\n \"\"\"\n Query the Prefect API for task runs. Only task runs matching all criteria will\n be returned.\n\n Args:\n flow_filter: filter criteria for flows\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n deployment_filter: filter criteria for deployments\n sort: sort criteria for the task runs\n limit: a limit for the task run query\n offset: an offset for the task run query\n\n Returns:\n a list of Task Run model representations\n of the task runs\n \"\"\"\n body = {\n \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n if flow_run_filter\n else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"deployments\": (\n deployment_filter.dict(json_compatible=True)\n if deployment_filter\n else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/task_runs/filter\", json=body)\n return pydantic.parse_obj_as(List[TaskRun], response.json())\n
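For example, a sketch that reads recent failed task runs; this assumes TaskRunFilter, TaskRunFilterState, and TaskRunFilterStateName are importable from prefect.client.schemas.filters:
from prefect.client.orchestration import get_client\n# assumed import path for the client-side filter models\nfrom prefect.client.schemas.filters import (\n    TaskRunFilter,\n    TaskRunFilterState,\n    TaskRunFilterStateName,\n)\n\nasync def failed_task_runs():\n    async with get_client() as client:\n        # only task runs matching every supplied filter are returned\n        return await client.read_task_runs(\n            task_run_filter=TaskRunFilter(\n                state=TaskRunFilterState(\n                    name=TaskRunFilterStateName(any_=[\"Failed\"])\n                )\n            ),\n            limit=50,\n        )\n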
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the task run ID of interest
required Source code inprefect/client/orchestration.py
async def delete_task_run(self, task_run_id: UUID) -> None:\n \"\"\"\n Delete a task run by id.\n\n Args:\n task_run_id: the task run ID of interest\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/task_runs/{task_run_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state
async
","text":"Set the state of a task run.
Parameters:
Name Type Description Defaulttask_run_id
UUID
the id of the task run
requiredstate
State
the state to set
requiredforce
bool
if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state
False
Returns:
Type DescriptionOrchestrationResult
an OrchestrationResult model representation of state orchestration output
Source code inprefect/client/orchestration.py
async def set_task_run_state(\n self,\n task_run_id: UUID,\n state: prefect.states.State,\n force: bool = False,\n) -> OrchestrationResult:\n \"\"\"\n Set the state of a task run.\n\n Args:\n task_run_id: the id of the task run\n state: the state to set\n force: if True, disregard orchestration logic when setting the state,\n forcing the Prefect API to accept the state\n\n Returns:\n an OrchestrationResult model representation of state orchestration output\n \"\"\"\n state_create = state.to_state_create()\n state_create.state_details.task_run_id = task_run_id\n response = await self._client.post(\n f\"/task_runs/{task_run_id}/set_state\",\n json=dict(state=state_create.dict(json_compatible=True), force=force),\n )\n return OrchestrationResult.parse_obj(response.json())\n
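For example, a sketch that forces a task run into a Completed state, bypassing orchestration rules:
from prefect.client.orchestration import get_client\nfrom prefect.states import Completed\n\nasync def force_complete(task_run_id):\n    async with get_client() as client:\n        result = await client.set_task_run_state(\n            task_run_id=task_run_id,\n            state=Completed(),\n            force=True,  # skip orchestration logic; the API accepts the state as-is\n        )\n        print(result.status)\n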
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states
async
","text":"Query for the states of a task run
Parameters:
Name Type Description Defaulttask_run_id
UUID
the id of the task run
requiredReturns:
Type DescriptionList[State]
a list of State model representations of the task run states
Source code inprefect/client/orchestration.py
async def read_task_run_states(\n self, task_run_id: UUID\n) -> List[prefect.states.State]:\n \"\"\"\n Query for the states of a task run\n\n Args:\n task_run_id: the id of the task run\n\n Returns:\n a list of State model representations of the task run states\n \"\"\"\n response = await self._client.get(\n \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n )\n return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs
async
","text":"Create logs for a flow or task run
Parameters:
Name Type Description Defaultlogs
Iterable[Union[LogCreate, dict]]
An iterable of LogCreate
objects or already json-compatible dicts
prefect/client/orchestration.py
async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n \"\"\"\n Create logs for a flow or task run\n\n Args:\n logs: An iterable of `LogCreate` objects or already json-compatible dicts\n \"\"\"\n serialized_logs = [\n log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n for log in logs\n ]\n await self._client.post(\"/logs/\", json=serialized_logs)\n
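Because already json-compatible dicts are accepted as-is, a minimal sketch can pass plain dictionaries; the field names below follow the LogCreate schema and should be treated as assumptions:
from prefect.client.orchestration import get_client\n\nasync def emit_log(flow_run_id):\n    async with get_client() as client:\n        await client.create_logs(\n            [\n                {\n                    \"name\": \"prefect.flow_runs\",  # logger name\n                    \"level\": 20,  # stdlib logging level (INFO)\n                    \"message\": \"hello from the client\",\n                    \"timestamp\": \"2023-01-01T00:00:00+00:00\",\n                    \"flow_run_id\": str(flow_run_id),\n                }\n            ]\n        )\n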
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy
async
","text":"Create a notification policy for flow runs
Parameters:
Name Type Description Defaultblock_document_id
UUID
The block document UUID
requiredis_active
bool
Whether the notification policy is active
True
tags
List[str]
List of flow tags
None
state_names
List[str]
List of state names
None
message_template
Optional[str]
Notification message template
None
Source code in prefect/client/orchestration.py
async def create_flow_run_notification_policy(\n self,\n block_document_id: UUID,\n is_active: bool = True,\n tags: List[str] = None,\n state_names: List[str] = None,\n message_template: Optional[str] = None,\n) -> UUID:\n \"\"\"\n Create a notification policy for flow runs\n\n Args:\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n \"\"\"\n if tags is None:\n tags = []\n if state_names is None:\n state_names = []\n\n policy = FlowRunNotificationPolicyCreate(\n block_document_id=block_document_id,\n is_active=is_active,\n tags=tags,\n state_names=state_names,\n message_template=message_template,\n )\n response = await self._client.post(\n \"/flow_run_notification_policies/\",\n json=policy.dict(json_compatible=True),\n )\n\n policy_id = response.json().get(\"id\")\n if not policy_id:\n raise httpx.RequestError(f\"Malformed response: {response}\")\n\n return UUID(policy_id)\n
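For example, a sketch that notifies on failure states through an existing notification block document; block_doc_id and the template variables shown are assumptions:
from prefect.client.orchestration import get_client\n\nasync def add_policy(block_doc_id):\n    async with get_client() as client:\n        policy_id = await client.create_flow_run_notification_policy(\n            block_document_id=block_doc_id,  # e.g. an existing webhook block document\n            state_names=[\"Failed\", \"Crashed\"],\n            # assumed template variable names\n            message_template=\"Flow run {flow_run_name} entered {flow_run_state_name}\",\n        )\n        return policy_id\n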
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_notification_policy","title":"delete_flow_run_notification_policy
async
","text":"Delete a flow run notification policy by id.
Parameters:
Name Type Description Defaultid
UUID
UUID of the flow run notification policy to delete.
required Source code inprefect/client/orchestration.py
async def delete_flow_run_notification_policy(\n self,\n id: UUID,\n) -> None:\n \"\"\"\n Delete a flow run notification policy by id.\n\n Args:\n id: UUID of the flow run notification policy to delete.\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n try:\n await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run_notification_policy","title":"update_flow_run_notification_policy
async
","text":"Update a notification policy for flow runs
Parameters:
Name Type Description Defaultid
UUID
UUID of the notification policy
requiredblock_document_id
Optional[UUID]
The block document UUID
None
is_active
Optional[bool]
Whether the notification policy is active
None
tags
Optional[List[str]]
List of flow tags
None
state_names
Optional[List[str]]
List of state names
None
message_template
Optional[str]
Notification message template
None
Source code in prefect/client/orchestration.py
async def update_flow_run_notification_policy(\n self,\n id: UUID,\n block_document_id: Optional[UUID] = None,\n is_active: Optional[bool] = None,\n tags: Optional[List[str]] = None,\n state_names: Optional[List[str]] = None,\n message_template: Optional[str] = None,\n) -> None:\n \"\"\"\n Update a notification policy for flow runs\n\n Args:\n id: UUID of the notification policy\n block_document_id: The block document UUID\n is_active: Whether the notification policy is active\n tags: List of flow tags\n state_names: List of state names\n message_template: Notification message template\n Raises:\n prefect.exceptions.ObjectNotFound: If request returns 404\n httpx.RequestError: If requests fails\n \"\"\"\n params = {}\n if block_document_id is not None:\n params[\"block_document_id\"] = block_document_id\n if is_active is not None:\n params[\"is_active\"] = is_active\n if tags is not None:\n params[\"tags\"] = tags\n if state_names is not None:\n params[\"state_names\"] = state_names\n if message_template is not None:\n params[\"message_template\"] = message_template\n\n policy = FlowRunNotificationPolicyUpdate(**params)\n\n try:\n await self._client.patch(\n f\"/flow_run_notification_policies/{id}\",\n json=policy.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies
async
","text":"Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.
Parameters:
Name Type Description Defaultflow_run_notification_policy_filter
FlowRunNotificationPolicyFilter
filter criteria for notification policies
requiredlimit
Optional[int]
a limit for the notification policies query
None
offset
int
an offset for the notification policies query
0
Returns:
Type DescriptionList[FlowRunNotificationPolicy]
a list of FlowRunNotificationPolicy model representations of the notification policies
Source code inprefect/client/orchestration.py
async def read_flow_run_notification_policies(\n self,\n flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n limit: Optional[int] = None,\n offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n \"\"\"\n Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n be returned.\n\n Args:\n flow_run_notification_policy_filter: filter criteria for notification policies\n limit: a limit for the notification policies query\n offset: an offset for the notification policies query\n\n Returns:\n a list of FlowRunNotificationPolicy model representations\n of the notification policies\n \"\"\"\n body = {\n \"flow_run_notification_policy_filter\": (\n flow_run_notification_policy_filter.dict(json_compatible=True)\n if flow_run_notification_policy_filter\n else None\n ),\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\n \"/flow_run_notification_policies/filter\", json=body\n )\n return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs
async
","text":"Read flow and task run logs.
Source code inprefect/client/orchestration.py
async def read_logs(\n self,\n log_filter: LogFilter = None,\n limit: int = None,\n offset: int = None,\n sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> List[Log]:\n \"\"\"\n Read flow and task run logs.\n \"\"\"\n body = {\n \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n \"limit\": limit,\n \"offset\": offset,\n \"sort\": sort,\n }\n\n response = await self._client.post(\"/logs/filter\", json=body)\n return pydantic.parse_obj_as(List[Log], response.json())\n
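For example, a sketch that reads logs for a single flow run in timestamp order; this assumes LogFilter and LogFilterFlowRunId are importable from prefect.client.schemas.filters:
from prefect.client.orchestration import get_client\n# assumed import path for the client-side filter models\nfrom prefect.client.schemas.filters import LogFilter, LogFilterFlowRunId\n\nasync def logs_for(flow_run_id):\n    async with get_client() as client:\n        logs = await client.read_logs(\n            log_filter=LogFilter(\n                flow_run_id=LogFilterFlowRunId(any_=[flow_run_id])\n            ),\n            limit=100,\n        )\n        for log in logs:\n            print(log.timestamp, log.message)\n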
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc
async
","text":"Recursively decode possibly nested data documents.
\"server\" encoded documents will be retrieved from the server.
Parameters:
Name Type Description Defaultdatadoc
DataDocument
The data document to resolve
requiredReturns:
Type DescriptionAny
a decoded object, the innermost data
Source code inprefect/client/orchestration.py
async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n \"\"\"\n Recursively decode possibly nested data documents.\n\n \"server\" encoded documents will be retrieved from the server.\n\n Args:\n datadoc: The data document to resolve\n\n Returns:\n a decoded object, the innermost data\n \"\"\"\n if not isinstance(datadoc, DataDocument):\n raise TypeError(\n f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n )\n\n async def resolve_inner(data):\n if isinstance(data, bytes):\n try:\n data = DataDocument.parse_raw(data)\n except pydantic.ValidationError:\n return data\n\n if isinstance(data, DataDocument):\n return await resolve_inner(data.decode())\n\n return data\n\n return await resolve_inner(datadoc)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat
async
","text":"Sends a worker heartbeat for a given work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool to heartbeat against.
requiredworker_name
str
The name of the worker sending the heartbeat.
required Source code inprefect/client/orchestration.py
async def send_worker_heartbeat(\n self,\n work_pool_name: str,\n worker_name: str,\n heartbeat_interval_seconds: Optional[float] = None,\n):\n \"\"\"\n Sends a worker heartbeat for a given work pool.\n\n Args:\n work_pool_name: The name of the work pool to heartbeat against.\n worker_name: The name of the worker sending the heartbeat.\n \"\"\"\n await self._client.post(\n f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n json={\n \"name\": worker_name,\n \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n },\n )\n
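For example, a sketch of a heartbeat loop for a custom worker; the pool and worker names are illustrative:
import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def heartbeat_forever():\n    async with get_client() as client:\n        while True:\n            await client.send_worker_heartbeat(\n                work_pool_name=\"my-pool\",\n                worker_name=\"worker-1\",\n                heartbeat_interval_seconds=30.0,\n            )\n            await asyncio.sleep(30)\n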
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool
async
","text":"Reads workers for a given work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool for which to get member workers.
requiredworker_filter
Optional[WorkerFilter]
Criteria by which to filter workers.
None
limit
Optional[int]
Limit for the worker query.
None
offset
Optional[int]
Offset for the worker query.
None
Source code in prefect/client/orchestration.py
async def read_workers_for_work_pool(\n    self,\n    work_pool_name: str,\n    worker_filter: Optional[WorkerFilter] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n) -> List[Worker]:\n    \"\"\"\n    Reads workers for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            member workers.\n        worker_filter: Criteria by which to filter workers.\n        limit: Limit for the worker query.\n        offset: Offset for the worker query.\n    \"\"\"\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/filter\",\n        json={\n            \"worker_filter\": (\n                worker_filter.dict(json_compatible=True, exclude_unset=True)\n                if worker_filter\n                else None\n            ),\n            \"offset\": offset,\n            \"limit\": limit,\n        },\n    )\n\n    return pydantic.parse_obj_as(List[Worker], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool
async
","text":"Reads information for a given work pool
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool for which to get information.
requiredReturns:
Type DescriptionWorkPool
Information about the requested work pool.
Source code inprefect/client/orchestration.py
async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n    \"\"\"\n    Reads information for a given work pool\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            information.\n\n    Returns:\n        Information about the requested work pool.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n        return pydantic.parse_obj_as(WorkPool, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools
async
","text":"Reads work pools.
Parameters:
Name Type Description Defaultlimit
Optional[int]
Limit for the work pool query.
None
offset
int
Offset for the work pool query.
0
work_pool_filter
Optional[WorkPoolFilter]
Criteria by which to filter work pools.
None
Returns:
Type DescriptionList[WorkPool]
A list of work pools.
Source code inprefect/client/orchestration.py
async def read_work_pools(\n self,\n limit: Optional[int] = None,\n offset: int = 0,\n work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n \"\"\"\n Reads work pools.\n\n Args:\n limit: Limit for the work pool query.\n offset: Offset for the work pool query.\n work_pool_filter: Criteria by which to filter work pools.\n\n Returns:\n A list of work pools.\n \"\"\"\n\n body = {\n \"limit\": limit,\n \"offset\": offset,\n \"work_pools\": (\n work_pool_filter.dict(json_compatible=True)\n if work_pool_filter\n else None\n ),\n }\n response = await self._client.post(\"/work_pools/filter\", json=body)\n return pydantic.parse_obj_as(List[WorkPool], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool
async
","text":"Creates a work pool with the provided configuration.
Parameters:
Name Type Description Defaultwork_pool
WorkPoolCreate
Desired configuration for the new work pool.
requiredReturns:
Type DescriptionWorkPool
Information about the newly created work pool.
Source code inprefect/client/orchestration.py
async def create_work_pool(\n self,\n work_pool: WorkPoolCreate,\n) -> WorkPool:\n \"\"\"\n Creates a work pool with the provided configuration.\n\n Args:\n work_pool: Desired configuration for the new work pool.\n\n Returns:\n Information about the newly created work pool.\n \"\"\"\n try:\n response = await self._client.post(\n \"/work_pools/\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_409_CONFLICT:\n raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n else:\n raise\n\n return pydantic.parse_obj_as(WorkPool, response.json())\n
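For example, a sketch that creates a process-type work pool; this assumes WorkPoolCreate is importable from prefect.client.schemas.actions:
from prefect.client.orchestration import get_client\n# assumed import path for the creation schema\nfrom prefect.client.schemas.actions import WorkPoolCreate\n\nasync def make_pool():\n    async with get_client() as client:\n        pool = await client.create_work_pool(\n            work_pool=WorkPoolCreate(name=\"my-pool\", type=\"process\")\n        )\n        print(pool.id)\n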
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_pool","title":"update_work_pool
async
","text":"Updates a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
Name of the work pool to update.
requiredwork_pool
WorkPoolUpdate
Fields to update in the work pool.
required Source code inprefect/client/orchestration.py
async def update_work_pool(\n self,\n work_pool_name: str,\n work_pool: WorkPoolUpdate,\n):\n \"\"\"\n Updates a work pool.\n\n Args:\n work_pool_name: Name of the work pool to update.\n work_pool: Fields to update in the work pool.\n \"\"\"\n try:\n await self._client.patch(\n f\"/work_pools/{work_pool_name}\",\n json=work_pool.dict(json_compatible=True, exclude_unset=True),\n )\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool
async
","text":"Deletes a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
str
Name of the work pool to delete.
required Source code inprefect/client/orchestration.py
async def delete_work_pool(\n self,\n work_pool_name: str,\n):\n \"\"\"\n Deletes a work pool.\n\n Args:\n work_pool_name: Name of the work pool to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/work_pools/{work_pool_name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues
async
","text":"Retrieves queues for a work pool.
Parameters:
Name Type Description Defaultwork_pool_name
Optional[str]
Name of the work pool for which to get queues.
None
work_queue_filter
Optional[WorkQueueFilter]
Criteria by which to filter queues.
None
limit
Optional[int]
Limit for the queue query.
None
offset
Optional[int]
Offset for the queue query.
None
Returns:
Type DescriptionList[WorkQueue]
List of queues for the specified work pool.
Source code inprefect/client/orchestration.py
async def read_work_queues(\n    self,\n    work_pool_name: Optional[str] = None,\n    work_queue_filter: Optional[WorkQueueFilter] = None,\n    limit: Optional[int] = None,\n    offset: Optional[int] = None,\n) -> List[WorkQueue]:\n    \"\"\"\n    Retrieves queues for a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool for which to get queues.\n        work_queue_filter: Criteria by which to filter queues.\n        limit: Limit for the queue query.\n        offset: Offset for the queue query.\n\n    Returns:\n        List of queues for the specified work pool.\n    \"\"\"\n    json = {\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    if work_pool_name:\n        try:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues/filter\",\n                json=json,\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n    else:\n        response = await self._client.post(\"/work_queues/filter\", json=json)\n\n    return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool
async
","text":"Retrieves scheduled flow runs for the provided set of work pool queues.
Parameters:
Name Type Description Defaultwork_pool_name
str
The name of the work pool that the work pool queues are associated with.
requiredwork_queue_names
Optional[List[str]]
The names of the work pool queues from which to get scheduled flow runs.
None
scheduled_before
Optional[datetime]
Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned.
None
Returns:
Type DescriptionList[WorkerFlowRunResponse]
A list of worker flow run responses containing information about the retrieved flow runs.
Source code inprefect/client/orchestration.py
async def get_scheduled_flow_runs_for_work_pool(\n self,\n work_pool_name: str,\n work_queue_names: Optional[List[str]] = None,\n scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n \"\"\"\n Retrieves scheduled flow runs for the provided set of work pool queues.\n\n Args:\n work_pool_name: The name of the work pool that the work pool\n queues are associated with.\n work_queue_names: The names of the work pool queues from which\n to get scheduled flow runs.\n scheduled_before: Datetime used to filter returned flow runs. Flow runs\n scheduled for after the given datetime string will not be returned.\n\n Returns:\n A list of worker flow run responses containing information about the\n retrieved flow runs.\n \"\"\"\n body: Dict[str, Any] = {}\n if work_queue_names is not None:\n body[\"work_queue_names\"] = list(work_queue_names)\n if scheduled_before:\n body[\"scheduled_before\"] = str(scheduled_before)\n\n response = await self._client.post(\n f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n json=body,\n )\n return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
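For example, a sketch that fetches runs already due for two queues of a pool; the names are illustrative, and each response is assumed to carry the flow run under .flow_run:
import datetime\nfrom prefect.client.orchestration import get_client\n\nasync def due_runs():\n    async with get_client() as client:\n        responses = await client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name=\"my-pool\",\n            work_queue_names=[\"default\", \"high-priority\"],\n            scheduled_before=datetime.datetime.now(datetime.timezone.utc),\n        )\n        for response in responses:\n            print(response.flow_run.id)  # assumed response attribute\n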
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact
async
","text":"Creates an artifact with the provided configuration.
Parameters:
Name Type Description Defaultartifact
ArtifactCreate
Desired configuration for the new artifact.
required Source code inprefect/client/orchestration.py
async def create_artifact(\n self,\n artifact: ArtifactCreate,\n) -> Artifact:\n \"\"\"\n Creates an artifact with the provided configuration.\n\n Args:\n artifact: Desired configuration for the new artifact.\n Returns:\n Information about the newly created artifact.\n \"\"\"\n\n response = await self._client.post(\n \"/artifacts/\",\n json=artifact.dict(json_compatible=True, exclude_unset=True),\n )\n\n return pydantic.parse_obj_as(Artifact, response.json())\n
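For example, a sketch that records a markdown artifact; this assumes ArtifactCreate is importable from prefect.client.schemas.actions:
from prefect.client.orchestration import get_client\n# assumed import path for the creation schema\nfrom prefect.client.schemas.actions import ArtifactCreate\n\nasync def make_artifact():\n    async with get_client() as client:\n        artifact = await client.create_artifact(\n            artifact=ArtifactCreate(\n                key=\"daily-report\",\n                type=\"markdown\",\n                data=\"All systems nominal.\",\n            )\n        )\n        print(artifact.id)\n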
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts
async
","text":"Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned. Args: artifact_filter: filter criteria for artifacts flow_run_filter: filter criteria for flow runs task_run_filter: filter criteria for task runs sort: sort criteria for the artifacts limit: limit for the artifact query offset: offset for the artifact query Returns: a list of Artifact model representations of the artifacts
Source code inprefect/client/orchestration.py
async def read_artifacts(\n self,\n *,\n artifact_filter: ArtifactFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[Artifact]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/filter\", json=body)\n return pydantic.parse_obj_as(List[Artifact], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts
async
","text":"Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned. Args: artifact_filter: filter criteria for artifacts flow_run_filter: filter criteria for flow runs task_run_filter: filter criteria for task runs sort: sort criteria for the artifacts limit: limit for the artifact query offset: offset for the artifact query Returns: a list of Artifact model representations of the artifacts
Source code inprefect/client/orchestration.py
async def read_latest_artifacts(\n self,\n *,\n artifact_filter: ArtifactCollectionFilter = None,\n flow_run_filter: FlowRunFilter = None,\n task_run_filter: TaskRunFilter = None,\n sort: ArtifactCollectionSort = None,\n limit: int = None,\n offset: int = 0,\n) -> List[ArtifactCollection]:\n \"\"\"\n Query the Prefect API for artifacts. Only artifacts matching all criteria will\n be returned.\n Args:\n artifact_filter: filter criteria for artifacts\n flow_run_filter: filter criteria for flow runs\n task_run_filter: filter criteria for task runs\n sort: sort criteria for the artifacts\n limit: limit for the artifact query\n offset: offset for the artifact query\n Returns:\n a list of Artifact model representations of the artifacts\n \"\"\"\n body = {\n \"artifacts\": (\n artifact_filter.dict(json_compatible=True) if artifact_filter else None\n ),\n \"flow_runs\": (\n flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n ),\n \"task_runs\": (\n task_run_filter.dict(json_compatible=True) if task_run_filter else None\n ),\n \"sort\": sort,\n \"limit\": limit,\n \"offset\": offset,\n }\n response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact
async
","text":"Deletes an artifact with the provided id.
Parameters:
Name Type Description Defaultartifact_id
UUID
The id of the artifact to delete.
required Source code inprefect/client/orchestration.py
async def delete_artifact(self, artifact_id: UUID) -> None:\n \"\"\"\n Deletes an artifact with the provided id.\n\n Args:\n artifact_id: The id of the artifact to delete.\n \"\"\"\n try:\n await self._client.delete(f\"/artifacts/{artifact_id}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name
async
","text":"Reads a variable by name. Returns None if no variable is found.
Source code inprefect/client/orchestration.py
async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n \"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n try:\n response = await self._client.get(f\"/variables/name/{name}\")\n return pydantic.parse_obj_as(Variable, response.json())\n except httpx.HTTPStatusError as e:\n if e.response.status_code == status.HTTP_404_NOT_FOUND:\n return None\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name
async
","text":"Deletes a variable by name.
Source code inprefect/client/orchestration.py
async def delete_variable_by_name(self, name: str):\n \"\"\"Deletes a variable by name.\"\"\"\n try:\n await self._client.delete(f\"/variables/name/{name}\")\n except httpx.HTTPStatusError as e:\n if e.response.status_code == 404:\n raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n else:\n raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables
async
","text":"Reads all variables.
Source code inprefect/client/orchestration.py
async def read_variables(self, limit: int = None) -> List[Variable]:\n \"\"\"Reads all variables.\"\"\"\n response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n return pydantic.parse_obj_as(List[Variable], response.json())\n
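For example, a sketch that lists variables and then looks one up by name:
from prefect.client.orchestration import get_client\n\nasync def show_variables():\n    async with get_client() as client:\n        for variable in await client.read_variables(limit=10):\n            print(variable.name, variable.value)\n        # read_variable_by_name returns None rather than raising when missing\n        if await client.read_variable_by_name(\"my_variable\") is None:\n            print(\"my_variable is not set\")\n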
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata
async
","text":"Reads worker metadata stored in Prefect collection registry.
Source code inprefect/client/orchestration.py
async def read_worker_metadata(self) -> Dict[str, Any]:\n \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n response.raise_for_status()\n return response.json()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation
async
","text":"Creates an automation in Prefect Cloud.
Source code inprefect/client/orchestration.py
async def create_automation(self, automation: Automation) -> UUID:\n \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n if self.server_type != ServerType.CLOUD:\n raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n response = await self._client.post(\n \"/automations/\",\n json=automation.dict(json_compatible=True),\n )\n\n return UUID(response.json()[\"id\"])\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_input","title":"create_flow_run_input
async
","text":"Creates a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
requiredvalue
str
The input value.
requiredsender
Optional[str]
The sender of the input.
None
Source code in prefect/client/orchestration.py
async def create_flow_run_input(\n self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n):\n \"\"\"\n Creates a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n value: The input value.\n sender: The sender of the input.\n \"\"\"\n\n # Initialize the input to ensure that the key is valid.\n FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n response = await self._client.post(\n f\"/flow_runs/{flow_run_id}/input\",\n json={\"key\": key, \"value\": value, \"sender\": sender},\n )\n response.raise_for_status()\n
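For example, a sketch that writes a flow run input, reads it back, and deletes it; the key name is illustrative:
from prefect.client.orchestration import get_client\n\nasync def roundtrip(flow_run_id):\n    async with get_client() as client:\n        await client.create_flow_run_input(\n            flow_run_id, key=\"approval\", value=\"yes\"\n        )\n        value = await client.read_flow_run_input(flow_run_id, key=\"approval\")\n        print(value)\n        await client.delete_flow_run_input(flow_run_id, key=\"approval\")\n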
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_input","title":"read_flow_run_input
async
","text":"Reads a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
required Source code inprefect/client/orchestration.py
async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n \"\"\"\n Reads a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n return response.content.decode()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Deletes a flow run input.
Parameters:
Name Type Description Defaultflow_run_id
UUID
The flow run id.
requiredkey
str
The input key.
required Source code inprefect/client/orchestration.py
async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n \"\"\"\n Deletes a flow run input.\n\n Args:\n flow_run_id: The flow run id.\n key: The input key.\n \"\"\"\n response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n response.raise_for_status()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client
","text":"Retrieve a HTTP client for communicating with the Prefect REST API.
The client must be context managed; for example:
async with get_client() as client:\n await client.hello()\n
Source code in prefect/client/orchestration.py
def get_client(httpx_settings: Optional[dict] = None) -> \"PrefectClient\":\n \"\"\"\n Retrieve a HTTP client for communicating with the Prefect REST API.\n\n The client must be context managed; for example:\n\n ```python\n async with get_client() as client:\n await client.hello()\n ```\n \"\"\"\n ctx = prefect.context.get_settings_context()\n api = PREFECT_API_URL.value()\n\n if not api:\n # create an ephemeral API if none was provided\n from prefect.server.api.server import create_app\n\n api = create_app(ctx.settings, ephemeral=True)\n\n return PrefectClient(\n api,\n api_key=PREFECT_API_KEY.value(),\n httpx_settings=httpx_settings,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_1","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions","title":"prefect.client.schemas.actions
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.StateCreate","title":"StateCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a new state.
Source code inprefect/client/schemas/actions.py
class StateCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n type: StateType\n name: Optional[str] = Field(default=None)\n message: Optional[str] = Field(default=None, example=\"Run started\")\n state_details: StateDetails = Field(default_factory=StateDetails)\n data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n default=None,\n )\n
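In practice a StateCreate is usually derived from a State via to_state_create(), as the client methods above do, rather than constructed by hand; for example:
from prefect.states import Scheduled\n\n# build the creation payload from a regular State object\nstate_create = Scheduled().to_state_create()\nprint(state_create.type, state_create.name)\n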
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowCreate","title":"FlowCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n name: str = FieldFrom(objects.Flow)\n tags: List[str] = FieldFrom(objects.Flow)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowUpdate","title":"FlowUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n tags: List[str] = FieldFrom(objects.Flow)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate","title":"DeploymentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a deployment.
Source code inprefect/client/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    name: str = FieldFrom(objects.Deployment)\n    flow_id: UUID = FieldFrom(objects.Deployment)\n    is_schedule_active: Optional[bool] = FieldFrom(objects.Deployment)\n    paused: Optional[bool] = FieldFrom(objects.Deployment)\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n    parameters: Dict[str, Any] = FieldFrom(objects.Deployment)\n    tags: List[str] = FieldFrom(objects.Deployment)\n    pull_steps: Optional[List[dict]] = FieldFrom(objects.Deployment)\n\n    manifest_path: Optional[str] = FieldFrom(objects.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(objects.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    storage_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n    schedule: Optional[SCHEDULE_TYPES] = FieldFrom(objects.Deployment)\n    description: Optional[str] = FieldFrom(objects.Deployment)\n    path: Optional[str] = FieldFrom(objects.Deployment)\n    version: Optional[str] = FieldFrom(objects.Deployment)\n    entrypoint: Optional[str] = FieldFrom(objects.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a deployment.
Source code inprefect/client/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for updating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    @validator(\"schedule\")\n    def return_none_schedule(cls, v):\n        if isinstance(v, NoSchedule):\n            return None\n        return v\n\n    version: Optional[str] = FieldFrom(objects.Deployment)\n    schedule: Optional[SCHEDULE_TYPES] = FieldFrom(objects.Deployment)\n    description: Optional[str] = FieldFrom(objects.Deployment)\n    is_schedule_active: bool = FieldFrom(objects.Deployment)\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = FieldFrom(objects.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(objects.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    path: Optional[str] = FieldFrom(objects.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(objects.Deployment)\n    entrypoint: Optional[str] = FieldFrom(objects.Deployment)\n    manifest_path: Optional[str] = FieldFrom(objects.Deployment)\n    storage_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(objects.Deployment)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunCreate","title":"TaskRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a task run
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n # TaskRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the task run to create\"\n )\n\n name: str = FieldFrom(objects.TaskRun)\n flow_run_id: Optional[UUID] = FieldFrom(objects.TaskRun)\n task_key: str = FieldFrom(objects.TaskRun)\n dynamic_key: str = FieldFrom(objects.TaskRun)\n cache_key: Optional[str] = FieldFrom(objects.TaskRun)\n cache_expiration: Optional[objects.DateTimeTZ] = FieldFrom(objects.TaskRun)\n task_version: Optional[str] = FieldFrom(objects.TaskRun)\n empirical_policy: objects.TaskRunPolicy = FieldFrom(objects.TaskRun)\n tags: List[str] = FieldFrom(objects.TaskRun)\n task_inputs: Dict[\n str,\n List[\n Union[\n objects.TaskRunResult,\n objects.Parameter,\n objects.Constant,\n ]\n ],\n ] = FieldFrom(objects.TaskRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a task run
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n name: str = FieldFrom(objects.TaskRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunCreate","title":"FlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: str = FieldFrom(objects.FlowRun)\n flow_id: UUID = FieldFrom(objects.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n\n class Config(ActionBaseModel.Config):\n json_dumps = orjson_dumps_extra_compatible\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run from a deployment.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a saved search.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n name: str = FieldFrom(objects.SavedSearch)\n filters: List[objects.SavedSearchFilter] = FieldFrom(objects.SavedSearch)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a concurrency limit.
Source code inprefect/client/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n tag: str = FieldFrom(objects.ConcurrencyLimit)\n concurrency_limit: int = FieldFrom(objects.ConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block type.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n name: str = FieldFrom(objects.BlockType)\n slug: str = FieldFrom(objects.BlockType)\n logo_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n documentation_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n description: Optional[str] = FieldFrom(objects.BlockType)\n code_example: Optional[str] = FieldFrom(objects.BlockType)\n\n # validators\n _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n validate_block_type_slug\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block type.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n logo_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n documentation_url: Optional[objects.HttpUrl] = FieldFrom(objects.BlockType)\n description: Optional[str] = FieldFrom(objects.BlockType)\n code_example: Optional[str] = FieldFrom(objects.BlockType)\n\n @classmethod\n def updatable_fields(cls) -> set:\n return get_class_fields_only(cls)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block schema.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n fields: dict = FieldFrom(objects.BlockSchema)\n block_type_id: Optional[UUID] = FieldFrom(objects.BlockSchema)\n capabilities: List[str] = FieldFrom(objects.BlockSchema)\n version: str = FieldFrom(objects.BlockSchema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block document.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.BlockDocument)\n data: dict = FieldFrom(objects.BlockDocument)\n block_schema_id: UUID = FieldFrom(objects.BlockDocument)\n block_type_id: UUID = FieldFrom(objects.BlockDocument)\n is_anonymous: bool = FieldFrom(objects.BlockDocument)\n\n _validate_name_format = validator(\"name\", allow_reuse=True)(\n validate_block_document_name\n )\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # TODO: We should find an elegant way to reuse this logic from the origin model\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
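A sketch of the validator's effect, assuming a pydantic v1-style environment where the ValueError surfaces as a ValidationError; the UUIDs here are placeholders:
from uuid import uuid4\nfrom pydantic import ValidationError\nfrom prefect.client.schemas.actions import BlockDocumentCreate\n\ntry:\n    # Non-anonymous documents must be named, per the root validator above\n    BlockDocumentCreate(\n        data={\"token\": \"...\"},\n        block_schema_id=uuid4(),\n        block_type_id=uuid4(),\n        is_anonymous=False,\n    )\nexcept ValidationError as exc:\n    print(exc)  # raised because no name was provided\n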
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block document.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n block_schema_id: Optional[UUID] = Field(\n default=None, description=\"A block schema ID\"\n )\n data: dict = FieldFrom(objects.BlockDocument)\n merge_existing_data: bool = True\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate
","text":" Bases: ActionBaseModel
Data used to create a block document reference.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n    """Data used to create a block document reference."""\n\n    id: UUID = FieldFrom(objects.BlockDocumentReference)\n    parent_block_document_id: UUID = FieldFrom(objects.BlockDocumentReference)\n    reference_block_document_id: UUID = FieldFrom(objects.BlockDocumentReference)\n    name: str = FieldFrom(objects.BlockDocumentReference)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.LogCreate","title":"LogCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a log.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n name: str = FieldFrom(objects.Log)\n level: int = FieldFrom(objects.Log)\n message: str = FieldFrom(objects.Log)\n timestamp: objects.DateTimeTZ = FieldFrom(objects.Log)\n flow_run_id: Optional[UUID] = FieldFrom(objects.Log)\n task_run_id: Optional[UUID] = FieldFrom(objects.Log)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work pool.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n name: str = FieldFrom(objects.WorkPool)\n description: Optional[str] = FieldFrom(objects.WorkPool)\n type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n base_job_template: Dict[str, Any] = FieldFrom(objects.WorkPool)\n is_paused: bool = FieldFrom(objects.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkPool)\n
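A usage sketch, not from the source; it assumes the client exposes create_work_pool accepting this model directly, which may differ across versions:
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import WorkPoolCreate\n\nasync def make_pool():\n    pool = WorkPoolCreate(name=\"local-pool\", type=\"process\", concurrency_limit=4)\n    async with get_client() as client:\n        return await client.create_work_pool(work_pool=pool)\n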
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work pool.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n description: Optional[str] = FieldFrom(objects.WorkPool)\n is_paused: Optional[bool] = FieldFrom(objects.WorkPool)\n base_job_template: Optional[Dict[str, Any]] = FieldFrom(objects.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkPool)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work queue.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n name: str = FieldFrom(objects.WorkQueue)\n description: Optional[str] = FieldFrom(objects.WorkQueue)\n is_paused: bool = FieldFrom(objects.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkQueue)\n priority: Optional[int] = Field(\n default=None,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n\n # DEPRECATED\n\n filter: Optional[objects.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
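A construction sketch using only the fields shown above:
from prefect.client.schemas.actions import WorkQueueCreate\n\n# Lower numbers are higher priority, so this queue is served first\nqueue = WorkQueueCreate(name=\"critical\", priority=1, concurrency_limit=5)\n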
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work queue.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n name: str = FieldFrom(objects.WorkQueue)\n description: Optional[str] = FieldFrom(objects.WorkQueue)\n is_paused: bool = FieldFrom(objects.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(objects.WorkQueue)\n priority: Optional[int] = FieldFrom(objects.WorkQueue)\n last_polled: Optional[DateTimeTZ] = FieldFrom(objects.WorkQueue)\n\n # DEPRECATED\n\n filter: Optional[objects.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run notification policy.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n is_active: bool = FieldFrom(objects.FlowRunNotificationPolicy)\n state_names: List[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n tags: List[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n block_document_id: UUID = FieldFrom(objects.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run notification policy.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n is_active: Optional[bool] = FieldFrom(objects.FlowRunNotificationPolicy)\n state_names: Optional[List[str]] = FieldFrom(objects.FlowRunNotificationPolicy)\n tags: Optional[List[str]] = FieldFrom(objects.FlowRunNotificationPolicy)\n block_document_id: Optional[UUID] = FieldFrom(objects.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(objects.FlowRunNotificationPolicy)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactCreate","title":"ArtifactCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create an artifact.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n key: Optional[str] = FieldFrom(objects.Artifact)\n type: Optional[str] = FieldFrom(objects.Artifact)\n description: Optional[str] = FieldFrom(objects.Artifact)\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(objects.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(objects.Artifact)\n flow_run_id: Optional[UUID] = FieldFrom(objects.Artifact)\n task_run_id: Optional[UUID] = FieldFrom(objects.Artifact)\n\n _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n validate_artifact_key\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update an artifact.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(objects.Artifact)\n description: Optional[str] = FieldFrom(objects.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(objects.Artifact)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableCreate","title":"VariableCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a Variable.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n name: str = FieldFrom(objects.Variable)\n value: str = FieldFrom(objects.Variable)\n tags: Optional[List[str]] = FieldFrom(objects.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableUpdate","title":"VariableUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a Variable.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n    """Data used by the Prefect REST API to update a Variable."""\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        example=\"my_variable\",\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        example=\"my-value\",\n        max_length=objects.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = FieldFrom(objects.Variable)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
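A construction sketch using only the fields shown above; the name and value are illustrative:
from prefect.client.schemas.actions import VariableCreate\n\n# The name must satisfy validate_variable_name and the length cap above\nvar = VariableCreate(name=\"deploy_env\", value=\"staging\", tags=[\"config\"])\n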
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitCreate","title":"GlobalConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a global concurrency limit.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass GlobalConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a global concurrency limit.\"\"\"\n\n name: str = FieldFrom(objects.GlobalConcurrencyLimit)\n limit: int = FieldFrom(objects.GlobalConcurrencyLimit)\n active: Optional[bool] = FieldFrom(objects.GlobalConcurrencyLimit)\n active_slots: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n slot_decay_per_second: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitUpdate","title":"GlobalConcurrencyLimitUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a global concurrency limit.
Source code in prefect/client/schemas/actions.py
@copy_model_fields\nclass GlobalConcurrencyLimitUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a global concurrency limit.\"\"\"\n\n name: Optional[str] = FieldFrom(objects.GlobalConcurrencyLimit)\n limit: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n active: Optional[bool] = FieldFrom(objects.GlobalConcurrencyLimit)\n active_slots: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n slot_decay_per_second: Optional[int] = FieldFrom(objects.GlobalConcurrencyLimit)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_2","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters","title":"prefect.client.schemas.filters
","text":"Schemas that define Prefect REST API filtering operations.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.Operator","title":"Operator
","text":" Bases: AutoEnum
Operators for combining filter criteria.
Source code in prefect/client/schemas/filters.py
class Operator(AutoEnum):\n \"\"\"Operators for combining filter criteria.\"\"\"\n\n and_ = AutoEnum.auto()\n or_ = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.OperatorMixin","title":"OperatorMixin
","text":"Base model for Prefect filters that combines criteria with a user-provided operator
Source code in prefect/client/schemas/filters.py
class OperatorMixin:\n \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n operator: Operator = Field(\n default=Operator.and_,\n description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n )\n
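A sketch of the operator in use, with the FlowFilter classes defined later in this module:
from prefect.client.schemas.filters import (\n    FlowFilter,\n    FlowFilterName,\n    FlowFilterTags,\n    Operator,\n)\n\n# Match flows named \"etl\" OR tagged [\"etl\"]; the default operator is and_\nflt = FlowFilter(\n    operator=Operator.or_,\n    name=FlowFilterName(any_=[\"etl\"]),\n    tags=FlowFilterTags(all_=[\"etl\"]),\n)\n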
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterId","title":"FlowFilterId
","text":" Bases: PrefectBaseModel
Filter by Flow.id.
Source code in prefect/client/schemas/filters.py
class FlowFilterId(PrefectBaseModel):\n \"\"\"Filter by `Flow.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterName","title":"FlowFilterName
","text":" Bases: PrefectBaseModel
Filter by Flow.name.
Source code in prefect/client/schemas/filters.py
class FlowFilterName(PrefectBaseModel):\n \"\"\"Filter by `Flow.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow names to include\",\n example=[\"my-flow-1\", \"my-flow-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterTags","title":"FlowFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Flow.tags
.
prefect/client/schemas/filters.py
class FlowFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Flow.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flows will be returned only if their tags are a superset\"\n \" of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flows without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilter","title":"FlowFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for flows. Only flows matching all criteria will be returned.
Source code inprefect/client/schemas/filters.py
class FlowFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n id: Optional[FlowFilterId] = Field(\n default=None, description=\"Filter criteria for `Flow.id`\"\n )\n name: Optional[FlowFilterName] = Field(\n default=None, description=\"Filter criteria for `Flow.name`\"\n )\n tags: Optional[FlowFilterTags] = Field(\n default=None, description=\"Filter criteria for `Flow.tags`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId
","text":" Bases: PrefectBaseModel
Filter by FlowRun.id.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterId(PrefectBaseModel):\n \"\"\"Filter by FlowRun.id.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n not_any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName
","text":" Bases: PrefectBaseModel
Filter by FlowRun.name.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterName(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow run names to include\",\n example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.tags
.
prefect/client/schemas/filters.py
class FlowRunFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flow runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flow runs without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.deployment_id
.
prefect/client/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run deployment ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without deployment ids\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.work_queue_name
.
prefect/client/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without work queue names\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType
","text":" Bases: PrefectBaseModel
Filter by FlowRun.state_type.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterStateType(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n any_: Optional[List[StateType]] = Field(\n default=None, description=\"A list of flow run state types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion
","text":" Bases: PrefectBaseModel
Filter by FlowRun.flow_version.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run flow_versions to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.start_time.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterStartTime(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return flow runs without a start time\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.expected_start_time.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectBaseModel):\n \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or after this time\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime
","text":" Bases: PrefectBaseModel
Filter by FlowRun.next_scheduled_start_time.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectBaseModel):\n    """Filter by `FlowRun.next_scheduled_start_time`."""\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or before\"\n            \" this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for subflows of the given flow runs
Source code inprefect/client/schemas/filters.py
class FlowRunFilterParentFlowRunId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for subflows of the given flow runs\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parents to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by FlowRun.parent_task_run_id
.
prefect/client/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parent_task_run_ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without parent_task_run_id\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey
","text":" Bases: PrefectBaseModel
Filter by FlowRun.idempotency_key.
Source code in prefect/client/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectBaseModel):\n \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilter","title":"FlowRunFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter flow runs. Only flow runs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class FlowRunFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n id: Optional[FlowRunFilterId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.id`\"\n )\n name: Optional[FlowRunFilterName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.name`\"\n )\n tags: Optional[FlowRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `FlowRun.tags`\"\n )\n deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n )\n work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n )\n state: Optional[FlowRunFilterState] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state`\"\n )\n flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n )\n start_time: Optional[FlowRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n )\n expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n )\n next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n default=None,\n description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n )\n parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n default=None, description=\"Filter criteria for subflows of the given flow runs\"\n )\n parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n )\n idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n )\n
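A usage sketch, not from the source, assuming get_client from prefect.client.orchestration and the read_flow_runs client method:
from uuid import UUID\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import (\n    FlowRunFilter,\n    FlowRunFilterDeploymentId,\n    FlowRunFilterTags,\n)\n\nasync def prod_runs(deployment_id: UUID):\n    # With the default and_ operator, a run must satisfy every criterion\n    flt = FlowRunFilter(\n        deployment_id=FlowRunFilterDeploymentId(any_=[deployment_id]),\n        tags=FlowRunFilterTags(all_=[\"prod\"]),\n    )\n    async with get_client() as client:\n        return await client.read_flow_runs(flow_run_filter=flt)\n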
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by TaskRun.flow_run_id.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n\n is_null_: bool = Field(\n default=False,\n description=\"If true, only include task runs without a flow run id\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId
","text":" Bases: PrefectBaseModel
Filter by TaskRun.id.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterId(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName
","text":" Bases: PrefectBaseModel
Filter by TaskRun.name.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterName(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of task run names to include\",\n example=[\"my-task-run-1\", \"my-task-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by TaskRun.tags
.
prefect/client/schemas/filters.py
class TaskRunFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Task runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include task runs without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType
","text":" Bases: PrefectBaseModel
Filter by TaskRun.state_type.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterStateType(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n any_: Optional[List[StateType]] = Field(\n default=None, description=\"A list of task run state types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns
","text":" Bases: PrefectBaseModel
Filter by TaskRun.subflow_run.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If true, only include task runs that are subflow run parents; if false,\"\n \" exclude parent task runs\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime
","text":" Bases: PrefectBaseModel
Filter by TaskRun.start_time.
Source code in prefect/client/schemas/filters.py
class TaskRunFilterStartTime(PrefectBaseModel):\n \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return task runs without a start time\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilter","title":"TaskRunFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter task runs. Only task runs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class TaskRunFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n id: Optional[TaskRunFilterId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.id`\"\n )\n name: Optional[TaskRunFilterName] = Field(\n default=None, description=\"Filter criteria for `TaskRun.name`\"\n )\n tags: Optional[TaskRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `TaskRun.tags`\"\n )\n state: Optional[TaskRunFilterState] = Field(\n default=None, description=\"Filter criteria for `TaskRun.state`\"\n )\n start_time: Optional[TaskRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n )\n subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n )\n flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n )\n
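A usage sketch under the same client assumptions as the flow run example above:
from uuid import UUID\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import TaskRunFilter, TaskRunFilterFlowRunId\n\nasync def task_runs_for(flow_run_id: UUID):\n    flt = TaskRunFilter(flow_run_id=TaskRunFilterFlowRunId(any_=[flow_run_id]))\n    async with get_client() as client:\n        return await client.read_task_runs(task_run_filter=flt)\n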
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId
","text":" Bases: PrefectBaseModel
Filter by Deployment.id.
Source code in prefect/client/schemas/filters.py
class DeploymentFilterId(PrefectBaseModel):\n \"\"\"Filter by `Deployment.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of deployment ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName
","text":" Bases: PrefectBaseModel
Filter by Deployment.name.
Source code in prefect/client/schemas/filters.py
class DeploymentFilterName(PrefectBaseModel):\n \"\"\"Filter by `Deployment.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of deployment names to include\",\n example=[\"my-deployment-1\", \"my-deployment-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName
","text":" Bases: PrefectBaseModel
Filter by Deployment.work_queue_name.
Source code in prefect/client/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectBaseModel):\n \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive
","text":" Bases: PrefectBaseModel
Filter by Deployment.is_schedule_active.
Source code in prefect/client/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectBaseModel):\n \"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Deployment.tags
.
prefect/client/schemas/filters.py
class DeploymentFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Deployments will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include deployments without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilter","title":"DeploymentFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter for deployments. Only deployments matching all criteria will be returned.
Source code inprefect/client/schemas/filters.py
class DeploymentFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n id: Optional[DeploymentFilterId] = Field(\n default=None, description=\"Filter criteria for `Deployment.id`\"\n )\n name: Optional[DeploymentFilterName] = Field(\n default=None, description=\"Filter criteria for `Deployment.name`\"\n )\n is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n )\n tags: Optional[DeploymentFilterTags] = Field(\n default=None, description=\"Filter criteria for `Deployment.tags`\"\n )\n work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n )\n
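A usage sketch, not from the source, assuming the read_deployments client method:
from prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import DeploymentFilter, DeploymentFilterName\n\nasync def find_etl_deployments():\n    # like_ is a case-insensitive partial match, per the field description\n    flt = DeploymentFilter(name=DeploymentFilterName(like_=\"etl\"))\n    async with get_client() as client:\n        return await client.read_deployments(deployment_filter=flt)\n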
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterName","title":"LogFilterName
","text":" Bases: PrefectBaseModel
Filter by Log.name.
Source code in prefect/client/schemas/filters.py
class LogFilterName(PrefectBaseModel):\n \"\"\"Filter by `Log.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of log names to include\",\n example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterLevel","title":"LogFilterLevel
","text":" Bases: PrefectBaseModel
Filter by Log.level.
Source code in prefect/client/schemas/filters.py
class LogFilterLevel(PrefectBaseModel):\n \"\"\"Filter by `Log.level`.\"\"\"\n\n ge_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level greater than or equal to this level\",\n example=20,\n )\n\n le_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level less than or equal to this level\",\n example=50,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp
","text":" Bases: PrefectBaseModel
Filter by Log.timestamp.
Source code in prefect/client/schemas/filters.py
class LogFilterTimestamp(PrefectBaseModel):\n \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or after this time\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by Log.flow_run_id.
Source code in prefect/client/schemas/filters.py
class LogFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by Log.task_run_id.
Source code in prefect/client/schemas/filters.py
class LogFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilter","title":"LogFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter logs. Only logs matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class LogFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n level: Optional[LogFilterLevel] = Field(\n default=None, description=\"Filter criteria for `Log.level`\"\n )\n timestamp: Optional[LogFilterTimestamp] = Field(\n default=None, description=\"Filter criteria for `Log.timestamp`\"\n )\n flow_run_id: Optional[LogFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n )\n task_run_id: Optional[LogFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Log.task_run_id`\"\n )\n
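A usage sketch, not from the source, assuming the read_logs client method:
from uuid import UUID\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.filters import (\n    LogFilter,\n    LogFilterFlowRunId,\n    LogFilterLevel,\n)\n\nasync def warnings_for(flow_run_id: UUID):\n    # 30 is logging.WARNING; ge_ keeps warnings and above\n    flt = LogFilter(\n        level=LogFilterLevel(ge_=30),\n        flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]),\n    )\n    async with get_client() as client:\n        return await client.read_logs(log_filter=flt)\n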
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FilterSet","title":"FilterSet
","text":" Bases: PrefectBaseModel
A collection of filters for common objects
Source code in prefect/client/schemas/filters.py
class FilterSet(PrefectBaseModel):\n \"\"\"A collection of filters for common objects\"\"\"\n\n flows: FlowFilter = Field(\n default_factory=FlowFilter, description=\"Filters that apply to flows\"\n )\n flow_runs: FlowRunFilter = Field(\n default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n )\n task_runs: TaskRunFilter = Field(\n default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n )\n deployments: DeploymentFilter = Field(\n default_factory=DeploymentFilter,\n description=\"Filters that apply to deployments\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName
","text":" Bases: PrefectBaseModel
Filter by BlockType.name
Source code in prefect/client/schemas/filters.py
class BlockTypeFilterName(PrefectBaseModel):\n \"\"\"Filter by `BlockType.name`\"\"\"\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug
","text":" Bases: PrefectBaseModel
Filter by BlockType.slug
Source code in prefect/client/schemas/filters.py
class BlockTypeFilterSlug(PrefectBaseModel):\n \"\"\"Filter by `BlockType.slug`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of slugs to match\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter
","text":" Bases: PrefectBaseModel
Filter BlockTypes
Source code in prefect/client/schemas/filters.py
class BlockTypeFilter(PrefectBaseModel):\n \"\"\"Filter BlockTypes\"\"\"\n\n name: Optional[BlockTypeFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockType.name`\"\n )\n\n slug: Optional[BlockTypeFilterSlug] = Field(\n default=None, description=\"Filter criteria for `BlockType.slug`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.block_type_id.
Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectBaseModel):\n \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.id
Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterId(PrefectBaseModel):\n \"\"\"Filter by BlockSchema.id\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.capabilities
Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectBaseModel):\n \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"write-storage\", \"read-storage\"],\n description=(\n \"A list of block capabilities. Block entities will be returned only if an\"\n \" associated block schema has a superset of the defined capabilities.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion
","text":" Bases: PrefectBaseModel
Filter by BlockSchema.version
Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterVersion(PrefectBaseModel):\n    """Filter by `BlockSchema.version`"""\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"2.0.0\", \"2.1.0\"],\n        description=\"A list of block schema versions.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter BlockSchemas
Source code inprefect/client/schemas/filters.py
class BlockSchemaFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter BlockSchemas\"\"\"\n\n block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n )\n block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n )\n id: Optional[BlockSchemaFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.id`\"\n )\n version: Optional[BlockSchemaFilterVersion] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.version`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.is_anonymous.
Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter block documents for only those that are or are not anonymous.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.block_type_id.
Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.id.
Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterId(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName
","text":" Bases: PrefectBaseModel
Filter by BlockDocument.name.
Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterName(PrefectBaseModel):\n \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of block names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match block names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-block%\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class BlockDocumentFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n id: Optional[BlockDocumentFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.id`\"\n )\n is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n # default is to exclude anonymous blocks\n BlockDocumentFilterIsAnonymous(eq_=False),\n description=(\n \"Filter criteria for `BlockDocument.is_anonymous`. \"\n \"Defaults to excluding anonymous blocks.\"\n ),\n )\n block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n )\n name: Optional[BlockDocumentFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.name`\"\n )\n
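A construction sketch highlighting the non-obvious default:
from prefect.client.schemas.filters import BlockDocumentFilter\n\n# The default excludes anonymous blocks; pass is_anonymous=None to lift\n# that restriction and match both anonymous and named documents\nflt = BlockDocumentFilter(is_anonymous=None)\n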
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive
","text":" Bases: PrefectBaseModel
Filter by FlowRunNotificationPolicy.is_active.
Source code in prefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectBaseModel):\n \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter notification policies for only those that are or are not active.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter
","text":" Bases: PrefectBaseModel
Filter FlowRunNotificationPolicies.
Source code in prefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectBaseModel):\n \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId
","text":" Bases: PrefectBaseModel
Filter by WorkQueue.id.
Source code in prefect/client/schemas/filters.py
class WorkQueueFilterId(PrefectBaseModel):\n \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None,\n description=\"A list of work queue ids to include\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName
","text":" Bases: PrefectBaseModel
Filter by WorkQueue.name.
Source code in prefect/client/schemas/filters.py
class WorkQueueFilterName(PrefectBaseModel):\n \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"wq-1\", \"wq-2\"],\n )\n\n startswith_: Optional[List[str]] = Field(\n default=None,\n description=(\n \"A list of case-insensitive starts-with matches. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n ),\n example=[\"marvin\", \"Marvin-robot\"],\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter work queues. Only work queues matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class WorkQueueFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter work queues. Only work queues matching all criteria will be\n returned\"\"\"\n\n id: Optional[WorkQueueFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.id`\"\n )\n\n name: Optional[WorkQueueFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.name`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId
","text":" Bases: PrefectBaseModel
Filter by WorkPool.id.
Source code in prefect/client/schemas/filters.py
class WorkPoolFilterId(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName
","text":" Bases: PrefectBaseModel
Filter by WorkPool.name.
Source code in prefect/client/schemas/filters.py
class WorkPoolFilterName(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool names to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType
","text":" Bases: PrefectBaseModel
Filter by WorkPool.type.
Source code in prefect/client/schemas/filters.py
class WorkPoolFilterType(PrefectBaseModel):\n \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool types to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId
","text":" Bases: PrefectBaseModel
Filter by Worker.worker_config_id
.
prefect/client/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectBaseModel):\n \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime
","text":" Bases: PrefectBaseModel
Filter by Worker.last_heartbeat_time
.
prefect/client/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or before this time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or after this time\"\n ),\n )\n
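For example, to target workers that have heartbeated recently, pair after_ with a pendulum timestamp; a construction-only sketch:
import pendulum\nfrom prefect.client.schemas.filters import WorkerFilterLastHeartbeatTime\n\n# only workers whose last heartbeat is within the past five minutes\nrecent = WorkerFilterLastHeartbeatTime(after_=pendulum.now(\"UTC\").subtract(minutes=5))\nprint(recent.after_)\n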
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId
","text":" Bases: PrefectBaseModel
Filter by Artifact.id
.
prefect/client/schemas/filters.py
class ArtifactFilterId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey
","text":" Bases: PrefectBaseModel
Filter by Artifact.key
.
prefect/client/schemas/filters.py
class ArtifactFilterKey(PrefectBaseModel):\n \"\"\"Filter by `Artifact.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by Artifact.flow_run_id
.
prefect/client/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by Artifact.task_run_id
.
prefect/client/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType
","text":" Bases: PrefectBaseModel
Filter by Artifact.type
.
prefect/client/schemas/filters.py
class ArtifactFilterType(PrefectBaseModel):\n \"\"\"Filter by `Artifact.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilter","title":"ArtifactFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter artifacts. Only artifacts matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class ArtifactFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n id: Optional[ArtifactFilterId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n
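A sketch of combining the component filters, assuming your client version exposes read_artifacts with an artifact_filter keyword:
import asyncio\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n    ArtifactFilter,\n    ArtifactFilterKey,\n    ArtifactFilterType,\n)\n\nasync def main():\n    # table artifacts whose keys start with \"report-\"\n    art_filter = ArtifactFilter(\n        key=ArtifactFilterKey(like_=\"report-%\"),\n        type=ArtifactFilterType(any_=[\"table\"]),\n    )\n    async with get_client() as client:\n        # assumption: this client version accepts an artifact_filter keyword\n        artifacts = await client.read_artifacts(artifact_filter=art_filter)\n        for artifact in artifacts:\n            print(artifact.key, artifact.type)\n\nasyncio.run(main())\n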
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.latest_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.key
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key. Should return all rows in \"\n \"the ArtifactCollection table if specified.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.flow_run_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.task_run_id
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType
","text":" Bases: PrefectBaseModel
Filter by ArtifactCollection.type
.
prefect/client/schemas/filters.py
class ArtifactCollectionFilterType(PrefectBaseModel):\n \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter artifact collections. Only artifact collections matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class ArtifactCollectionFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactCollectionFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactCollectionFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterId","title":"VariableFilterId
","text":" Bases: PrefectBaseModel
Filter by Variable.id
.
prefect/client/schemas/filters.py
class VariableFilterId(PrefectBaseModel):\n \"\"\"Filter by `Variable.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of variable ids to include\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterName","title":"VariableFilterName
","text":" Bases: PrefectBaseModel
Filter by Variable.name
.
prefect/client/schemas/filters.py
class VariableFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variable names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my_variable_%\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterValue","title":"VariableFilterValue
","text":" Bases: PrefectBaseModel
Filter by Variable.value
.
prefect/client/schemas/filters.py
class VariableFilterValue(PrefectBaseModel):\n    \"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variable values to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable values against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-value-%\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterTags","title":"VariableFilterTags
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter by Variable.tags
.
prefect/client/schemas/filters.py
class VariableFilterTags(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter by `Variable.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Variables will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include Variables without tags\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilter","title":"VariableFilter
","text":" Bases: PrefectBaseModel
, OperatorMixin
Filter variables. Only variables matching all criteria will be returned
Source code inprefect/client/schemas/filters.py
class VariableFilter(PrefectBaseModel, OperatorMixin):\n \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n id: Optional[VariableFilterId] = Field(\n default=None, description=\"Filter criteria for `Variable.id`\"\n )\n name: Optional[VariableFilterName] = Field(\n default=None, description=\"Filter criteria for `Variable.name`\"\n )\n value: Optional[VariableFilterValue] = Field(\n default=None, description=\"Filter criteria for `Variable.value`\"\n )\n tags: Optional[VariableFilterTags] = Field(\n default=None, description=\"Filter criteria for `Variable.tags`\"\n )\n
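A construction-only sketch; the like_ pattern uses SQL wildcards and all_ requires every listed tag to be present:
from prefect.client.schemas.filters import (\n    VariableFilter,\n    VariableFilterName,\n    VariableFilterTags,\n)\n\n# variables named like \"db_...\" that carry both tags\nvar_filter = VariableFilter(\n    name=VariableFilterName(like_=\"db_%\"),\n    tags=VariableFilterTags(all_=[\"prod\", \"postgres\"]),\n)\nprint(var_filter.dict(exclude_none=True))\n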
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_3","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects","title":"prefect.client.schemas.objects
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.StateType","title":"StateType
","text":" Bases: AutoEnum
Enumeration of state types.
Source code inprefect/client/schemas/objects.py
class StateType(AutoEnum):\n \"\"\"Enumeration of state types.\"\"\"\n\n SCHEDULED = AutoEnum.auto()\n PENDING = AutoEnum.auto()\n RUNNING = AutoEnum.auto()\n COMPLETED = AutoEnum.auto()\n FAILED = AutoEnum.auto()\n CANCELLED = AutoEnum.auto()\n CRASHED = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n CANCELLING = AutoEnum.auto()\n
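AutoEnum generates each member's value from its name, so the string value mirrors the member; a quick sanity check that also mirrors the terminal set used by State.is_final() below:
from prefect.client.schemas.objects import StateType\n\nassert StateType.COMPLETED.value == \"COMPLETED\"\n\n# terminal state types, mirroring State.is_final()\nFINAL = {StateType.COMPLETED, StateType.FAILED, StateType.CANCELLED, StateType.CRASHED}\nprint(StateType.RUNNING in FINAL)  # False\n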
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPoolStatus","title":"WorkPoolStatus
","text":" Bases: AutoEnum
Enumeration of work pool statuses.
Source code inprefect/client/schemas/objects.py
class WorkPoolStatus(AutoEnum):\n \"\"\"Enumeration of work pool statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkerStatus","title":"WorkerStatus
","text":" Bases: AutoEnum
Enumeration of worker statuses.
Source code inprefect/client/schemas/objects.py
class WorkerStatus(AutoEnum):\n \"\"\"Enumeration of worker statuses.\"\"\"\n\n ONLINE = AutoEnum.auto()\n OFFLINE = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.DeploymentStatus","title":"DeploymentStatus
","text":" Bases: AutoEnum
Enumeration of deployment statuses.
Source code inprefect/client/schemas/objects.py
class DeploymentStatus(AutoEnum):\n \"\"\"Enumeration of deployment statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueStatus","title":"WorkQueueStatus
","text":" Bases: AutoEnum
Enumeration of work queue statuses.
Source code inprefect/client/schemas/objects.py
class WorkQueueStatus(AutoEnum):\n \"\"\"Enumeration of work queue statuses.\"\"\"\n\n READY = AutoEnum.auto()\n NOT_READY = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State","title":"State
","text":" Bases: ObjectBaseModel
, Generic[R]
The state of a run.
Source code inprefect/client/schemas/objects.py
class State(ObjectBaseModel, Generic[R]):\n \"\"\"\n The state of a run.\n \"\"\"\n\n type: StateType\n name: Optional[str] = Field(default=None)\n timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n message: Optional[str] = Field(default=None, example=\"Run started\")\n state_details: StateDetails = Field(default_factory=StateDetails)\n data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n default=None,\n )\n\n @overload\n def result(self: \"State[R]\", raise_on_failure: bool = True) -> R:\n ...\n\n @overload\n def result(self: \"State[R]\", raise_on_failure: bool = False) -> Union[R, Exception]:\n ...\n\n def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n \"\"\"\n Retrieve the result attached to this state.\n\n Args:\n raise_on_failure: a boolean specifying whether to raise an exception\n if the state is of type `FAILED` and the underlying data is an exception\n fetch: a boolean specifying whether to resolve references to persisted\n results into data. For synchronous users, this defaults to `True`.\n For asynchronous users, this defaults to `False` for backwards\n compatibility.\n\n Raises:\n TypeError: If the state is failed but the result is not an exception.\n\n Returns:\n The result of the run\n\n Examples:\n >>> from prefect import flow, task\n >>> @task\n >>> def my_task(x):\n >>> return x\n\n Get the result from a task future in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task(\"hello\")\n >>> state = future.wait()\n >>> result = state.result()\n >>> print(result)\n >>> my_flow()\n hello\n\n Get the result from a flow state\n\n >>> @flow\n >>> def my_flow():\n >>> return \"hello\"\n >>> my_flow(return_state=True).result()\n hello\n\n Get the result from a failed state\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n >>> state.result() # Raises `ValueError`\n\n Get the result from a failed state without erroring\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True)\n >>> result = state.result(raise_on_failure=False)\n >>> print(result)\n ValueError(\"oh no!\")\n\n\n Get the result from a flow state in an async context\n\n >>> @flow\n >>> async def my_flow():\n >>> return \"hello\"\n >>> state = await my_flow(return_state=True)\n >>> await state.result()\n hello\n \"\"\"\n from prefect.states import get_state_result\n\n return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n\n def to_state_create(self):\n \"\"\"\n Convert this state to a `StateCreate` type which can be used to set the state of\n a run in the API.\n\n This method will drop this state's `data` if it is not a result type. Only\n results should be sent to the API. 
Other data is only available locally.\n \"\"\"\n from prefect.client.schemas.actions import StateCreate\n from prefect.results import BaseResult\n\n return StateCreate(\n type=self.type,\n name=self.name,\n message=self.message,\n data=self.data if isinstance(self.data, BaseResult) else None,\n state_details=self.state_details,\n )\n\n @validator(\"name\", always=True)\n def default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n\n @root_validator\n def default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n\n def is_scheduled(self) -> bool:\n return self.type == StateType.SCHEDULED\n\n def is_pending(self) -> bool:\n return self.type == StateType.PENDING\n\n def is_running(self) -> bool:\n return self.type == StateType.RUNNING\n\n def is_completed(self) -> bool:\n return self.type == StateType.COMPLETED\n\n def is_failed(self) -> bool:\n return self.type == StateType.FAILED\n\n def is_crashed(self) -> bool:\n return self.type == StateType.CRASHED\n\n def is_cancelled(self) -> bool:\n return self.type == StateType.CANCELLED\n\n def is_cancelling(self) -> bool:\n return self.type == StateType.CANCELLING\n\n def is_final(self) -> bool:\n return self.type in {\n StateType.CANCELLED,\n StateType.FAILED,\n StateType.COMPLETED,\n StateType.CRASHED,\n }\n\n def is_paused(self) -> bool:\n return self.type == StateType.PAUSED\n\n def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n \"\"\"\n Copying API models should return an object that could be inserted into the\n database again. 
The 'timestamp' is reset using the default factory.\n \"\"\"\n update = update or {}\n update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n def __repr__(self) -> str:\n \"\"\"\n Generates a complete state representation appropriate for introspection\n and debugging, including the result:\n\n `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n \"\"\"\n from prefect.deprecated.data_documents import DataDocument\n\n if isinstance(self.data, DataDocument):\n result = self.data.decode()\n else:\n result = self.data\n\n display = dict(\n message=repr(self.message),\n type=str(self.type.value),\n result=repr(result),\n )\n\n return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n def __str__(self) -> str:\n \"\"\"\n Generates a simple state representation appropriate for logging:\n\n `MyCompletedState(\"my message\", type=COMPLETED)`\n \"\"\"\n\n display = []\n\n if self.message:\n display.append(repr(self.message))\n\n if self.type.value.lower() != self.name.lower():\n display.append(f\"type={self.type.value}\")\n\n return f\"{self.name}({', '.join(display)})\"\n\n def __hash__(self) -> int:\n return hash(\n (\n getattr(self.state_details, \"flow_run_id\", None),\n getattr(self.state_details, \"task_run_id\", None),\n self.timestamp,\n self.type,\n )\n )\n
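A short sketch of inspecting a state built with the Completed convenience constructor from prefect.states:
from prefect.states import Completed\n\nstate = Completed(message=\"all done\")\nprint(state)  # Completed('all done')\nprint(state.is_final())  # True\nprint(state.to_state_create().type)  # StateType.COMPLETED\n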
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.result","title":"result
","text":"Retrieve the result attached to this state.
Parameters:
Name Type Description Defaultraise_on_failure
bool
a boolean specifying whether to raise an exception if the state is of type FAILED
and the underlying data is an exception
True
fetch
Optional[bool]
a boolean specifying whether to resolve references to persisted results into data. For synchronous users, this defaults to True
. For asynchronous users, this defaults to False
for backwards compatibility.
None
Raises:
Type DescriptionTypeError
If the state is failed but the result is not an exception.
Returns:
Type DescriptionThe result of the run
Examples:
>>> from prefect import flow, task\n>>> @task\n>>> def my_task(x):\n>>> return x\n
Get the result from a task future in a flow
>>> @flow\n>>> def my_flow():\n>>> future = my_task(\"hello\")\n>>> state = future.wait()\n>>> result = state.result()\n>>> print(result)\n>>> my_flow()\nhello\n
Get the result from a flow state
>>> @flow\n>>> def my_flow():\n>>> return \"hello\"\n>>> my_flow(return_state=True).result()\nhello\n
Get the result from a failed state
>>> @flow\n>>> def my_flow():\n>>> raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n>>> state.result() # Raises `ValueError`\n
Get the result from a failed state without erroring
>>> @flow\n>>> def my_flow():\n>>> raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)\n>>> result = state.result(raise_on_failure=False)\n>>> print(result)\nValueError(\"oh no!\")\n
Get the result from a flow state in an async context
>>> @flow\n>>> async def my_flow():\n>>> return \"hello\"\n>>> state = await my_flow(return_state=True)\n>>> await state.result()\nhello\n
Source code in prefect/client/schemas/objects.py
def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n \"\"\"\n Retrieve the result attached to this state.\n\n Args:\n raise_on_failure: a boolean specifying whether to raise an exception\n if the state is of type `FAILED` and the underlying data is an exception\n fetch: a boolean specifying whether to resolve references to persisted\n results into data. For synchronous users, this defaults to `True`.\n For asynchronous users, this defaults to `False` for backwards\n compatibility.\n\n Raises:\n TypeError: If the state is failed but the result is not an exception.\n\n Returns:\n The result of the run\n\n Examples:\n >>> from prefect import flow, task\n >>> @task\n >>> def my_task(x):\n >>> return x\n\n Get the result from a task future in a flow\n\n >>> @flow\n >>> def my_flow():\n >>> future = my_task(\"hello\")\n >>> state = future.wait()\n >>> result = state.result()\n >>> print(result)\n >>> my_flow()\n hello\n\n Get the result from a flow state\n\n >>> @flow\n >>> def my_flow():\n >>> return \"hello\"\n >>> my_flow(return_state=True).result()\n hello\n\n Get the result from a failed state\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True) # Error is wrapped in FAILED state\n >>> state.result() # Raises `ValueError`\n\n Get the result from a failed state without erroring\n\n >>> @flow\n >>> def my_flow():\n >>> raise ValueError(\"oh no!\")\n >>> state = my_flow(return_state=True)\n >>> result = state.result(raise_on_failure=False)\n >>> print(result)\n ValueError(\"oh no!\")\n\n\n Get the result from a flow state in an async context\n\n >>> @flow\n >>> async def my_flow():\n >>> return \"hello\"\n >>> state = await my_flow(return_state=True)\n >>> await state.result()\n hello\n \"\"\"\n from prefect.states import get_state_result\n\n return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.to_state_create","title":"to_state_create
","text":"Convert this state to a StateCreate
type which can be used to set the state of a run in the API.
This method will drop this state's data
if it is not a result type. Only results should be sent to the API. Other data is only available locally.
prefect/client/schemas/objects.py
def to_state_create(self):\n \"\"\"\n Convert this state to a `StateCreate` type which can be used to set the state of\n a run in the API.\n\n This method will drop this state's `data` if it is not a result type. Only\n results should be sent to the API. Other data is only available locally.\n \"\"\"\n from prefect.client.schemas.actions import StateCreate\n from prefect.results import BaseResult\n\n return StateCreate(\n type=self.type,\n name=self.name,\n message=self.message,\n data=self.data if isinstance(self.data, BaseResult) else None,\n state_details=self.state_details,\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_name_from_type","title":"default_name_from_type
","text":"If a name is not provided, use the type
Source code inprefect/client/schemas/objects.py
@validator(\"name\", always=True)\ndef default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_scheduled_start_time","title":"default_scheduled_start_time
","text":"This should throw an error instead of setting a default but is out of scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled into work refactoring state initialization
Source code inprefect/client/schemas/objects.py
@root_validator\ndef default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy","title":"FlowRunPolicy
","text":" Bases: PrefectBaseModel
Defines of how a flow run should be orchestrated.
Source code inprefect/client/schemas/objects.py
class FlowRunPolicy(PrefectBaseModel):\n    \"\"\"Defines how a flow run should be orchestrated.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        \"\"\"\n        If deprecated fields are provided, populate the corresponding new fields\n        to preserve orchestration behavior.\n        \"\"\"\n        if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n            values[\"retries\"] = values[\"max_retries\"]\n        if (\n            not values.get(\"retry_delay\", None)\n            and values.get(\"retry_delay_seconds\", 0) != 0\n        ):\n            values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n        return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code inprefect/client/schemas/objects.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
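The validator's effect is easy to see in isolation: supplying only the deprecated fields populates the new ones.
from prefect.client.schemas.objects import FlowRunPolicy\n\n# only the deprecated fields are supplied here\npolicy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)\nprint(policy.retries)  # 3\nprint(policy.retry_delay)  # copied from retry_delay_seconds\n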
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRun","title":"FlowRun
","text":" Bases: ObjectBaseModel
prefect/client/schemas/objects.py
class FlowRun(ObjectBaseModel):\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the flow run. Defaults to a random slug if not specified.\"\n ),\n example=\"my-flow-run\",\n )\n flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the flow run's current state.\"\n )\n deployment_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"The id of the deployment associated with this flow run, if available.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None, description=\"The work queue that handled this flow run.\"\n )\n flow_version: Optional[str] = Field(\n default=None,\n description=\"The version of the flow executed in this flow run.\",\n example=\"1.0\",\n )\n parameters: dict = Field(\n default_factory=dict, description=\"Parameters for the flow run.\"\n )\n idempotency_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n \" run is not created multiple times.\"\n ),\n )\n context: dict = Field(\n default_factory=dict,\n description=\"Additional context for the flow run.\",\n example={\"my_var\": \"my_val\"},\n )\n empirical_policy: FlowRunPolicy = Field(\n default_factory=FlowRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags on the flow run\",\n example=[\"tag-1\", \"tag-2\"],\n )\n parent_task_run_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n \" flow used to track subflow state.\"\n ),\n )\n run_count: int = Field(\n default=0, description=\"The number of times the flow run was executed.\"\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The flow run's expected start time.\",\n )\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the flow run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. 
If the flow run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of the total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n auto_scheduled: bool = Field(\n default=False,\n description=\"Whether or not the flow run was automatically scheduled.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use this flow run.\",\n )\n infrastructure_pid: Optional[str] = Field(\n default=None,\n description=\"The id of the flow run as returned by an infrastructure block.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this flow run.\",\n )\n work_queue_id: Optional[UUID] = Field(\n default=None, description=\"The id of the run's work pool queue.\"\n )\n\n work_pool_id: Optional[UUID] = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[State] = Field(\n default=None,\n description=\"The state of the flow run.\",\n example=State(type=StateType.COMPLETED),\n )\n job_variables: Optional[dict] = Field(\n default=None, description=\"Job variables for the flow run.\"\n )\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRun):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n\n @validator(\"name\", pre=True)\n def set_default_name(cls, name):\n return name or generate_slug(2)\n\n # These are server-side optimizations and should not be present on client models\n # TODO: Deprecate these fields\n\n state_type: Optional[StateType] = Field(\n default=None, description=\"The type of the current flow run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current flow run state.\"\n )\n
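FlowRun objects like this are what the orchestration client returns from its read methods; a minimal sketch against a reachable API:
import asyncio\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(limit=5)\n        for run in flow_runs:\n            print(run.name, run.state_type, run.total_run_time)\n\nasyncio.run(main())\n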
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy","title":"TaskRunPolicy
","text":" Bases: PrefectBaseModel
Defines of how a task run should retry.
Source code inprefect/client/schemas/objects.py
class TaskRunPolicy(PrefectBaseModel):\n    \"\"\"Defines how a task run should retry.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Union[None, int, List[int]] = Field(\n        default=None,\n        description=\"A delay time or list of delay times between retries, in seconds.\",\n    )\n    retry_jitter_factor: Optional[float] = Field(\n        default=None, description=\"Determines the amount a retry should jitter\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        \"\"\"\n        If deprecated fields are provided, populate the corresponding new fields\n        to preserve orchestration behavior.\n        \"\"\"\n        if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n            values[\"retries\"] = values[\"max_retries\"]\n\n        if (\n            not values.get(\"retry_delay\", None)\n            and values.get(\"retry_delay_seconds\", 0) != 0\n        ):\n            values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n\n        return values\n\n    @validator(\"retry_delay\")\n    def validate_configured_retry_delays(cls, v):\n        if isinstance(v, list) and (len(v) > 50):\n            raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n        return v\n\n    @validator(\"retry_jitter_factor\")\n    def validate_jitter_factor(cls, v):\n        if v is not None and v < 0:\n            raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n        return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code inprefect/client/schemas/objects.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n\n return values\n
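Beyond the deprecated-field handling, the validators in the class source above cap configurable retry delays at 50 entries and reject negative jitter; for example:
from prefect.client.schemas.objects import TaskRunPolicy\n\n# a per-retry backoff schedule is allowed (up to 50 entries)\npolicy = TaskRunPolicy(retries=3, retry_delay=[1, 10, 60])\nprint(policy.retry_delay)  # [1, 10, 60]\n\ntry:\n    TaskRunPolicy(retry_jitter_factor=-0.5)\nexcept Exception as exc:\n    print(exc)  # pydantic validation error: retry_jitter_factor must be >= 0\n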
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunInput","title":"TaskRunInput
","text":" Bases: PrefectBaseModel
Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs.
Source code inprefect/client/schemas/objects.py
class TaskRunInput(PrefectBaseModel):\n    \"\"\"\n    Base class for classes that represent inputs to task runs, which\n    could include constants, parameters, or other task runs.\n    \"\"\"\n\n    # freeze TaskRunInputs to allow them to be placed in sets\n    class Config:\n        frozen = True\n\n    input_type: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunResult","title":"TaskRunResult
","text":" Bases: TaskRunInput
Represents a task run result input to another task run.
Source code inprefect/client/schemas/objects.py
class TaskRunResult(TaskRunInput):\n \"\"\"Represents a task run result input to another task run.\"\"\"\n\n input_type: Literal[\"task_run\"] = \"task_run\"\n id: UUID\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Parameter","title":"Parameter
","text":" Bases: TaskRunInput
Represents a parameter input to a task run.
Source code inprefect/client/schemas/objects.py
class Parameter(TaskRunInput):\n \"\"\"Represents a parameter input to a task run.\"\"\"\n\n input_type: Literal[\"parameter\"] = \"parameter\"\n name: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Constant","title":"Constant
","text":" Bases: TaskRunInput
Represents a constant input value to a task run.
Source code inprefect/client/schemas/objects.py
class Constant(TaskRunInput):\n    \"\"\"Represents a constant input value to a task run.\"\"\"\n\n    input_type: Literal[\"constant\"] = \"constant\"\n    type: str\n
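Because TaskRunInput models are frozen, instances of these subclasses are hashable and can be collected in sets; for example:
from uuid import uuid4\nfrom prefect.client.schemas.objects import Constant, Parameter, TaskRunResult\n\ninputs = {\n    TaskRunResult(id=uuid4()),\n    Parameter(name=\"x\"),\n    Constant(type=\"int\"),\n}\nprint(sorted(i.input_type for i in inputs))  # ['constant', 'parameter', 'task_run']\n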
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace","title":"Workspace
","text":" Bases: PrefectBaseModel
A Prefect Cloud workspace.
Expected payload for each workspace returned by the me/workspaces
route.
prefect/client/schemas/objects.py
class Workspace(PrefectBaseModel):\n \"\"\"\n A Prefect Cloud workspace.\n\n Expected payload for each workspace returned by the `me/workspaces` route.\n \"\"\"\n\n account_id: UUID = Field(..., description=\"The account id of the workspace.\")\n account_name: str = Field(..., description=\"The account name.\")\n account_handle: str = Field(..., description=\"The account's unique handle.\")\n workspace_id: UUID = Field(..., description=\"The workspace id.\")\n workspace_name: str = Field(..., description=\"The workspace name.\")\n workspace_description: str = Field(..., description=\"Description of the workspace.\")\n workspace_handle: str = Field(..., description=\"The workspace's unique handle.\")\n\n class Config:\n extra = \"ignore\"\n\n @property\n def handle(self) -> str:\n \"\"\"\n The full handle of the workspace as `account_handle` / `workspace_handle`\n \"\"\"\n return self.account_handle + \"/\" + self.workspace_handle\n\n def api_url(self) -> str:\n \"\"\"\n Generate the API URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_API_URL.value()}\"\n f\"/accounts/{self.account_id}\"\n f\"/workspaces/{self.workspace_id}\"\n )\n\n def ui_url(self) -> str:\n \"\"\"\n Generate the UI URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_UI_URL.value()}\"\n f\"/account/{self.account_id}\"\n f\"/workspace/{self.workspace_id}\"\n )\n\n def __hash__(self):\n return hash(self.handle)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.handle","title":"handle: str
property
","text":"The full handle of the workspace as account_handle
/ workspace_handle
api_url
","text":"Generate the API URL for accessing this workspace
Source code inprefect/client/schemas/objects.py
def api_url(self) -> str:\n \"\"\"\n Generate the API URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_API_URL.value()}\"\n f\"/accounts/{self.account_id}\"\n f\"/workspaces/{self.workspace_id}\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.ui_url","title":"ui_url
","text":"Generate the UI URL for accessing this workspace
Source code inprefect/client/schemas/objects.py
def ui_url(self) -> str:\n \"\"\"\n Generate the UI URL for accessing this workspace\n \"\"\"\n return (\n f\"{PREFECT_CLOUD_UI_URL.value()}\"\n f\"/account/{self.account_id}\"\n f\"/workspace/{self.workspace_id}\"\n )\n
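A construction-only sketch of the derived properties; the IDs and handles below are placeholders:
from uuid import uuid4\nfrom prefect.client.schemas.objects import Workspace\n\nworkspace = Workspace(\n    account_id=uuid4(),\n    account_name=\"Acme\",\n    account_handle=\"acme\",\n    workspace_id=uuid4(),\n    workspace_name=\"Production\",\n    workspace_description=\"\",\n    workspace_handle=\"production\",\n)\nprint(workspace.handle)  # acme/production\nprint(workspace.api_url())  # .../accounts/<account_id>/workspaces/<workspace_id>\n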
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockType","title":"BlockType
","text":" Bases: ObjectBaseModel
An ORM representation of a block type
Source code inprefect/client/schemas/objects.py
class BlockType(ObjectBaseModel):\n \"\"\"An ORM representation of a block type\"\"\"\n\n name: str = Field(default=..., description=\"A block type's name\")\n slug: str = Field(default=..., description=\"A block type's slug\")\n logo_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's logo\"\n )\n documentation_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's documentation\"\n )\n description: Optional[str] = Field(\n default=None,\n description=\"A short blurb about the corresponding block's intended use\",\n )\n code_example: Optional[str] = Field(\n default=None,\n description=\"A code snippet demonstrating use of the corresponding block\",\n )\n is_protected: bool = Field(\n default=False, description=\"Protected block types cannot be modified via API.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocument","title":"BlockDocument
","text":" Bases: ObjectBaseModel
An ORM representation of a block document.
Source code inprefect/client/schemas/objects.py
class BlockDocument(ObjectBaseModel):\n \"\"\"An ORM representation of a block document.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=(\n \"The block document's name. Not required for anonymous block documents.\"\n ),\n )\n data: dict = Field(default_factory=dict, description=\"The block document's data\")\n block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The associated block schema\"\n )\n block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n block_type_name: Optional[str] = Field(None, description=\"A block type name\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n block_document_references: Dict[str, Dict[str, Any]] = Field(\n default_factory=dict, description=\"Record of the block document's references\"\n )\n is_anonymous: bool = Field(\n default=False,\n description=(\n \"Whether the block is anonymous (anonymous blocks are usually created by\"\n \" Prefect automatically)\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n # the BlockDocumentCreate subclass allows name=None\n # and will inherit this validator\n if v is not None:\n raise_on_name_with_banned_characters(v)\n return v\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # anonymous blocks may have no name prior to actually being\n # stored in the database\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Flow","title":"Flow
","text":" Bases: ObjectBaseModel
An ORM representation of flow data.
Source code inprefect/client/schemas/objects.py
class Flow(ObjectBaseModel):\n \"\"\"An ORM representation of flow data.\"\"\"\n\n name: str = Field(\n default=..., description=\"The name of the flow\", example=\"my-flow\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of flow tags\",\n example=[\"tag-1\", \"tag-2\"],\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunnerSettings","title":"FlowRunnerSettings
","text":" Bases: PrefectBaseModel
An API schema for passing details about the flow runner.
This schema is agnostic to the types and configuration provided by clients
Source code inprefect/client/schemas/objects.py
class FlowRunnerSettings(PrefectBaseModel):\n \"\"\"\n An API schema for passing details about the flow runner.\n\n This schema is agnostic to the types and configuration provided by clients\n \"\"\"\n\n type: Optional[str] = Field(\n default=None,\n description=(\n \"The type of the flow runner which can be used by the client for\"\n \" dispatching.\"\n ),\n )\n config: Optional[dict] = Field(\n default=None, description=\"The configuration for the given flow runner type.\"\n )\n\n # The following is required for composite compatibility in the ORM\n\n def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n # Pydantic does not support positional arguments so they must be converted to\n # keyword arguments\n super().__init__(type=type, config=config, **kwargs)\n\n def __composite_values__(self):\n return self.type, self.config\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Deployment","title":"Deployment
","text":" Bases: ObjectBaseModel
An ORM representation of deployment data.
Source code inprefect/client/schemas/objects.py
class Deployment(ObjectBaseModel):\n \"\"\"An ORM representation of deployment data.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the deployment.\")\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"A description for the deployment.\"\n )\n flow_id: UUID = Field(\n default=..., description=\"The flow id associated with the deployment.\"\n )\n schedule: Optional[SCHEDULE_TYPES] = Field(\n default=None, description=\"A schedule for the deployment.\"\n )\n is_schedule_active: bool = Field(\n default=True, description=\"Whether or not the deployment schedule is active.\"\n )\n paused: bool = Field(\n default=False, description=\"Whether or not the deployment is paused.\"\n )\n schedules: List[DeploymentSchedule] = Field(\n default_factory=list, description=\"A list of schedules for the deployment.\"\n )\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n parameters: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n pull_steps: Optional[List[dict]] = Field(\n default=None,\n description=\"Pull steps for cloning and running this deployment.\",\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the deployment\",\n example=[\"tag-1\", \"tag-2\"],\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The work queue for the deployment. If no work queue is set, work will not\"\n \" be scheduled.\"\n ),\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The last time the deployment was polled for status updates.\",\n )\n parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n storage_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining storage used for this flow.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use for flow runs.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this deployment.\",\n )\n updated_by: Optional[UpdatedBy] = Field(\n default=None,\n description=\"Optional information about the updater of this deployment.\",\n )\n work_queue_id: UUID = Field(\n default=None,\n description=(\n \"The id of the work pool queue to which this deployment is assigned.\"\n ),\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.ConcurrencyLimit","title":"ConcurrencyLimit
","text":" Bases: ObjectBaseModel
An ORM representation of a concurrency limit.
Source code inprefect/client/schemas/objects.py
class ConcurrencyLimit(ObjectBaseModel):\n \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n tag: str = Field(\n default=..., description=\"A tag the concurrency limit is applied to.\"\n )\n concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: List[UUID] = Field(\n default_factory=list,\n description=\"A list of active run ids using a concurrency slot\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchema","title":"BlockSchema
","text":" Bases: ObjectBaseModel
An ORM representation of a block schema.
Source code inprefect/client/schemas/objects.py
class BlockSchema(ObjectBaseModel):\n \"\"\"An ORM representation of a block schema.\"\"\"\n\n checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n fields: dict = Field(\n default_factory=dict, description=\"The block schema's field schema\"\n )\n block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n capabilities: List[str] = Field(\n default_factory=list,\n description=\"A list of Block capabilities\",\n )\n version: str = Field(\n default=DEFAULT_BLOCK_SCHEMA_VERSION,\n description=\"Human readable identifier for the block schema\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchemaReference","title":"BlockSchemaReference
","text":" Bases: ObjectBaseModel
An ORM representation of a block schema reference.
Source code inprefect/client/schemas/objects.py
class BlockSchemaReference(ObjectBaseModel):\n \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n parent_block_schema_id: UUID = Field(\n default=..., description=\"ID of block schema the reference is nested within\"\n )\n parent_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The block schema the reference is nested within\"\n )\n reference_block_schema_id: UUID = Field(\n default=..., description=\"ID of the nested block schema\"\n )\n reference_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The nested block schema\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocumentReference","title":"BlockDocumentReference
","text":" Bases: ObjectBaseModel
An ORM representation of a block document reference.
Source code inprefect/client/schemas/objects.py
class BlockDocumentReference(ObjectBaseModel):\n \"\"\"An ORM representation of a block document reference.\"\"\"\n\n parent_block_document_id: UUID = Field(\n default=..., description=\"ID of block document the reference is nested within\"\n )\n parent_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The block document the reference is nested within\"\n )\n reference_block_document_id: UUID = Field(\n default=..., description=\"ID of the nested block document\"\n )\n reference_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The nested block document\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n\n @root_validator\n def validate_parent_and_ref_are_different(cls, values):\n parent_id = values.get(\"parent_block_document_id\")\n ref_id = values.get(\"reference_block_document_id\")\n if parent_id and ref_id and parent_id == ref_id:\n raise ValueError(\n \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n \" the same\"\n )\n return values\n
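The root validator guards against self-references; constructing a reference whose parent and nested IDs match raises a validation error:
from uuid import uuid4\nfrom prefect.client.schemas.objects import BlockDocumentReference\n\nsame_id = uuid4()\ntry:\n    BlockDocumentReference(\n        parent_block_document_id=same_id,\n        reference_block_document_id=same_id,\n        name=\"config\",\n    )\nexcept Exception as exc:\n    print(exc)  # ...cannot be the same\n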
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearchFilter","title":"SavedSearchFilter
","text":" Bases: PrefectBaseModel
A filter for a saved search model. Intended for use by the Prefect UI.
Source code in prefect/client/schemas/objects.py
class SavedSearchFilter(PrefectBaseModel):\n \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n object: str = Field(default=..., description=\"The object over which to filter.\")\n property: str = Field(\n default=..., description=\"The property of the object on which to filter.\"\n )\n type: str = Field(default=..., description=\"The type of the property.\")\n operation: str = Field(\n default=...,\n description=\"The operator to apply to the object. For example, `equals`.\",\n )\n value: Any = Field(\n default=..., description=\"A JSON-compatible value for the filter.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearch","title":"SavedSearch
","text":" Bases: ObjectBaseModel
An ORM representation of saved search data. Represents a set of filter criteria.
Source code in prefect/client/schemas/objects.py
class SavedSearch(ObjectBaseModel):\n \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the saved search.\")\n filters: List[SavedSearchFilter] = Field(\n default_factory=list, description=\"The filter set for the saved search.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Log","title":"Log
","text":" Bases: ObjectBaseModel
An ORM representation of log data.
Source code in prefect/client/schemas/objects.py
class Log(ObjectBaseModel):\n \"\"\"An ORM representation of log data.\"\"\"\n\n name: str = Field(default=..., description=\"The logger name.\")\n level: int = Field(default=..., description=\"The log level.\")\n message: str = Field(default=..., description=\"The log message.\")\n timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run ID associated with the log.\"\n )\n task_run_id: Optional[UUID] = Field(\n default=None, description=\"The task run ID associated with the log.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.QueueFilter","title":"QueueFilter
","text":" Bases: PrefectBaseModel
Filter criteria definition for a work queue.
Source code in prefect/client/schemas/objects.py
class QueueFilter(PrefectBaseModel):\n \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n tags: Optional[List[str]] = Field(\n default=None,\n description=\"Only include flow runs with these tags in the work queue.\",\n )\n deployment_ids: Optional[List[UUID]] = Field(\n default=None,\n description=\"Only include flow runs from these deployments in the work queue.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueue","title":"WorkQueue
","text":" Bases: ObjectBaseModel
An ORM representation of a work queue
Source code in prefect/client/schemas/objects.py
class WorkQueue(ObjectBaseModel):\n \"\"\"An ORM representation of a work queue\"\"\"\n\n name: str = Field(default=..., description=\"The name of the work queue.\")\n description: Optional[str] = Field(\n default=\"\", description=\"An optional description for the work queue.\"\n )\n is_paused: bool = Field(\n default=False, description=\"Whether or not the work queue is paused.\"\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"An optional concurrency limit for the work queue.\"\n )\n priority: conint(ge=1) = Field(\n default=1,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n work_pool_name: Optional[str] = Field(default=None)\n # Will be required after a future migration\n work_pool_id: Optional[UUID] = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n filter: Optional[QueueFilter] = Field(\n default=None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time an agent polled this queue for work.\"\n )\n status: Optional[WorkQueueStatus] = Field(\n default=None, description=\"The queue status.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy
","text":" Bases: PrefectBaseModel
Source code in prefect/client/schemas/objects.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n maximum_late_runs: Optional[int] = Field(\n default=0,\n description=(\n \"The maximum number of late runs in the work queue before it is deemed\"\n \" unhealthy. Defaults to `0`.\"\n ),\n )\n maximum_seconds_since_last_polled: Optional[int] = Field(\n default=60,\n description=(\n \"The maximum number of time in seconds elapsed since work queue has been\"\n \" polled before it is deemed unhealthy. Defaults to `60`.\"\n ),\n )\n\n def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n ) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status
","text":"Given empirical information about the state of the work queue, evaluate its health status.
Parameters:
Name | Type | Description | Default
late_runs | int | the count of late runs for the work queue. | required
last_polled | Optional[DateTimeTZ] | the last time the work queue was polled, if available. | None
Returns:
Name | Type | Description
bool | bool | whether or not the work queue is healthy.
Source code in prefect/client/schemas/objects.py
def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
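For illustration, a minimal sketch of evaluating a queue against a policy; the thresholds and timestamps below are invented for the example:

import pendulum
from prefect.client.schemas.objects import WorkQueueHealthPolicy

# Hypothetical policy: unhealthy after 3 late runs or 120s without a poll.
policy = WorkQueueHealthPolicy(
    maximum_late_runs=3, maximum_seconds_since_last_polled=120
)

# Recently polled with one late run -> healthy.
assert policy.evaluate_health_status(
    late_runs_count=1, last_polled=pendulum.now("UTC")
)

# Never polled -> unhealthy, regardless of the late run count.
assert not policy.evaluate_health_status(late_runs_count=0, last_polled=None)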
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy
","text":" Bases: ObjectBaseModel
An ORM representation of a flow run notification.
Source code in prefect/client/schemas/objects.py
class FlowRunNotificationPolicy(ObjectBaseModel):\n \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n is_active: bool = Field(\n default=True, description=\"Whether the policy is currently active\"\n )\n state_names: List[str] = Field(\n default=..., description=\"The flow run states that trigger notifications\"\n )\n tags: List[str] = Field(\n default=...,\n description=\"The flow run tags that trigger notifications (set [] to disable)\",\n )\n block_document_id: UUID = Field(\n default=..., description=\"The block document ID used for sending notifications\"\n )\n message_template: Optional[str] = Field(\n default=None,\n description=(\n \"A templatable notification message. Use {braces} to add variables.\"\n \" Valid variables include:\"\n f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n ),\n example=(\n \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n \" {flow_run_state_name}.\"\n ),\n )\n\n @validator(\"message_template\")\n def validate_message_template_variables(cls, v):\n if v is not None:\n try:\n v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n except KeyError as exc:\n raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Agent","title":"Agent
","text":" Bases: ObjectBaseModel
An ORM representation of an agent
Source code in prefect/client/schemas/objects.py
class Agent(ObjectBaseModel):\n \"\"\"An ORM representation of an agent\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the agent. If a name is not provided, it will be\"\n \" auto-generated.\"\n ),\n )\n work_queue_id: UUID = Field(\n default=..., description=\"The work queue with which the agent is associated.\"\n )\n last_activity_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time this agent polled for work.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool","title":"WorkPool
","text":" Bases: ObjectBaseModel
An ORM representation of a work pool
Source code in prefect/client/schemas/objects.py
class WorkPool(ObjectBaseModel):\n \"\"\"An ORM representation of a work pool\"\"\"\n\n name: str = Field(\n description=\"The name of the work pool.\",\n )\n description: Optional[str] = Field(\n default=None, description=\"A description of the work pool.\"\n )\n type: str = Field(description=\"The work pool type.\")\n base_job_template: Dict[str, Any] = Field(\n default_factory=dict, description=\"The work pool's base job template.\"\n )\n is_paused: bool = Field(\n default=False,\n description=\"Pausing the work pool stops the delivery of all work.\",\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"A concurrency limit for the work pool.\"\n )\n status: Optional[WorkPoolStatus] = Field(\n default=None, description=\"The current status of the work pool.\"\n )\n\n # this required field has a default of None so that the custom validator\n # below will be called and produce a more helpful error message\n default_queue_id: UUID = Field(\n None, description=\"The id of the pool's default queue.\"\n )\n\n @property\n def is_push_pool(self) -> bool:\n return self.type.endswith(\":push\")\n\n @property\n def is_managed_pool(self) -> bool:\n return self.type.endswith(\":managed\")\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n\n @validator(\"default_queue_id\", always=True)\n def helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id
","text":"Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate
model in that case.
Source code in prefect/client/schemas/objects.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Worker","title":"Worker
","text":" Bases: ObjectBaseModel
An ORM representation of a worker
Source code in prefect/client/schemas/objects.py
class Worker(ObjectBaseModel):\n \"\"\"An ORM representation of a worker\"\"\"\n\n name: str = Field(description=\"The name of the worker.\")\n work_pool_id: UUID = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n last_heartbeat_time: datetime.datetime = Field(\n None, description=\"The last time the worker process sent a heartbeat.\"\n )\n heartbeat_interval_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to expect between heartbeats sent by the worker.\"\n ),\n )\n status: WorkerStatus = Field(\n WorkerStatus.OFFLINE,\n description=\"Current status of the worker.\",\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput","title":"FlowRunInput
","text":" Bases: ObjectBaseModel
Source code in prefect/client/schemas/objects.py
class FlowRunInput(ObjectBaseModel):\n flow_run_id: UUID = Field(description=\"The flow run ID associated with the input.\")\n key: str = Field(description=\"The key of the input.\")\n value: str = Field(description=\"The value of the input.\")\n sender: Optional[str] = Field(description=\"The sender of the input.\")\n\n @property\n def decoded_value(self) -> Any:\n \"\"\"\n Decode the value of the input.\n\n Returns:\n Any: the decoded value\n \"\"\"\n return orjson.loads(self.value)\n\n @validator(\"key\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_alphanumeric_dashes_only(v)\n return v\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput.decoded_value","title":"decoded_value: Any
property
","text":"Decode the value of the input.
Returns:
Name | Type | Description
Any | Any | the decoded value
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.GlobalConcurrencyLimit","title":"GlobalConcurrencyLimit
","text":" Bases: ObjectBaseModel
An ORM representation of a global concurrency limit
Source code in prefect/client/schemas/objects.py
class GlobalConcurrencyLimit(ObjectBaseModel):\n \"\"\"An ORM representation of a global concurrency limit\"\"\"\n\n name: str = Field(description=\"The name of the global concurrency limit.\")\n limit: int = Field(\n description=(\n \"The maximum number of slots that can be occupied on this concurrency\"\n \" limit.\"\n )\n )\n active: Optional[bool] = Field(\n default=True,\n description=\"Whether or not the concurrency limit is in an active state.\",\n )\n active_slots: Optional[int] = Field(\n default=0,\n description=\"Number of tasks currently using a concurrency slot.\",\n )\n slot_decay_per_second: Optional[int] = Field(\n default=0,\n description=(\n \"Controls the rate at which slots are released when the concurrency limit\"\n \" is used as a rate limit.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_4","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses","title":"prefect.client.schemas.responses
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.SetStateStatus","title":"SetStateStatus
","text":" Bases: AutoEnum
Enumerates return statuses for setting run states.
Source code in prefect/client/schemas/responses.py
class SetStateStatus(AutoEnum):\n \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n ACCEPT = AutoEnum.auto()\n REJECT = AutoEnum.auto()\n ABORT = AutoEnum.auto()\n WAIT = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails
","text":" Bases: PrefectBaseModel
Details associated with an ACCEPT state transition.
Source code in prefect/client/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n type: Literal[\"accept_details\"] = Field(\n default=\"accept_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateRejectDetails","title":"StateRejectDetails
","text":" Bases: PrefectBaseModel
Details associated with a REJECT state transition.
Source code in prefect/client/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n type: Literal[\"reject_details\"] = Field(\n default=\"reject_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was rejected.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAbortDetails","title":"StateAbortDetails
","text":" Bases: PrefectBaseModel
Details associated with an ABORT state transition.
Source code in prefect/client/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n type: Literal[\"abort_details\"] = Field(\n default=\"abort_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was aborted.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateWaitDetails","title":"StateWaitDetails
","text":" Bases: PrefectBaseModel
Details associated with a WAIT state transition.
Source code in prefect/client/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n type: Literal[\"wait_details\"] = Field(\n default=\"wait_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n delay_seconds: int = Field(\n default=...,\n description=(\n \"The length of time in seconds the client should wait before transitioning\"\n \" states.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition should wait.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponseState","title":"HistoryResponseState
","text":" Bases: PrefectBaseModel
Represents a single state's history over an interval.
Source code in prefect/client/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n \"\"\"Represents a single state's history over an interval.\"\"\"\n\n state_type: objects.StateType = Field(default=..., description=\"The state type.\")\n state_name: str = Field(default=..., description=\"The state name.\")\n count_runs: int = Field(\n default=...,\n description=\"The number of runs in the specified state during the interval.\",\n )\n sum_estimated_run_time: datetime.timedelta = Field(\n default=...,\n description=\"The total estimated run time of all runs during the interval.\",\n )\n sum_estimated_lateness: datetime.timedelta = Field(\n default=...,\n description=(\n \"The sum of differences between actual and expected start time during the\"\n \" interval.\"\n ),\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponse","title":"HistoryResponse
","text":" Bases: PrefectBaseModel
Represents a history of aggregation states over an interval
Source code in prefect/client/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n interval_start: DateTimeTZ = Field(\n default=..., description=\"The start date of the interval.\"\n )\n interval_end: DateTimeTZ = Field(\n default=..., description=\"The end date of the interval.\"\n )\n states: List[HistoryResponseState] = Field(\n default=..., description=\"A list of state histories during the interval.\"\n )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.OrchestrationResult","title":"OrchestrationResult
","text":" Bases: PrefectBaseModel
A container for the output of state orchestration.
Source code in prefect/client/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n \"\"\"\n A container for the output of state orchestration.\n \"\"\"\n\n state: Optional[objects.State]\n status: SetStateStatus\n details: StateResponseDetails\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.FlowRunResponse","title":"FlowRunResponse
","text":" Bases: ObjectBaseModel
Source code in prefect/client/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ObjectBaseModel):\n name: str = FieldFrom(objects.FlowRun)\n flow_id: UUID = FieldFrom(objects.FlowRun)\n state_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(objects.FlowRun)\n flow_version: Optional[str] = FieldFrom(objects.FlowRun)\n parameters: dict = FieldFrom(objects.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(objects.FlowRun)\n context: dict = FieldFrom(objects.FlowRun)\n empirical_policy: objects.FlowRunPolicy = FieldFrom(objects.FlowRun)\n tags: List[str] = FieldFrom(objects.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n state_type: Optional[objects.StateType] = FieldFrom(objects.FlowRun)\n state_name: Optional[str] = FieldFrom(objects.FlowRun)\n run_count: int = FieldFrom(objects.FlowRun)\n expected_start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n start_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n end_time: Optional[DateTimeTZ] = FieldFrom(objects.FlowRun)\n total_run_time: datetime.timedelta = FieldFrom(objects.FlowRun)\n estimated_run_time: datetime.timedelta = FieldFrom(objects.FlowRun)\n estimated_start_time_delta: datetime.timedelta = FieldFrom(objects.FlowRun)\n auto_scheduled: bool = FieldFrom(objects.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(objects.FlowRun)\n created_by: Optional[CreatedBy] = FieldFrom(objects.FlowRun)\n work_pool_id: Optional[UUID] = FieldFrom(objects.FlowRun)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[objects.State] = FieldFrom(objects.FlowRun)\n job_variables: Optional[dict] = FieldFrom(objects.FlowRun)\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRunResponse):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_5","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules","title":"prefect.client.schemas.schedules
","text":"Schedule schemas
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.IntervalSchedule","title":"IntervalSchedule
","text":" Bases: PrefectBaseModel
A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.
NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
Parameters:
Name | Type | Description | Default
interval | timedelta | an interval to schedule on | required
anchor_date | DateTimeTZ | an anchor date to schedule increments against; if not provided, the current timestamp will be used | required
timezone | str | a valid timezone string | required
Source code in prefect/client/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n \"\"\"\n A schedule formed by adding `interval` increments to an `anchor_date`. If no\n `anchor_date` is supplied, the current UTC time is used. If a\n timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n anchor dates are always stored as UTC offsets, so a `timezone` can be\n provided to determine localization behaviors like DST boundary handling. If\n none is provided it will be inferred from the anchor date.\n\n NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n DST-observing timezone, then the schedule will adjust itself appropriately.\n Intervals greater than 24 hours will follow DST conventions, while intervals\n of less than 24 hours will follow UTC intervals. For example, an hourly\n schedule will fire every UTC hour, even across DST boundaries. When clocks\n are set back, this will result in two runs that *appear* to both be\n scheduled for 1am local time, even though they are an hour apart in UTC\n time. For longer intervals, like a daily schedule, the interval schedule\n will adjust for DST boundaries so that the clock-hour remains constant. This\n means that a daily schedule that always fires at 9am will observe DST and\n continue to fire at 9am in the local time zone.\n\n Args:\n interval (datetime.timedelta): an interval to schedule on\n anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n if not provided, the current timestamp will be used\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n exclude_none = True\n\n interval: datetime.timedelta\n anchor_date: DateTimeTZ = None\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"interval\")\n def interval_must_be_positive(cls, v):\n if v.total_seconds() <= 0:\n raise ValueError(\"The interval must be positive\")\n return v\n\n @validator(\"anchor_date\", always=True)\n def default_anchor_date(cls, v):\n if v is None:\n return pendulum.now(\"UTC\")\n return pendulum.instance(v)\n\n @validator(\"timezone\", always=True)\n def default_timezone(cls, v, *, values, **kwargs):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n # if was provided, make sure its a valid IANA string\n if v and v not in timezones:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n\n # otherwise infer the timezone from the anchor date\n elif v is None and values.get(\"anchor_date\"):\n tz = values[\"anchor_date\"].tz.name\n if tz in timezones:\n return tz\n # sometimes anchor dates have \"timezones\" that are UTC offsets\n # like \"-04:00\". This happens when parsing ISO8601 strings.\n # In this case we, the correct inferred localization is \"UTC\".\n else:\n return \"UTC\"\n\n return v\n
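A minimal usage sketch; the anchor date and timezone are illustrative:

import datetime
import pendulum
from prefect.client.schemas.schedules import IntervalSchedule

# A daily schedule anchored at 9am New York time; per the note above,
# the clock-hour is held constant across DST boundaries.
schedule = IntervalSchedule(
    interval=datetime.timedelta(days=1),
    anchor_date=pendulum.datetime(2024, 1, 1, 9, 0, tz="America/New_York"),
    timezone="America/New_York",
)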
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.CronSchedule","title":"CronSchedule
","text":" Bases: PrefectBaseModel
Cron schedule
NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.
Parameters:
Name | Type | Description | Default
cron | str | a valid cron string | required
timezone | str | a valid timezone string in IANA tzdata format (for example, America/New_York). | required
day_or | bool | Control how croniter handles day and day_of_week entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday. | True
Source code in prefect/client/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n \"\"\"\n Cron schedule\n\n NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n itself appropriately. Cron's rules for DST are based on schedule times, not\n intervals. This means that an hourly cron schedule will fire on every new\n schedule hour, not every elapsed hour; for example, when clocks are set back\n this will result in a two-hour pause as the schedule will fire *the first\n time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n Longer schedules, such as one that fires at 9am every morning, will\n automatically adjust for DST.\n\n Args:\n cron (str): a valid cron string\n timezone (str): a valid timezone string in IANA tzdata format (for example,\n America/New_York).\n day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n entries. Defaults to True, matching cron which connects those values using\n OR. If the switch is set to False, the values are connected using AND. This\n behaves like fcron and enables you to e.g. define a job that executes each\n 2nd friday of a month by setting the days of month and the weekday.\n\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n cron: str = Field(default=..., example=\"0 0 * * *\")\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n day_or: bool = Field(\n default=True,\n description=(\n \"Control croniter behavior for handling day and day_of_week entries.\"\n ),\n )\n\n @validator(\"timezone\")\n def valid_timezone(cls, v):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n if v and v not in timezones:\n raise ValueError(\n f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n \" America/New_York)\"\n )\n return v\n\n @validator(\"cron\")\n def valid_cron_string(cls, v):\n # croniter allows \"random\" and \"hashed\" expressions\n # which we do not support https://github.com/kiorky/croniter\n if not croniter.is_valid(v):\n raise ValueError(f'Invalid cron string: \"{v}\"')\n elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n raise ValueError(\n f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n )\n return v\n
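A short usage sketch, assuming the import path shown above:

from prefect.client.schemas.schedules import CronSchedule

# 9am every weekday; cron rules follow the local clock, so this keeps
# firing at 9am America/New_York even across DST transitions.
schedule = CronSchedule(cron="0 9 * * 1-5", timezone="America/New_York")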
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule","title":"RRuleSchedule
","text":" Bases: PrefectBaseModel
RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutils.rrule.
RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.
Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.
Parameters:
Name | Type | Description | Default
rrule | str | a valid RRule string | required
timezone | str | a valid timezone string | required
Source code in prefect/client/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n \"\"\"\n RRule schedule, based on the iCalendar standard\n ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n implemented in `dateutils.rrule`.\n\n RRules are appropriate for any kind of calendar-date manipulation, including\n irregular intervals, repetition, exclusions, week day or day-of-month\n adjustments, and more.\n\n Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n to the initial timezone provided. A 9am daily schedule with a daylight saving\n time-aware start date will maintain a local 9am time through DST boundaries;\n a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n Args:\n rrule (str): a valid RRule string\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n rrule: str\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"rrule\")\n def validate_rrule_str(cls, v):\n # attempt to parse the rrule string as an rrule object\n # this will error if the string is invalid\n try:\n dateutil.rrule.rrulestr(v, cache=True)\n except ValueError as exc:\n # rrules errors are a mix of cryptic and informative\n # so reraise to be clear that the string was invalid\n raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n if len(v) > MAX_RRULE_LENGTH:\n raise ValueError(\n f'Invalid RRule string \"{v[:40]}...\"\\n'\n f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n )\n return v\n\n @classmethod\n def from_rrule(cls, rrule: dateutil.rrule.rrule):\n if isinstance(rrule, dateutil.rrule.rrule):\n if rrule._dtstart.tzinfo is not None:\n timezone = rrule._dtstart.tzinfo.name\n else:\n timezone = \"UTC\"\n return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n if len(unique_timezones) > 1:\n raise ValueError(\n f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n )\n\n if len(unique_dstarts) > 1:\n raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n if unique_dstarts and unique_timezones:\n timezone = dtstarts[0].tzinfo.name\n else:\n timezone = \"UTC\"\n\n rruleset_string = \"\"\n if rrule._rrule:\n rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n if rrule._exrule:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n \"RRULE\", \"EXRULE\"\n )\n if rrule._rdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"RDATE:\" + \",\".join(\n rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n )\n if rrule._exdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"EXDATE:\" + \",\".join(\n exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n )\n return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n else:\n raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n 
kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n\n @validator(\"timezone\", always=True)\n def valid_timezone(cls, v):\n if v and v not in pytz.all_timezones_set:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n elif v is None:\n return \"UTC\"\n return v\n
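A brief sketch; the RRule string below is an illustrative example:

from prefect.client.schemas.schedules import RRuleSchedule

# Every weekday, expressed with the iCalendar RRULE grammar.
schedule = RRuleSchedule(
    rrule="FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR",
    timezone="America/New_York",
)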
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule
","text":"Since rrule doesn't properly serialize/deserialize timezones, we localize dates here
Source code in prefect/client/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.construct_schedule","title":"construct_schedule
","text":"Construct a schedule from the provided arguments.
Parameters:
Name | Type | Description | Default
interval | Optional[Union[int, float, timedelta]] | An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. | None
anchor_date | Optional[Union[datetime, str]] | The start date for an interval schedule. | None
cron | Optional[str] | A cron schedule for runs. | None
rrule | Optional[str] | An rrule schedule of when to execute runs of this flow. | None
timezone | Optional[str] | A timezone to use for the schedule. Defaults to UTC. | None
Source code in prefect/client/schemas/schedules.py
def construct_schedule(\n interval: Optional[Union[int, float, datetime.timedelta]] = None,\n anchor_date: Optional[Union[datetime.datetime, str]] = None,\n cron: Optional[str] = None,\n rrule: Optional[str] = None,\n timezone: Optional[str] = None,\n) -> SCHEDULE_TYPES:\n \"\"\"\n Construct a schedule from the provided arguments.\n\n Args:\n interval: An interval on which to schedule runs. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n anchor_date: The start date for an interval schedule.\n cron: A cron schedule for runs.\n rrule: An rrule schedule of when to execute runs of this flow.\n timezone: A timezone to use for the schedule. Defaults to UTC.\n \"\"\"\n num_schedules = sum(1 for entry in (interval, cron, rrule) if entry is not None)\n if num_schedules > 1:\n raise ValueError(\"Only one of interval, cron, or rrule can be provided.\")\n\n if anchor_date and not interval:\n raise ValueError(\n \"An anchor date can only be provided with an interval schedule\"\n )\n\n if timezone and not (interval or cron or rrule):\n raise ValueError(\n \"A timezone can only be provided with interval, cron, or rrule\"\n )\n\n schedule = None\n if interval:\n if isinstance(interval, (int, float)):\n interval = datetime.timedelta(seconds=interval)\n schedule = IntervalSchedule(\n interval=interval, anchor_date=anchor_date, timezone=timezone\n )\n elif cron:\n schedule = CronSchedule(cron=cron, timezone=timezone)\n elif rrule:\n schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n if schedule is None:\n raise ValueError(\"Either interval, cron, or rrule must be provided\")\n\n return schedule\n
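A minimal sketch of the mutually exclusive forms:

from prefect.client.schemas.schedules import construct_schedule

# A bare number is interpreted as seconds, yielding an IntervalSchedule.
every_hour = construct_schedule(interval=3600)

# Exactly one of interval, cron, or rrule may be given.
nightly = construct_schedule(cron="0 0 * * *", timezone="UTC")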
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_6","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting","title":"prefect.client.schemas.sorting
","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowRunSort","title":"FlowRunSort
","text":" Bases: AutoEnum
Defines flow run sorting options.
Source code in prefect/client/schemas/sorting.py
class FlowRunSort(AutoEnum):\n \"\"\"Defines flow run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n START_TIME_ASC = AutoEnum.auto()\n START_TIME_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.TaskRunSort","title":"TaskRunSort
","text":" Bases: AutoEnum
Defines task run sorting options.
Source code in prefect/client/schemas/sorting.py
class TaskRunSort(AutoEnum):\n \"\"\"Defines task run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.LogSort","title":"LogSort
","text":" Bases: AutoEnum
Defines log sorting options.
Source code in prefect/client/schemas/sorting.py
class LogSort(AutoEnum):\n \"\"\"Defines log sorting options.\"\"\"\n\n TIMESTAMP_ASC = AutoEnum.auto()\n TIMESTAMP_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowSort","title":"FlowSort
","text":" Bases: AutoEnum
Defines flow sorting options.
Source code in prefect/client/schemas/sorting.py
class FlowSort(AutoEnum):\n \"\"\"Defines flow sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.DeploymentSort","title":"DeploymentSort
","text":" Bases: AutoEnum
Defines deployment sorting options.
Source code in prefect/client/schemas/sorting.py
class DeploymentSort(AutoEnum):\n \"\"\"Defines deployment sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactSort","title":"ArtifactSort
","text":" Bases: AutoEnum
Defines artifact sorting options.
Source code in prefect/client/schemas/sorting.py
class ArtifactSort(AutoEnum):\n \"\"\"Defines artifact sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort
","text":" Bases: AutoEnum
Defines artifact collection sorting options.
Source code in prefect/client/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n \"\"\"Defines artifact collection sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.VariableSort","title":"VariableSort
","text":" Bases: AutoEnum
Defines variables sorting options.
Source code in prefect/client/schemas/sorting.py
class VariableSort(AutoEnum):\n \"\"\"Defines variables sorting options.\"\"\"\n\n CREATED_DESC = \"CREATED_DESC\"\n UPDATED_DESC = \"UPDATED_DESC\"\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort
","text":" Bases: AutoEnum
Defines block document sorting options.
Source code in prefect/client/schemas/sorting.py
class BlockDocumentSort(AutoEnum):\n \"\"\"Defines block document sorting options.\"\"\"\n\n NAME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n BLOCK_TYPE_AND_NAME_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities
","text":"Utilities for working with clients.
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.get_or_create_client","title":"get_or_create_client
","text":"Returns provided client, infers a client from context if available, or creates a new client.
Parameters:
Name | Type | Description | Default
client | PrefectClient, optional | an optional client to use | None
Returns:
Type | Description
Tuple[PrefectClient, bool] | a tuple of the client and a boolean indicating if the client was inferred from context
Source code in prefect/client/utilities.py
def get_or_create_client(\n client: Optional[\"PrefectClient\"] = None,\n) -> Tuple[\"PrefectClient\", bool]:\n \"\"\"\n Returns provided client, infers a client from context if available, or creates a new client.\n\n Args:\n - client (PrefectClient, optional): an optional client to use\n\n Returns:\n - tuple: a tuple of the client and a boolean indicating if the client was inferred from context\n \"\"\"\n if client is not None:\n return client, True\n from prefect._internal.concurrency.event_loop import get_running_loop\n from prefect.context import FlowRunContext, TaskRunContext\n\n flow_run_context = FlowRunContext.get()\n task_run_context = TaskRunContext.get()\n\n if (\n flow_run_context\n and getattr(flow_run_context.client, \"_loop\") == get_running_loop()\n ):\n return flow_run_context.client, True\n elif (\n task_run_context\n and getattr(task_run_context.client, \"_loop\") == get_running_loop()\n ):\n return task_run_context.client, True\n else:\n from prefect.client.orchestration import get_client as get_httpx_client\n\n return get_httpx_client(), False\n
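A usage sketch; outside of a flow or task run context and with no explicit client, the second element is False, signalling that the caller is responsible for managing the new client:

from prefect.client.utilities import get_or_create_client

client, inferred = get_or_create_client()
# `inferred` is False here: a fresh client was created rather than
# reused from a run context, so manage its lifecycle yourself.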
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client
","text":"Simple helper to provide a context managed client to a asynchronous function.
The decorated function must take a client
kwarg and if a client is passed when called it will be used instead of creating a new one, but it will not be context managed as it is assumed that the caller is managing the context.
prefect/client/utilities.py
def inject_client(\n fn: Callable[P, Coroutine[Any, Any, Any]],\n) -> Callable[P, Coroutine[Any, Any, Any]]:\n \"\"\"\n Simple helper to provide a context managed client to a asynchronous function.\n\n The decorated function _must_ take a `client` kwarg and if a client is passed when\n called it will be used instead of creating a new one, but it will not be context\n managed as it is assumed that the caller is managing the context.\n \"\"\"\n\n @wraps(fn)\n async def with_injected_client(*args: P.args, **kwargs: P.kwargs) -> Any:\n client = cast(Optional[\"PrefectClient\"], kwargs.pop(\"client\", None))\n client, inferred = get_or_create_client(client)\n if not inferred:\n context = client\n else:\n from prefect.utilities.asyncutils import asyncnullcontext\n\n context = asyncnullcontext()\n async with context as new_client:\n kwargs.setdefault(\"client\", new_client or client)\n return await fn(*args, **kwargs)\n\n return with_injected_client\n
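A minimal sketch of a decorated function; read_flows is an existing PrefectClient method, but the wrapper itself is hypothetical:

from typing import Optional
from prefect.client.orchestration import PrefectClient
from prefect.client.utilities import inject_client

@inject_client
async def count_flows(client: Optional[PrefectClient] = None) -> int:
    # `client` is injected (and context managed) when the caller omits it.
    return len(await client.read_flows())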
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/concurrency/asyncio/","title":"asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio","title":"prefect.concurrency.asyncio
","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.ConcurrencySlotAcquisitionError","title":"ConcurrencySlotAcquisitionError
","text":" Bases: Exception
Raised when an unhandleable error occurs while acquiring concurrency slots.
Source code in prefect/concurrency/asyncio.py
class ConcurrencySlotAcquisitionError(Exception):\n \"\"\"Raised when an unhandlable occurs while acquiring concurrency slots.\"\"\"\n
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.rate_limit","title":"rate_limit
async
","text":"Block execution until an occupy
number of slots of the concurrency limits given in names
are acquired. Requires that all given concurrency limits have a slot decay.
Parameters:
Name | Type | Description | Default
names | Union[str, List[str]] | The names of the concurrency limits to acquire slots from. | required
occupy | int | The number of slots to acquire and hold from each limit. | 1
Source code in prefect/concurrency/asyncio.py
async def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n \"\"\"Block execution until an `occupy` number of slots of the concurrency\n limits given in `names` are acquired. Requires that all given concurrency\n limits have a slot decay.\n\n Args:\n names: The names of the concurrency limits to acquire slots from.\n occupy: The number of slots to acquire and hold from each limit.\n \"\"\"\n names = names if isinstance(names, list) else [names]\n limits = await _acquire_concurrency_slots(names, occupy, mode=\"rate_limit\")\n _emit_concurrency_acquisition_events(limits, occupy)\n
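An illustrative sketch; "api-calls" is a hypothetical concurrency limit that must already exist with a slot decay configured:

import asyncio
from prefect.concurrency.asyncio import rate_limit

async def call_api():
    # Blocks until a slot on the "api-calls" limit can be acquired.
    await rate_limit("api-calls", occupy=1)
    ...  # make the rate-limited request here

asyncio.run(call_api())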
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/common/","title":"common","text":"","tags":["Python API","concurrency","common"]},{"location":"api-ref/prefect/concurrency/common/#prefect.concurrency.common","title":"prefect.concurrency.common
","text":"","tags":["Python API","concurrency","common"]},{"location":"api-ref/prefect/concurrency/events/","title":"events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/events/#prefect.concurrency.events","title":"prefect.concurrency.events
","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/","title":"services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/#prefect.concurrency.services","title":"prefect.concurrency.services
","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/sync/","title":"sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync","title":"prefect.concurrency.sync
","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync.rate_limit","title":"rate_limit
","text":"Block execution until an occupy
number of slots of the concurrency limits given in names
are acquired. Requires that all given concurrency limits have a slot decay.
Parameters:
Name | Type | Description | Default
names | Union[str, List[str]] | The names of the concurrency limits to acquire slots from. | required
occupy | int | The number of slots to acquire and hold from each limit. | 1
Source code in prefect/concurrency/sync.py
def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n \"\"\"Block execution until an `occupy` number of slots of the concurrency\n limits given in `names` are acquired. Requires that all given concurrency\n limits have a slot decay.\n\n Args:\n names: The names of the concurrency limits to acquire slots from.\n occupy: The number of slots to acquire and hold from each limit.\n \"\"\"\n names = names if isinstance(names, list) else [names]\n limits = _call_async_function_from_sync(\n _acquire_concurrency_slots, names, occupy, mode=\"rate_limit\"\n )\n _emit_concurrency_acquisition_events(limits, occupy)\n
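The synchronous variant drops in the same way; the "api-calls" limit name is again hypothetical:

from prefect.concurrency.sync import rate_limit

for item in ["a", "b", "c"]:
    # Each iteration waits for a slot on the "api-calls" limit.
    rate_limit("api-calls")
    ...  # do the rate-limited work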
","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/deployments/base/","title":"base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base
","text":"Core primitives for managing Prefect projects. Projects provide a minimally opinionated build system for managing flows and deployments.
To get started, follow along with the deployments tutorial.
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe
","text":"Given a recipe name, returns a dictionary representing base configuration options.
Parameters:
Name Type Description Defaultrecipe
str
the name of the recipe to use
requiredformatting_kwargs
dict
additional keyword arguments to format the recipe
{}
Raises:
Type DescriptionValueError
if provided recipe name does not exist.
Source code in prefect/deployments/base.py
def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n \"\"\"\n Given a recipe name, returns a dictionary representing base configuration options.\n\n Args:\n recipe (str): the name of the recipe to use\n formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n Raises:\n ValueError: if provided recipe name does not exist.\n \"\"\"\n # load the recipe\n recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n if not recipe_path.exists():\n raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n with recipe_path.open(mode=\"r\") as f:\n config = yaml.safe_load(f)\n\n config = apply_values(\n template=config, values=formatting_kwargs, remove_notset=False\n )\n\n return config\n
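A usage sketch: the git recipe name matches the recipes referenced by initialize_project below, while the repository and branch values are hypothetical inputs for the recipe's template placeholders:
from prefect.deployments.base import configure_project_by_recipe\n\n# Hypothetical template values for the \"git\" recipe's placeholders.\nconfig = configure_project_by_recipe(\n    \"git\",\n    repository=\"https://github.com/acme/flows.git\",\n    branch=\"main\",\n)\nprint(config[\"pull\"])\n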
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml
","text":"Creates default prefect.yaml
file in the provided path if one does not already exist; returns boolean specifying whether a file was created.
Parameters:
Name Type Description Defaultname
str
the name of the project; if not provided, the current directory name will be used
None
contents
dict
a dictionary of contents to write to the file; if not provided, defaults will be used
None
Source code in prefect/deployments/base.py
def create_default_prefect_yaml(\n path: str, name: str = None, contents: dict = None\n) -> bool:\n \"\"\"\n Creates default `prefect.yaml` file in the provided path if one does not already exist;\n returns boolean specifying whether a file was created.\n\n Args:\n name (str, optional): the name of the project; if not provided, the current directory name\n will be used\n contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n defaults will be used\n \"\"\"\n path = Path(path)\n prefect_file = path / \"prefect.yaml\"\n if prefect_file.exists():\n return False\n default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n with default_file.open(mode=\"r\") as df:\n default_contents = yaml.safe_load(df)\n\n import prefect\n\n contents[\"prefect-version\"] = prefect.__version__\n contents[\"name\"] = name\n\n with prefect_file.open(mode=\"w\") as f:\n # write header\n f.write(\n \"# Welcome to your prefect.yaml file! You can use this file for storing and\"\n \" managing\\n# configuration for deploying your flows. We recommend\"\n \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n )\n\n f.write(\"# Generic metadata about this project\\n\")\n yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n f.write(\"\\n\")\n\n # build\n f.write(\"# build section allows you to manage and build docker images\\n\")\n yaml.dump(\n {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # push\n f.write(\n \"# push section allows you to manage if and how this project is uploaded to\"\n \" remote locations\\n\"\n )\n yaml.dump(\n {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # pull\n f.write(\n \"# pull section allows you to provide instructions for cloning this project\"\n \" in remote locations\\n\"\n )\n yaml.dump(\n {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n f,\n sort_keys=False,\n )\n f.write(\"\\n\")\n\n # deployments\n f.write(\n \"# the deployments section allows you to provide configuration for\"\n \" deploying flows\\n\"\n )\n yaml.dump(\n {\n \"deployments\": contents.get(\n \"deployments\", default_contents.get(\"deployments\")\n )\n },\n f,\n sort_keys=False,\n )\n return True\n
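A usage sketch; note that although contents defaults to None, the function writes into it, so pass at least an empty dict to fall back to the bundled template defaults:
from prefect.deployments.base import create_default_prefect_yaml\n\n# An empty dict lets every section fall back to the bundled template;\n# the call returns False when prefect.yaml already exists at the path.\ncreated = create_default_prefect_yaml(\".\", name=\"my-project\", contents={})\nprint(created)\n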
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.find_prefect_directory","title":"find_prefect_directory
","text":"Given a path, recurses upward looking for .prefect/ directories.
Once found, returns the absolute path to the .prefect directory, which is assumed to reside within the root of the current project.
If one is never found, None
is returned.
Source code in prefect/deployments/base.py
def find_prefect_directory(path: Path = None) -> Optional[Path]:\n    \"\"\"\n    Given a path, recurses upward looking for .prefect/ directories.\n\n    Once found, returns the absolute path to the .prefect directory, which is assumed to reside within the\n    root of the current project.\n\n    If one is never found, `None` is returned.\n    \"\"\"\n    path = Path(path or \".\").resolve()\n    parent = path.parent.resolve()\n    while path != parent:\n        prefect_dir = path.joinpath(\".prefect\")\n        if prefect_dir.is_dir():\n            return prefect_dir\n\n        path = parent.resolve()\n        parent = path.parent.resolve()\n
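A short usage sketch:
from prefect.deployments.base import find_prefect_directory\n\n# Walks upward from the current directory; returns None when no\n# .prefect/ directory exists anywhere above.\nprefect_dir = find_prefect_directory()\nif prefect_dir is None:\n    print(\"no Prefect project found\")\nelse:\n    print(f\"project root: {prefect_dir.parent}\")\n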
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project
","text":"Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.
Parameters:
Name Type Description Defaultname
str
the name of the project; if not provided, the current directory name is used
None
recipe
str
the name of the recipe to use; if not provided, one is inferred
None
inputs
dict
a dictionary of inputs to use when formatting the recipe
None
Returns:
Type DescriptionList[str]
List[str]: a list of files / directories that were created
Source code in prefect/deployments/base.py
def initialize_project(\n name: str = None, recipe: str = None, inputs: dict = None\n) -> List[str]:\n \"\"\"\n Initializes a basic project structure with base files. If no name is provided, the name\n of the current directory is used. If no recipe is provided, one is inferred.\n\n Args:\n name (str, optional): the name of the project; if not provided, the current directory name\n recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n Returns:\n List[str]: a list of files / directories that were created\n \"\"\"\n # determine if in git repo or use directory name as a default\n is_git_based = False\n formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n dir_name = os.path.basename(os.getcwd())\n\n remote_url = _get_git_remote_origin_url()\n if remote_url:\n formatting_kwargs[\"repository\"] = remote_url\n is_git_based = True\n branch = _get_git_branch()\n formatting_kwargs[\"branch\"] = branch or \"main\"\n\n formatting_kwargs[\"name\"] = dir_name\n\n has_dockerfile = Path(\"Dockerfile\").exists()\n\n if has_dockerfile:\n formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n elif recipe is not None and \"docker\" in recipe:\n formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n # hand craft a pull step\n if is_git_based and recipe is None:\n if has_dockerfile:\n recipe = \"docker-git\"\n else:\n recipe = \"git\"\n elif recipe is None and has_dockerfile:\n recipe = \"docker\"\n elif recipe is None:\n recipe = \"local\"\n\n formatting_kwargs.update(inputs or {})\n configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n project_name = name or dir_name\n\n files = []\n if create_default_ignore_file(\".\"):\n files.append(\".prefectignore\")\n if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n files.append(\"prefect.yaml\")\n if set_prefect_hidden_dir():\n files.append(\".prefect/\")\n\n return files\n
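A usage sketch, run from the root of a project directory; the explicit local recipe is optional and would otherwise be inferred from the git remote and the presence of a Dockerfile, per the source above:
from prefect.deployments.base import initialize_project\n\n# Creates .prefectignore, prefect.yaml, and .prefect/ as needed and\n# returns the list of files and directories that were created.\ncreated_files = initialize_project(name=\"my-project\", recipe=\"local\")\nprint(created_files)\n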
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.register_flow","title":"register_flow
async
","text":"Register a flow with this project from an entrypoint.
Parameters:
Name Type Description Defaultentrypoint
str
the entrypoint to the flow to register
requiredforce
bool
whether or not to overwrite an existing flow with the same name
False
Raises:
Type DescriptionValueError
if force
is False
and registration would overwrite an existing flow
Source code in prefect/deployments/base.py
async def register_flow(entrypoint: str, force: bool = False):\n \"\"\"\n Register a flow with this project from an entrypoint.\n\n Args:\n entrypoint (str): the entrypoint to the flow to register\n force (bool, optional): whether or not to overwrite an existing flow with the same name\n\n Raises:\n ValueError: if `force` is `False` and registration would overwrite an existing flow\n \"\"\"\n try:\n fpath, obj_name = entrypoint.rsplit(\":\", 1)\n except ValueError as exc:\n if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n missing_flow_name_msg = (\n \"Your flow entrypoint must include the name of the function that is\"\n f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name> as your\"\n f\" entrypoint. If you meant to specify '{entrypoint}' as the deployment\"\n f\" name, try `prefect deploy -n {entrypoint}`.\"\n )\n raise ValueError(missing_flow_name_msg)\n else:\n raise exc\n\n flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n\n fpath = Path(fpath).absolute()\n prefect_dir = find_prefect_directory()\n if not prefect_dir:\n raise FileNotFoundError(\n \"No .prefect directory could be found - run `prefect project\"\n \" init` to create one.\"\n )\n\n entrypoint = f\"{fpath.relative_to(prefect_dir.parent)!s}:{obj_name}\"\n\n flows_file = prefect_dir / \"flows.json\"\n if flows_file.exists():\n with flows_file.open(mode=\"r\") as f:\n flows = json.load(f)\n else:\n flows = {}\n\n ## quality control\n if flow.name in flows and flows[flow.name] != entrypoint:\n if not force:\n raise ValueError(\n f\"Conflicting entry found for flow with name {flow.name!r}.\\nExisting\"\n f\" entrypoint: {flows[flow.name]}\\nAttempted entrypoint:\"\n f\" {entrypoint}\\n\\nYou can try removing the existing entry for\"\n f\" {flow.name!r} from your [yellow]~/.prefect/flows.json[/yellow].\"\n )\n\n flows[flow.name] = entrypoint\n\n with flows_file.open(mode=\"w\") as f:\n json.dump(flows, f, sort_keys=True, indent=2)\n\n return flow\n
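A usage sketch with a hypothetical entrypoint; a .prefect/ directory must already exist, e.g. from prefect project init:
import asyncio\n\nfrom prefect.deployments.base import register_flow\n\n\nasync def main():\n    # \"flows/etl.py:etl\" is a hypothetical <path>:<function> entrypoint.\n    flow = await register_flow(\"flows/etl.py:etl\", force=False)\n    print(f\"registered flow {flow.name!r}\")\n\n\nasyncio.run(main())\n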
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.set_prefect_hidden_dir","title":"set_prefect_hidden_dir
","text":"Creates default .prefect/
directory if one does not already exist. Returns boolean specifying whether or not a directory was created.
If a path is provided, the directory will be created in that location.
Source code in prefect/deployments/base.py
def set_prefect_hidden_dir(path: str = None) -> bool:\n \"\"\"\n Creates default `.prefect/` directory if one does not already exist.\n Returns boolean specifying whether or not a directory was created.\n\n If a path is provided, the directory will be created in that location.\n \"\"\"\n path = Path(path or \".\") / \".prefect\"\n\n # use exists so that we dont accidentally overwrite a file\n if path.exists():\n return False\n path.mkdir(mode=0o0700)\n return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments
","text":"Objects for specifying deployments and utilities for loading flows from deployments.
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment
","text":" Bases: BaseModel
DEPRECATION WARNING:
This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by flow.deploy
, which offers enhanced functionality and a better user experience. For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.
A Prefect Deployment definition, used for specifying and building deployments.
Parameters:
Name Type Description Defaultname
A name for the deployment (required).
requiredversion
An optional version for the deployment; defaults to the flow's version
requireddescription
An optional description of the deployment; defaults to the flow's description
requiredtags
An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name
.
schedule
A schedule to run this deployment on, once registered (deprecated)
requiredis_schedule_active
Whether or not the schedule is active (deprecated)
requiredschedules
A list of schedules to run this deployment on
requiredwork_queue_name
The work queue that will handle this deployment's runs
requiredwork_pool_name
The work pool for the deployment
requiredflow_name
The name of the flow this deployment encapsulates
requiredparameters
A dictionary of parameter values to pass to runs created from this deployment
requiredinfrastructure
An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses
requiredinfra_overrides
A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value
or namespace='prefect'
storage
An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path
requiredpath
The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
requiredentrypoint
The path to the entrypoint for the workflow, always relative to the path
parameter_openapi_schema
The parameter schema of the flow, including defaults.
requiredenforce_parameter_schema
Whether or not the Prefect API should enforce the parameter schema for this deployment.
requiredCreate a new deployment using configuration defaults for an imported flow:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n... flow=my_flow,\n... name=\"example\",\n... version=\"1\",\n... tags=[\"demo\"],\n... )\n>>> deployment.apply()\n\nCreate a new deployment with custom storage and an infrastructure override:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n\n>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n... flow=my_flow,\n... name=\"s3-example\",\n... version=\"2\",\n... tags=[\"aws\"],\n... storage=storage,\n... infra_overrides={\"env.PREFECT_LOGGING_LEVEL\": \"DEBUG\"},\n... )\n>>> deployment.apply()\n
Source code in prefect/deployments/deployments.py
@deprecated_class(\n start_date=\"Mar 2024\",\n help=\"Use `flow.deploy` to deploy your flows instead.\"\n \" Refer to the upgrade guide for more information:\"\n \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Deployment(BaseModel):\n \"\"\"\n DEPRECATION WARNING:\n\n This class is deprecated as of March 2024 and will not be available after September 2024.\n It has been replaced by `flow.deploy`, which offers enhanced functionality and better a better user experience.\n For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\n\n A Prefect Deployment definition, used for specifying and building deployments.\n\n Args:\n name: A name for the deployment (required).\n version: An optional version for the deployment; defaults to the flow's version\n description: An optional description of the deployment; defaults to the flow's\n description\n tags: An optional list of tags to associate with this deployment; note that tags\n are used only for organizational purposes. For delegating work to agents,\n see `work_queue_name`.\n schedule: A schedule to run this deployment on, once registered (deprecated)\n is_schedule_active: Whether or not the schedule is active (deprecated)\n schedules: A list of schedules to run this deployment on\n work_queue_name: The work queue that will handle this deployment's runs\n work_pool_name: The work pool for the deployment\n flow_name: The name of the flow this deployment encapsulates\n parameters: A dictionary of parameter values to pass to runs created from this\n deployment\n infrastructure: An optional infrastructure block used to configure\n infrastructure for runs; if not provided, will default to running this\n deployment in Agent subprocesses\n infra_overrides: A dictionary of dot delimited infrastructure overrides that\n will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n `namespace='prefect'`\n storage: An optional remote storage block used to store and retrieve this\n workflow; if not provided, will default to referencing this flow by its\n local path\n path: The path to the working directory for the workflow, relative to remote\n storage or, if stored on a local filesystem, an absolute path\n entrypoint: The path to the entrypoint for the workflow, always relative to the\n `path`\n parameter_openapi_schema: The parameter schema of the flow, including defaults.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n\n Examples:\n\n Create a new deployment using configuration defaults for an imported flow:\n\n >>> from my_project.flows import my_flow\n >>> from prefect.deployments import Deployment\n >>>\n >>> deployment = Deployment.build_from_flow(\n ... flow=my_flow,\n ... name=\"example\",\n ... version=\"1\",\n ... tags=[\"demo\"],\n >>> )\n >>> deployment.apply()\n\n Create a new deployment with custom storage and an infrastructure override:\n\n >>> from my_project.flows import my_flow\n >>> from prefect.deployments import Deployment\n >>> from prefect.filesystems import S3\n\n >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n >>> deployment = Deployment.build_from_flow(\n ... flow=my_flow,\n ... name=\"s3-example\",\n ... version=\"2\",\n ... tags=[\"aws\"],\n ... storage=storage,\n ... 
infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n >>> )\n >>> deployment.apply()\n\n \"\"\"\n\n class Config:\n json_encoders = {SecretDict: lambda v: v.dict()}\n validate_assignment = True\n extra = \"forbid\"\n\n @property\n def _editable_fields(self) -> List[str]:\n editable_fields = [\n \"name\",\n \"description\",\n \"version\",\n \"work_queue_name\",\n \"work_pool_name\",\n \"tags\",\n \"parameters\",\n \"schedule\",\n \"schedules\",\n \"is_schedule_active\",\n \"infra_overrides\",\n ]\n\n # if infrastructure is baked as a pre-saved block, then\n # editing its fields will not update anything\n if self.infrastructure._block_document_id:\n return editable_fields\n else:\n return editable_fields + [\"infrastructure\"]\n\n @property\n def location(self) -> str:\n \"\"\"\n The 'location' that this deployment points to is given by `path` alone\n in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n The underlying flow entrypoint is interpreted relative to this location.\n \"\"\"\n location = \"\"\n if self.storage:\n location = (\n self.storage.basepath + \"/\"\n if not self.storage.basepath.endswith(\"/\")\n else \"\"\n )\n if self.path:\n location += self.path\n return location\n\n @sync_compatible\n async def to_yaml(self, path: Path) -> None:\n yaml_dict = self._yaml_dict()\n schema = self.schema()\n\n with open(path, \"w\") as f:\n # write header\n f.write(\n \"###\\n### A complete description of a Prefect Deployment for flow\"\n f\" {self.flow_name!r}\\n###\\n\"\n )\n\n # write editable fields\n for field in self._editable_fields:\n # write any comments\n if schema[\"properties\"][field].get(\"yaml_comment\"):\n f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n # write the field\n yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n # write non-editable fields\n f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n yaml.dump(\n {k: v for k, v in yaml_dict.items() if k not in self._editable_fields},\n f,\n sort_keys=False,\n )\n\n def _yaml_dict(self) -> dict:\n \"\"\"\n Returns a YAML-compatible representation of this deployment as a dictionary.\n \"\"\"\n # avoids issues with UUIDs showing up in YAML\n all_fields = json.loads(\n self.json(\n exclude={\n \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n }\n )\n )\n if all_fields[\"storage\"]:\n all_fields[\"storage\"][\n \"_block_type_slug\"\n ] = self.storage.get_block_type_slug()\n if all_fields[\"infrastructure\"]:\n all_fields[\"infrastructure\"][\n \"_block_type_slug\"\n ] = self.infrastructure.get_block_type_slug()\n return all_fields\n\n @classmethod\n def _validate_schedule(cls, value):\n \"\"\"We do not support COUNT-based (# of occurrences) RRule schedules for deployments.\"\"\"\n if value:\n rrule_value = getattr(value, \"rrule\", None)\n if rrule_value and \"COUNT\" in rrule_value.upper():\n raise ValueError(\n \"RRule schedules with `COUNT` are not supported. 
Please use `UNTIL`\"\n \" or the `/deployments/{id}/schedule` endpoint to schedule a fixed\"\n \" number of flow runs.\"\n )\n\n # top level metadata\n name: str = Field(..., description=\"The name of the deployment.\")\n description: Optional[str] = Field(\n default=None, description=\"An optional description of the deployment.\"\n )\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"One of more tags to apply to this deployment.\",\n )\n schedule: Optional[SCHEDULE_TYPES] = Field(default=None)\n schedules: List[MinimalDeploymentSchedule] = Field(\n default_factory=list,\n description=\"The schedules to run this deployment on.\",\n )\n is_schedule_active: Optional[bool] = Field(\n default=None, description=\"Whether or not the schedule is active.\"\n )\n flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n work_queue_name: Optional[str] = Field(\n \"default\",\n description=\"The work queue for the deployment.\",\n yaml_comment=\"The work queue that will handle this deployment's runs\",\n )\n work_pool_name: Optional[str] = Field(\n default=None, description=\"The work pool for the deployment\"\n )\n # flow data\n parameters: Dict[str, Any] = Field(default_factory=dict)\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n infrastructure: Infrastructure = Field(default_factory=Process)\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n storage: Optional[Block] = Field(\n None,\n help=\"The remote storage to use for this workflow.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n parameter_openapi_schema: ParameterSchema = Field(\n default_factory=ParameterSchema,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n triggers: List[DeploymentTrigger] = Field(\n default_factory=list,\n description=\"The triggers that should cause this deployment to run.\",\n )\n # defaults to None to allow for backwards compatibility\n enforce_parameter_schema: Optional[bool] = Field(\n default=None,\n description=(\n \"Whether or not the Prefect API should enforce the parameter schema for\"\n \" this deployment.\"\n ),\n )\n\n @validator(\"infrastructure\", pre=True)\n def infrastructure_must_have_capabilities(cls, value):\n if isinstance(value, dict):\n if \"_block_type_slug\" in value:\n # Replace private attribute with public for dispatch\n value[\"block_type_slug\"] = value.pop(\"_block_type_slug\")\n block = Block(**value)\n elif value is None:\n return value\n else:\n block = value\n\n if \"run-infrastructure\" not in block.get_block_capabilities():\n raise ValueError(\n \"Infrastructure block must have 'run-infrastructure' capabilities.\"\n )\n return block\n\n @validator(\"storage\", pre=True)\n def storage_must_have_capabilities(cls, value):\n if isinstance(value, dict):\n block_type = 
Block.get_block_class_from_key(value.pop(\"_block_type_slug\"))\n block = block_type(**value)\n elif value is None:\n return value\n else:\n block = value\n\n capabilities = block.get_block_capabilities()\n if \"get-directory\" not in capabilities:\n raise ValueError(\n \"Remote Storage block must have 'get-directory' capabilities.\"\n )\n return block\n\n @validator(\"parameter_openapi_schema\", pre=True)\n def handle_openapi_schema(cls, value):\n \"\"\"\n This method ensures setting a value of `None` is handled gracefully.\n \"\"\"\n if value is None:\n return ParameterSchema()\n return value\n\n @validator(\"triggers\")\n def validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n\n @root_validator(pre=True)\n def validate_deprecated_schedule_fields(cls, values):\n if values.get(\"schedule\") and not values.get(\"schedules\"):\n logger.warning(\n \"The field 'schedule' in 'Deployment' has been deprecated. It will not be \"\n \"available after Sep 2024. Define schedules in the `schedules` list instead.\"\n )\n elif values.get(\"is_schedule_active\") and not values.get(\"schedules\"):\n logger.warning(\n \"The field 'is_schedule_active' in 'Deployment' has been deprecated. It will \"\n \"not be available after Sep 2024. Use the `active` flag within a schedule in \"\n \"the `schedules` list instead and the `pause` flag in 'Deployment' to pause \"\n \"all schedules.\"\n )\n return values\n\n @root_validator(pre=True)\n def reconcile_schedules(cls, values):\n schedule = values.get(\"schedule\", NotSet)\n schedules = values.get(\"schedules\", NotSet)\n\n if schedules is not NotSet:\n values[\"schedules\"] = normalize_to_minimal_deployment_schedules(schedules)\n elif schedule is not NotSet:\n values[\"schedule\"] = None\n\n if schedule is None:\n values[\"schedules\"] = []\n else:\n values[\"schedules\"] = [\n create_minimal_deployment_schedule(\n schedule=schedule, active=values.get(\"is_schedule_active\")\n )\n ]\n\n for schedule in values.get(\"schedules\", []):\n cls._validate_schedule(schedule.schedule)\n\n return values\n\n @classmethod\n @sync_compatible\n async def load_from_yaml(cls, path: str):\n data = yaml.safe_load(await anyio.Path(path).read_bytes())\n # load blocks from server to ensure secret values are properly hydrated\n if data.get(\"storage\"):\n block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n # if no doc name, this block is not stored on the server\n if block_doc_name:\n block_slug = data[\"storage\"][\"_block_type_slug\"]\n block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n data[\"storage\"] = block\n\n if data.get(\"infrastructure\"):\n block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n # if no doc name, this block is not stored on the server\n if block_doc_name:\n block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n data[\"infrastructure\"] = block\n\n return cls(**data)\n\n @sync_compatible\n async def load(self) -> bool:\n \"\"\"\n Queries the API for a deployment with this name for this flow, and if found,\n prepopulates any settings that were not set at initialization.\n\n Returns a boolean specifying whether a load was successful or not.\n\n Raises:\n - ValueError: if both name and 
flow name are not set\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be provided.\")\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(\n f\"{self.flow_name}/{self.name}\"\n )\n if deployment.storage_document_id:\n Block._from_block_document(\n await client.read_block_document(deployment.storage_document_id)\n )\n\n excluded_fields = self.__fields_set__.union(\n {\n \"infrastructure\",\n \"storage\",\n \"timestamp\",\n \"triggers\",\n \"enforce_parameter_schema\",\n \"schedules\",\n \"schedule\",\n \"is_schedule_active\",\n }\n )\n for field in set(self.__fields__.keys()) - excluded_fields:\n new_value = getattr(deployment, field)\n setattr(self, field, new_value)\n\n if \"schedules\" not in self.__fields_set__:\n self.schedules = [\n MinimalDeploymentSchedule(\n **schedule.dict(include={\"schedule\", \"active\"})\n )\n for schedule in deployment.schedules\n ]\n\n # The API server generates the \"schedule\" field from the\n # current list of schedules, so if the user has locally set\n # \"schedules\" to anything, we should avoid sending \"schedule\"\n # and let the API server generate a new value if necessary.\n if \"schedules\" in self.__fields_set__:\n self.schedule = None\n self.is_schedule_active = None\n else:\n # The user isn't using \"schedules,\" so we should\n # populate \"schedule\" and \"is_schedule_active\" from the\n # API's version of the deployment, unless the user gave\n # us these fields in __init__().\n if \"schedule\" not in self.__fields_set__:\n self.schedule = deployment.schedule\n if \"is_schedule_active\" not in self.__fields_set__:\n self.is_schedule_active = deployment.is_schedule_active\n\n if \"infrastructure\" not in self.__fields_set__:\n if deployment.infrastructure_document_id:\n self.infrastructure = Block._from_block_document(\n await client.read_block_document(\n deployment.infrastructure_document_id\n )\n )\n if \"storage\" not in self.__fields_set__:\n if deployment.storage_document_id:\n self.storage = Block._from_block_document(\n await client.read_block_document(\n deployment.storage_document_id\n )\n )\n except ObjectNotFound:\n return False\n return True\n\n @sync_compatible\n async def update(self, ignore_none: bool = False, **kwargs):\n \"\"\"\n Performs an in-place update with the provided settings.\n\n Args:\n ignore_none: if True, all `None` values are ignored when performing the\n update\n \"\"\"\n unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n if unknown_keys:\n raise ValueError(\n f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n )\n for key, value in kwargs.items():\n if ignore_none and value is None:\n continue\n setattr(self, key, value)\n\n @sync_compatible\n async def upload_to_storage(\n self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n ) -> Optional[int]:\n \"\"\"\n Uploads the workflow this deployment represents using a provided storage block;\n if no block is provided, defaults to configuring self for local storage.\n\n Args:\n storage_block: a string reference a remote storage block slug `$type/$name`;\n if provided, used to upload the workflow's project\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n \"\"\"\n file_count = None\n if storage_block:\n storage = await Block.load(storage_block)\n\n if 
\"put-directory\" not in storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {storage!r} missing 'put-directory' capability.\"\n )\n\n self.storage = storage\n\n # upload current directory to storage location\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n elif self.storage:\n if \"put-directory\" not in self.storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {self.storage!r} missing 'put-directory'\"\n \" capability.\"\n )\n\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n\n # persists storage now in case it contains secret values\n if self.storage and not self.storage._block_document_id:\n await self.storage._save(is_anonymous=True)\n\n return file_count\n\n @sync_compatible\n async def apply(\n self, upload: bool = False, work_queue_concurrency: int = None\n ) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n upload: if True, deployment files are automatically uploaded to remote\n storage\n work_queue_concurrency: If provided, sets the concurrency limit on the\n deployment's work queue\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be set.\")\n async with get_client() as client:\n # prep IDs\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n infrastructure_document_id = self.infrastructure._block_document_id\n if not infrastructure_document_id:\n # if not building off a block, will create an anonymous block\n self.infrastructure = self.infrastructure.copy()\n infrastructure_document_id = await self.infrastructure._save(\n is_anonymous=True,\n )\n\n if upload:\n await self.upload_to_storage()\n\n if self.work_queue_name and work_queue_concurrency is not None:\n try:\n res = await client.create_work_queue(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n except ObjectAlreadyExists:\n res = await client.read_work_queue_by_name(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n await client.update_work_queue(\n res.id, concurrency_limit=work_queue_concurrency\n )\n\n if self.schedule:\n logger.info(\n \"Interpreting the deprecated `schedule` field as an entry in \"\n \"`schedules`.\"\n )\n schedules = [\n DeploymentScheduleCreate(\n schedule=self.schedule, active=self.is_schedule_active\n )\n ]\n elif self.schedules:\n schedules = [\n DeploymentScheduleCreate(**schedule.dict())\n for schedule in self.schedules\n ]\n else:\n schedules = None\n\n # we assume storage was already saved\n storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n deployment_id = await client.create_deployment(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=self.work_pool_name,\n version=self.version,\n schedules=schedules,\n is_schedule_active=self.is_schedule_active,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n manifest_path=self.manifest_path, # allows for backwards YAML compat\n path=self.path,\n entrypoint=self.entrypoint,\n infra_overrides=self.infra_overrides,\n storage_document_id=storage_document_id,\n infrastructure_document_id=infrastructure_document_id,\n parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the 
deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n\n @classmethod\n @sync_compatible\n async def build_from_flow(\n cls,\n flow: Flow,\n name: str,\n output: str = None,\n skip_upload: bool = False,\n ignore_file: str = \".prefectignore\",\n apply: bool = False,\n load_existing: bool = True,\n schedules: Optional[FlexibleScheduleList] = None,\n **kwargs,\n ) -> \"Deployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n output (optional): if provided, the full deployment specification will be\n written as a YAML file in the location specified by `output`\n skip_upload: if True, deployment files are not automatically uploaded to\n remote storage\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n apply: if True, the deployment is automatically registered with the API\n load_existing: if True, load any settings that may already be configured for\n the named deployment server-side (e.g., schedules, default parameter\n values, etc.)\n schedules: An optional list of schedules. Each item in the list can be:\n - An instance of `MinimalDeploymentSchedule`.\n - A dictionary with a `schedule` key, and optionally, an\n `active` key. 
The `schedule` key should correspond to a\n schedule type, and `active` is a boolean indicating whether\n the schedule is active or not.\n - An instance of one of the predefined schedule types:\n `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n **kwargs: other keyword arguments to pass to the constructor for the\n `Deployment` class\n \"\"\"\n if not name:\n raise ValueError(\"A deployment name must be provided.\")\n\n # note that `deployment.load` only updates settings that were *not*\n # provided at initialization\n\n deployment_args = {\n \"name\": name,\n \"flow_name\": flow.name,\n **kwargs,\n }\n\n if schedules is not None:\n deployment_args[\"schedules\"] = schedules\n\n deployment = cls(**deployment_args)\n deployment.flow_name = flow.name\n if not deployment.entrypoint:\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if not flow_file:\n if not mod_name:\n # todo, check if the file location was manually set already\n raise ValueError(\"Could not determine flow's file location.\")\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n if not flow_file:\n raise ValueError(\"Could not determine flow's file location.\")\n\n # set entrypoint\n entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if load_existing:\n await deployment.load()\n\n # set a few attributes for this flow object\n deployment.parameter_openapi_schema = parameter_schema(flow)\n\n # ensure the ignore file exists\n if not Path(ignore_file).exists():\n Path(ignore_file).touch()\n\n if not deployment.version:\n deployment.version = flow.version\n if not deployment.description:\n deployment.description = flow.description\n\n # proxy for whether infra is docker-based\n is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n if not deployment.storage and not is_docker_based and not deployment.path:\n deployment.path = str(Path(\".\").absolute())\n elif not deployment.storage and is_docker_based:\n # only update if a path is not already set\n if not deployment.path:\n deployment.path = \"/opt/prefect/flows\"\n\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n await deployment.upload_to_storage(ignore_file=ignore_file)\n\n if output:\n await deployment.to_yaml(output)\n\n if apply:\n await deployment.apply()\n\n return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str
property
","text":"The 'location' that this deployment points to is given by path
alone in the case of no remote storage, and otherwise by storage.basepath / path
.
The underlying flow entrypoint is interpreted relative to this location.
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply
async
","text":"Registers this deployment with the API and returns the deployment's ID.
Parameters:
Name Type Description Defaultupload
bool
if True, deployment files are automatically uploaded to remote storage
False
work_queue_concurrency
int
If provided, sets the concurrency limit on the deployment's work queue
None
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def apply(\n self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n upload: if True, deployment files are automatically uploaded to remote\n storage\n work_queue_concurrency: If provided, sets the concurrency limit on the\n deployment's work queue\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be set.\")\n async with get_client() as client:\n # prep IDs\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n infrastructure_document_id = self.infrastructure._block_document_id\n if not infrastructure_document_id:\n # if not building off a block, will create an anonymous block\n self.infrastructure = self.infrastructure.copy()\n infrastructure_document_id = await self.infrastructure._save(\n is_anonymous=True,\n )\n\n if upload:\n await self.upload_to_storage()\n\n if self.work_queue_name and work_queue_concurrency is not None:\n try:\n res = await client.create_work_queue(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n except ObjectAlreadyExists:\n res = await client.read_work_queue_by_name(\n name=self.work_queue_name, work_pool_name=self.work_pool_name\n )\n await client.update_work_queue(\n res.id, concurrency_limit=work_queue_concurrency\n )\n\n if self.schedule:\n logger.info(\n \"Interpreting the deprecated `schedule` field as an entry in \"\n \"`schedules`.\"\n )\n schedules = [\n DeploymentScheduleCreate(\n schedule=self.schedule, active=self.is_schedule_active\n )\n ]\n elif self.schedules:\n schedules = [\n DeploymentScheduleCreate(**schedule.dict())\n for schedule in self.schedules\n ]\n else:\n schedules = None\n\n # we assume storage was already saved\n storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n deployment_id = await client.create_deployment(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=self.work_pool_name,\n version=self.version,\n schedules=schedules,\n is_schedule_active=self.is_schedule_active,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n manifest_path=self.manifest_path, # allows for backwards YAML compat\n path=self.path,\n entrypoint=self.entrypoint,\n infra_overrides=self.infra_overrides,\n storage_document_id=storage_document_id,\n infrastructure_document_id=infrastructure_document_id,\n parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n
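A sketch of the optional work_queue_concurrency parameter; the flow import is hypothetical, and build_from_flow is documented below:
from prefect.deployments import Deployment\n\nfrom my_project.flows import my_flow  # hypothetical import\n\ndeployment = Deployment.build_from_flow(flow=my_flow, name=\"example\")\n# Registers the deployment and caps its work queue at 5 concurrent runs.\ndeployment_id = deployment.apply(work_queue_concurrency=5)\nprint(deployment_id)\n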
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow
async
classmethod
","text":"Configure a deployment for a given flow.
Parameters:
Name Type Description Defaultflow
Flow
A flow function to deploy
requiredname
str
A name for the deployment
requiredoutput
optional
if provided, the full deployment specification will be written as a YAML file in the location specified by output
None
skip_upload
bool
if True, deployment files are not automatically uploaded to remote storage
False
ignore_file
str
an optional path to a .prefectignore
file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore
in the current working directory
'.prefectignore'
apply
bool
if True, the deployment is automatically registered with the API
False
load_existing
bool
if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)
True
schedules
Optional[FlexibleScheduleList]
An optional list of schedules. Each item in the list can be an instance of MinimalDeploymentSchedule
; a dictionary with a schedule
key and, optionally, an active
key, where the schedule
key should correspond to a schedule type and active
is a boolean indicating whether the schedule is active; or an instance of one of the predefined schedule types: IntervalSchedule
, CronSchedule
, or RRuleSchedule
.
None
**kwargs
other keyword arguments to pass to the constructor for the Deployment
class
{}
Source code in prefect/deployments/deployments.py
@classmethod\n@sync_compatible\nasync def build_from_flow(\n cls,\n flow: Flow,\n name: str,\n output: str = None,\n skip_upload: bool = False,\n ignore_file: str = \".prefectignore\",\n apply: bool = False,\n load_existing: bool = True,\n schedules: Optional[FlexibleScheduleList] = None,\n **kwargs,\n) -> \"Deployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n output (optional): if provided, the full deployment specification will be\n written as a YAML file in the location specified by `output`\n skip_upload: if True, deployment files are not automatically uploaded to\n remote storage\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n apply: if True, the deployment is automatically registered with the API\n load_existing: if True, load any settings that may already be configured for\n the named deployment server-side (e.g., schedules, default parameter\n values, etc.)\n schedules: An optional list of schedules. Each item in the list can be:\n - An instance of `MinimalDeploymentSchedule`.\n - A dictionary with a `schedule` key, and optionally, an\n `active` key. The `schedule` key should correspond to a\n schedule type, and `active` is a boolean indicating whether\n the schedule is active or not.\n - An instance of one of the predefined schedule types:\n `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n **kwargs: other keyword arguments to pass to the constructor for the\n `Deployment` class\n \"\"\"\n if not name:\n raise ValueError(\"A deployment name must be provided.\")\n\n # note that `deployment.load` only updates settings that were *not*\n # provided at initialization\n\n deployment_args = {\n \"name\": name,\n \"flow_name\": flow.name,\n **kwargs,\n }\n\n if schedules is not None:\n deployment_args[\"schedules\"] = schedules\n\n deployment = cls(**deployment_args)\n deployment.flow_name = flow.name\n if not deployment.entrypoint:\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if not flow_file:\n if not mod_name:\n # todo, check if the file location was manually set already\n raise ValueError(\"Could not determine flow's file location.\")\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n if not flow_file:\n raise ValueError(\"Could not determine flow's file location.\")\n\n # set entrypoint\n entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if load_existing:\n await deployment.load()\n\n # set a few attributes for this flow object\n deployment.parameter_openapi_schema = parameter_schema(flow)\n\n # ensure the ignore file exists\n if not Path(ignore_file).exists():\n Path(ignore_file).touch()\n\n if not deployment.version:\n deployment.version = flow.version\n if not deployment.description:\n deployment.description = flow.description\n\n # proxy for whether infra is docker-based\n is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n if not deployment.storage and not is_docker_based and not deployment.path:\n deployment.path = str(Path(\".\").absolute())\n elif not deployment.storage and is_docker_based:\n # only update if a path is not already set\n 
if not deployment.path:\n deployment.path = \"/opt/prefect/flows\"\n\n if not skip_upload:\n if (\n deployment.storage\n and \"put-directory\" in deployment.storage.get_block_capabilities()\n ):\n await deployment.upload_to_storage(ignore_file=ignore_file)\n\n if output:\n await deployment.to_yaml(output)\n\n if apply:\n await deployment.apply()\n\n return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.handle_openapi_schema","title":"handle_openapi_schema
","text":"This method ensures setting a value of None
is handled gracefully.
Source code in prefect/deployments/deployments.py
@validator(\"parameter_openapi_schema\", pre=True)\ndef handle_openapi_schema(cls, value):\n \"\"\"\n This method ensures setting a value of `None` is handled gracefully.\n \"\"\"\n if value is None:\n return ParameterSchema()\n return value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load
async
","text":"Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.
Returns a boolean specifying whether a load was successful or not.
Raises:
Type DescriptionValueError
if either the deployment name or flow name is not set
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def load(self) -> bool:\n \"\"\"\n Queries the API for a deployment with this name for this flow, and if found,\n prepopulates any settings that were not set at initialization.\n\n Returns a boolean specifying whether a load was successful or not.\n\n Raises:\n - ValueError: if both name and flow name are not set\n \"\"\"\n if not self.name or not self.flow_name:\n raise ValueError(\"Both a deployment name and flow name must be provided.\")\n async with get_client() as client:\n try:\n deployment = await client.read_deployment_by_name(\n f\"{self.flow_name}/{self.name}\"\n )\n if deployment.storage_document_id:\n Block._from_block_document(\n await client.read_block_document(deployment.storage_document_id)\n )\n\n excluded_fields = self.__fields_set__.union(\n {\n \"infrastructure\",\n \"storage\",\n \"timestamp\",\n \"triggers\",\n \"enforce_parameter_schema\",\n \"schedules\",\n \"schedule\",\n \"is_schedule_active\",\n }\n )\n for field in set(self.__fields__.keys()) - excluded_fields:\n new_value = getattr(deployment, field)\n setattr(self, field, new_value)\n\n if \"schedules\" not in self.__fields_set__:\n self.schedules = [\n MinimalDeploymentSchedule(\n **schedule.dict(include={\"schedule\", \"active\"})\n )\n for schedule in deployment.schedules\n ]\n\n # The API server generates the \"schedule\" field from the\n # current list of schedules, so if the user has locally set\n # \"schedules\" to anything, we should avoid sending \"schedule\"\n # and let the API server generate a new value if necessary.\n if \"schedules\" in self.__fields_set__:\n self.schedule = None\n self.is_schedule_active = None\n else:\n # The user isn't using \"schedules,\" so we should\n # populate \"schedule\" and \"is_schedule_active\" from the\n # API's version of the deployment, unless the user gave\n # us these fields in __init__().\n if \"schedule\" not in self.__fields_set__:\n self.schedule = deployment.schedule\n if \"is_schedule_active\" not in self.__fields_set__:\n self.is_schedule_active = deployment.is_schedule_active\n\n if \"infrastructure\" not in self.__fields_set__:\n if deployment.infrastructure_document_id:\n self.infrastructure = Block._from_block_document(\n await client.read_block_document(\n deployment.infrastructure_document_id\n )\n )\n if \"storage\" not in self.__fields_set__:\n if deployment.storage_document_id:\n self.storage = Block._from_block_document(\n await client.read_block_document(\n deployment.storage_document_id\n )\n )\n except ObjectNotFound:\n return False\n return True\n
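A usage sketch; the deployment and flow names here are assumptions:
from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\n# Pulls server-side values for any fields not set at initialization;\n# returns False when no matching deployment exists.\nfound = deployment.load()\nprint(found)\n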
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update
async
","text":"Performs an in-place update with the provided settings.
Parameters:
Name Type Description Defaultignore_none
bool
if True, all None
values are ignored when performing the update
False
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n \"\"\"\n Performs an in-place update with the provided settings.\n\n Args:\n ignore_none: if True, all `None` values are ignored when performing the\n update\n \"\"\"\n unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n if unknown_keys:\n raise ValueError(\n f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n )\n for key, value in kwargs.items():\n if ignore_none and value is None:\n continue\n setattr(self, key, value)\n
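A usage sketch:
from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\n# None values are skipped because ignore_none=True; unknown attribute\n# names raise a ValueError.\ndeployment.update(ignore_none=True, version=\"2\", description=None)\n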
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage
async
","text":"Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.
Parameters:
Name Type Description Defaultstorage_block
str
a string referencing a remote storage block slug $type/$name
; if provided, used to upload the workflow's project
None
ignore_file
str
an optional path to a .prefectignore
file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore
in the current working directory
'.prefectignore'
Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def upload_to_storage(\n self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n \"\"\"\n Uploads the workflow this deployment represents using a provided storage block;\n if no block is provided, defaults to configuring self for local storage.\n\n Args:\n storage_block: a string reference a remote storage block slug `$type/$name`;\n if provided, used to upload the workflow's project\n ignore_file: an optional path to a `.prefectignore` file that specifies\n filename patterns to ignore when uploading to remote storage; if not\n provided, looks for `.prefectignore` in the current working directory\n \"\"\"\n file_count = None\n if storage_block:\n storage = await Block.load(storage_block)\n\n if \"put-directory\" not in storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {storage!r} missing 'put-directory' capability.\"\n )\n\n self.storage = storage\n\n # upload current directory to storage location\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n elif self.storage:\n if \"put-directory\" not in self.storage.get_block_capabilities():\n raise BlockMissingCapabilities(\n f\"Storage block {self.storage!r} missing 'put-directory'\"\n \" capability.\"\n )\n\n file_count = await self.storage.put_directory(\n ignore_file=ignore_file, to_path=self.path\n )\n\n # persists storage now in case it contains secret values\n if self.storage and not self.storage._block_document_id:\n await self.storage._save(is_anonymous=True)\n\n return file_count\n
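A usage sketch; s3/dev-bucket is a hypothetical $type/$name block slug, and the referenced block must already be saved with the put-directory capability:
from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\n# Uploads the current directory through the named storage block and\n# returns the number of files uploaded.\nfile_count = deployment.upload_to_storage(storage_block=\"s3/dev-bucket\")\nprint(file_count)\n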
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.validate_automation_names","title":"validate_automation_names
","text":"Ensure that each trigger has a name for its automation if none is provided.
Source code inprefect/deployments/deployments.py
@validator(\"triggers\")\ndef validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml
","text":"Load deployments from a yaml file.
Source code inprefect/deployments/deployments.py
@deprecated_callable(start_date=\"Mar 2024\")\ndef load_deployments_from_yaml(\n path: str,\n) -> PrefectObjectRegistry:\n \"\"\"\n Load deployments from a yaml file.\n \"\"\"\n with open(path, \"r\") as f:\n contents = f.read()\n\n # Parse into a yaml tree to retrieve separate documents\n nodes = yaml.compose_all(contents)\n\n with PrefectObjectRegistry(capture_failures=True) as registry:\n for node in nodes:\n with tmpchdir(path):\n deployment_dict = yaml.safe_load(yaml.serialize(node))\n # The return value is not necessary, just instantiating the Deployment\n # is enough to get it recorded on the registry\n parse_obj_as(Deployment, deployment_dict)\n\n return registry\n
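Note that this function is deprecated as of March 2024. A minimal sketch, assuming a deployments.yaml file containing one or more deployment documents exists in the current directory:

from prefect.deployments.deployments import load_deployments_from_yaml

# Each YAML document is parsed into a Deployment and recorded on the registry
registry = load_deployments_from_yaml("deployments.yaml")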
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run
async
","text":"Load a flow from the location/script provided in a deployment's storage document.
If ignore_storage=True
is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.
prefect/deployments/deployments.py
@inject_client\nasync def load_flow_from_flow_run(\n flow_run: FlowRun,\n client: PrefectClient,\n ignore_storage: bool = False,\n storage_base_path: Optional[str] = None,\n) -> Flow:\n \"\"\"\n Load a flow from the location/script provided in a deployment's storage document.\n\n If `ignore_storage=True` is provided, no pull from remote storage occurs. This flag\n is largely for testing, and assumes the flow is already available locally.\n \"\"\"\n deployment = await client.read_deployment(flow_run.deployment_id)\n\n if deployment.entrypoint is None:\n raise ValueError(\n f\"Deployment {deployment.id} does not have an entrypoint and can not be run.\"\n )\n\n run_logger = flow_run_logger(flow_run)\n\n runner_storage_base_path = storage_base_path or os.environ.get(\n \"PREFECT__STORAGE_BASE_PATH\"\n )\n\n # If there's no colon, assume it's a module path\n if \":\" not in deployment.entrypoint:\n run_logger.debug(\n f\"Importing flow code from module path {deployment.entrypoint}\"\n )\n flow = await run_sync_in_worker_thread(\n load_flow_from_entrypoint, deployment.entrypoint\n )\n return flow\n\n if not ignore_storage and not deployment.pull_steps:\n sys.path.insert(0, \".\")\n if deployment.storage_document_id:\n storage_document = await client.read_block_document(\n deployment.storage_document_id\n )\n storage_block = Block._from_block_document(storage_document)\n else:\n basepath = deployment.path or Path(deployment.manifest_path).parent\n if runner_storage_base_path:\n basepath = str(basepath).replace(\n \"$STORAGE_BASE_PATH\", runner_storage_base_path\n )\n storage_block = LocalFileSystem(basepath=basepath)\n\n from_path = (\n str(deployment.path).replace(\"$STORAGE_BASE_PATH\", runner_storage_base_path)\n if runner_storage_base_path and deployment.path\n else deployment.path\n )\n run_logger.info(f\"Downloading flow code from storage at {from_path!r}\")\n await storage_block.get_directory(from_path=from_path, local_path=\".\")\n\n if deployment.pull_steps:\n run_logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n output = await run_steps(deployment.pull_steps)\n if output.get(\"directory\"):\n run_logger.debug(f\"Changing working directory to {output['directory']!r}\")\n os.chdir(output[\"directory\"])\n\n import_path = relative_path_to_current_platform(deployment.entrypoint)\n # for backwards compat\n if deployment.manifest_path:\n with open(deployment.manifest_path, \"r\") as f:\n import_path = json.load(f)[\"import_path\"]\n import_path = (\n Path(deployment.manifest_path).parent / import_path\n ).absolute()\n run_logger.debug(f\"Importing flow code from '{import_path}'\")\n\n flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n\n return flow\n
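This function is primarily used internally by workers and runners, but a rough sketch of direct usage might look like the following (the flow run ID is a placeholder):

from prefect import get_client
from prefect.deployments.deployments import load_flow_from_flow_run

async def reload_flow(flow_run_id):
    async with get_client() as client:
        flow_run = await client.read_flow_run(flow_run_id)
        # Pulls the flow code per the deployment's storage or pull steps,
        # then imports and returns the flow object
        return await load_flow_from_flow_run(flow_run, client=client)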
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment
async
","text":"Create a flow run for a deployment and return it after completion or a timeout.
By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set timeout=0
.
Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing.
If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing as_subflow=False
.
Parameters:
Name Type Description Defaultname
Union[str, UUID]
The deployment id or deployment name in the form: <slugified-flow-name>/<slugified-deployment-name>
parameters
Optional[dict]
Parameter overrides for this flow run. Merged with the deployment defaults.
None
scheduled_time
Optional[datetime]
The time to schedule the flow run for, defaults to scheduling the flow run to start now.
None
flow_run_name
Optional[str]
A name for the created flow run
None
timeout
Optional[float]
The amount of time to wait (in seconds) for the flow run to complete before returning. Setting timeout
to 0 will return the flow run metadata immediately. Setting timeout
to None will allow this function to poll indefinitely. Defaults to None.
None
poll_interval
Optional[float]
The number of seconds between polls
5
tags
Optional[Iterable[str]]
A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes.
None
idempotency_key
Optional[str]
A unique value to recognize retries of the same run, and prevent creating multiple flow runs.
None
work_queue_name
Optional[str]
The name of a work queue to use for this run. Defaults to the default work queue for the deployment.
None
as_subflow
Optional[bool]
Whether to link the flow run as a subflow of the current flow or task run.
True
Source code in prefect/deployments/deployments.py
@sync_compatible\n@inject_client\nasync def run_deployment(\n name: Union[str, UUID],\n client: Optional[PrefectClient] = None,\n parameters: Optional[dict] = None,\n scheduled_time: Optional[datetime] = None,\n flow_run_name: Optional[str] = None,\n timeout: Optional[float] = None,\n poll_interval: Optional[float] = 5,\n tags: Optional[Iterable[str]] = None,\n idempotency_key: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n as_subflow: Optional[bool] = True,\n job_variables: Optional[dict] = None,\n) -> FlowRun:\n \"\"\"\n Create a flow run for a deployment and return it after completion or a timeout.\n\n By default, this function blocks until the flow run finishes executing.\n Specify a timeout (in seconds) to wait for the flow run to execute before\n returning flow run metadata. To return immediately, without waiting for the\n flow run to execute, set `timeout=0`.\n\n Note that if you specify a timeout, this function will return the flow run\n metadata whether or not the flow run finished executing.\n\n If called within a flow or task, the flow run this function creates will\n be linked to the current flow run as a subflow. Disable this behavior by\n passing `as_subflow=False`.\n\n Args:\n name: The deployment id or deployment name in the form:\n `<slugified-flow-name>/<slugified-deployment-name>`\n parameters: Parameter overrides for this flow run. Merged with the deployment\n defaults.\n scheduled_time: The time to schedule the flow run for, defaults to scheduling\n the flow run to start now.\n flow_run_name: A name for the created flow run\n timeout: The amount of time to wait (in seconds) for the flow run to\n complete before returning. Setting `timeout` to 0 will return the flow\n run metadata immediately. Setting `timeout` to None will allow this\n function to poll indefinitely. Defaults to None.\n poll_interval: The number of seconds between polls\n tags: A list of tags to associate with this flow run; tags can be used in\n automations and for organizational purposes.\n idempotency_key: A unique value to recognize retries of the same run, and\n prevent creating multiple flow runs.\n work_queue_name: The name of a work queue to use for this run. Defaults to\n the default work queue for the deployment.\n as_subflow: Whether to link the flow run as a subflow of the current\n flow or task run.\n \"\"\"\n if timeout is not None and timeout < 0:\n raise ValueError(\"`timeout` cannot be negative\")\n\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n\n parameters = parameters or {}\n\n deployment_id = None\n\n if isinstance(name, UUID):\n deployment_id = name\n else:\n try:\n deployment_id = UUID(name)\n except ValueError:\n pass\n\n if deployment_id:\n deployment = await client.read_deployment(deployment_id=deployment_id)\n else:\n deployment = await client.read_deployment_by_name(name)\n\n flow_run_ctx = FlowRunContext.get()\n task_run_ctx = TaskRunContext.get()\n if as_subflow and (flow_run_ctx or task_run_ctx):\n # This was called from a flow. 
Link the flow run as a subflow.\n from prefect.engine import (\n Pending,\n _dynamic_key_for_task_run,\n collect_task_run_inputs,\n )\n\n task_inputs = {\n k: await collect_task_run_inputs(v) for k, v in parameters.items()\n }\n\n if deployment_id:\n flow = await client.read_flow(deployment.flow_id)\n deployment_name = f\"{flow.name}/{deployment.name}\"\n else:\n deployment_name = name\n\n # Generate a task in the parent flow run to represent the result of the subflow\n dummy_task = Task(\n name=deployment_name,\n fn=lambda: None,\n version=deployment.version,\n )\n # Override the default task key to include the deployment name\n dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n flow_run_id = (\n flow_run_ctx.flow_run.id\n if flow_run_ctx\n else task_run_ctx.task_run.flow_run_id\n )\n dynamic_key = (\n _dynamic_key_for_task_run(flow_run_ctx, dummy_task)\n if flow_run_ctx\n else task_run_ctx.task_run.dynamic_key\n )\n parent_task_run = await client.create_task_run(\n task=dummy_task,\n flow_run_id=flow_run_id,\n dynamic_key=dynamic_key,\n task_inputs=task_inputs,\n state=Pending(),\n )\n parent_task_run_id = parent_task_run.id\n else:\n parent_task_run_id = None\n\n flow_run = await client.create_flow_run_from_deployment(\n deployment.id,\n parameters=parameters,\n state=Scheduled(scheduled_time=scheduled_time),\n name=flow_run_name,\n tags=tags,\n idempotency_key=idempotency_key,\n parent_task_run_id=parent_task_run_id,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n flow_run_id = flow_run.id\n\n if timeout == 0:\n return flow_run\n\n with anyio.move_on_after(timeout):\n while True:\n flow_run = await client.read_flow_run(flow_run_id)\n flow_state = flow_run.state\n if flow_state and flow_state.is_final():\n return flow_run\n await anyio.sleep(poll_interval)\n\n return flow_run\n
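For example (the slugified flow and deployment names are placeholders):

from prefect.deployments import run_deployment

# Trigger a run and return immediately with the flow run metadata
flow_run = run_deployment(name="my-flow/my-deployment", timeout=0)

# Or block until the run reaches a final state, polling every 10 seconds
finished_run = run_deployment(name="my-flow/my-deployment", poll_interval=10)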
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/runner/","title":"runner","text":"","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner","title":"prefect.deployments.runner
","text":"Objects for creating and configuring deployments for flows using serve
functionality.
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n # to_deployment creates RunnerDeployment instances\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n serve(slow_deploy, fast_deploy)\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentApplyError","title":"DeploymentApplyError
","text":" Bases: RuntimeError
Raised when an error occurs while applying a deployment.
Source code inprefect/deployments/runner.py
class DeploymentApplyError(RuntimeError):\n \"\"\"\n Raised when an error occurs while applying a deployment.\n \"\"\"\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentImage","title":"DeploymentImage
","text":"Configuration used to build and push a Docker image for a deployment.
Attributes:
Name Type Descriptionname
The name of the Docker image to build, including the registry and repository.
tag
The tag to apply to the built image.
dockerfile
The path to the Dockerfile to use for building the image. If not provided, a default Dockerfile will be generated.
**build_kwargs
Additional keyword arguments to pass to the Docker build request. See the docker-py
documentation for more information.
prefect/deployments/runner.py
class DeploymentImage:\n \"\"\"\n Configuration used to build and push a Docker image for a deployment.\n\n Attributes:\n name: The name of the Docker image to build, including the registry and\n repository.\n tag: The tag to apply to the built image.\n dockerfile: The path to the Dockerfile to use for building the image. If\n not provided, a default Dockerfile will be generated.\n **build_kwargs: Additional keyword arguments to pass to the Docker build request.\n See the [`docker-py` documentation](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build)\n for more information.\n\n \"\"\"\n\n def __init__(self, name, tag=None, dockerfile=\"auto\", **build_kwargs):\n image_name, image_tag = parse_image_tag(name)\n if tag and image_tag:\n raise ValueError(\n f\"Only one tag can be provided - both {image_tag!r} and {tag!r} were\"\n \" provided as tags.\"\n )\n namespace, repository = split_repository_path(image_name)\n # if the provided image name does not include a namespace (registry URL or user/org name),\n # use the default namespace\n if not namespace:\n namespace = PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE.value()\n # join the namespace and repository to create the full image name\n # ignore namespace if it is None\n self.name = \"/\".join(filter(None, [namespace, repository]))\n self.tag = tag or image_tag or slugify(pendulum.now(\"utc\").isoformat())\n self.dockerfile = dockerfile\n self.build_kwargs = build_kwargs\n\n @property\n def reference(self):\n return f\"{self.name}:{self.tag}\"\n\n def build(self):\n full_image_name = self.reference\n build_kwargs = self.build_kwargs.copy()\n build_kwargs[\"context\"] = Path.cwd()\n build_kwargs[\"tag\"] = full_image_name\n build_kwargs[\"pull\"] = build_kwargs.get(\"pull\", True)\n\n if self.dockerfile == \"auto\":\n with generate_default_dockerfile():\n build_image(**build_kwargs)\n else:\n build_kwargs[\"dockerfile\"] = self.dockerfile\n build_image(**build_kwargs)\n\n def push(self):\n with docker_client() as client:\n events = client.api.push(\n repository=self.name, tag=self.tag, stream=True, decode=True\n )\n for event in events:\n if \"error\" in event:\n raise PushError(event[\"error\"])\n
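A usage sketch (the registry, repository, and work pool names are assumptions, and my_flow stands in for a flow-decorated function; extra keyword arguments such as platform are forwarded to the Docker build):

from prefect.deployments import DeploymentImage

my_flow.deploy(
    name="my-deployment",
    work_pool_name="my-docker-pool",
    image=DeploymentImage(
        name="registry.example.com/my-org/my-image",
        tag="v1",
        platform="linux/amd64",  # passed through as a Docker build kwarg
    ),
)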
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.EntrypointType","title":"EntrypointType
","text":" Bases: Enum
Enum representing an entrypoint type.
File path entrypoints are in the format: path/to/file.py:function_name
. Module path entrypoints are in the format: path.to.module.function_name
.
prefect/deployments/runner.py
class EntrypointType(enum.Enum):\n \"\"\"\n Enum representing an entrypoint type.\n\n File path entrypoints are in the format: `path/to/file.py:function_name`.\n Module path entrypoints are in the format: `path.to.module.function_name`.\n \"\"\"\n\n FILE_PATH = \"file_path\"\n MODULE_PATH = \"module_path\"\n
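For instance, a deployment can opt into a module path entrypoint when it is created from a flow (a sketch; my_flow is assumed to be an importable, flow-decorated function):

from prefect.deployments.runner import EntrypointType, RunnerDeployment

deployment = RunnerDeployment.from_flow(
    flow=my_flow,
    name="module-entrypoint-example",
    entrypoint_type=EntrypointType.MODULE_PATH,
)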
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment","title":"RunnerDeployment
","text":" Bases: BaseModel
A Prefect RunnerDeployment definition, used for specifying and building deployments.
Attributes:
Name Type Descriptionname
str
A name for the deployment (required).
version
Optional[str]
An optional version for the deployment; defaults to the flow's version
description
Optional[str]
An optional description of the deployment; defaults to the flow's description
tags
List[str]
An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name
.
schedule
Optional[SCHEDULE_TYPES]
A schedule to run this deployment on, once registered
is_schedule_active
Optional[bool]
Whether or not the schedule is active
parameters
Dict[str, Any]
A dictionary of parameter values to pass to runs created from this deployment
path
Optional[str]
The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
entrypoint
Optional[str]
The path to the entrypoint for the workflow, always relative to the path
parameter_openapi_schema
ParameterSchema
The parameter schema of the flow, including defaults.
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
job_variables
Dict[str, Any]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
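A minimal end-to-end sketch (the deployment name and interval are illustrative):

from prefect import flow
from prefect.deployments.runner import RunnerDeployment

@flow
def my_flow():
    ...

# Build a deployment definition from the flow, then register it with the API
deployment = RunnerDeployment.from_flow(my_flow, name="example", interval=3600)
deployment_id = deployment.apply()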
Source code inprefect/deployments/runner.py
class RunnerDeployment(BaseModel):\n \"\"\"\n A Prefect RunnerDeployment definition, used for specifying and building deployments.\n\n Attributes:\n name: A name for the deployment (required).\n version: An optional version for the deployment; defaults to the flow's version\n description: An optional description of the deployment; defaults to the flow's\n description\n tags: An optional list of tags to associate with this deployment; note that tags\n are used only for organizational purposes. For delegating work to agents,\n see `work_queue_name`.\n schedule: A schedule to run this deployment on, once registered\n is_schedule_active: Whether or not the schedule is active\n parameters: A dictionary of parameter values to pass to runs created from this\n deployment\n path: The path to the working directory for the workflow, relative to remote\n storage or, if stored on a local filesystem, an absolute path\n entrypoint: The path to the entrypoint for the workflow, always relative to the\n `path`\n parameter_openapi_schema: The parameter schema of the flow, including defaults.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n\n class Config:\n arbitrary_types_allowed = True\n\n name: str = Field(..., description=\"The name of the deployment.\")\n flow_name: Optional[str] = Field(\n None, description=\"The name of the underlying flow; typically inferred.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"An optional description of the deployment.\"\n )\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"One of more tags to apply to this deployment.\",\n )\n schedules: Optional[List[MinimalDeploymentSchedule]] = Field(\n default=None,\n description=\"The schedules that should cause this deployment to run.\",\n )\n schedule: Optional[SCHEDULE_TYPES] = None\n paused: Optional[bool] = Field(\n default=None, description=\"Whether or not the deployment is paused.\"\n )\n is_schedule_active: Optional[bool] = Field(\n default=None, description=\"DEPRECATED: Whether or not the schedule is active.\"\n )\n parameters: Dict[str, Any] = Field(default_factory=dict)\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n triggers: List[DeploymentTrigger] = Field(\n default_factory=list,\n description=\"The triggers that should cause this deployment to run.\",\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the Prefect API should enforce the parameter schema for\"\n \" this deployment.\"\n ),\n )\n storage: Optional[RunnerStorage] = Field(\n default=None,\n description=(\n \"The storage object used to retrieve flow code for this deployment.\"\n ),\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=(\n \"The name of the work pool to use for this deployment. 
Only used when\"\n \" the deployment is registered with a built runner.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The name of the work queue to use for this deployment. Only used when\"\n \" the deployment is registered with a built runner.\"\n ),\n )\n job_variables: Dict[str, Any] = Field(\n default_factory=dict,\n description=(\n \"Job variables used to override the default values of a work pool\"\n \" base job template. Only used when the deployment is registered with\"\n \" a built runner.\"\n ),\n )\n _entrypoint_type: EntrypointType = PrivateAttr(\n default=EntrypointType.FILE_PATH,\n )\n _path: Optional[str] = PrivateAttr(\n default=None,\n )\n _parameter_openapi_schema: ParameterSchema = PrivateAttr(\n default_factory=ParameterSchema,\n )\n\n @property\n def entrypoint_type(self) -> EntrypointType:\n return self._entrypoint_type\n\n @validator(\"triggers\", allow_reuse=True)\n def validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n\n @root_validator(pre=True)\n def reconcile_paused(cls, values):\n paused = values.get(\"paused\")\n is_schedule_active = values.get(\"is_schedule_active\")\n\n if paused is not None:\n values[\"paused\"] = paused\n values[\"is_schedule_active\"] = not paused\n elif is_schedule_active is not None:\n values[\"paused\"] = not is_schedule_active\n values[\"is_schedule_active\"] = is_schedule_active\n else:\n values[\"paused\"] = False\n values[\"is_schedule_active\"] = True\n\n return values\n\n @root_validator(pre=True)\n def reconcile_schedules(cls, values):\n schedule = values.get(\"schedule\")\n schedules = values.get(\"schedules\")\n\n if schedules is None and schedule is not None:\n values[\"schedules\"] = [create_minimal_deployment_schedule(schedule)]\n elif schedules is not None and len(schedules) > 0:\n values[\"schedules\"] = normalize_to_minimal_deployment_schedules(schedules)\n\n return values\n\n @sync_compatible\n async def apply(\n self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n ) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n work_pool_name: The name of the work pool to use for this\n deployment.\n image: The registry, name, and tag of the Docker image to\n use for this deployment. 
Only used when the deployment is\n deployed to a work pool.\n\n Returns:\n The ID of the created deployment.\n \"\"\"\n\n work_pool_name = work_pool_name or self.work_pool_name\n\n if image and not work_pool_name:\n raise ValueError(\n \"An image can only be provided when registering a deployment with a\"\n \" work pool.\"\n )\n\n if self.work_queue_name and not work_pool_name:\n raise ValueError(\n \"A work queue can only be provided when registering a deployment with\"\n \" a work pool.\"\n )\n\n if self.job_variables and not work_pool_name:\n raise ValueError(\n \"Job variables can only be provided when registering a deployment\"\n \" with a work pool.\"\n )\n\n async with get_client() as client:\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n create_payload = dict(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=work_pool_name,\n version=self.version,\n paused=self.paused,\n schedules=self.schedules,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n path=self._path,\n entrypoint=self.entrypoint,\n storage_document_id=None,\n infrastructure_document_id=None,\n parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if work_pool_name:\n create_payload[\"infra_overrides\"] = self.job_variables\n if image:\n create_payload[\"infra_overrides\"][\"image\"] = image\n create_payload[\"path\"] = None if self.storage else self._path\n create_payload[\"pull_steps\"] = (\n [self.storage.to_pull_step()] if self.storage else []\n )\n\n try:\n deployment_id = await client.create_deployment(**create_payload)\n except Exception as exc:\n if isinstance(exc, PrefectHTTPStatusError):\n detail = exc.response.json().get(\"detail\")\n if detail:\n raise DeploymentApplyError(detail) from exc\n raise DeploymentApplyError(\n f\"Error while applying deployment: {str(exc)}\"\n ) from exc\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n\n @staticmethod\n def _construct_deployment_schedules(\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n anchor_date: Optional[Union[datetime, str]] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n timezone: Optional[str] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n ) -> Union[List[MinimalDeploymentSchedule], FlexibleScheduleList]:\n \"\"\"\n Construct a schedule or schedules from the provided arguments.\n\n This method serves as a unified interface for creating deployment\n schedules. If `schedules` is provided, it is directly returned. If\n `schedule` is provided, it is encapsulated in a list and returned. 
If\n `interval`, `cron`, or `rrule` are provided, they are used to construct\n schedule objects.\n\n Args:\n interval: An interval on which to schedule runs, either as a single\n value or as a list of values. Accepts numbers (interpreted as\n seconds) or `timedelta` objects. Each value defines a separate\n scheduling interval.\n anchor_date: The anchor date from which interval schedules should\n start. This applies to all intervals if a list is provided.\n cron: A cron expression or a list of cron expressions defining cron\n schedules. Each expression defines a separate cron schedule.\n rrule: An rrule string or a list of rrule strings for scheduling.\n Each string defines a separate recurrence rule.\n timezone: The timezone to apply to the cron or rrule schedules.\n This is a single value applied uniformly to all schedules.\n schedule: A singular schedule object, used for advanced scheduling\n options like specifying a timezone. This is returned as a list\n containing this single schedule.\n schedules: A pre-defined list of schedule objects. If provided,\n this list is returned as-is, bypassing other schedule construction\n logic.\n \"\"\"\n\n num_schedules = sum(\n 1\n for entry in (interval, cron, rrule, schedule, schedules)\n if entry is not None\n )\n if num_schedules > 1:\n raise ValueError(\n \"Only one of interval, cron, rrule, schedule, or schedules can be provided.\"\n )\n elif num_schedules == 0:\n return []\n\n if schedules is not None:\n return schedules\n elif interval or cron or rrule:\n # `interval`, `cron`, and `rrule` can be lists of values. This\n # block figures out which one is not None and uses that to\n # construct the list of schedules via `construct_schedule`.\n parameters = [(\"interval\", interval), (\"cron\", cron), (\"rrule\", rrule)]\n schedule_type, value = [\n param for param in parameters if param[1] is not None\n ][0]\n\n if not isiterable(value):\n value = [value]\n\n return [\n create_minimal_deployment_schedule(\n construct_schedule(\n **{\n schedule_type: v,\n \"timezone\": timezone,\n \"anchor_date\": anchor_date,\n }\n )\n )\n for v in value\n ]\n else:\n return [create_minimal_deployment_schedule(schedule)]\n\n def _set_defaults_from_flow(self, flow: \"Flow\"):\n self._parameter_openapi_schema = parameter_schema(flow)\n\n if not self.version:\n self.version = flow.version\n if not self.description:\n self.description = flow.description\n\n @classmethod\n def from_flow(\n cls,\n flow: \"Flow\",\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n interval: An interval on which to execute the current flow. 
Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n is_schedule_active=is_schedule_active,\n paused=paused,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n if not deployment.entrypoint:\n no_file_location_error = (\n \"Flows defined interactively cannot be deployed. Check out the\"\n \" quickstart guide for help getting started:\"\n \" https://docs.prefect.io/latest/getting-started/quickstart\"\n )\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if entrypoint_type == EntrypointType.MODULE_PATH:\n if mod_name:\n deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n else:\n raise ValueError(\n \"Unable to determine module path for provided flow.\"\n )\n else:\n if not flow_file:\n if not mod_name:\n raise ValueError(no_file_location_error)\n try:\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n except ModuleNotFoundError as exc:\n if \"__prefect_loader__\" in str(exc):\n raise ValueError(\n \"Cannot create a RunnerDeployment from a flow that has been\"\n \" loaded from an entrypoint. 
To deploy a flow via\"\n \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n )\n raise ValueError(no_file_location_error)\n if not flow_file:\n raise ValueError(no_file_location_error)\n\n # set entrypoint\n entry_path = (\n Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n )\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n deployment._path = \".\"\n\n deployment._entrypoint_type = entrypoint_type\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n\n @classmethod\n def from_entrypoint(\n cls,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow located at a given entrypoint.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n job_variables = job_variables or {}\n flow = load_flow_from_entrypoint(entrypoint)\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(Path.cwd())\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n\n @classmethod\n @sync_compatible\n async def from_storage(\n cls,\n storage: RunnerStorage,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"\n Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n local storage location.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n storage: A storage object to use for retrieving flow code. If not provided, a\n URL must be provided.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. 
Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n storage=storage,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(storage.destination).replace(\n tmpdir, \"$STORAGE_BASE_PATH\"\n )\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.apply","title":"apply
async
","text":"Registers this deployment with the API and returns the deployment's ID.
Parameters:
Name Type Description Defaultwork_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
image
Optional[str]
The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.
None
Returns:
Type DescriptionUUID
The ID of the created deployment.
Source code inprefect/deployments/runner.py
@sync_compatible\nasync def apply(\n self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n) -> UUID:\n \"\"\"\n Registers this deployment with the API and returns the deployment's ID.\n\n Args:\n work_pool_name: The name of the work pool to use for this\n deployment.\n image: The registry, name, and tag of the Docker image to\n use for this deployment. Only used when the deployment is\n deployed to a work pool.\n\n Returns:\n The ID of the created deployment.\n \"\"\"\n\n work_pool_name = work_pool_name or self.work_pool_name\n\n if image and not work_pool_name:\n raise ValueError(\n \"An image can only be provided when registering a deployment with a\"\n \" work pool.\"\n )\n\n if self.work_queue_name and not work_pool_name:\n raise ValueError(\n \"A work queue can only be provided when registering a deployment with\"\n \" a work pool.\"\n )\n\n if self.job_variables and not work_pool_name:\n raise ValueError(\n \"Job variables can only be provided when registering a deployment\"\n \" with a work pool.\"\n )\n\n async with get_client() as client:\n flow_id = await client.create_flow_from_name(self.flow_name)\n\n create_payload = dict(\n flow_id=flow_id,\n name=self.name,\n work_queue_name=self.work_queue_name,\n work_pool_name=work_pool_name,\n version=self.version,\n paused=self.paused,\n schedules=self.schedules,\n parameters=self.parameters,\n description=self.description,\n tags=self.tags,\n path=self._path,\n entrypoint=self.entrypoint,\n storage_document_id=None,\n infrastructure_document_id=None,\n parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n enforce_parameter_schema=self.enforce_parameter_schema,\n )\n\n if work_pool_name:\n create_payload[\"infra_overrides\"] = self.job_variables\n if image:\n create_payload[\"infra_overrides\"][\"image\"] = image\n create_payload[\"path\"] = None if self.storage else self._path\n create_payload[\"pull_steps\"] = (\n [self.storage.to_pull_step()] if self.storage else []\n )\n\n try:\n deployment_id = await client.create_deployment(**create_payload)\n except Exception as exc:\n if isinstance(exc, PrefectHTTPStatusError):\n detail = exc.response.json().get(\"detail\")\n if detail:\n raise DeploymentApplyError(detail) from exc\n raise DeploymentApplyError(\n f\"Error while applying deployment: {str(exc)}\"\n ) from exc\n\n if client.server_type == ServerType.CLOUD:\n # The triggers defined in the deployment spec are, essentially,\n # anonymous and attempting truly sync them with cloud is not\n # feasible. Instead, we remove all automations that are owned\n # by the deployment, meaning that they were created via this\n # mechanism below, and then recreate them.\n await client.delete_resource_owned_automations(\n f\"prefect.deployment.{deployment_id}\"\n )\n for trigger in self.triggers:\n trigger.set_deployment_id(deployment_id)\n await client.create_automation(trigger.as_automation())\n\n return deployment_id\n
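For example, registering a deployment against a work pool with a prebuilt image (a sketch; the pool name and image reference are assumptions):

deployment_id = deployment.apply(
    work_pool_name="my-kubernetes-pool",
    image="registry.example.com/my-org/my-image:v1",
)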
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_entrypoint","title":"from_entrypoint
classmethod
","text":"Configure a deployment for a given flow located at a given entrypoint.
Parameters:
Name Type Description Defaultentrypoint
str
The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name
.
name
str
A name for the deployment
requiredinterval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
paused
Optional[bool]
Whether or not to set this deployment as paused.
None
schedules
Optional[FlexibleScheduleList]
A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone
.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\ndef from_entrypoint(\n cls,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow located at a given entrypoint.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n job_variables = job_variables or {}\n flow = load_flow_from_entrypoint(entrypoint)\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(Path.cwd())\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
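A usage sketch (the file path, flow function name, and cron expression are assumptions):

from prefect.deployments.runner import RunnerDeployment

deployment = RunnerDeployment.from_entrypoint(
    entrypoint="./flows/etl.py:etl_flow",
    name="etl-nightly",
    cron="0 2 * * *",  # run at 02:00 every day
)
deployment.apply()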
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_flow","title":"from_flow
classmethod
","text":"Configure a deployment for a given flow.
Parameters:
Name Type Description Defaultflow
Flow
A flow function to deploy
requiredname
str
A name for the deployment
requiredinterval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
paused
Optional[bool]
Whether or not to set this deployment as paused.
None
schedules
Optional[FlexibleScheduleList]
A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone
.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick of a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\ndef from_flow(\n cls,\n flow: \"Flow\",\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n \"\"\"\n Configure a deployment for a given flow.\n\n Args:\n flow: A flow function to deploy\n name: A name for the deployment\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n paused: Whether or not to set this deployment as paused.\n schedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options like `timezone`.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n is_schedule_active=is_schedule_active,\n paused=paused,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n\n if not deployment.entrypoint:\n no_file_location_error = (\n \"Flows defined interactively cannot be deployed. Check out the\"\n \" quickstart guide for help getting started:\"\n \" https://docs.prefect.io/latest/getting-started/quickstart\"\n )\n ## first see if an entrypoint can be determined\n flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n mod_name = getattr(flow, \"__module__\", None)\n if entrypoint_type == EntrypointType.MODULE_PATH:\n if mod_name:\n deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n else:\n raise ValueError(\n \"Unable to determine module path for provided flow.\"\n )\n else:\n if not flow_file:\n if not mod_name:\n raise ValueError(no_file_location_error)\n try:\n module = importlib.import_module(mod_name)\n flow_file = getattr(module, \"__file__\", None)\n except ModuleNotFoundError as exc:\n if \"__prefect_loader__\" in str(exc):\n raise ValueError(\n \"Cannot create a RunnerDeployment from a flow that has been\"\n \" loaded from an entrypoint. To deploy a flow via\"\n \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n )\n raise ValueError(no_file_location_error)\n if not flow_file:\n raise ValueError(no_file_location_error)\n\n # set entrypoint\n entry_path = (\n Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n )\n deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n deployment._path = \".\"\n\n deployment._entrypoint_type = entrypoint_type\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
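As a rough illustration (names hypothetical), from_flow takes a flow object that was defined in a file, so an entrypoint can be determined:
from prefect import flow
from prefect.deployments.runner import RunnerDeployment

@flow
def my_flow():
    print("Hello from my flow!")

# Flows defined interactively (e.g. in a REPL) cannot be deployed this way
deployment = RunnerDeployment.from_flow(
    flow=my_flow,
    name="my-deployment",
    interval=3600,  # seconds between runs
    work_pool_name="my-work-pool",
)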
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_storage","title":"from_storage
async
classmethod
","text":"Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.
Parameters:
Name Type Description Default
entrypoint
str
The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
required
name
str
A name for the deployment
required
storage
RunnerStorage
A storage object to use for retrieving flow code. If not provided, a URL must be provided.
required
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
enforce_parameter_schema
bool
Whether or not the Prefect API should enforce the parameter schema for this deployment.
False
work_pool_name
Optional[str]
The name of the work pool to use for this deployment.
None
work_queue_name
Optional[str]
The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.
None
job_variables
Optional[Dict[str, Any]]
Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
None
Source code in prefect/deployments/runner.py
@classmethod\n@sync_compatible\nasync def from_storage(\n cls,\n storage: RunnerStorage,\n entrypoint: str,\n name: str,\n interval: Optional[\n Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n work_pool_name: Optional[str] = None,\n work_queue_name: Optional[str] = None,\n job_variables: Optional[Dict[str, Any]] = None,\n):\n \"\"\"\n Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n local storage location.\n\n Args:\n entrypoint: The path to a file containing a flow and the name of the flow function in\n the format `./path/to/file.py:flow_func_name`.\n name: A name for the deployment\n storage: A storage object to use for retrieving flow code. If not provided, a\n URL must be provided.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n enforce_parameter_schema: Whether or not the Prefect API should enforce the\n parameter schema for this deployment.\n work_pool_name: The name of the work pool to use for this deployment.\n work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n If not provided the default work queue for the work pool will be used.\n job_variables: Settings used to override the values specified default base job template\n of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n available settings.\n \"\"\"\n from prefect.flows import load_flow_from_entrypoint\n\n constructed_schedules = cls._construct_deployment_schedules(\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedule=schedule,\n schedules=schedules,\n )\n\n job_variables = job_variables or {}\n\n with tempfile.TemporaryDirectory() as tmpdir:\n storage.set_base_path(Path(tmpdir))\n await storage.pull_code()\n\n full_entrypoint = str(storage.destination / entrypoint)\n flow = await from_async.wait_for_call_in_new_thread(\n create_call(load_flow_from_entrypoint, full_entrypoint)\n )\n\n deployment = cls(\n name=Path(name).stem,\n flow_name=flow.name,\n schedule=schedule,\n schedules=constructed_schedules,\n paused=paused,\n is_schedule_active=is_schedule_active,\n tags=tags or [],\n triggers=triggers or [],\n parameters=parameters or {},\n description=description,\n version=version,\n entrypoint=entrypoint,\n enforce_parameter_schema=enforce_parameter_schema,\n storage=storage,\n work_pool_name=work_pool_name,\n work_queue_name=work_queue_name,\n job_variables=job_variables,\n )\n deployment._path = str(storage.destination).replace(\n tmpdir, \"$STORAGE_BASE_PATH\"\n )\n\n cls._set_defaults_from_flow(deployment, flow)\n\n return deployment\n
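A minimal sketch, assuming flow code stored in a Git repository (GitRepository from prefect.runner.storage; the repository URL and entrypoint are hypothetical):
from prefect.deployments.runner import RunnerDeployment
from prefect.runner.storage import GitRepository

storage = GitRepository(url="https://github.com/org/repo.git")

# Pulls the code to a temporary directory, loads the flow from the
# entrypoint, and constructs the deployment with the storage attached
deployment = RunnerDeployment.from_storage(
    storage=storage,
    entrypoint="flows.py:my_flow",
    name="my-deployment",
    work_pool_name="my-work-pool",
)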
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.validate_automation_names","title":"validate_automation_names
","text":"Ensure that each trigger has a name for its automation if none is provided.
Source code inprefect/deployments/runner.py
@validator(\"triggers\", allow_reuse=True)\ndef validate_automation_names(cls, field_value, values, field, config):\n \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n for i, trigger in enumerate(field_value, start=1):\n if trigger.name is None:\n trigger.name = f\"{values['name']}__automation_{i}\"\n\n return field_value\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.deploy","title":"deploy
async
","text":"Deploy the provided list of deployments to dynamic infrastructure via a work pool.
By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.
If you want to use an existing image, you can pass build=False
to skip building and pushing an image.
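For example, a hedged sketch of reusing an existing image (names hypothetical; with build=False the flow code is expected to already be available in the image or via remote storage):
from prefect import deploy, flow

@flow
def my_flow():
    print("Hello!")

if __name__ == "__main__":
    deploy(
        my_flow.to_deployment(name="existing-image-deploy"),
        work_pool_name="my-work-pool",
        image="my-registry/my-image:existing-tag",
        build=False,  # use the image as-is; it is pulled at runtime
    )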
Parameters:
Name Type Description Default
*deployments
RunnerDeployment
A list of deployments to deploy.
()
work_pool_name
Optional[str]
The name of the work pool to use for these deployments. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
None
image
Optional[Union[str, DeploymentImage]]
The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
None
build
bool
Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
True
push
bool
Whether or not to push the built image to a registry.
True
print_next_steps_message
bool
Whether or not to print a message with next steps after deploying the deployments.
True
Returns:
Type Description
List[UUID]
A list of deployment IDs for the created/updated deployments.
Examples:
Deploy a group of flows to a work pool:
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef local_flow():\n print(\"I'm a locally defined flow!\")\n\nif __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n
Source code in prefect/deployments/runner.py
@sync_compatible\nasync def deploy(\n *deployments: RunnerDeployment,\n work_pool_name: Optional[str] = None,\n image: Optional[Union[str, DeploymentImage]] = None,\n build: bool = True,\n push: bool = True,\n print_next_steps_message: bool = True,\n ignore_warnings: bool = False,\n) -> List[UUID]:\n \"\"\"\n Deploy the provided list of deployments to dynamic infrastructure via a\n work pool.\n\n By default, calling this function will build a Docker image for the deployments, push it to a\n registry, and create each deployment via the Prefect API that will run the corresponding\n flow on the given schedule.\n\n If you want to use an existing image, you can pass `build=False` to skip building and pushing\n an image.\n\n Args:\n *deployments: A list of deployments to deploy.\n work_pool_name: The name of the work pool to use for these deployments. Defaults to\n the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n image: The name of the Docker image to build, including the registry and\n repository. Pass a DeploymentImage instance to customize the Dockerfile used\n and build arguments.\n build: Whether or not to build a new image for the flow. If False, the provided\n image will be used as-is and pulled at runtime.\n push: Whether or not to skip pushing the built image to a registry.\n print_next_steps_message: Whether or not to print a message with next steps\n after deploying the deployments.\n\n Returns:\n A list of deployment IDs for the created/updated deployments.\n\n Examples:\n Deploy a group of flows to a work pool:\n\n ```python\n from prefect import deploy, flow\n\n @flow(log_prints=True)\n def local_flow():\n print(\"I'm a locally defined flow!\")\n\n if __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n ```\n \"\"\"\n work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n if not image and not all(\n d.storage or d.entrypoint_type == EntrypointType.MODULE_PATH\n for d in deployments\n ):\n raise ValueError(\n \"Either an image or remote storage location must be provided when deploying\"\n \" a deployment.\"\n )\n\n if not work_pool_name:\n raise ValueError(\n \"A work pool name must be provided when deploying a deployment. Either\"\n \" provide a work pool name when calling `deploy` or set\"\n \" `PREFECT_DEFAULT_WORK_POOL_NAME` in your profile.\"\n )\n\n if image and isinstance(image, str):\n image_name, image_tag = parse_image_tag(image)\n image = DeploymentImage(name=image_name, tag=image_tag)\n\n try:\n async with get_client() as client:\n work_pool = await client.read_work_pool(work_pool_name)\n except ObjectNotFound as exc:\n raise ValueError(\n f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n \" deploying this flow.\"\n ) from exc\n\n is_docker_based_work_pool = get_from_dict(\n work_pool.base_job_template, \"variables.properties.image\", False\n )\n is_block_based_work_pool = get_from_dict(\n work_pool.base_job_template, \"variables.properties.block\", False\n )\n # carve out an exception for block based work pools that only have a block in their base job template\n console = Console()\n if not is_docker_based_work_pool and not is_block_based_work_pool:\n if image:\n raise ValueError(\n f\"Work pool {work_pool_name!r} does not support custom Docker images.\"\n \" Please use a work pool with an `image` variable in its base job template\"\n \" or specify a remote storage location for the flow with `.from_source`.\"\n \" If you are attempting to deploy a flow to a local process work pool,\"\n \" consider using `flow.serve` instead. See the documentation for more\"\n \" information: https://docs.prefect.io/latest/concepts/flows/#serving-a-flow\"\n )\n elif work_pool.type == \"process\" and not ignore_warnings:\n console.print(\n \"Looks like you're deploying to a process work pool. If you're creating a\"\n \" deployment for local development, calling `.serve` on your flow is a great\"\n \" way to get started. See the documentation for more information:\"\n \" https://docs.prefect.io/latest/concepts/flows/#serving-a-flow. \"\n \" Set `ignore_warnings=True` to suppress this message.\",\n style=\"yellow\",\n )\n\n is_managed_pool = work_pool.is_managed_pool\n if is_managed_pool:\n build = False\n push = False\n\n if image and build:\n with Progress(\n SpinnerColumn(),\n TextColumn(f\"Building image {image.reference}...\"),\n transient=True,\n console=console,\n ) as progress:\n docker_build_task = progress.add_task(\"docker_build\", total=1)\n image.build()\n\n progress.update(docker_build_task, completed=1)\n console.print(\n f\"Successfully built image {image.reference!r}\", style=\"green\"\n )\n\n if image and build and push:\n with Progress(\n SpinnerColumn(),\n TextColumn(\"Pushing image...\"),\n transient=True,\n console=console,\n ) as progress:\n docker_push_task = progress.add_task(\"docker_push\", total=1)\n\n image.push()\n\n progress.update(docker_push_task, completed=1)\n\n console.print(f\"Successfully pushed image {image.reference!r}\", style=\"green\")\n\n deployment_exceptions = []\n deployment_ids = []\n image_ref = image.reference if image else None\n for deployment in track(\n deployments,\n description=\"Creating/updating deployments...\",\n console=console,\n transient=True,\n ):\n try:\n deployment_ids.append(\n await deployment.apply(image=image_ref, work_pool_name=work_pool_name)\n )\n except Exception as exc:\n if len(deployments) == 1:\n raise\n deployment_exceptions.append({\"deployment\": deployment, \"exc\": exc})\n\n if deployment_exceptions:\n console.print(\n \"Encountered errors while creating/updating deployments:\\n\",\n style=\"orange_red1\",\n )\n else:\n console.print(\"Successfully created/updated all deployments!\\n\", style=\"green\")\n\n complete_failure = len(deployment_exceptions) == len(deployments)\n\n table = Table(\n title=\"Deployments\",\n show_lines=True,\n )\n\n table.add_column(header=\"Name\", style=\"blue\", no_wrap=True)\n table.add_column(header=\"Status\", style=\"blue\", no_wrap=True)\n table.add_column(header=\"Details\", style=\"blue\")\n\n for deployment in deployments:\n errored_deployment = next(\n (d for d in deployment_exceptions if d[\"deployment\"] == deployment),\n None,\n )\n if 
errored_deployment:\n table.add_row(\n f\"{deployment.flow_name}/{deployment.name}\",\n \"failed\",\n str(errored_deployment[\"exc\"]),\n style=\"red\",\n )\n else:\n table.add_row(f\"{deployment.flow_name}/{deployment.name}\", \"applied\")\n console.print(table)\n\n if print_next_steps_message and not complete_failure:\n if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n console.print(\n \"\\nTo execute flow runs from these deployments, start a worker in a\"\n \" separate terminal that pulls work from the\"\n f\" {work_pool_name!r} work pool:\"\n )\n console.print(\n f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n style=\"blue\",\n )\n console.print(\n \"\\nTo trigger any of these deployments, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n \" [DEPLOYMENT_NAME]\\n[/]\"\n )\n\n if PREFECT_UI_URL:\n console.print(\n \"\\nYou can also trigger your deployments via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n )\n\n return deployment_ids\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/schedules/","title":"schedules","text":"","tags":["Python API","flow runs","deployments","schedules"]},{"location":"api-ref/prefect/deployments/schedules/#prefect.deployments.schedules","title":"prefect.deployments.schedules
","text":"","tags":["Python API","flow runs","deployments","schedules"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core
","text":"Core primitives for running Prefect project steps.
Project steps are YAML representations of Python functions along with their inputs.
Whenever a step is run, the following actions are taken:
- The step's inputs and block / variable references are resolved
- The step's function is imported; if it cannot be found, the requires
keyword is used to install the necessary packagesStepExecutionError
","text":" Bases: Exception
Raised when a step fails to execute.
Source code inprefect/deployments/steps/core.py
class StepExecutionError(Exception):\n \"\"\"\n Raised when a step fails to execute.\n \"\"\"\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step
async
","text":"Runs a step, returns the step's output.
Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.
The 'id' and 'requires' keywords are reserved for specific purposes and will be removed from the inputs before passing to the step function: 'requires' is used to specify packages that should be installed before running the step.
Source code inprefect/deployments/steps/core.py
async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n \"\"\"\n Runs a step, returns the step's output.\n\n Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n inputs before passing to the step function:\n\n This keyword is used to specify packages that should be installed before running the step.\n \"\"\"\n fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n upstream_outputs = upstream_outputs or {}\n\n if len(step.keys()) > 1:\n raise ValueError(\n f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n )\n\n keywords = {\n keyword: inputs.pop(keyword)\n for keyword in RESERVED_KEYWORDS\n if keyword in inputs\n }\n\n inputs = apply_values(inputs, upstream_outputs)\n inputs = await resolve_block_document_references(inputs)\n inputs = await resolve_variables(inputs)\n inputs = apply_values(inputs, os.environ)\n step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n result = await from_async.call_soon_in_new_thread(\n Call.new(step_func, **inputs)\n ).aresult()\n return result\n
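As a rough sketch of calling run_step directly (the step below reuses the built-in set_working_directory step; run_step is a coroutine, so it is awaited here via asyncio):
import asyncio
from prefect.deployments.steps.core import run_step

# A step dict maps a fully qualified function name to its keyword arguments
step = {"prefect.deployments.steps.set_working_directory": {"directory": "."}}

# Resolves inputs, imports the step function, and calls it with the inputs
output = asyncio.run(run_step(step))
print(output)  # {"directory": "."}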
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull
","text":"Core set of steps for specifying a Prefect project pull step.
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone
async
","text":"Clones a git repository into the current working directory.
Parameters:
Name Type Description Default
repository
str
the URL of the repository to clone
required
branch
Optional[str]
the branch to clone; if not provided, the default branch will be used
None
include_submodules
bool
whether to include git submodules when cloning the repository
False
access_token
Optional[str]
an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials
None
credentials
Optional[Block]
a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.
None
Returns:
Name Type Description
dict
dict
a dictionary containing a directory key of the new directory that was created
Raises:
Type Description
CalledProcessError
if the git clone command fails for any reason
Examples:
Clone a public repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n
Clone a branch of a public repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n branch: my-branch\n
Clone a private repository using a GitHubCredentials block:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n
Clone a private repository using an access token:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.
Clone a repository with submodules:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n include_submodules: true\n
Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):
pull:\n - prefect.deployments.steps.git_clone:\n repository: git@github.com:org/repo.git\n
Source code in prefect/deployments/steps/pull.py
@sync_compatible\nasync def git_clone(\n repository: str,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n access_token: Optional[str] = None,\n credentials: Optional[Block] = None,\n) -> dict:\n \"\"\"\n Clones a git repository into the current working directory.\n\n Args:\n repository: the URL of the repository to clone\n branch: the branch to clone; if not provided, the default branch will be used\n include_submodules (bool): whether to include git submodules when cloning the repository\n access_token: an access token to use for cloning the repository; if not provided\n the repository will be cloned using the default git credentials\n credentials: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n credentials to use for cloning the repository.\n\n Returns:\n dict: a dictionary containing a `directory` key of the new directory that was created\n\n Raises:\n subprocess.CalledProcessError: if the git clone command fails for any reason\n\n Examples:\n Clone a public repository:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n ```\n\n Clone a branch of a public repository:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/prefect.git\n branch: my-branch\n ```\n\n Clone a private repository using a GitHubCredentials block:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n ```\n\n Clone a private repository using an access token:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n ```\n Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n in your secret block. Refer to your git providers documentation for the correct authentication schema.\n\n Clone a repository with submodules:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n include_submodules: true\n ```\n\n Clone a repository with an SSH key (note that the SSH key must be added to the worker\n before executing flows):\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n repository: git@github.com:org/repo.git\n ```\n \"\"\"\n if access_token and credentials:\n raise ValueError(\n \"Please provide either an access token or credentials but not both.\"\n )\n\n credentials = {\"access_token\": access_token} if access_token else credentials\n\n storage = GitRepository(\n url=repository,\n credentials=credentials,\n branch=branch,\n include_submodules=include_submodules,\n )\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project
async
","text":"Deprecated. Use git_clone
instead.
prefect/deployments/steps/pull.py
@deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\n@sync_compatible\nasync def git_clone_project(\n repository: str,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n access_token: Optional[str] = None,\n) -> dict:\n \"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n return await git_clone(\n repository=repository,\n branch=branch,\n include_submodules=include_submodules,\n access_token=access_token,\n )\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_from_remote_storage","title":"pull_from_remote_storage
async
","text":"Pulls code from a remote storage location into the current working directory.
Works with protocols supported by fsspec.
Parameters:
Name Type Description Default
url
str
the URL of the remote storage location. Should be a valid fsspec URL. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.
required
**settings
Any
any additional settings to pass to the fsspec filesystem class.
{}
Returns:
Name Type Description
dict
a dictionary containing a directory key of the new directory that was created
Examples:
Pull code from a remote storage location:
pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n
Pull code from a remote storage location with additional settings:
pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n key: {{ prefect.blocks.secret.my-aws-access-key }}}\n secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n
Source code in prefect/deployments/steps/pull.py
async def pull_from_remote_storage(url: str, **settings: Any):\n \"\"\"\n Pulls code from a remote storage location into the current working directory.\n\n Works with protocols supported by `fsspec`.\n\n Args:\n url (str): the URL of the remote storage location. Should be a valid `fsspec` URL.\n Some protocols may require an additional `fsspec` dependency to be installed.\n Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n for more details.\n **settings (Any): any additional settings to pass the `fsspec` filesystem class.\n\n Returns:\n dict: a dictionary containing a `directory` key of the new directory that was created\n\n Examples:\n Pull code from a remote storage location:\n ```yaml\n pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n ```\n\n Pull code from a remote storage location with additional settings:\n ```yaml\n pull:\n - prefect.deployments.steps.pull_from_remote_storage:\n url: s3://my-bucket/my-folder\n key: {{ prefect.blocks.secret.my-aws-access-key }}}\n secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n ```\n \"\"\"\n storage = RemoteStorage(url, **settings)\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(f\"Pulled code from {url!r} into {directory!r}\")\n return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_with_block","title":"pull_with_block
async
","text":"Pulls code using a block.
Parameters:
Name Type Description Default
block_document_name
str
The name of the block document to use
required
block_type_slug
str
The slug of the type of block to use
required Source code inprefect/deployments/steps/pull.py
async def pull_with_block(block_document_name: str, block_type_slug: str):\n \"\"\"\n Pulls code using a block.\n\n Args:\n block_document_name: The name of the block document to use\n block_type_slug: The slug of the type of block to use\n \"\"\"\n full_slug = f\"{block_type_slug}/{block_document_name}\"\n try:\n block = await Block.load(full_slug)\n except Exception:\n deployment_logger.exception(\"Unable to load block '%s'\", full_slug)\n raise\n\n try:\n storage = BlockStorageAdapter(block)\n except Exception:\n deployment_logger.exception(\n \"Unable to create storage adapter for block '%s'\", full_slug\n )\n raise\n\n await storage.pull_code()\n\n directory = str(storage.destination.relative_to(Path.cwd()))\n deployment_logger.info(\n \"Pulled code using block '%s' into '%s'\", full_slug, directory\n )\n return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory
","text":"Sets the working directory; works with both absolute and relative paths.
Parameters:
Name Type Description Default
directory
str
the directory to set as the working directory
required
Returns:
Name Type Description
dict
dict
a dictionary containing a directory key of the directory that was set
prefect/deployments/steps/pull.py
def set_working_directory(directory: str) -> dict:\n \"\"\"\n Sets the working directory; works with both absolute and relative paths.\n\n Args:\n directory (str): the directory to set as the working directory\n\n Returns:\n dict: a dictionary containing a `directory` key of the\n directory that was set\n \"\"\"\n os.chdir(directory)\n return dict(directory=directory)\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility
","text":"Utility project steps that are useful for managing a project's deployment lifecycle.
Steps within this module can be used within a build, push, or pull deployment action.
Use the run_shell_script step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult
","text":" Bases: TypedDict
The result of a run_shell_script step.
Attributes:
Name Type Description
stdout
str
The captured standard output of the script.
stderr
str
The captured standard error of the script.
Source code inprefect/deployments/steps/utility.py
class RunShellScriptResult(TypedDict):\n \"\"\"\n The result of a `run_shell_script` step.\n\n Attributes:\n stdout: The captured standard output of the script.\n stderr: The captured standard error of the script.\n \"\"\"\n\n stdout: str\n stderr: str\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements
async
","text":"Installs dependencies from a requirements.txt file.
Parameters:
Name Type Description Default
requirements_file
str
The requirements.txt to use for installation.
'requirements.txt'
directory
Optional[str]
The directory the requirements.txt file is in. Defaults to the current working directory.
None
stream_output
bool
Whether the output from pip install should be streamed to the console
True
Returns:
Type Description
A dictionary with the keys stdout and stderr containing the output of the pip install command
Raises:
Type Description
CalledProcessError
if the pip install command fails for any reason
Examplepull:\n - prefect.deployments.steps.git_clone:\n id: clone-step\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: {{ clone-step.directory }}\n requirements_file: requirements.txt\n stream_output: False\n
Source code in prefect/deployments/steps/utility.py
async def pip_install_requirements(\n directory: Optional[str] = None,\n requirements_file: str = \"requirements.txt\",\n stream_output: bool = True,\n):\n \"\"\"\n Installs dependencies from a requirements.txt file.\n\n Args:\n requirements_file: The requirements.txt to use for installation.\n directory: The directory the requirements.txt file is in. Defaults to\n the current working directory.\n stream_output: Whether to stream the output from pip install should be\n streamed to the console\n\n Returns:\n A dictionary with the keys `stdout` and `stderr` containing the output\n the `pip install` command\n\n Raises:\n subprocess.CalledProcessError: if the pip install command fails for any reason\n\n Example:\n ```yaml\n pull:\n - prefect.deployments.steps.git_clone:\n id: clone-step\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: {{ clone-step.directory }}\n requirements_file: requirements.txt\n stream_output: False\n ```\n \"\"\"\n stdout_sink = io.StringIO()\n stderr_sink = io.StringIO()\n\n async with open_process(\n [get_sys_executable(), \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n cwd=directory,\n ) as process:\n await _stream_capture_process_output(\n process,\n stdout_sink=stdout_sink,\n stderr_sink=stderr_sink,\n stream_output=stream_output,\n )\n await process.wait()\n\n if process.returncode != 0:\n raise RuntimeError(\n f\"pip_install_requirements failed with error code {process.returncode}:\"\n f\" {stderr_sink.getvalue()}\"\n )\n\n return {\n \"stdout\": stdout_sink.getvalue().strip(),\n \"stderr\": stderr_sink.getvalue().strip(),\n }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script
async
","text":"Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.
Parameters:
Name Type Description Defaultscript
str
The script to run
requireddirectory
Optional[str]
The directory to run the script in. Defaults to the current working directory.
None
env
Optional[Dict[str, str]]
A dictionary of environment variables to set for the script
None
stream_output
bool
Whether to stream the output of the script to stdout/stderr
True
expand_env_vars
bool
Whether to expand environment variables in the script before running it
False
Returns:
Type DescriptionRunShellScriptResult
A dictionary with the keys stdout
and stderr
containing the output of the script
Examples:
Retrieve the short Git commit hash of the current repository to use as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Run a multi-line shell script:
build:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"Hello\"\n echo \"World\"\n
Run a shell script with environment variables:
build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello $NAME\"\n env:\n NAME: World\n
Run a shell script with environment variables expanded from the current environment:
pull:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"User: $USER\"\n echo \"Home Directory: $HOME\"\n stream_output: true\n expand_env_vars: true\n
Run a shell script in a specific directory:
build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello\"\n directory: /path/to/directory\n
Run a script stored in a file:
build:\n - prefect.deployments.steps.run_shell_script:\n script: \"bash path/to/script.sh\"\n
Source code in prefect/deployments/steps/utility.py
async def run_shell_script(\n script: str,\n directory: Optional[str] = None,\n env: Optional[Dict[str, str]] = None,\n stream_output: bool = True,\n expand_env_vars: bool = False,\n) -> RunShellScriptResult:\n \"\"\"\n Runs one or more shell commands in a subprocess. Returns the standard\n output and standard error of the script.\n\n Args:\n script: The script to run\n directory: The directory to run the script in. Defaults to the current\n working directory.\n env: A dictionary of environment variables to set for the script\n stream_output: Whether to stream the output of the script to\n stdout/stderr\n expand_env_vars: Whether to expand environment variables in the script\n before running it\n\n Returns:\n A dictionary with the keys `stdout` and `stderr` containing the output\n of the script\n\n Examples:\n Retrieve the short Git commit hash of the current repository to use as\n a Docker image tag:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n image_tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n ```\n\n Run a multi-line shell script:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"Hello\"\n echo \"World\"\n ```\n\n Run a shell script with environment variables:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello $NAME\"\n env:\n NAME: World\n ```\n\n Run a shell script with environment variables expanded\n from the current environment:\n ```yaml\n pull:\n - prefect.deployments.steps.run_shell_script:\n script: |\n echo \"User: $USER\"\n echo \"Home Directory: $HOME\"\n stream_output: true\n expand_env_vars: true\n ```\n\n Run a shell script in a specific directory:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: echo \"Hello\"\n directory: /path/to/directory\n ```\n\n Run a script stored in a file:\n ```yaml\n build:\n - prefect.deployments.steps.run_shell_script:\n script: \"bash path/to/script.sh\"\n ```\n \"\"\"\n current_env = os.environ.copy()\n current_env.update(env or {})\n\n commands = script.splitlines()\n stdout_sink = io.StringIO()\n stderr_sink = io.StringIO()\n\n for command in commands:\n if expand_env_vars:\n # Expand environment variables in command and provided environment\n command = string.Template(command).safe_substitute(current_env)\n split_command = shlex.split(command, posix=sys.platform != \"win32\")\n if not split_command:\n continue\n async with open_process(\n split_command,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n cwd=directory,\n env=current_env,\n ) as process:\n await _stream_capture_process_output(\n process,\n stdout_sink=stdout_sink,\n stderr_sink=stderr_sink,\n stream_output=stream_output,\n )\n\n await process.wait()\n\n if process.returncode != 0:\n raise RuntimeError(\n f\"`run_shell_script` failed with error code {process.returncode}:\"\n f\" {stderr_sink.getvalue()}\"\n )\n\n return {\n \"stdout\": stdout_sink.getvalue().strip(),\n \"stderr\": stderr_sink.getvalue().strip(),\n }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/input/actions/","title":"actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions","title":"prefect.input.actions
","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.create_flow_run_input","title":"create_flow_run_input
async
","text":"Create a new flow run input. The given value
will be serialized to JSON and stored as a flow run input value.
Parameters:
Name Type Description Default
key
str
the flow run input key
required
value
Any
the flow run input value
required
flow_run_id
UUID
the optional flow run ID; if not given, defaults to the flow run ID pulled from the current context.
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def create_flow_run_input(\n key: str,\n value: Any,\n flow_run_id: Optional[UUID] = None,\n sender: Optional[str] = None,\n client: \"PrefectClient\" = None,\n):\n \"\"\"\n Create a new flow run input. The given `value` will be serialized to JSON\n and stored as a flow run input value.\n\n Args:\n - key (str): the flow run input key\n - value (Any): the flow run input value\n - flow_run_id (UUID): the, optional, flow run ID. If not given will\n default to pulling the flow run ID from the current context.\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n await client.create_flow_run_input(\n flow_run_id=flow_run_id,\n key=key,\n sender=sender,\n value=orjson.dumps(value).decode(),\n )\n
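A minimal usage sketch (the key and value are arbitrary); because create_flow_run_input is sync-compatible, it can be called directly inside a synchronous flow, where the flow run ID is discovered from the current context:
from prefect import flow
from prefect.input.actions import create_flow_run_input

@flow
def my_flow():
    # Stores {"answer": 42} as a JSON-serialized input on this flow run
    create_flow_run_input(key="my-key", value={"answer": 42})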
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Delete a flow run input.
Parameters:
Name Type Description Default
flow_run_id
UUID
the flow run ID
required
key
str
the flow run input key
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def delete_flow_run_input(\n key: str, flow_run_id: Optional[UUID] = None, client: \"PrefectClient\" = None\n):\n \"\"\"Delete a flow run input.\n\n Args:\n - flow_run_id (UUID): the flow run ID\n - key (str): the flow run input key\n \"\"\"\n\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n await client.delete_flow_run_input(flow_run_id=flow_run_id, key=key)\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.read_flow_run_input","title":"read_flow_run_input
async
","text":"Read a flow run input.
Parameters:
Name Type Description Default
key
str
the flow run input key
required
flow_run_id
UUID
the flow run ID
required Source code inprefect/input/actions.py
@sync_compatible\n@inject_client\nasync def read_flow_run_input(\n key: str, flow_run_id: Optional[UUID] = None, client: \"PrefectClient\" = None\n) -> Any:\n \"\"\"Read a flow run input.\n\n Args:\n - key (str): the flow run input key\n - flow_run_id (UUID): the flow run ID\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n\n try:\n value = await client.read_flow_run_input(flow_run_id=flow_run_id, key=key)\n except PrefectHTTPStatusError as exc:\n if exc.response.status_code == 404:\n return None\n raise\n else:\n return orjson.loads(value)\n
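Continuing the sketch above, the stored value can be read back (returns None if the key does not exist):
from prefect import flow
from prefect.input.actions import read_flow_run_input

@flow
def my_flow():
    value = read_flow_run_input(key="my-key")  # deserialized from JSON
    print(value)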
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/run_input/","title":"run_input","text":"","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input","title":"prefect.input.run_input
","text":"This module contains functions that allow sending type-checked RunInput
data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for systems that require real-time interaction and efficient data handling, such as distributed or microservices-oriented systems that need continuous data synchronization and processing.
The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back. The sender flow then prints the result.
Sender flow:
import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n number: int\n\n\n@flow\nasync def sender_flow(receiver_flow_run_id: UUID):\n logger = get_run_logger()\n\n the_number = random.randint(1, 100)\n\n await NumberData(number=the_number).send_to(receiver_flow_run_id)\n\n receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)\n squared = await receiver.next()\n\n logger.info(f\"{the_number} squared is {squared.number}\")\n
Receiver flow:
import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n number: int\n\n\n@flow\nasync def receiver_flow():\n async for data in NumberData.receive():\n squared = data.number ** 2\n data.respond(NumberData(number=squared))\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput","title":"AutomaticRunInput
","text":" Bases: RunInput
, Generic[T]
prefect/input/run_input.py
class AutomaticRunInput(RunInput, Generic[T]):\n value: T\n\n @classmethod\n @sync_compatible\n async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n instance = await super().load(keyset, flow_run_id=flow_run_id)\n return instance.value\n\n @classmethod\n def subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n \"\"\"\n Create a new `AutomaticRunInput` subclass from the given type.\n \"\"\"\n fields = {\"value\": (_type, ...)}\n\n # Sending a value to a flow run that relies on an AutomaticRunInput will\n # produce a key prefix that includes the type name. For example, if the\n # value is a list, the key will include \"list\" as the type. If the user\n # then tries to receive the value with a type annotation like List[int],\n # we need to find the key we saved with \"list\" as the type (not\n # \"List[int]\"). Calling __name__.lower() on a type annotation like\n # List[int] produces the string \"list\", which is what we need.\n if hasattr(_type, \"__name__\"):\n type_prefix = _type.__name__.lower()\n elif hasattr(_type, \"_name\"):\n # On Python 3.9 and earlier, type annotation values don't have a\n # __name__ attribute, but they do have a _name.\n type_prefix = _type._name.lower()\n else:\n # If we can't identify a type name that we can use as a key\n # prefix that will match an input, we'll have to use\n # \"AutomaticRunInput\" as the generic name. This will match all\n # automatic inputs sent to the flow run, rather than a specific\n # type.\n type_prefix = \"\"\n class_name = f\"{type_prefix}AutomaticRunInput\"\n\n new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n class_name, **fields, __base__=AutomaticRunInput\n )\n return new_cls\n\n @classmethod\n def receive(cls, *args, **kwargs):\n if kwargs.get(\"key_prefix\") is None:\n kwargs[\"key_prefix\"] = f\"{cls.__name__.lower()}-auto\"\n\n return GetAutomaticInputHandler(run_input_cls=cls, *args, **kwargs)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.load","title":"load
async
classmethod
","text":"Load the run input response from the given key.
Parameters:
Name Type Description Default
keyset
Keyset
the keyset to load the input for
required
flow_run_id
UUID
the flow run ID to load the input for
required Source code inprefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n instance = await super().load(keyset, flow_run_id=flow_run_id)\n return instance.value\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.subclass_from_type","title":"subclass_from_type
classmethod
","text":"Create a new AutomaticRunInput
subclass from the given type.
prefect/input/run_input.py
@classmethod\ndef subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n \"\"\"\n Create a new `AutomaticRunInput` subclass from the given type.\n \"\"\"\n fields = {\"value\": (_type, ...)}\n\n # Sending a value to a flow run that relies on an AutomaticRunInput will\n # produce a key prefix that includes the type name. For example, if the\n # value is a list, the key will include \"list\" as the type. If the user\n # then tries to receive the value with a type annotation like List[int],\n # we need to find the key we saved with \"list\" as the type (not\n # \"List[int]\"). Calling __name__.lower() on a type annotation like\n # List[int] produces the string \"list\", which is what we need.\n if hasattr(_type, \"__name__\"):\n type_prefix = _type.__name__.lower()\n elif hasattr(_type, \"_name\"):\n # On Python 3.9 and earlier, type annotation values don't have a\n # __name__ attribute, but they do have a _name.\n type_prefix = _type._name.lower()\n else:\n # If we can't identify a type name that we can use as a key\n # prefix that will match an input, we'll have to use\n # \"AutomaticRunInput\" as the generic name. This will match all\n # automatic inputs sent to the flow run, rather than a specific\n # type.\n type_prefix = \"\"\n class_name = f\"{type_prefix}AutomaticRunInput\"\n\n new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n class_name, **fields, __base__=AutomaticRunInput\n )\n return new_cls\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput","title":"RunInput
","text":" Bases: BaseModel
prefect/input/run_input.py
class RunInput(pydantic.BaseModel):\n class Config:\n extra = \"forbid\"\n\n _description: Optional[str] = pydantic.PrivateAttr(default=None)\n _metadata: RunInputMetadata = pydantic.PrivateAttr()\n\n @property\n def metadata(self) -> RunInputMetadata:\n return self._metadata\n\n @classmethod\n def keyset_from_type(cls) -> Keyset:\n return keyset_from_base_key(cls.__name__.lower())\n\n @classmethod\n @sync_compatible\n async def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Save the run input response to the given key.\n\n Args:\n - keyset (Keyset): the keyset to save the input for\n - flow_run_id (UUID, optional): the flow run ID to save the input for\n \"\"\"\n\n if HAS_PYDANTIC_V2:\n schema = create_v2_schema(cls.__name__, model_base=cls)\n else:\n schema = cls.schema(by_alias=True)\n\n await create_flow_run_input(\n key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n )\n\n description = cls._description if isinstance(cls._description, str) else None\n if description:\n await create_flow_run_input(\n key=keyset[\"description\"],\n value=description,\n flow_run_id=flow_run_id,\n )\n\n @classmethod\n @sync_compatible\n async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n if value:\n instance = cls(**value)\n else:\n instance = cls()\n instance._metadata = RunInputMetadata(\n key=keyset[\"response\"], sender=None, receiver=flow_run_id\n )\n return instance\n\n @classmethod\n def load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n \"\"\"\n Load the run input from a FlowRunInput object.\n\n Args:\n - flow_run_input (FlowRunInput): the flow run input to load the input for\n \"\"\"\n instance = cls(**flow_run_input.decoded_value)\n instance._metadata = RunInputMetadata(\n key=flow_run_input.key,\n sender=flow_run_input.sender,\n receiver=flow_run_input.flow_run_id,\n )\n return instance\n\n @classmethod\n def with_initial_data(\n cls: Type[R], description: Optional[str] = None, **kwargs: Any\n ) -> Type[R]:\n \"\"\"\n Create a new `RunInput` subclass with the given initial data as field\n defaults.\n\n Args:\n - description (str, optional): a description to show when resuming\n a flow run that requires input\n - kwargs (Any): the initial data to populate the subclass\n \"\"\"\n fields = {}\n for key, value in kwargs.items():\n fields[key] = (type(value), value)\n model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n if description is not None:\n model._description = description\n\n return model\n\n @sync_compatible\n async def respond(\n self,\n run_input: \"RunInput\",\n sender: Optional[str] = None,\n key_prefix: Optional[str] = None,\n ):\n flow_run_id = None\n if self.metadata.sender and self.metadata.sender.startswith(\"prefect.flow-run\"):\n _, _, id = self.metadata.sender.rpartition(\".\")\n flow_run_id = UUID(id)\n\n if not flow_run_id:\n raise RuntimeError(\n \"Cannot respond to an input that was not sent by a flow run.\"\n )\n\n await _send_input(\n flow_run_id=flow_run_id,\n run_input=run_input,\n sender=sender,\n key_prefix=key_prefix,\n )\n\n @sync_compatible\n async def send_to(\n self,\n flow_run_id: UUID,\n sender: Optional[str] = None,\n key_prefix: 
Optional[str] = None,\n ):\n await _send_input(\n flow_run_id=flow_run_id,\n run_input=self,\n sender=sender,\n key_prefix=key_prefix,\n )\n\n @classmethod\n def receive(\n cls,\n timeout: Optional[float] = 3600,\n poll_interval: float = 10,\n raise_timeout_error: bool = False,\n exclude_keys: Optional[Set[str]] = None,\n key_prefix: Optional[str] = None,\n flow_run_id: Optional[UUID] = None,\n ):\n if key_prefix is None:\n key_prefix = f\"{cls.__name__.lower()}-auto\"\n\n return GetInputHandler(\n run_input_cls=cls,\n key_prefix=key_prefix,\n timeout=timeout,\n poll_interval=poll_interval,\n raise_timeout_error=raise_timeout_error,\n exclude_keys=exclude_keys,\n flow_run_id=flow_run_id,\n )\n\n @classmethod\n def subclass_from_base_model_type(\n cls, model_cls: Type[pydantic.BaseModel]\n ) -> Type[\"RunInput\"]:\n \"\"\"\n Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n subclass.\n\n Args:\n - model_cls (pydantic.BaseModel subclass): the class from which\n to create the new `RunInput` subclass\n \"\"\"\n return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {}) # type: ignore\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load","title":"load
async
classmethod
","text":"Load the run input response from the given key.
Parameters:
- keyset (Keyset): the keyset to load the input for (required)
- flow_run_id (UUID, optional): the flow run ID to load the input for
Source code in prefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Load the run input response from the given key.\n\n Args:\n - keyset (Keyset): the keyset to load the input for\n - flow_run_id (UUID, optional): the flow run ID to load the input for\n \"\"\"\n flow_run_id = ensure_flow_run_id(flow_run_id)\n value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n if value:\n instance = cls(**value)\n else:\n instance = cls()\n instance._metadata = RunInputMetadata(\n key=keyset[\"response\"], sender=None, receiver=flow_run_id\n )\n return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load_from_flow_run_input","title":"load_from_flow_run_input
classmethod
","text":"Load the run input from a FlowRunInput object.
Parameters:
- flow_run_input (FlowRunInput): the flow run input to load the input for (required)
Source code in prefect/input/run_input.py
@classmethod\ndef load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n \"\"\"\n Load the run input from a FlowRunInput object.\n\n Args:\n - flow_run_input (FlowRunInput): the flow run input to load the input for\n \"\"\"\n instance = cls(**flow_run_input.decoded_value)\n instance._metadata = RunInputMetadata(\n key=flow_run_input.key,\n sender=flow_run_input.sender,\n receiver=flow_run_input.flow_run_id,\n )\n return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.save","title":"save
async
classmethod
","text":"Save the run input response to the given key.
Parameters:
- keyset (Keyset): the keyset to save the input for (required)
- flow_run_id (UUID, optional): the flow run ID to save the input for
Source code in prefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n \"\"\"\n Save the run input response to the given key.\n\n Args:\n - keyset (Keyset): the keyset to save the input for\n - flow_run_id (UUID, optional): the flow run ID to save the input for\n \"\"\"\n\n if HAS_PYDANTIC_V2:\n schema = create_v2_schema(cls.__name__, model_base=cls)\n else:\n schema = cls.schema(by_alias=True)\n\n await create_flow_run_input(\n key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n )\n\n description = cls._description if isinstance(cls._description, str) else None\n if description:\n await create_flow_run_input(\n key=keyset[\"description\"],\n value=description,\n flow_run_id=flow_run_id,\n )\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.subclass_from_base_model_type","title":"subclass_from_base_model_type
classmethod
","text":"Create a new RunInput
subclass from the given pydantic.BaseModel
subclass.
Parameters:
- model_cls (pydantic.BaseModel subclass): the class from which to create the new RunInput subclass (required)
prefect/input/run_input.py
@classmethod\ndef subclass_from_base_model_type(\n cls, model_cls: Type[pydantic.BaseModel]\n) -> Type[\"RunInput\"]:\n \"\"\"\n Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n subclass.\n\n Args:\n - model_cls (pydantic.BaseModel subclass): the class from which\n to create the new `RunInput` subclass\n \"\"\"\n return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {}) # type: ignore\n
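For example, a minimal sketch (Person is a hypothetical model):
import pydantic\nfrom prefect.input import RunInput\n\nclass Person(pydantic.BaseModel):\n    name: str\n    age: int\n\n# The new class combines RunInput behavior with the model's fields\nPersonRunInput = RunInput.subclass_from_base_model_type(Person)\nassert PersonRunInput.__name__ == \"PersonRunInput\"\nassert issubclass(PersonRunInput, RunInput)\nassert issubclass(PersonRunInput, Person)\n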
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.with_initial_data","title":"with_initial_data
classmethod
","text":"Create a new RunInput
subclass with the given initial data as field defaults.
Parameters:
- description (str, optional): a description to show when resuming a flow run that requires input
- kwargs (Any): the initial data to populate the subclass
Source code in prefect/input/run_input.py
@classmethod\ndef with_initial_data(\n cls: Type[R], description: Optional[str] = None, **kwargs: Any\n) -> Type[R]:\n \"\"\"\n Create a new `RunInput` subclass with the given initial data as field\n defaults.\n\n Args:\n - description (str, optional): a description to show when resuming\n a flow run that requires input\n - kwargs (Any): the initial data to populate the subclass\n \"\"\"\n fields = {}\n for key, value in kwargs.items():\n fields[key] = (type(value), value)\n model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n if description is not None:\n model._description = description\n\n return model\n
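A typical use is seeding defaults for a pause that waits for input; a sketch, assuming the pause_flow_run(wait_for_input=...) pattern from the top-level prefect package:
from prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n@flow\nasync def greet_user():\n    # The subclass returned by with_initial_data pre-populates the form\n    # shown when the paused run is resumed\n    user = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(\n            description=\"Please confirm your details.\",\n            name=\"anonymous\",\n            age=0,\n        )\n    )\n    print(f\"Hello, {user.name}!\")\n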
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_base_key","title":"keyset_from_base_key
","text":"Get the keyset for the given base key.
Parameters:
- base_key (str): the base key to get the keyset for (required)
Returns:
- Keyset: the keyset
prefect/input/run_input.py
def keyset_from_base_key(base_key: str) -> Keyset:\n \"\"\"\n Get the keyset for the given base key.\n\n Args:\n - base_key (str): the base key to get the keyset for\n\n Returns:\n - Dict[str, str]: the keyset\n \"\"\"\n return {\n \"description\": f\"{base_key}-description\",\n \"response\": f\"{base_key}-response\",\n \"schema\": f\"{base_key}-schema\",\n }\n
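For instance, given the implementation above:
from prefect.input.run_input import keyset_from_base_key\n\nassert keyset_from_base_key(\"numberdata\") == {\n    \"description\": \"numberdata-description\",\n    \"response\": \"numberdata-response\",\n    \"schema\": \"numberdata-schema\",\n}\n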
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_paused_state","title":"keyset_from_paused_state
","text":"Get the keyset for the given Paused state.
Parameters:
- state (State): the state to get the keyset for (required)
Source code in prefect/input/run_input.py
def keyset_from_paused_state(state: \"State\") -> Keyset:\n \"\"\"\n Get the keyset for the given Paused state.\n\n Args:\n - state (State): the state to get the keyset for\n \"\"\"\n\n if not state.is_paused():\n raise RuntimeError(f\"{state.type.value!r} is unsupported.\")\n\n base_key = f\"{state.name.lower()}-{str(state.state_details.pause_key)}\"\n return keyset_from_base_key(base_key)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.run_input_subclass_from_type","title":"run_input_subclass_from_type
","text":"Create a new RunInput
subclass from the given type.
prefect/input/run_input.py
def run_input_subclass_from_type(\n _type: Union[Type[R], Type[T], pydantic.BaseModel],\n) -> Union[Type[AutomaticRunInput[T]], Type[R]]:\n \"\"\"\n Create a new `RunInput` subclass from the given type.\n \"\"\"\n try:\n if issubclass(_type, RunInput):\n return cast(Type[R], _type)\n elif issubclass(_type, pydantic.BaseModel):\n return cast(Type[R], RunInput.subclass_from_base_model_type(_type))\n except TypeError:\n pass\n\n # Could be something like a typing._GenericAlias or any other type that\n # isn't a `RunInput` subclass or `pydantic.BaseModel` subclass. Try passing\n # it to AutomaticRunInput to see if we can create a model from it.\n return cast(Type[AutomaticRunInput[T]], AutomaticRunInput.subclass_from_type(_type))\n
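A quick sketch of the dispatch behavior (the class names for automatic inputs follow the type-prefix logic in subclass_from_type above):
from typing import List\n\nimport pydantic\nfrom prefect.input import RunInput\nfrom prefect.input.run_input import run_input_subclass_from_type\n\nclass MyInput(RunInput):\n    name: str\n\nclass MyModel(pydantic.BaseModel):\n    name: str\n\n# RunInput subclasses pass through unchanged\nassert run_input_subclass_from_type(MyInput) is MyInput\n\n# BaseModel subclasses are wrapped in a new RunInput subclass\nassert run_input_subclass_from_type(MyModel).__name__ == \"MyModelRunInput\"\n\n# Other annotations fall back to an AutomaticRunInput subclass\nassert run_input_subclass_from_type(List[int]).__name__ == \"listAutomaticRunInput\"\n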
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/logging/configuration/","title":"configuration","text":"\"\"\"
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration","title":"prefect.logging.configuration
","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.load_logging_config","title":"load_logging_config
","text":"Loads logging configuration from a path allowing override from the environment
Source code inprefect/logging/configuration.py
def load_logging_config(path: Path) -> dict:\n \"\"\"\n Loads logging configuration from a path allowing override from the environment\n \"\"\"\n template = string.Template(path.read_text())\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n config = yaml.safe_load(\n # Substitute settings into the template in format $SETTING / ${SETTING}\n template.substitute(\n {\n setting.name: str(setting.value())\n for setting in SETTING_VARIABLES.values()\n if setting.value() is not None\n }\n )\n )\n\n # Load overrides from the environment\n flat_config = dict_to_flatdict(config)\n\n for key_tup, val in flat_config.items():\n env_val = os.environ.get(\n # Generate a valid environment variable with nesting indicated with '_'\n to_envvar(\"PREFECT_LOGGING_\" + \"_\".join(key_tup)).upper()\n )\n if env_val:\n val = env_val\n\n # reassign the updated value\n flat_config[key_tup] = val\n\n return flatdict_to_dict(flat_config)\n
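For example, a nested key in the logging config flattens into an underscore-separated environment variable; a sketch, assuming the default config nests loggers.prefect.level:
import os\nfrom prefect.logging.configuration import (\n    DEFAULT_LOGGING_SETTINGS_PATH,\n    load_logging_config,\n)\n\n# The key tuple (\"loggers\", \"prefect\", \"level\") flattens to this variable\n# name, so the environment value replaces the file's value on load\nos.environ[\"PREFECT_LOGGING_LOGGERS_PREFECT_LEVEL\"] = \"DEBUG\"\n\nconfig = load_logging_config(DEFAULT_LOGGING_SETTINGS_PATH)\nassert config[\"loggers\"][\"prefect\"][\"level\"] == \"DEBUG\"\n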
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.setup_logging","title":"setup_logging
","text":"Sets up logging.
Returns the config used.
Source code inprefect/logging/configuration.py
def setup_logging(incremental: Optional[bool] = None) -> dict:\n \"\"\"\n Sets up logging.\n\n Returns the config used.\n \"\"\"\n global PROCESS_LOGGING_CONFIG\n\n # If the user has specified a logging path and it exists we will ignore the\n # default entirely rather than dealing with complex merging\n config = load_logging_config(\n (\n PREFECT_LOGGING_SETTINGS_PATH.value()\n if PREFECT_LOGGING_SETTINGS_PATH.value().exists()\n else DEFAULT_LOGGING_SETTINGS_PATH\n )\n )\n\n incremental = (\n incremental if incremental is not None else bool(PROCESS_LOGGING_CONFIG)\n )\n\n # Perform an incremental update if setup has already been run\n config.setdefault(\"incremental\", incremental)\n\n try:\n logging.config.dictConfig(config)\n except ValueError:\n if incremental:\n setup_logging(incremental=False)\n\n # Copy configuration of the 'prefect.extra' logger to the extra loggers\n extra_config = logging.getLogger(\"prefect.extra\")\n\n for logger_name in PREFECT_LOGGING_EXTRA_LOGGERS.value():\n logger = logging.getLogger(logger_name)\n for handler in extra_config.handlers:\n if not config[\"incremental\"]:\n logger.addHandler(handler)\n if logger.level == logging.NOTSET:\n logger.setLevel(extra_config.level)\n logger.propagate = extra_config.propagate\n\n PROCESS_LOGGING_CONFIG = config\n\n return config\n
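The PREFECT_LOGGING_EXTRA_LOGGERS setting read above takes a comma-separated list of logger names; a sketch of wiring third-party loggers into Prefect's handlers (the names below are placeholders, and the variable must be set before Prefect loads its settings):
import os\n\n# Placeholder logger names; these receive the \"prefect.extra\" handlers\n# and level when setup_logging() runs\nos.environ[\"PREFECT_LOGGING_EXTRA_LOGGERS\"] = \"boto3,snowflake\"\n\nfrom prefect.logging.configuration import setup_logging\n\nsetup_logging()\n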
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/formatters/","title":"formatters","text":"\"\"\"
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters","title":"prefect.logging.formatters
","text":"","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.JsonFormatter","title":"JsonFormatter
","text":" Bases: Formatter
Formats log records as a JSON string.
The format may be specified as \"pretty\" to format the JSON with indents and newlines.
Source code inprefect/logging/formatters.py
class JsonFormatter(logging.Formatter):\n \"\"\"\n Formats log records as a JSON string.\n\n The format may be specified as \"pretty\" to format the JSON with indents and\n newlines.\n \"\"\"\n\n def __init__(self, fmt, dmft, style) -> None: # noqa\n super().__init__()\n\n if fmt not in [\"pretty\", \"default\"]:\n raise ValueError(\"Format must be either 'pretty' or 'default'.\")\n\n self.serializer = JSONSerializer(\n jsonlib=\"orjson\",\n object_encoder=\"pydantic.json.pydantic_encoder\",\n dumps_kwargs={\"option\": orjson.OPT_INDENT_2} if fmt == \"pretty\" else {},\n )\n\n def format(self, record: logging.LogRecord) -> str:\n record_dict = record.__dict__.copy()\n\n # GCP severity detection compatibility\n record_dict.setdefault(\"severity\", record.levelname)\n\n # replace any exception tuples returned by `sys.exc_info()`\n # with a JSON-serializable `dict`.\n if record.exc_info:\n record_dict[\"exc_info\"] = format_exception_info(record.exc_info)\n\n log_json_bytes = self.serializer.dumps(record_dict)\n\n # JSONSerializer returns bytes; decode to string to conform to\n # the `logging.Formatter.format` interface\n return log_json_bytes.decode()\n
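A minimal sketch of using the formatter directly (note the positional fmt, dmft, style signature above; only fmt is consulted):
import logging\nfrom prefect.logging.formatters import JsonFormatter\n\nhandler = logging.StreamHandler()\nhandler.setFormatter(JsonFormatter(\"pretty\", None, \"%\"))\n\nlogger = logging.getLogger(\"json-demo\")\nlogger.addHandler(handler)\nlogger.setLevel(logging.INFO)\n\n# Emits the record's attributes as indented JSON, including a \"severity\" field\nlogger.info(\"hello\")\n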
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.PrefectFormatter","title":"PrefectFormatter
","text":" Bases: Formatter
prefect/logging/formatters.py
class PrefectFormatter(logging.Formatter):\n def __init__(\n self,\n format=None,\n datefmt=None,\n style=\"%\",\n validate=True,\n *,\n defaults=None,\n task_run_fmt: str = None,\n flow_run_fmt: str = None,\n ) -> None:\n \"\"\"\n Implementation of the standard Python formatter with support for multiple\n message formats.\n\n \"\"\"\n # See https://github.com/python/cpython/blob/c8c6113398ee9a7867fe9b08bc539cceb61e2aaa/Lib/logging/__init__.py#L546\n # for implementation details\n\n init_kwargs = {}\n style_kwargs = {}\n\n # defaults added in 3.10\n if sys.version_info >= (3, 10):\n init_kwargs[\"defaults\"] = defaults\n style_kwargs[\"defaults\"] = defaults\n\n # validate added in 3.8\n if sys.version_info >= (3, 8):\n init_kwargs[\"validate\"] = validate\n else:\n validate = False\n\n super().__init__(format, datefmt, style, **init_kwargs)\n\n self.flow_run_fmt = flow_run_fmt\n self.task_run_fmt = task_run_fmt\n\n # Retrieve the style class from the base class to avoid importing private\n # `_STYLES` mapping\n style_class = type(self._style)\n\n self._flow_run_style = (\n style_class(flow_run_fmt, **style_kwargs) if flow_run_fmt else self._style\n )\n self._task_run_style = (\n style_class(task_run_fmt, **style_kwargs) if task_run_fmt else self._style\n )\n if validate:\n self._flow_run_style.validate()\n self._task_run_style.validate()\n\n def formatMessage(self, record: logging.LogRecord):\n if record.name == \"prefect.flow_runs\":\n style = self._flow_run_style\n elif record.name == \"prefect.task_runs\":\n style = self._task_run_style\n else:\n style = self._style\n\n return style.format(record)\n
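A sketch of the multi-format behavior: records from the prefect.flow_runs and prefect.task_runs loggers get their own format strings, while everything else uses the base format (the flow_run_name and task_run_name fields are attached by the run log adapters documented below):
from prefect.logging.formatters import PrefectFormatter\n\nformatter = PrefectFormatter(\n    format=\"%(asctime)s | %(message)s\",\n    flow_run_fmt=\"%(asctime)s | flow %(flow_run_name)s | %(message)s\",\n    task_run_fmt=\"%(asctime)s | task %(task_run_name)s | %(message)s\",\n)\n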
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/handlers/","title":"handlers","text":"\"\"\"
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers","title":"prefect.logging.handlers
","text":"","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler","title":"APILogHandler
","text":" Bases: Handler
A logging handler that sends logs to the Prefect API.
Sends log records to the APILogWorker
which manages sending batches of logs in the background.
prefect/logging/handlers.py
class APILogHandler(logging.Handler):\n \"\"\"\n A logging handler that sends logs to the Prefect API.\n\n Sends log records to the `APILogWorker` which manages sending batches of logs in\n the background.\n \"\"\"\n\n @classmethod\n def flush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n Use `aflush` from async contexts instead.\n \"\"\"\n loop = get_running_loop()\n if loop:\n if in_global_loop(): # Guard against internal misuse\n raise RuntimeError(\n \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n \" would block the event loop and cause a deadlock. Use\"\n \" `APILogWorker.aflush` instead.\"\n )\n\n # Not ideal, but this method is called by the stdlib and cannot return a\n # coroutine so we just schedule the drain in a new thread and continue\n from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n return None\n else:\n # We set a timeout of 5s because we don't want to block forever if the worker\n # is stuck. This can occur when the handler is being shutdown and the\n # `logging._lock` is held but the worker is attempting to emit logs resulting\n # in a deadlock.\n return APILogWorker.drain_all(timeout=5)\n\n @classmethod\n def aflush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n If called in a synchronous context, will only block up to 5s before returning.\n \"\"\"\n\n if not get_running_loop():\n raise RuntimeError(\n \"`aflush` cannot be used from a synchronous context; use `flush`\"\n \" instead.\"\n )\n\n return APILogWorker.drain_all()\n\n def emit(self, record: logging.LogRecord):\n \"\"\"\n Send a log to the `APILogWorker`\n \"\"\"\n try:\n profile = prefect.context.get_settings_context()\n\n if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n return # Respect the global settings toggle\n if not getattr(record, \"send_to_api\", True):\n return # Do not send records that have opted out\n if not getattr(record, \"send_to_orion\", True):\n return # Backwards compatibility\n\n log = self.prepare(record)\n APILogWorker.instance().send(log)\n\n except Exception:\n self.handleError(record)\n\n def handleError(self, record: logging.LogRecord) -> None:\n _, exc, _ = sys.exc_info()\n\n if isinstance(exc, MissingContextError):\n log_handling_when_missing_flow = (\n PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW.value()\n )\n if log_handling_when_missing_flow == \"warn\":\n # Warn when a logger is used outside of a run context, the stack level here\n # gets us to the user logging call\n warnings.warn(str(exc), stacklevel=8)\n return\n elif log_handling_when_missing_flow == \"ignore\":\n return\n else:\n raise exc\n\n # Display a longer traceback for other errors\n return super().handleError(record)\n\n def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n \"\"\"\n Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n This infers the linked flow or task run from the log record or the current\n run context.\n\n If a flow run id cannot be found, the log will be dropped.\n\n Logs exceeding the maximum size will be dropped.\n \"\"\"\n flow_run_id = getattr(record, \"flow_run_id\", None)\n task_run_id = getattr(record, \"task_run_id\", None)\n\n if not flow_run_id:\n try:\n context = prefect.context.get_run_context()\n except MissingContextError:\n raise MissingContextError(\n f\"Logger {record.name!r} attempted to send logs to the API without\"\n \" a flow run id. 
The API log handler can only send logs within\"\n \" flow run contexts unless the flow run id is manually provided.\"\n ) from None\n\n if hasattr(context, \"flow_run\"):\n flow_run_id = context.flow_run.id\n elif hasattr(context, \"task_run\"):\n flow_run_id = context.task_run.flow_run_id\n task_run_id = task_run_id or context.task_run.id\n else:\n raise ValueError(\n \"Encountered malformed run context. Does not contain flow or task \"\n \"run information.\"\n )\n\n # Parsing to a `LogCreate` object here gives us nice parsing error messages\n # from the standard lib `handleError` method if something goes wrong and\n # prevents malformed logs from entering the queue\n try:\n is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n )\n except ValueError:\n is_uuid_like = False\n\n log = LogCreate(\n flow_run_id=flow_run_id if is_uuid_like else None,\n task_run_id=task_run_id,\n name=record.name,\n level=record.levelno,\n timestamp=pendulum.from_timestamp(\n getattr(record, \"created\", None) or time.time()\n ),\n message=self.format(record),\n ).dict(json_compatible=True)\n\n log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n raise ValueError(\n f\"Log of size {log_size} is greater than the max size of \"\n f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n )\n\n return log\n\n def _get_payload_size(self, log: Dict[str, Any]) -> int:\n return len(json.dumps(log).encode())\n
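As prepare shows, the handler reads flow_run_id and task_run_id attributes off the record, so logs emitted outside a run context can still be linked to a run by passing the ID via extra. A sketch, assuming a configured Prefect API (the UUID below is a placeholder):
import logging\nfrom prefect.logging.handlers import APILogHandler\n\nlogger = logging.getLogger(\"my-app\")\nlogger.addHandler(APILogHandler())\nlogger.setLevel(logging.INFO)\n\n# Placeholder ID; substitute a real flow run's UUID\nlogger.info(\n    \"sent from outside a flow run context\",\n    extra={\"flow_run_id\": \"00000000-0000-0000-0000-000000000000\"},\n)\n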
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.aflush","title":"aflush
classmethod
","text":"Tell the APILogWorker
to send any currently enqueued logs and block until completion.
If called in a synchronous context, will only block up to 5s before returning.
Source code inprefect/logging/handlers.py
@classmethod\ndef aflush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n If called in a synchronous context, will only block up to 5s before returning.\n \"\"\"\n\n if not get_running_loop():\n raise RuntimeError(\n \"`aflush` cannot be used from a synchronous context; use `flush`\"\n \" instead.\"\n )\n\n return APILogWorker.drain_all()\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.emit","title":"emit
","text":"Send a log to the APILogWorker
prefect/logging/handlers.py
def emit(self, record: logging.LogRecord):\n \"\"\"\n Send a log to the `APILogWorker`\n \"\"\"\n try:\n profile = prefect.context.get_settings_context()\n\n if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n return # Respect the global settings toggle\n if not getattr(record, \"send_to_api\", True):\n return # Do not send records that have opted out\n if not getattr(record, \"send_to_orion\", True):\n return # Backwards compatibility\n\n log = self.prepare(record)\n APILogWorker.instance().send(log)\n\n except Exception:\n self.handleError(record)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.flush","title":"flush
classmethod
","text":"Tell the APILogWorker
to send any currently enqueued logs and block until completion.
Use aflush
from async contexts instead.
prefect/logging/handlers.py
@classmethod\ndef flush(cls):\n \"\"\"\n Tell the `APILogWorker` to send any currently enqueued logs and block until\n completion.\n\n Use `aflush` from async contexts instead.\n \"\"\"\n loop = get_running_loop()\n if loop:\n if in_global_loop(): # Guard against internal misuse\n raise RuntimeError(\n \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n \" would block the event loop and cause a deadlock. Use\"\n \" `APILogWorker.aflush` instead.\"\n )\n\n # Not ideal, but this method is called by the stdlib and cannot return a\n # coroutine so we just schedule the drain in a new thread and continue\n from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n return None\n else:\n # We set a timeout of 5s because we don't want to block forever if the worker\n # is stuck. This can occur when the handler is being shutdown and the\n # `logging._lock` is held but the worker is attempting to emit logs resulting\n # in a deadlock.\n return APILogWorker.drain_all(timeout=5)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.prepare","title":"prepare
","text":"Convert a logging.LogRecord
to the API LogCreate
schema and serialize.
This infers the linked flow or task run from the log record or the current run context.
If a flow run id cannot be found, the log will be dropped.
Logs exceeding the maximum size will be dropped.
Source code inprefect/logging/handlers.py
def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n \"\"\"\n Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n This infers the linked flow or task run from the log record or the current\n run context.\n\n If a flow run id cannot be found, the log will be dropped.\n\n Logs exceeding the maximum size will be dropped.\n \"\"\"\n flow_run_id = getattr(record, \"flow_run_id\", None)\n task_run_id = getattr(record, \"task_run_id\", None)\n\n if not flow_run_id:\n try:\n context = prefect.context.get_run_context()\n except MissingContextError:\n raise MissingContextError(\n f\"Logger {record.name!r} attempted to send logs to the API without\"\n \" a flow run id. The API log handler can only send logs within\"\n \" flow run contexts unless the flow run id is manually provided.\"\n ) from None\n\n if hasattr(context, \"flow_run\"):\n flow_run_id = context.flow_run.id\n elif hasattr(context, \"task_run\"):\n flow_run_id = context.task_run.flow_run_id\n task_run_id = task_run_id or context.task_run.id\n else:\n raise ValueError(\n \"Encountered malformed run context. Does not contain flow or task \"\n \"run information.\"\n )\n\n # Parsing to a `LogCreate` object here gives us nice parsing error messages\n # from the standard lib `handleError` method if something goes wrong and\n # prevents malformed logs from entering the queue\n try:\n is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n )\n except ValueError:\n is_uuid_like = False\n\n log = LogCreate(\n flow_run_id=flow_run_id if is_uuid_like else None,\n task_run_id=task_run_id,\n name=record.name,\n level=record.levelno,\n timestamp=pendulum.from_timestamp(\n getattr(record, \"created\", None) or time.time()\n ),\n message=self.format(record),\n ).dict(json_compatible=True)\n\n log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n raise ValueError(\n f\"Log of size {log_size} is greater than the max size of \"\n f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n )\n\n return log\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.PrefectConsoleHandler","title":"PrefectConsoleHandler
","text":" Bases: StreamHandler
prefect/logging/handlers.py
class PrefectConsoleHandler(logging.StreamHandler):\n def __init__(\n self,\n stream=None,\n highlighter: Highlighter = PrefectConsoleHighlighter,\n styles: Dict[str, str] = None,\n level: Union[int, str] = logging.NOTSET,\n ):\n \"\"\"\n The default console handler for Prefect, which highlights log levels,\n web and file URLs, flow and task (run) names, and state types in the\n local console (terminal).\n\n Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting.\n For finer control, use logging.yml to add or remove styles, and/or\n adjust colors.\n \"\"\"\n super().__init__(stream=stream)\n\n styled_console = PREFECT_LOGGING_COLORS.value()\n markup_console = PREFECT_LOGGING_MARKUP.value()\n if styled_console:\n highlighter = highlighter()\n theme = Theme(styles, inherit=False)\n else:\n highlighter = NullHighlighter()\n theme = Theme(inherit=False)\n\n self.level = level\n self.console = Console(\n highlighter=highlighter,\n theme=theme,\n file=self.stream,\n markup=markup_console,\n )\n\n def emit(self, record: logging.LogRecord):\n try:\n message = self.format(record)\n self.console.print(message, soft_wrap=True)\n except RecursionError:\n # This was copied over from logging.StreamHandler().emit()\n # https://bugs.python.org/issue36272\n raise\n except Exception:\n self.handleError(record)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/highlighters/","title":"highlighters","text":"\"\"\"
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers","title":"prefect.logging.loggers
","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter
","text":" Bases: LoggerAdapter
Adapter that ensures extra kwargs are passed through correctly; without this the extra
fields set on the adapter would overshadow any provided on a log-by-log basis.
See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.
Source code inprefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n from prefect._internal.compatibility.deprecated import (\n PrefectDeprecationWarning,\n generate_deprecation_message,\n )\n\n if \"send_to_orion\" in kwargs[\"extra\"]:\n warnings.warn(\n generate_deprecation_message(\n 'The \"send_to_orion\" option',\n start_date=\"May 2023\",\n help='Use \"send_to_api\" instead.',\n ),\n PrefectDeprecationWarning,\n stacklevel=4,\n )\n\n return (msg, kwargs)\n\n def getChild(\n self, suffix: str, extra: Optional[Dict[str, str]] = None\n ) -> \"PrefectLogAdapter\":\n if extra is None:\n extra = {}\n\n return PrefectLogAdapter(\n self.logger.getChild(suffix),\n extra={\n **self.extra,\n **extra,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_logger","title":"disable_logger
","text":"Get a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.
Source code inprefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n \"\"\"\n Get a logger by name and disables it within the context manager.\n Upon exiting the context manager, the logger is returned to its\n original state.\n \"\"\"\n logger = logging.getLogger(name=name)\n\n # determine if it's already disabled\n base_state = logger.disabled\n try:\n # disable the logger\n logger.disabled = True\n yield\n finally:\n # return to base state\n logger.disabled = base_state\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger
","text":"Gets both prefect.flow_run
and prefect.task_run
and disables them within the context manager. Upon exiting the context manager, both loggers are returned to its original state.
prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n \"\"\"\n Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n within the context manager. Upon exiting the context manager, both loggers\n are returned to its original state.\n \"\"\"\n with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n yield\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger
","text":"Create a flow run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the flow run context is available, see get_run_logger
instead.
prefect/logging/loggers.py
def flow_run_logger(\n flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n flow: Optional[\"Flow\"] = None,\n **kwargs: str,\n):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the flow run context is available, see `get_run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Defaultcontext
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type DescriptionRuntimeError
If no context can be found
Source code inprefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.patch_print","title":"patch_print
","text":"Patches the Python builtin print
method to use print_as_log
prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n \"\"\"\n Patches the Python builtin `print` method to use `print_as_log`\n \"\"\"\n import builtins\n\n original = builtins.print\n\n try:\n builtins.print = print_as_log\n yield\n finally:\n builtins.print = original\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.print_as_log","title":"print_as_log
","text":"A patch for print
to send printed messages to the Prefect run logger.
If no run is active, print
will behave as if it were not patched.
If print
sends data to a file other than sys.stdout
or sys.stderr
, it will not be forwarded to the Prefect logger either.
prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n \"\"\"\n A patch for `print` to send printed messages to the Prefect run logger.\n\n If no run is active, `print` will behave as if it were not patched.\n\n If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n not be forwarded to the Prefect logger either.\n \"\"\"\n from prefect.context import FlowRunContext, TaskRunContext\n\n context = TaskRunContext.get() or FlowRunContext.get()\n if (\n not context\n or not context.log_prints\n or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n ):\n return print(*args, **kwargs)\n\n logger = get_run_logger()\n\n # Print to an in-memory buffer; so we do not need to implement `print`\n buffer = io.StringIO()\n kwargs[\"file\"] = buffer\n print(*args, **kwargs)\n\n # Remove trailing whitespace to prevent duplicates\n logger.info(buffer.getvalue().rstrip())\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.task_run_logger","title":"task_run_logger
","text":"Create a task run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the task run context is available, see get_run_logger
instead.
If only the flow run context is available, it will be used for default values of flow_run
and flow
.
prefect/logging/loggers.py
def task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the task run context is available, see `get_run_logger` instead.\n\n If only the flow run context is available, it will be used for default values\n of `flow_run` and `flow`.\n \"\"\"\n if not flow_run or not flow:\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run or flow_run_context.flow_run\n flow = flow or flow_run_context.flow\n\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/","title":"loggers","text":"\"\"\"
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers","title":"prefect.logging.loggers
","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter
","text":" Bases: LoggerAdapter
Adapter that ensures extra kwargs are passed through correctly; without this the extra
fields set on the adapter would overshadow any provided on a log-by-log basis.
See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.
Source code inprefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n from prefect._internal.compatibility.deprecated import (\n PrefectDeprecationWarning,\n generate_deprecation_message,\n )\n\n if \"send_to_orion\" in kwargs[\"extra\"]:\n warnings.warn(\n generate_deprecation_message(\n 'The \"send_to_orion\" option',\n start_date=\"May 2023\",\n help='Use \"send_to_api\" instead.',\n ),\n PrefectDeprecationWarning,\n stacklevel=4,\n )\n\n return (msg, kwargs)\n\n def getChild(\n self, suffix: str, extra: Optional[Dict[str, str]] = None\n ) -> \"PrefectLogAdapter\":\n if extra is None:\n extra = {}\n\n return PrefectLogAdapter(\n self.logger.getChild(suffix),\n extra={\n **self.extra,\n **extra,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_logger","title":"disable_logger
","text":"Get a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.
Source code inprefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n \"\"\"\n Get a logger by name and disables it within the context manager.\n Upon exiting the context manager, the logger is returned to its\n original state.\n \"\"\"\n logger = logging.getLogger(name=name)\n\n # determine if it's already disabled\n base_state = logger.disabled\n try:\n # disable the logger\n logger.disabled = True\n yield\n finally:\n # return to base state\n logger.disabled = base_state\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger
","text":"Gets both prefect.flow_run
and prefect.task_run
and disables them within the context manager. Upon exiting the context manager, both loggers are returned to its original state.
prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n \"\"\"\n Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n within the context manager. Upon exiting the context manager, both loggers\n are returned to its original state.\n \"\"\"\n with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n yield\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger
","text":"Create a flow run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the flow run context is available, see get_run_logger
instead.
prefect/logging/loggers.py
def flow_run_logger(\n flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n flow: Optional[\"Flow\"] = None,\n **kwargs: str,\n):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the flow run context is available, see `get_run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_logger","title":"get_logger
cached
","text":"Get a prefect
logger. These loggers are intended for internal use within the prefect
package.
See get_run_logger
for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler
.
prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `APILogHandler`.\n \"\"\"\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n # Prevent the current API key from being logged in plain text\n obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n logger.addFilter(obfuscate_api_key_filter)\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_run_logger","title":"get_run_logger
","text":"Get a Prefect logger for the current task run or flow run.
The logger will be named either prefect.task_runs
or prefect.flow_runs
. Contextual data about the run will be attached to the log records.
These loggers are connected to the APILogHandler
by default to send log records to the API.
Parameters:
Name Type Description Defaultcontext
RunContext
A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.
None
**kwargs
str
Additional keyword arguments will be attached to the log records in addition to the run metadata
{}
Raises:
Type DescriptionRuntimeError
If no context can be found
Source code inprefect/logging/loggers.py
def get_run_logger(\n context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `APILogHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n elif (\n get_logger(\"prefect.flow_run\").disabled\n and get_logger(\"prefect.task_run\").disabled\n ):\n logger = logging.getLogger(\"null\")\n else:\n raise MissingContextError(\"There is no active flow or task run context.\")\n\n return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.patch_print","title":"patch_print
","text":"Patches the Python builtin print
method to use print_as_log
prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n \"\"\"\n Patches the Python builtin `print` method to use `print_as_log`\n \"\"\"\n import builtins\n\n original = builtins.print\n\n try:\n builtins.print = print_as_log\n yield\n finally:\n builtins.print = original\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.print_as_log","title":"print_as_log
","text":"A patch for print
to send printed messages to the Prefect run logger.
If no run is active, print
will behave as if it were not patched.
If print
sends data to a file other than sys.stdout
or sys.stderr
, it will not be forwarded to the Prefect logger either.
prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n \"\"\"\n A patch for `print` to send printed messages to the Prefect run logger.\n\n If no run is active, `print` will behave as if it were not patched.\n\n If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n not be forwarded to the Prefect logger either.\n \"\"\"\n from prefect.context import FlowRunContext, TaskRunContext\n\n context = TaskRunContext.get() or FlowRunContext.get()\n if (\n not context\n or not context.log_prints\n or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n ):\n return print(*args, **kwargs)\n\n logger = get_run_logger()\n\n # Print to an in-memory buffer; so we do not need to implement `print`\n buffer = io.StringIO()\n kwargs[\"file\"] = buffer\n print(*args, **kwargs)\n\n # Remove trailing whitespace to prevent duplicates\n logger.info(buffer.getvalue().rstrip())\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.task_run_logger","title":"task_run_logger
","text":"Create a task run logger with the run's metadata attached.
Additional keyword arguments can be provided to attach custom data to the log records.
If the task run context is available, see get_run_logger
instead.
If only the flow run context is available, it will be used for default values of flow_run
and flow
.
prefect/logging/loggers.py
def task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the task run context is available, see `get_run_logger` instead.\n\n If only the flow run context is available, it will be used for default values\n of `flow_run` and `flow`.\n \"\"\"\n if not flow_run or not flow:\n flow_run_context = prefect.context.FlowRunContext.get()\n if flow_run_context:\n flow_run = flow_run or flow_run_context.flow_run\n flow = flow or flow_run_context.flow\n\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/runner/runner/","title":"runner","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner","title":"prefect.runner.runner
","text":"Runners are responsible for managing the execution of deployments created and managed by either flow.serve
or the serve
utility.
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n # serve generates a Runner instance\n serve(slow_deploy, fast_deploy)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner","title":"Runner
","text":"Source code in prefect/runner/runner.py
class Runner:\n def __init__(\n self,\n name: Optional[str] = None,\n query_seconds: Optional[float] = None,\n prefetch_seconds: float = 10,\n limit: Optional[int] = None,\n pause_on_shutdown: bool = True,\n webserver: bool = False,\n ):\n \"\"\"\n Responsible for managing the execution of remotely initiated flow runs.\n\n Args:\n name: The name of the runner. If not provided, a random one\n will be generated. If provided, it cannot contain '/' or '%'.\n query_seconds: The number of seconds to wait between querying for\n scheduled flow runs; defaults to `PREFECT_RUNNER_POLL_FREQUENCY`\n prefetch_seconds: The number of seconds to prefetch flow runs for.\n limit: The maximum number of flow runs this runner should be running at\n pause_on_shutdown: A boolean for whether or not to automatically pause\n deployment schedules on shutdown; defaults to `True`\n webserver: a boolean flag for whether to start a webserver for this runner\n\n Examples:\n Set up a Runner to manage the execute of scheduled flow runs for two flows:\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\"\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n if name and (\"/\" in name or \"%\" in name):\n raise ValueError(\"Runner name cannot contain '/' or '%'\")\n self.name = Path(name).stem if name is not None else f\"runner-{uuid4()}\"\n self._logger = get_logger(\"runner\")\n\n self.started = False\n self.stopping = False\n self.pause_on_shutdown = pause_on_shutdown\n self.limit = limit or PREFECT_RUNNER_PROCESS_LIMIT.value()\n self.webserver = webserver\n\n self.query_seconds = query_seconds or PREFECT_RUNNER_POLL_FREQUENCY.value()\n self._prefetch_seconds = prefetch_seconds\n\n self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n self._loops_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n\n self._limiter: Optional[anyio.CapacityLimiter] = anyio.CapacityLimiter(\n self.limit\n )\n self._client = get_client()\n self._submitting_flow_run_ids = set()\n self._cancelling_flow_run_ids = set()\n self._scheduled_task_scopes = set()\n self._deployment_ids: Set[UUID] = set()\n self._flow_run_process_map = dict()\n\n self._tmp_dir: Path = (\n Path(tempfile.gettempdir()) / \"runner_storage\" / str(uuid4())\n )\n self._storage_objs: List[RunnerStorage] = []\n self._deployment_storage_map: Dict[UUID, RunnerStorage] = {}\n self._loop = asyncio.get_event_loop()\n\n @sync_compatible\n async def add_deployment(\n self,\n deployment: RunnerDeployment,\n ) -> UUID:\n \"\"\"\n Registers the deployment with the Prefect API and will monitor for work once\n the runner is started.\n\n Args:\n deployment: A deployment for the runner to register.\n \"\"\"\n deployment_id = await deployment.apply()\n storage = deployment.storage\n if storage is not None:\n storage = await self._add_storage(storage)\n self._deployment_storage_map[deployment_id] = storage\n self._deployment_ids.add(deployment_id)\n\n return deployment_id\n\n @sync_compatible\n async def add_flow(\n self,\n flow: Flow,\n name: str = None,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = 
None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n ) -> UUID:\n \"\"\"\n Provides a flow to the runner to be run based on the provided configuration.\n\n Will create a deployment for the provided flow and register the deployment\n with the runner.\n\n Args:\n flow: A flow for the runner to run.\n name: The name to give the created deployment. Will default to the name\n of the runner.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n \"\"\"\n api = PREFECT_API_URL.value()\n if any([interval, cron, rrule]) and not api:\n self._logger.warning(\n \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n \" start` to start the scheduler.\"\n )\n name = self.name if name is None else name\n\n deployment = await flow.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n schedule=schedule,\n paused=paused,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n return await self.add_deployment(deployment)\n\n @sync_compatible\n async def _add_storage(self, storage: RunnerStorage) -> RunnerStorage:\n \"\"\"\n Adds a storage object to the runner. 
The storage object will be used to pull\n code to the runner's working directory before the runner starts.\n\n Args:\n storage: The storage object to add to the runner.\n Returns:\n The updated storage object that was added to the runner.\n \"\"\"\n if storage not in self._storage_objs:\n storage_copy = deepcopy(storage)\n storage_copy.set_base_path(self._tmp_dir)\n\n self._logger.debug(\n f\"Adding storage {storage_copy!r} to runner at\"\n f\" {str(storage_copy.destination)!r}\"\n )\n self._storage_objs.append(storage_copy)\n\n return storage_copy\n else:\n return next(s for s in self._storage_objs if s == storage)\n\n def handle_sigterm(self, signum, frame):\n \"\"\"\n Gracefully shuts down the runner when a SIGTERM is received.\n \"\"\"\n self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n from_sync.call_in_loop_thread(create_call(self.stop))\n\n sys.exit(0)\n\n @sync_compatible\n async def start(\n self, run_once: bool = False, webserver: Optional[bool] = None\n ) -> None:\n \"\"\"\n Starts a runner.\n\n The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n Args:\n run_once: If True, the runner will through one query loop and then exit.\n webserver: a boolean for whether to start a webserver for this runner. If provided,\n overrides the default on the runner\n\n Examples:\n Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\"\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n webserver = webserver if webserver is not None else self.webserver\n\n if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"runner-server-thread\",\n target=partial(\n start_webserver,\n runner=self,\n ),\n daemon=True,\n )\n server_thread.start()\n\n async with self as runner:\n async with self._loops_task_group as tg:\n for storage in self._storage_objs:\n if storage.pull_interval:\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=storage.pull_code,\n interval=storage.pull_interval,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n else:\n tg.start_soon(storage.pull_code)\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._get_and_submit_flow_runs,\n interval=self.query_seconds,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._check_for_cancelled_flow_runs,\n interval=self.query_seconds * 2,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n\n def execute_in_background(self, func, *args, **kwargs):\n \"\"\"\n Executes a function in the background.\n \"\"\"\n\n return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n\n async def cancel_all(self):\n runs_to_cancel = []\n\n # done to avoid dictionary size changing during iteration\n for info in self._flow_run_process_map.values():\n runs_to_cancel.append(info[\"flow_run\"])\n if runs_to_cancel:\n for run in runs_to_cancel:\n try:\n await self._cancel_run(run, 
state_msg=\"Runner is shutting down.\")\n except Exception:\n self._logger.exception(\n f\"Exception encountered while cancelling {run.id}\",\n exc_info=True,\n )\n\n @sync_compatible\n async def stop(self):\n \"\"\"Stops the runner's polling cycle.\"\"\"\n if not self.started:\n raise RuntimeError(\n \"Runner has not yet started. Please start the runner by calling\"\n \" .start()\"\n )\n\n self.started = False\n self.stopping = True\n await self.cancel_all()\n try:\n self._loops_task_group.cancel_scope.cancel()\n except Exception:\n self._logger.exception(\n \"Exception encountered while shutting down\", exc_info=True\n )\n\n async def execute_flow_run(\n self, flow_run_id: UUID, entrypoint: Optional[str] = None\n ):\n \"\"\"\n Executes a single flow run with the given ID.\n\n Execution will wait to monitor for cancellation requests. Exits once\n the flow run process has exited.\n \"\"\"\n self.pause_on_shutdown = False\n context = self if not self.started else asyncnullcontext()\n\n async with context:\n if not self._acquire_limit_slot(flow_run_id):\n return\n\n async with anyio.create_task_group() as tg:\n with anyio.CancelScope():\n self._submitting_flow_run_ids.add(flow_run_id)\n flow_run = await self._client.read_flow_run(flow_run_id)\n\n pid = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n self._flow_run_process_map[flow_run.id] = dict(\n pid=pid, flow_run=flow_run\n )\n\n # We want this loop to stop when the flow run process exits\n # so we'll check if the flow run process is still alive on\n # each iteration and cancel the task group if it is not.\n workload = partial(\n self._check_for_cancelled_flow_runs,\n should_stop=lambda: not self._flow_run_process_map,\n on_stop=tg.cancel_scope.cancel,\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=workload,\n interval=self.query_seconds,\n jitter_range=0.3,\n )\n )\n\n def _get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n return flow_run_logger(flow_run=flow_run).getChild(\n \"runner\",\n extra={\n \"runner_name\": self.name,\n },\n )\n\n async def _run_process(\n self,\n flow_run: \"FlowRun\",\n task_status: Optional[anyio.abc.TaskStatus] = None,\n entrypoint: Optional[str] = None,\n ):\n \"\"\"\n Runs the given flow run in a subprocess.\n\n Args:\n flow_run: Flow run to execute via process. 
The ID of this flow run\n is stored in the PREFECT__FLOW_RUN_ID environment variable to\n allow the engine to retrieve the corresponding flow's code and\n begin execution.\n task_status: anyio task status used to send a message to the caller\n than the flow run process has started.\n \"\"\"\n command = f\"{shlex.quote(sys.executable)} -m prefect.engine\"\n\n flow_run_logger = self._get_flow_run_logger(flow_run)\n\n # We must add creationflags to a dict so it is only passed as a function\n # parameter on Windows, because the presence of creationflags causes\n # errors on Unix even if set to None\n kwargs: Dict[str, object] = {}\n if sys.platform == \"win32\":\n kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n _use_threaded_child_watcher()\n flow_run_logger.info(\"Opening process...\")\n\n env = get_current_settings().to_environment_variables(exclude_unset=True)\n env.update(\n {\n **{\n \"PREFECT__FLOW_RUN_ID\": str(flow_run.id),\n \"PREFECT__STORAGE_BASE_PATH\": str(self._tmp_dir),\n \"PREFECT__ENABLE_CANCELLATION_AND_CRASHED_HOOKS\": \"false\",\n },\n **({\"PREFECT__FLOW_ENTRYPOINT\": entrypoint} if entrypoint else {}),\n }\n )\n env.update(**os.environ) # is this really necessary??\n\n storage = self._deployment_storage_map.get(flow_run.deployment_id)\n if storage and storage.pull_interval:\n # perform an adhoc pull of code before running the flow if an\n # adhoc pull hasn't been performed in the last pull_interval\n # TODO: Explore integrating this behavior with global concurrency.\n last_adhoc_pull = getattr(storage, \"last_adhoc_pull\", None)\n if (\n last_adhoc_pull is None\n or last_adhoc_pull\n < datetime.datetime.now()\n - datetime.timedelta(seconds=storage.pull_interval)\n ):\n self._logger.debug(\n \"Performing adhoc pull of code for flow run %s with storage %r\",\n flow_run.id,\n storage,\n )\n await storage.pull_code()\n setattr(storage, \"last_adhoc_pull\", datetime.datetime.now())\n\n process = await run_process(\n shlex.split(command),\n stream_output=True,\n task_status=task_status,\n env=env,\n **kwargs,\n cwd=storage.destination if storage else None,\n )\n\n # Use the pid for display if no name was given\n\n if process.returncode:\n help_message = None\n level = logging.ERROR\n if process.returncode == -9:\n level = logging.INFO\n help_message = (\n \"This indicates that the process exited due to a SIGKILL signal. \"\n \"Typically, this is either caused by manual cancellation or \"\n \"high memory usage causing the operating system to \"\n \"terminate the process.\"\n )\n if process.returncode == -15:\n level = logging.INFO\n help_message = (\n \"This indicates that the process exited due to a SIGTERM signal. \"\n \"Typically, this is caused by manual cancellation.\"\n )\n elif process.returncode == 247:\n help_message = (\n \"This indicates that the process was terminated due to high \"\n \"memory usage.\"\n )\n elif (\n sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n ):\n level = logging.INFO\n help_message = (\n \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. 
\"\n \"Typically, this is caused by manual cancellation.\"\n )\n\n flow_run_logger.log(\n level,\n f\"Process for flow run {flow_run.name!r} exited with status code:\"\n f\" {process.returncode}\"\n + (f\"; {help_message}\" if help_message else \"\"),\n )\n else:\n flow_run_logger.info(\n f\"Process for flow run {flow_run.name!r} exited cleanly.\"\n )\n\n return process.returncode\n\n async def _kill_process(\n self,\n pid: int,\n grace_seconds: int = 30,\n ):\n \"\"\"\n Kills a given flow run process.\n\n Args:\n pid: ID of the process to kill\n grace_seconds: Number of seconds to wait for the process to end.\n \"\"\"\n # In a non-windows environment first send a SIGTERM, then, after\n # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n if sys.platform == \"win32\":\n try:\n os.kill(pid, signal.CTRL_BREAK_EVENT)\n except (ProcessLookupError, WindowsError):\n raise RuntimeError(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n else:\n try:\n os.kill(pid, signal.SIGTERM)\n except ProcessLookupError:\n raise RuntimeError(\n f\"Unable to kill process {pid!r}: The process was not found.\"\n )\n\n # Throttle how often we check if the process is still alive to keep\n # from making too many system calls in a short period of time.\n check_interval = max(grace_seconds / 10, 1)\n\n with anyio.move_on_after(grace_seconds):\n while True:\n await anyio.sleep(check_interval)\n\n # Detect if the process is still alive. If not do an early\n # return as the process respected the SIGTERM from above.\n try:\n os.kill(pid, 0)\n except ProcessLookupError:\n return\n\n try:\n os.kill(pid, signal.SIGKILL)\n except OSError:\n # We shouldn't ever end up here, but it's possible that the\n # process ended right after the check above.\n return\n\n async def _pause_schedules(self):\n \"\"\"\n Pauses all deployment schedules.\n \"\"\"\n self._logger.info(\"Pausing all deployments...\")\n for deployment_id in self._deployment_ids:\n self._logger.debug(f\"Pausing deployment '{deployment_id}'\")\n await self._client.set_deployment_paused_state(deployment_id, True)\n self._logger.info(\"All deployments have been paused!\")\n\n async def _get_and_submit_flow_runs(self):\n if self.stopping:\n return\n runs_response = await self._get_scheduled_flow_runs()\n self.last_polled = pendulum.now(\"UTC\")\n return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n async def _check_for_cancelled_flow_runs(\n self, should_stop: Callable = lambda: False, on_stop: Callable = lambda: None\n ):\n \"\"\"\n Checks for flow runs with CANCELLING a cancelling state and attempts to\n cancel them.\n\n Args:\n should_stop: A callable that returns a boolean indicating whether or not\n the runner should stop checking for cancelled flow runs.\n on_stop: A callable that is called when the runner should stop checking\n for cancelled flow runs.\n \"\"\"\n if self.stopping:\n return\n if not self.started:\n raise RuntimeError(\n \"Runner is not set up. Please make sure you are running this runner \"\n \"as an async context manager.\"\n )\n\n if should_stop():\n self._logger.debug(\n \"Runner has no active flow runs or deployments. 
Sending message to loop\"\n \" service that no further cancellation checks are needed.\"\n )\n on_stop()\n\n self._logger.debug(\"Checking for cancelled flow runs...\")\n\n named_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(\n any_=list(\n self._flow_run_process_map.keys()\n - self._cancelling_flow_run_ids\n )\n ),\n ),\n )\n\n typed_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(\n any_=list(\n self._flow_run_process_map.keys()\n - self._cancelling_flow_run_ids\n )\n ),\n ),\n )\n\n cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n if cancelling_flow_runs:\n self._logger.info(\n f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n )\n\n for flow_run in cancelling_flow_runs:\n self._cancelling_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(self._cancel_run, flow_run)\n\n return cancelling_flow_runs\n\n async def _cancel_run(self, flow_run: \"FlowRun\", state_msg: Optional[str] = None):\n run_logger = self._get_flow_run_logger(flow_run)\n\n pid = self._flow_run_process_map.get(flow_run.id, {}).get(\"pid\")\n if not pid:\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"Could not find process ID for flow run\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n await self._kill_process(pid)\n except RuntimeError as exc:\n self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(flow_run)\n except Exception:\n run_logger.exception(\n \"Encountered exception while killing process for flow run \"\n f\"'{flow_run.id}'. 
Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self._cancelling_flow_run_ids.remove(flow_run.id)\n else:\n await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": state_msg or \"Flow run was cancelled successfully.\"\n },\n )\n run_logger.info(f\"Cancelled flow run '{flow_run.name}'!\")\n\n async def _get_scheduled_flow_runs(\n self,\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Retrieve scheduled flow runs for this runner.\n \"\"\"\n scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n self._logger.debug(\n f\"Querying for flow runs scheduled before {scheduled_before}\"\n )\n\n scheduled_flow_runs = (\n await self._client.get_scheduled_flow_runs_for_deployments(\n deployment_ids=list(self._deployment_ids),\n scheduled_before=scheduled_before,\n )\n )\n self._logger.debug(f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\")\n return scheduled_flow_runs\n\n def has_slots_available(self) -> bool:\n \"\"\"\n Determine if the flow run limit has been reached.\n\n Returns:\n - bool: True if the limit has not been reached, False otherwise.\n \"\"\"\n return self._limiter.available_tokens > 0\n\n def _acquire_limit_slot(self, flow_run_id: str) -> bool:\n \"\"\"\n Enforces flow run limit set on runner.\n\n Returns:\n - bool: True if a slot was acquired, False otherwise.\n \"\"\"\n try:\n if self._limiter:\n self._limiter.acquire_on_behalf_of_nowait(flow_run_id)\n self._logger.debug(\"Limit slot acquired for flow run '%s'\", flow_run_id)\n return True\n except RuntimeError as exc:\n if (\n \"this borrower is already holding one of this CapacityLimiter's tokens\"\n in str(exc)\n ):\n self._logger.warning(\n f\"Duplicate submission of flow run '{flow_run_id}' detected. Runner\"\n \" will not re-submit flow run.\"\n )\n return False\n else:\n raise\n except anyio.WouldBlock:\n self._logger.info(\n f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n \" in progress. 
You can control this limit by passing a `limit` value\"\n \" to `serve` or adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting.\"\n )\n return False\n\n def _release_limit_slot(self, flow_run_id: str) -> None:\n \"\"\"\n Frees up a slot taken by the given flow run id.\n \"\"\"\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run_id)\n self._logger.debug(\"Limit slot released for flow run '%s'\", flow_run_id)\n\n async def _submit_scheduled_flow_runs(\n self,\n flow_run_response: List[\"FlowRun\"],\n entrypoints: Optional[List[str]] = None,\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Takes a list of FlowRuns and submits the referenced flow runs\n for execution by the runner.\n \"\"\"\n submittable_flow_runs = flow_run_response\n submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n for i, flow_run in enumerate(submittable_flow_runs):\n if flow_run.id in self._submitting_flow_run_ids:\n continue\n\n if self._acquire_limit_slot(flow_run.id):\n run_logger = self._get_flow_run_logger(flow_run)\n run_logger.info(\n f\"Runner '{self.name}' submitting flow run '{flow_run.id}'\"\n )\n self._submitting_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(\n partial(\n self._submit_run,\n flow_run=flow_run,\n entrypoint=(\n entrypoints[i] if entrypoints else None\n ), # TODO: avoid relying on index\n )\n )\n else:\n break\n\n return list(\n filter(\n lambda run: run.id in self._submitting_flow_run_ids,\n submittable_flow_runs,\n )\n )\n\n async def _submit_run(self, flow_run: \"FlowRun\", entrypoint: Optional[str] = None):\n \"\"\"\n Submits a given flow run for execution by the runner.\n \"\"\"\n run_logger = self._get_flow_run_logger(flow_run)\n\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n readiness_result = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n self._flow_run_process_map[flow_run.id] = dict(\n pid=readiness_result, flow_run=flow_run\n )\n\n run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n else:\n # If the run is not ready to submit, release the concurrency slot\n self._release_limit_slot(flow_run.id)\n\n self._submitting_flow_run_ids.remove(flow_run.id)\n\n async def _submit_run_and_capture_errors(\n self,\n flow_run: \"FlowRun\",\n task_status: Optional[anyio.abc.TaskStatus] = None,\n entrypoint: Optional[str] = None,\n ) -> Union[Optional[int], Exception]:\n run_logger = self._get_flow_run_logger(flow_run)\n\n try:\n status_code = await self._run_process(\n flow_run=flow_run,\n task_status=task_status,\n entrypoint=entrypoint,\n )\n except Exception as exc:\n if not task_status._future.done():\n # This flow run was being submitted and did not start successfully\n run_logger.exception(\n f\"Failed to start process for flow run '{flow_run.id}'.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started(exc)\n await self._propose_crashed_state(\n flow_run, \"Flow run process could not be started\"\n )\n else:\n run_logger.exception(\n f\"An error occurred while monitoring flow run '{flow_run.id}'. 
\"\n \"The flow run will not be marked as failed, but an issue may have \"\n \"occurred.\"\n )\n return exc\n finally:\n self._release_limit_slot(flow_run.id)\n self._flow_run_process_map.pop(flow_run.id, None)\n\n if status_code != 0:\n await self._propose_crashed_state(\n flow_run,\n f\"Flow run process exited with non-zero status code {status_code}.\",\n )\n\n api_flow_run = await self._client.read_flow_run(flow_run_id=flow_run.id)\n terminal_state = api_flow_run.state\n if terminal_state.is_crashed():\n await self._run_on_crashed_hooks(flow_run=flow_run, state=terminal_state)\n\n return status_code\n\n async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n run_logger = self._get_flow_run_logger(flow_run)\n state = flow_run.state\n try:\n state = await propose_state(\n self._client, Pending(), flow_run_id=flow_run.id\n )\n except Abort as exc:\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}'. \"\n f\"Server sent an abort signal: {exc}\"\n ),\n )\n return False\n except Exception:\n run_logger.exception(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n )\n return False\n\n if not state.is_pending():\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}': \"\n f\"Server returned a non-pending state {state.type.value!r}\"\n ),\n )\n return False\n\n return True\n\n async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n run_logger = self._get_flow_run_logger(flow_run)\n try:\n await propose_state(\n self._client,\n await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # We've already failed, no need to note the abort but we don't want it to\n # raise in the agent process\n pass\n except Exception:\n run_logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n\n async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n run_logger = self._get_flow_run_logger(flow_run)\n try:\n state = await propose_state(\n self._client,\n Crashed(message=message),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # Flow run already marked as failed\n pass\n except Exception:\n run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n else:\n if state.is_crashed():\n run_logger.info(\n f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n )\n\n async def _mark_flow_run_as_cancelled(\n self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n ) -> None:\n state_updates = state_updates or {}\n state_updates.setdefault(\"name\", \"Cancelled\")\n state_updates.setdefault(\"type\", StateType.CANCELLED)\n state = flow_run.state.copy(update=state_updates)\n\n await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n # Do not remove the flow run from the cancelling set immediately because\n # the API caches responses for the `read_flow_runs` and we do not want to\n # duplicate cancellations.\n await self._schedule_task(\n 60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n )\n\n async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n \"\"\"\n Schedule a background task to start after some time.\n\n These tasks will be run immediately when the runner exits instead of waiting.\n\n The function may be async or sync. 
Async functions will be awaited.\n \"\"\"\n\n async def wrapper(task_status):\n # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n # time or shutdown\n if self.started:\n with anyio.CancelScope() as scope:\n self._scheduled_task_scopes.add(scope)\n task_status.started()\n await anyio.sleep(__in_seconds)\n\n self._scheduled_task_scopes.remove(scope)\n else:\n task_status.started()\n\n result = fn(*args, **kwargs)\n if inspect.iscoroutine(result):\n await result\n\n await self._runs_task_group.start(wrapper)\n\n async def _run_on_cancellation_hooks(\n self,\n flow_run: \"FlowRun\",\n state: State,\n ) -> None:\n \"\"\"\n Run the hooks for a flow.\n \"\"\"\n if state.is_cancelling():\n flow = await load_flow_from_flow_run(\n flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n )\n hooks = flow.on_cancellation or []\n\n await _run_hooks(hooks, flow_run, flow, state)\n\n async def _run_on_crashed_hooks(\n self,\n flow_run: \"FlowRun\",\n state: State,\n ) -> None:\n \"\"\"\n Run the hooks for a flow.\n \"\"\"\n if state.is_crashed():\n flow = await load_flow_from_flow_run(\n flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n )\n hooks = flow.on_crashed or []\n\n await _run_hooks(hooks, flow_run, flow, state)\n\n async def __aenter__(self):\n self._logger.debug(\"Starting runner...\")\n self._client = get_client()\n self._tmp_dir.mkdir(parents=True)\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.started = True\n return self\n\n async def __aexit__(self, *exc_info):\n self._logger.debug(\"Stopping runner...\")\n if self.pause_on_shutdown:\n await self._pause_schedules()\n self.started = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n shutil.rmtree(str(self._tmp_dir))\n\n def __repr__(self):\n return f\"Runner(name={self.name!r})\"\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_deployment","title":"add_deployment
async
","text":"Registers the deployment with the Prefect API and will monitor for work once the runner is started.
Parameters:
Name Type Description Default
deployment
RunnerDeployment
A deployment for the runner to register.
required
Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_deployment(\n self,\n deployment: RunnerDeployment,\n) -> UUID:\n \"\"\"\n Registers the deployment with the Prefect API and will monitor for work once\n the runner is started.\n\n Args:\n deployment: A deployment for the runner to register.\n \"\"\"\n deployment_id = await deployment.apply()\n storage = deployment.storage\n if storage is not None:\n storage = await self._add_storage(storage)\n self._deployment_storage_map[deployment_id] = storage\n self._deployment_ids.add(deployment_id)\n\n return deployment_id\n
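A minimal usage sketch (the names and 60-second interval are illustrative); since `add_deployment` is sync-compatible, it can be called without `await` outside an event loop:
from prefect import flow, Runner

@flow
def my_flow():
    print("hello")

if __name__ == "__main__":
    runner = Runner(name="my-runner")
    # Create a deployment from the flow, then register it with the runner
    deployment = my_flow.to_deployment(name="my-deployment", interval=60)
    runner.add_deployment(deployment)
    runner.start()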
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_flow","title":"add_flow
async
","text":"Provides a flow to the runner to be run based on the provided configuration.
Will create a deployment for the provided flow and register the deployment with the runner.
Parameters:
Name Type Description Default
flow
Flow
A flow for the runner to run.
required
name
str
The name to give the created deployment. Will default to the name of the runner.
None
interval
Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]
An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
None
cron
Optional[Union[Iterable[str], str]]
A cron schedule of when to execute runs of this flow.
None
rrule
Optional[Union[Iterable[str], str]]
An rrule schedule of when to execute runs of this flow.
None
schedule
Optional[SCHEDULE_TYPES]
A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
None
is_schedule_active
Optional[bool]
Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
None
triggers
Optional[List[DeploymentTrigger]]
A list of triggers that should kick off a run of this flow.
None
parameters
Optional[dict]
A dictionary of default parameter values to pass to runs of this flow.
None
description
Optional[str]
A description for the created deployment. Defaults to the flow's description if not provided.
None
tags
Optional[List[str]]
A list of tags to associate with the created deployment for organizational purposes.
None
version
Optional[str]
A version for the created deployment. Defaults to the flow's version.
None
entrypoint_type
EntrypointType
Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
FILE_PATH
Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_flow(\n self,\n flow: Flow,\n name: str = None,\n interval: Optional[\n Union[\n Iterable[Union[int, float, datetime.timedelta]],\n int,\n float,\n datetime.timedelta,\n ]\n ] = None,\n cron: Optional[Union[Iterable[str], str]] = None,\n rrule: Optional[Union[Iterable[str], str]] = None,\n paused: Optional[bool] = None,\n schedules: Optional[FlexibleScheduleList] = None,\n schedule: Optional[SCHEDULE_TYPES] = None,\n is_schedule_active: Optional[bool] = None,\n parameters: Optional[dict] = None,\n triggers: Optional[List[DeploymentTrigger]] = None,\n description: Optional[str] = None,\n tags: Optional[List[str]] = None,\n version: Optional[str] = None,\n enforce_parameter_schema: bool = False,\n entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> UUID:\n \"\"\"\n Provides a flow to the runner to be run based on the provided configuration.\n\n Will create a deployment for the provided flow and register the deployment\n with the runner.\n\n Args:\n flow: A flow for the runner to run.\n name: The name to give the created deployment. Will default to the name\n of the runner.\n interval: An interval on which to execute the current flow. Accepts either a number\n or a timedelta object. If a number is given, it will be interpreted as seconds.\n cron: A cron schedule of when to execute runs of this flow.\n rrule: An rrule schedule of when to execute runs of this flow.\n schedule: A schedule object of when to execute runs of this flow. Used for\n advanced scheduling options like timezone.\n is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n not provided when creating a deployment, the schedule will be set as active. If not\n provided when updating a deployment, the schedule's activation will not be changed.\n triggers: A list of triggers that should kick of a run of this flow.\n parameters: A dictionary of default parameter values to pass to runs of this flow.\n description: A description for the created deployment. Defaults to the flow's\n description if not provided.\n tags: A list of tags to associate with the created deployment for organizational\n purposes.\n version: A version for the created deployment. Defaults to the flow's version.\n entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n entrypoint, ensure that the module will be importable in the execution environment.\n \"\"\"\n api = PREFECT_API_URL.value()\n if any([interval, cron, rrule]) and not api:\n self._logger.warning(\n \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n \" start` to start the scheduler.\"\n )\n name = self.name if name is None else name\n\n deployment = await flow.to_deployment(\n name=name,\n interval=interval,\n cron=cron,\n rrule=rrule,\n schedules=schedules,\n schedule=schedule,\n paused=paused,\n is_schedule_active=is_schedule_active,\n triggers=triggers,\n parameters=parameters,\n description=description,\n tags=tags,\n version=version,\n enforce_parameter_schema=enforce_parameter_schema,\n entrypoint_type=entrypoint_type,\n )\n return await self.add_deployment(deployment)\n
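A minimal sketch (the flow and deployment names are illustrative); `add_flow` creates and registers the deployment in one call:
from prefect import flow, Runner

@flow
def etl():
    print("running ETL")

if __name__ == "__main__":
    runner = Runner(name="etl-runner")
    # Deploys `etl` on a 10-minute interval and registers it with the runner
    runner.add_flow(etl, name="etl-every-10-minutes", interval=600)
    runner.start()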
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_flow_run","title":"execute_flow_run
async
","text":"Executes a single flow run with the given ID.
Execution will wait to monitor for cancellation requests. Exits once the flow run process has exited.
Source code inprefect/runner/runner.py
async def execute_flow_run(\n self, flow_run_id: UUID, entrypoint: Optional[str] = None\n):\n \"\"\"\n Executes a single flow run with the given ID.\n\n Execution will wait to monitor for cancellation requests. Exits once\n the flow run process has exited.\n \"\"\"\n self.pause_on_shutdown = False\n context = self if not self.started else asyncnullcontext()\n\n async with context:\n if not self._acquire_limit_slot(flow_run_id):\n return\n\n async with anyio.create_task_group() as tg:\n with anyio.CancelScope():\n self._submitting_flow_run_ids.add(flow_run_id)\n flow_run = await self._client.read_flow_run(flow_run_id)\n\n pid = await self._runs_task_group.start(\n partial(\n self._submit_run_and_capture_errors,\n flow_run=flow_run,\n entrypoint=entrypoint,\n ),\n )\n\n self._flow_run_process_map[flow_run.id] = dict(\n pid=pid, flow_run=flow_run\n )\n\n # We want this loop to stop when the flow run process exits\n # so we'll check if the flow run process is still alive on\n # each iteration and cancel the task group if it is not.\n workload = partial(\n self._check_for_cancelled_flow_runs,\n should_stop=lambda: not self._flow_run_process_map,\n on_stop=tg.cancel_scope.cancel,\n )\n\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=workload,\n interval=self.query_seconds,\n jitter_range=0.3,\n )\n )\n
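A sketch of direct invocation (the UUID is a placeholder for an existing flow run's ID, not a real value):
from uuid import UUID
from prefect import Runner

if __name__ == "__main__":
    runner = Runner(name="one-shot-runner")
    # Replace with the ID of a real flow run; `execute_flow_run` is
    # sync-compatible and blocks until the flow run process exits.
    runner.execute_flow_run(UUID("00000000-0000-0000-0000-000000000000"))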
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_in_background","title":"execute_in_background
","text":"Executes a function in the background.
Source code inprefect/runner/runner.py
def execute_in_background(self, func, *args, **kwargs):\n \"\"\"\n Executes a function in the background.\n \"\"\"\n\n return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.handle_sigterm","title":"handle_sigterm
","text":"Gracefully shuts down the runner when a SIGTERM is received.
Source code inprefect/runner/runner.py
def handle_sigterm(self, signum, frame):\n \"\"\"\n Gracefully shuts down the runner when a SIGTERM is received.\n \"\"\"\n self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n from_sync.call_in_loop_thread(create_call(self.stop))\n\n sys.exit(0)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.has_slots_available","title":"has_slots_available
","text":"Determine if the flow run limit has been reached.
Returns:
Type Description
bool
prefect/runner/runner.py
def has_slots_available(self) -> bool:\n \"\"\"\n Determine if the flow run limit has been reached.\n\n Returns:\n - bool: True if the limit has not been reached, False otherwise.\n \"\"\"\n return self._limiter.available_tokens > 0\n
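A trivial sketch of checking capacity before submitting more work (the limit value is illustrative):
from prefect import Runner

runner = Runner(limit=2)
if runner.has_slots_available():
    # Fewer than `limit` flow runs are currently executing
    print("runner can accept another flow run")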
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.start","title":"start
async
","text":"Starts a runner.
The runner will begin monitoring for and executing any scheduled work for all added flows.
Parameters:
Name Type Description Default
run_once
bool
If True, the runner will run through one query loop and then exit.
False
webserver
Optional[bool]
A boolean for whether to start a webserver for this runner. If provided, overrides the default on the runner.
None
Examples:
Initialize a Runner, add two flows, and serve them by starting the Runner:
from prefect import flow, Runner\n\n@flow\ndef hello_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef goodbye_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"my-runner\")\n\n    # Will be runnable via the API\n    runner.add_flow(hello_flow)\n\n    # Run on a cron schedule\n    runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n    runner.start()\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def start(\n self, run_once: bool = False, webserver: Optional[bool] = None\n) -> None:\n \"\"\"\n Starts a runner.\n\n The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n Args:\n run_once: If True, the runner will through one query loop and then exit.\n webserver: a boolean for whether to start a webserver for this runner. If provided,\n overrides the default on the runner\n\n Examples:\n Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n ```python\n from prefect import flow, Runner\n\n @flow\n def hello_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def goodbye_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\"\n runner = Runner(name=\"my-runner\")\n\n # Will be runnable via the API\n runner.add_flow(hello_flow)\n\n # Run on a cron schedule\n runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n runner.start()\n ```\n \"\"\"\n _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n webserver = webserver if webserver is not None else self.webserver\n\n if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n # we'll start the ASGI server in a separate thread so that\n # uvicorn does not block the main thread\n server_thread = threading.Thread(\n name=\"runner-server-thread\",\n target=partial(\n start_webserver,\n runner=self,\n ),\n daemon=True,\n )\n server_thread.start()\n\n async with self as runner:\n async with self._loops_task_group as tg:\n for storage in self._storage_objs:\n if storage.pull_interval:\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=storage.pull_code,\n interval=storage.pull_interval,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n else:\n tg.start_soon(storage.pull_code)\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._get_and_submit_flow_runs,\n interval=self.query_seconds,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n tg.start_soon(\n partial(\n critical_service_loop,\n workload=runner._check_for_cancelled_flow_runs,\n interval=self.query_seconds * 2,\n run_once=run_once,\n jitter_range=0.3,\n )\n )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.stop","title":"stop
async
","text":"Stops the runner's polling cycle.
Source code inprefect/runner/runner.py
@sync_compatible\nasync def stop(self):\n \"\"\"Stops the runner's polling cycle.\"\"\"\n if not self.started:\n raise RuntimeError(\n \"Runner has not yet started. Please start the runner by calling\"\n \" .start()\"\n )\n\n self.started = False\n self.stopping = True\n await self.cancel_all()\n try:\n self._loops_task_group.cancel_scope.cancel()\n except Exception:\n self._logger.exception(\n \"Exception encountered while shutting down\", exc_info=True\n )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.serve","title":"serve
async
","text":"Serve the provided list of deployments.
Parameters:
Name Type Description Default
*args
RunnerDeployment
A list of deployments to serve.
()
pause_on_shutdown
bool
A boolean for whether or not to automatically pause deployment schedules on shutdown.
True
print_starting_message
bool
Whether or not to print a message to the console on startup.
True
limit
Optional[int]
The maximum number of runs that can be executed concurrently.
None
**kwargs
Additional keyword arguments to pass to the runner.
{}
Examples:
Prepare two deployments and serve them:
import datetime\n\nfrom prefect import flow, serve\n\n@flow\ndef my_flow(name):\n print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n # Run once a day\n hello_deploy = my_flow.to_deployment(\n \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n )\n\n # Run every Sunday at 4:00 AM\n bye_deploy = my_other_flow.to_deployment(\n \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n )\n\n serve(hello_deploy, bye_deploy)\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def serve(\n *args: RunnerDeployment,\n pause_on_shutdown: bool = True,\n print_starting_message: bool = True,\n limit: Optional[int] = None,\n **kwargs,\n):\n \"\"\"\n Serve the provided list of deployments.\n\n Args:\n *args: A list of deployments to serve.\n pause_on_shutdown: A boolean for whether or not to automatically pause\n deployment schedules on shutdown.\n print_starting_message: Whether or not to print message to the console\n on startup.\n limit: The maximum number of runs that can be executed concurrently.\n **kwargs: Additional keyword arguments to pass to the runner.\n\n Examples:\n Prepare two deployments and serve them:\n\n ```python\n import datetime\n\n from prefect import flow, serve\n\n @flow\n def my_flow(name):\n print(f\"hello {name}\")\n\n @flow\n def my_other_flow(name):\n print(f\"goodbye {name}\")\n\n if __name__ == \"__main__\":\n # Run once a day\n hello_deploy = my_flow.to_deployment(\n \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n )\n\n # Run every Sunday at 4:00 AM\n bye_deploy = my_other_flow.to_deployment(\n \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n )\n\n serve(hello_deploy, bye_deploy)\n ```\n \"\"\"\n runner = Runner(pause_on_shutdown=pause_on_shutdown, limit=limit, **kwargs)\n for deployment in args:\n await runner.add_deployment(deployment)\n\n if print_starting_message:\n help_message_top = (\n \"[green]Your deployments are being served and polling for\"\n \" scheduled runs!\\n[/]\"\n )\n\n table = Table(title=\"Deployments\", show_header=False)\n\n table.add_column(style=\"blue\", no_wrap=True)\n\n for deployment in args:\n table.add_row(f\"{deployment.flow_name}/{deployment.name}\")\n\n help_message_bottom = (\n \"\\nTo trigger any of these deployments, use the\"\n \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n \" [DEPLOYMENT_NAME]\\n[/]\"\n )\n if PREFECT_UI_URL:\n help_message_bottom += (\n \"\\nYou can also trigger your deployments via the Prefect UI:\"\n f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n )\n\n console = Console()\n console.print(\n Group(help_message_top, table, help_message_bottom), soft_wrap=True\n )\n\n await runner.start()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/server/","title":"server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server","title":"prefect.runner.server
","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.build_server","title":"build_server
async
","text":"Build a FastAPI server for a runner.
Parameters:
Name Type Description Default
runner
Runner
the runner this server interacts with and monitors
required
log_level
str
the log level to use for the server
required
Source code in prefect/runner/server.py
@sync_compatible\nasync def build_server(runner: \"Runner\") -> FastAPI:\n \"\"\"\n Build a FastAPI server for a runner.\n\n Args:\n runner (Runner): the runner this server interacts with and monitors\n log_level (str): the log level to use for the server\n \"\"\"\n webserver = FastAPI()\n router = APIRouter()\n\n router.add_api_route(\n \"/health\", perform_health_check(runner=runner), methods=[\"GET\"]\n )\n router.add_api_route(\"/run_count\", run_count(runner=runner), methods=[\"GET\"])\n router.add_api_route(\"/shutdown\", shutdown(runner=runner), methods=[\"POST\"])\n webserver.include_router(router)\n\n if PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS.value():\n deployments_router, deployment_schemas = await get_deployment_router(runner)\n webserver.include_router(deployments_router)\n\n subflow_schemas = await get_subflow_schemas(runner)\n webserver.add_api_route(\n \"/flow/run\",\n _build_generic_endpoint_for_flows(runner=runner, schemas=subflow_schemas),\n methods=[\"POST\"],\n name=\"Run flow in background\",\n description=\"Trigger any flow run as a background task on the runner.\",\n summary=\"Run flow\",\n )\n\n def customize_openapi():\n if webserver.openapi_schema:\n return webserver.openapi_schema\n\n openapi_schema = inject_schemas_into_openapi(webserver, deployment_schemas)\n webserver.openapi_schema = openapi_schema\n return webserver.openapi_schema\n\n webserver.openapi = customize_openapi\n\n return webserver\n
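A sketch of building and serving the app yourself (host and port are illustrative); `build_server` is sync-compatible and returns a FastAPI application:
import uvicorn
from prefect import flow, Runner
from prefect.runner.server import build_server

@flow
def my_flow():
    print("hello")

if __name__ == "__main__":
    runner = Runner(name="my-runner")
    runner.add_flow(my_flow)
    # Build the FastAPI app for this runner and serve it with uvicorn
    webserver = build_server(runner)
    uvicorn.run(webserver, host="127.0.0.1", port=8080)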
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.get_subflow_schemas","title":"get_subflow_schemas
async
","text":"Load available subflow schemas by filtering for only those subflows in the deployment entrypoint's import space.
Source code inprefect/runner/server.py
async def get_subflow_schemas(runner: \"Runner\") -> Dict[str, Dict]:\n \"\"\"\n Load available subflow schemas by filtering for only those subflows in the\n deployment entrypoint's import space.\n \"\"\"\n schemas = {}\n async with get_client() as client:\n for deployment_id in runner._deployment_ids:\n deployment = await client.read_deployment(deployment_id)\n if deployment.entrypoint is None:\n continue\n\n script = deployment.entrypoint.split(\":\")[0]\n subflows = load_flows_from_script(script)\n for flow in subflows:\n schemas[flow.name] = flow.parameters.dict()\n\n return schemas\n
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.start_webserver","title":"start_webserver
","text":"Run a FastAPI server for a runner.
Parameters:
Name Type Description Default
runner
Runner
the runner this server interacts with and monitors
required
log_level
str
the log level to use for the server
None
Source code in prefect/runner/server.py
def start_webserver(runner: \"Runner\", log_level: Optional[str] = None) -> None:\n \"\"\"\n Run a FastAPI server for a runner.\n\n Args:\n runner (Runner): the runner this server interacts with and monitors\n log_level (str): the log level to use for the server\n \"\"\"\n host = PREFECT_RUNNER_SERVER_HOST.value()\n port = PREFECT_RUNNER_SERVER_PORT.value()\n log_level = log_level or PREFECT_RUNNER_SERVER_LOG_LEVEL.value()\n webserver = build_server(runner)\n uvicorn.run(webserver, host=host, port=port, log_level=log_level)\n
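`start_webserver` blocks the calling thread; the runner itself runs it on a daemon thread, roughly as in this sketch (flow and runner names are illustrative):
import threading
from functools import partial

from prefect import flow, Runner
from prefect.runner.server import start_webserver

@flow
def my_flow():
    print("hello")

if __name__ == "__main__":
    runner = Runner(name="my-runner")
    runner.add_flow(my_flow)
    # Serve the runner's API without blocking the main thread
    threading.Thread(
        target=partial(start_webserver, runner=runner, log_level="info"),
        daemon=True,
    ).start()
    runner.start()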
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/storage/","title":"storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage","title":"prefect.runner.storage
","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.BlockStorageAdapter","title":"BlockStorageAdapter
","text":"A storage adapter for a storage block object to allow it to be used as a runner storage object.
Source code inprefect/runner/storage.py
class BlockStorageAdapter:\n    \"\"\"\n    A storage adapter for a storage block object to allow it to be used as a\n    runner storage object.\n    \"\"\"\n\n    def __init__(\n        self,\n        block: Union[ReadableDeploymentStorage, WritableDeploymentStorage],\n        pull_interval: Optional[int] = 60,\n    ):\n        self._block = block\n        self._pull_interval = pull_interval\n        self._storage_base_path = Path.cwd()\n        if not isinstance(block, Block):\n            raise TypeError(\n                f\"Expected a block object. Received a {type(block).__name__!r} object.\"\n            )\n        if not hasattr(block, \"get_directory\"):\n            raise ValueError(\"Provided block must have a `get_directory` method.\")\n\n        self._name = (\n            f\"{block.get_block_type_slug()}-{block._block_document_name}\"\n            if block._block_document_name\n            else str(uuid4())\n        )\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return self._pull_interval\n\n    @property\n    def destination(self) -> Path:\n        return self._storage_base_path / self._name\n\n    async def pull_code(self):\n        if not self.destination.exists():\n            self.destination.mkdir(parents=True, exist_ok=True)\n        await self._block.get_directory(local_path=str(self.destination))\n\n    def to_pull_step(self) -> dict:\n        # Give blocks the chance to implement their own pull step\n        if hasattr(self._block, \"get_pull_step\"):\n            return self._block.get_pull_step()\n        else:\n            if not self._block._block_document_name:\n                raise BlockNotSavedError(\n                    \"Block must be saved with `.save()` before it can be converted to a\"\n                    \" pull step.\"\n                )\n            return {\n                \"prefect.deployments.steps.pull_with_block\": {\n                    \"block_type_slug\": self._block.get_block_type_slug(),\n                    \"block_document_name\": self._block._block_document_name,\n                }\n            }\n\n    def __eq__(self, __value) -> bool:\n        if isinstance(__value, BlockStorageAdapter):\n            return self._block == __value._block\n        return False\n
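A minimal sketch using a local filesystem block (the basepath is illustrative):
import asyncio

from prefect.filesystems import LocalFileSystem
from prefect.runner.storage import BlockStorageAdapter

# Any deployment storage block exposing `get_directory` can be adapted
block = LocalFileSystem(basepath="/tmp/flows")
storage = BlockStorageAdapter(block, pull_interval=60)

# Copies the block's contents into the adapter's destination directory
asyncio.run(storage.pull_code())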
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository","title":"GitRepository
","text":"Pulls the contents of a git repository to the local filesystem.
Parameters:
Name Type Description Defaulturl
str
The URL of the git repository to pull from
requiredcredentials
Union[GitCredentials, Block, Dict[str, Any], None]
A dictionary of credentials to use when pulling from the repository. If a username is provided, an access token must also be provided.
None
name
Optional[str]
The name of the repository. If not provided, the name will be inferred from the repository URL.
None
branch
Optional[str]
The branch to pull from. Defaults to \"main\".
None
pull_interval
Optional[int]
The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.
60
Examples:
Pull the contents of a private git repository to the local filesystem:
from prefect.runner.storage import GitRepository\n\nstorage = GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class GitRepository:\n \"\"\"\n Pulls the contents of a git repository to the local filesystem.\n\n Parameters:\n url: The URL of the git repository to pull from\n credentials: A dictionary of credentials to use when pulling from the\n repository. If a username is provided, an access token must also be\n provided.\n name: The name of the repository. If not provided, the name will be\n inferred from the repository URL.\n branch: The branch to pull from. Defaults to \"main\".\n pull_interval: The interval in seconds at which to pull contents from\n remote storage to local storage. If None, remote storage will perform\n a one-time sync.\n\n Examples:\n Pull the contents of a private git repository to the local filesystem:\n\n ```python\n from prefect.runner.storage import GitRepository\n\n storage = GitRepository(\n url=\"https://github.com/org/repo.git\",\n credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n )\n\n await storage.pull_code()\n ```\n \"\"\"\n\n def __init__(\n self,\n url: str,\n credentials: Union[GitCredentials, Block, Dict[str, Any], None] = None,\n name: Optional[str] = None,\n branch: Optional[str] = None,\n include_submodules: bool = False,\n pull_interval: Optional[int] = 60,\n ):\n if credentials is None:\n credentials = {}\n\n if (\n isinstance(credentials, dict)\n and credentials.get(\"username\")\n and not (credentials.get(\"access_token\") or credentials.get(\"password\"))\n ):\n raise ValueError(\n \"If a username is provided, an access token or password must also be\"\n \" provided.\"\n )\n self._url = url\n self._branch = branch\n self._credentials = credentials\n self._include_submodules = include_submodules\n repo_name = urlparse(url).path.split(\"/\")[-1].replace(\".git\", \"\")\n default_name = f\"{repo_name}-{branch}\" if branch else repo_name\n self._name = name or default_name\n self._logger = get_logger(f\"runner.storage.git-repository.{self._name}\")\n self._storage_base_path = Path.cwd()\n self._pull_interval = pull_interval\n\n @property\n def destination(self) -> Path:\n return self._storage_base_path / self._name\n\n def set_base_path(self, path: Path):\n self._storage_base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n return self._pull_interval\n\n @property\n def _repository_url_with_credentials(self) -> str:\n if not self._credentials:\n return self._url\n\n url_components = urlparse(self._url)\n\n credentials = (\n self._credentials.dict()\n if isinstance(self._credentials, Block)\n else deepcopy(self._credentials)\n )\n\n for k, v in credentials.items():\n if isinstance(v, Secret):\n credentials[k] = v.get()\n elif isinstance(v, SecretStr):\n credentials[k] = v.get_secret_value()\n\n formatted_credentials = _format_token_from_credentials(\n urlparse(self._url).netloc, credentials\n )\n if url_components.scheme == \"https\" and formatted_credentials is not None:\n updated_components = url_components._replace(\n netloc=f\"{formatted_credentials}@{url_components.netloc}\"\n )\n repository_url = urlunparse(updated_components)\n else:\n repository_url = self._url\n\n return repository_url\n\n async def pull_code(self):\n \"\"\"\n Pulls the contents of the configured repository to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from repository '%s' to '%s'...\",\n self._name,\n self.destination,\n )\n\n git_dir = self.destination / \".git\"\n\n if git_dir.exists():\n # Check if the existing repository matches the configured repository\n result = await run_process(\n [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n cwd=str(self.destination),\n )\n existing_repo_url = None\n if result.stdout is not None:\n existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n if existing_repo_url != self._url:\n raise ValueError(\n f\"The existing repository at {str(self.destination)} \"\n f\"does not match the configured repository {self._url}\"\n )\n\n self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n # Update the existing repository\n cmd = [\"git\", \"pull\", \"origin\"]\n if self._branch:\n cmd += [self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n cmd += [\"--depth\", \"1\"]\n try:\n await run_process(cmd, cwd=self.destination)\n self._logger.debug(\"Successfully pulled latest changes\")\n except subprocess.CalledProcessError as exc:\n self._logger.error(\n f\"Failed to pull latest changes with exit code {exc.returncode}\"\n )\n shutil.rmtree(self.destination)\n await self._clone_repo()\n\n else:\n await self._clone_repo()\n\n async def _clone_repo(self):\n \"\"\"\n Clones the repository into the local destination.\n \"\"\"\n self._logger.debug(\"Cloning repository %s\", self._url)\n\n repository_url = self._repository_url_with_credentials\n\n cmd = [\n \"git\",\n \"clone\",\n repository_url,\n ]\n if self._branch:\n cmd += [\"--branch\", self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n\n # Limit git history and set path to clone to\n cmd += [\"--depth\", \"1\", str(self.destination)]\n\n try:\n await run_process(cmd)\n except subprocess.CalledProcessError as exc:\n # Hide the command used to avoid leaking the access token\n exc_chain = None if self._credentials else exc\n raise RuntimeError(\n f\"Failed to clone repository {self._url!r} with exit code\"\n f\" {exc.returncode}.\"\n ) from exc_chain\n\n def __eq__(self, __value) -> bool:\n if isinstance(__value, GitRepository):\n return (\n self._url == __value._url\n and self._branch == __value._branch\n and self._name == __value._name\n )\n return False\n\n def __repr__(self) -> str:\n return (\n f\"GitRepository(name={self._name!r} repository={self._url!r},\"\n f\" branch={self._branch!r})\"\n )\n\n def to_pull_step(self) -> Dict:\n pull_step = {\n \"prefect.deployments.steps.git_clone\": {\n \"repository\": self._url,\n \"branch\": self._branch,\n }\n }\n if isinstance(self._credentials, Block):\n pull_step[\"prefect.deployments.steps.git_clone\"][\n \"credentials\"\n ] = f\"{{{{ {self._credentials.get_block_placeholder()} }}}}\"\n elif isinstance(self._credentials, dict):\n if isinstance(self._credentials.get(\"access_token\"), Secret):\n pull_step[\"prefect.deployments.steps.git_clone\"][\"credentials\"] = {\n **self._credentials,\n \"access_token\": (\n \"{{\"\n f\" {self._credentials['access_token'].get_block_placeholder()} }}}}\"\n ),\n }\n elif self._credentials.get(\"access_token\") is not None:\n raise ValueError(\n \"Please save your access token as a Secret block before converting\"\n \" this storage object to a pull step.\"\n )\n\n return pull_step\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository.pull_code","title":"pull_code
async
","text":"Pulls the contents of the configured repository to the local filesystem.
Source code in prefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls the contents of the configured repository to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from repository '%s' to '%s'...\",\n self._name,\n self.destination,\n )\n\n git_dir = self.destination / \".git\"\n\n if git_dir.exists():\n # Check if the existing repository matches the configured repository\n result = await run_process(\n [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n cwd=str(self.destination),\n )\n existing_repo_url = None\n if result.stdout is not None:\n existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n if existing_repo_url != self._url:\n raise ValueError(\n f\"The existing repository at {str(self.destination)} \"\n f\"does not match the configured repository {self._url}\"\n )\n\n self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n # Update the existing repository\n cmd = [\"git\", \"pull\", \"origin\"]\n if self._branch:\n cmd += [self._branch]\n if self._include_submodules:\n cmd += [\"--recurse-submodules\"]\n cmd += [\"--depth\", \"1\"]\n try:\n await run_process(cmd, cwd=self.destination)\n self._logger.debug(\"Successfully pulled latest changes\")\n except subprocess.CalledProcessError as exc:\n self._logger.error(\n f\"Failed to pull latest changes with exit code {exc.returncode}\"\n )\n shutil.rmtree(self.destination)\n await self._clone_repo()\n\n else:\n await self._clone_repo()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage","title":"RemoteStorage
","text":"Pulls the contents of a remote storage location to the local filesystem.
Parameters:
Name Type Description Defaulturl
str
The URL of the remote storage location to pull from. Supports fsspec
URLs. Some protocols may require an additional fsspec
dependency to be installed. Refer to the fsspec
docs for more details.
pull_interval
Optional[int]
The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.
60
**settings
Any
Any additional settings to pass to the fsspec
filesystem class.
{}
Examples:
Pull the contents of a remote storage location to the local filesystem:
from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\nawait storage.pull_code()\n
Pull the contents of a remote storage location to the local filesystem with additional settings:
from prefect.runner.storage import RemoteStorage\nfrom prefect.blocks.system import Secret\n\nstorage = RemoteStorage(\n url=\"s3://my-bucket/my-folder\",\n # Use Secret blocks to keep credentials out of your code\n key=Secret.load(\"my-aws-access-key\"),\n secret=Secret.load(\"my-aws-secret-key\"),\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class RemoteStorage:\n \"\"\"\n Pulls the contents of a remote storage location to the local filesystem.\n\n Parameters:\n url: The URL of the remote storage location to pull from. Supports\n `fsspec` URLs. Some protocols may require an additional `fsspec`\n dependency to be installed. Refer to the\n [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n for more details.\n pull_interval: The interval in seconds at which to pull contents from\n remote storage to local storage. If None, remote storage will perform\n a one-time sync.\n **settings: Any additional settings to pass to the `fsspec` filesystem class.\n\n Examples:\n Pull the contents of a remote storage location to the local filesystem:\n\n ```python\n from prefect.runner.storage import RemoteStorage\n\n storage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\n await storage.pull_code()\n ```\n\n Pull the contents of a remote storage location to the local filesystem\n with additional settings:\n\n ```python\n from prefect.runner.storage import RemoteStorage\n from prefect.blocks.system import Secret\n\n storage = RemoteStorage(\n url=\"s3://my-bucket/my-folder\",\n # Use Secret blocks to keep credentials out of your code\n key=Secret.load(\"my-aws-access-key\"),\n secret=Secret.load(\"my-aws-secret-key\"),\n )\n\n await storage.pull_code()\n ```\n \"\"\"\n\n def __init__(\n self,\n url: str,\n pull_interval: Optional[int] = 60,\n **settings: Any,\n ):\n self._url = url\n self._settings = settings\n self._logger = get_logger(\"runner.storage.remote-storage\")\n self._storage_base_path = Path.cwd()\n self._pull_interval = pull_interval\n\n @staticmethod\n def _get_required_package_for_scheme(scheme: str) -> Optional[str]:\n # attempt to discover the package name for the given scheme\n # from fsspec's registry\n known_implementation = fsspec.registry.get(scheme)\n if known_implementation:\n return known_implementation.__module__.split(\".\")[0]\n # if we don't know the implementation, try to guess it for some\n # common schemes\n elif scheme == \"s3\":\n return \"s3fs\"\n elif scheme == \"gs\" or scheme == \"gcs\":\n return \"gcsfs\"\n elif scheme == \"abfs\" or scheme == \"az\":\n return \"adlfs\"\n else:\n return None\n\n @property\n def _filesystem(self) -> fsspec.AbstractFileSystem:\n scheme, _, _, _, _ = urlsplit(self._url)\n\n def replace_blocks_with_values(obj: Any) -> Any:\n if isinstance(obj, Block):\n if hasattr(obj, \"get\"):\n return obj.get()\n if hasattr(obj, \"value\"):\n return obj.value\n else:\n return obj.dict()\n return obj\n\n settings_with_block_values = visit_collection(\n self._settings, replace_blocks_with_values, return_data=True\n )\n\n return fsspec.filesystem(scheme, **settings_with_block_values)\n\n def set_base_path(self, path: Path):\n self._storage_base_path = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n \"\"\"\n The interval at which contents from remote storage should be pulled to\n local storage. If None, remote storage will perform a one-time sync.\n \"\"\"\n return self._pull_interval\n\n @property\n def destination(self) -> Path:\n \"\"\"\n The local file path to pull contents from remote storage to.\n \"\"\"\n return self._storage_base_path / self._remote_path\n\n @property\n def _remote_path(self) -> Path:\n \"\"\"\n The remote file path to pull contents from remote storage to.\n \"\"\"\n _, netloc, urlpath, _, _ = urlsplit(self._url)\n return Path(netloc) / Path(urlpath.lstrip(\"/\"))\n\n async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from remote storage '%s' to '%s'...\",\n self._url,\n self.destination,\n )\n\n if not self.destination.exists():\n self.destination.mkdir(parents=True, exist_ok=True)\n\n remote_path = str(self._remote_path) + \"/\"\n\n try:\n await from_async.wait_for_call_in_new_thread(\n create_call(\n self._filesystem.get,\n remote_path,\n str(self.destination),\n recursive=True,\n )\n )\n except Exception as exc:\n raise RuntimeError(\n f\"Failed to pull contents from remote storage {self._url!r} to\"\n f\" {self.destination!r}\"\n ) from exc\n\n def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n\n def replace_block_with_placeholder(obj: Any) -> Any:\n if isinstance(obj, Block):\n return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n return obj\n\n settings_with_placeholders = visit_collection(\n self._settings, replace_block_with_placeholder, return_data=True\n )\n required_package = self._get_required_package_for_scheme(\n urlparse(self._url).scheme\n )\n step = {\n \"prefect.deployments.steps.pull_from_remote_storage\": {\n \"url\": self._url,\n **settings_with_placeholders,\n }\n }\n if required_package:\n step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n \"requires\"\n ] = required_package\n return step\n\n def __eq__(self, __value) -> bool:\n \"\"\"\n Equality check for runner storage objects.\n \"\"\"\n if isinstance(__value, RemoteStorage):\n return self._url == __value._url and self._settings == __value._settings\n return False\n\n def __repr__(self) -> str:\n return f\"RemoteStorage(url={self._url!r})\"\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.destination","title":"destination: Path
property
","text":"The local file path to pull contents from remote storage to.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_interval","title":"pull_interval: Optional[int]
property
","text":"The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_code","title":"pull_code
async
","text":"Pulls contents from remote storage to the local filesystem.
Source code in prefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n self._logger.debug(\n \"Pulling contents from remote storage '%s' to '%s'...\",\n self._url,\n self.destination,\n )\n\n if not self.destination.exists():\n self.destination.mkdir(parents=True, exist_ok=True)\n\n remote_path = str(self._remote_path) + \"/\"\n\n try:\n await from_async.wait_for_call_in_new_thread(\n create_call(\n self._filesystem.get,\n remote_path,\n str(self.destination),\n recursive=True,\n )\n )\n except Exception as exc:\n raise RuntimeError(\n f\"Failed to pull contents from remote storage {self._url!r} to\"\n f\" {self.destination!r}\"\n ) from exc\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.to_pull_step","title":"to_pull_step
","text":"Returns a dictionary representation of the storage object that can be used as a deployment pull step.
Source code in prefect/runner/storage.py
def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n\n def replace_block_with_placeholder(obj: Any) -> Any:\n if isinstance(obj, Block):\n return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n return obj\n\n settings_with_placeholders = visit_collection(\n self._settings, replace_block_with_placeholder, return_data=True\n )\n required_package = self._get_required_package_for_scheme(\n urlparse(self._url).scheme\n )\n step = {\n \"prefect.deployments.steps.pull_from_remote_storage\": {\n \"url\": self._url,\n **settings_with_placeholders,\n }\n }\n if required_package:\n step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n \"requires\"\n ] = required_package\n return step\n
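Based on the source above, an s3:// URL would likely produce a pull step along these lines (the requires value depends on the fsspec implementation discovered for the scheme):
storage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\nstorage.to_pull_step()\n# {\n#     \"prefect.deployments.steps.pull_from_remote_storage\": {\n#         \"url\": \"s3://my-bucket/my-folder\",\n#         \"requires\": \"s3fs\",\n#     }\n# }\n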
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage","title":"RunnerStorage
","text":" Bases: Protocol
A storage interface for a runner to use to retrieve remotely stored flow code.
Source code in prefect/runner/storage.py
@runtime_checkable\nclass RunnerStorage(Protocol):\n \"\"\"\n A storage interface for a runner to use to retrieve\n remotely stored flow code.\n \"\"\"\n\n def set_base_path(self, path: Path):\n \"\"\"\n Sets the base path to use when pulling contents from remote storage to\n local storage.\n \"\"\"\n ...\n\n @property\n def pull_interval(self) -> Optional[int]:\n \"\"\"\n The interval at which contents from remote storage should be pulled to\n local storage. If None, remote storage will perform a one-time sync.\n \"\"\"\n ...\n\n @property\n def destination(self) -> Path:\n \"\"\"\n The local file path to pull contents from remote storage to.\n \"\"\"\n ...\n\n async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n ...\n\n def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n ...\n\n def __eq__(self, __value) -> bool:\n \"\"\"\n Equality check for runner storage objects.\n \"\"\"\n ...\n
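Because the protocol is runtime_checkable, any object with these members can act as runner storage. A minimal illustrative sketch (not part of Prefect) that mirrors a local directory and reuses the set_working_directory pull step:
import shutil\nfrom pathlib import Path\nfrom typing import Optional\n\nclass LocalMirror:\n \"\"\"Hypothetical storage that copies flow code from a local directory.\"\"\"\n\n def __init__(self, source: Path):\n self._source = source\n self._base = Path.cwd()\n\n def set_base_path(self, path: Path):\n self._base = path\n\n @property\n def pull_interval(self) -> Optional[int]:\n return 60\n\n @property\n def destination(self) -> Path:\n return self._base / self._source.name\n\n async def pull_code(self):\n shutil.copytree(self._source, self.destination, dirs_exist_ok=True)\n\n def to_pull_step(self) -> dict:\n return {\"prefect.deployments.steps.set_working_directory\": {\"directory\": str(self._source)}}\n\n def __eq__(self, other) -> bool:\n return isinstance(other, LocalMirror) and self._source == other._source\n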
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.destination","title":"destination: Path
property
","text":"The local file path to pull contents from remote storage to.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_interval","title":"pull_interval: Optional[int]
property
","text":"The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_code","title":"pull_code
async
","text":"Pulls contents from remote storage to the local filesystem.
Source code in prefect/runner/storage.py
async def pull_code(self):\n \"\"\"\n Pulls contents from remote storage to the local filesystem.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.set_base_path","title":"set_base_path
","text":"Sets the base path to use when pulling contents from remote storage to local storage.
Source code in prefect/runner/storage.py
def set_base_path(self, path: Path):\n \"\"\"\n Sets the base path to use when pulling contents from remote storage to\n local storage.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.to_pull_step","title":"to_pull_step
","text":"Returns a dictionary representation of the storage object that can be used as a deployment pull step.
Source code in prefect/runner/storage.py
def to_pull_step(self) -> dict:\n \"\"\"\n Returns a dictionary representation of the storage object that can be\n used as a deployment pull step.\n \"\"\"\n ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.create_storage_from_url","title":"create_storage_from_url
","text":"Creates a storage object from a URL.
Parameters:
Name Type Description Defaulturl
str
The URL to create a storage object from. Supports git and fsspec
URLs.
pull_interval
Optional[int]
The interval at which to pull contents from remote storage to local storage
60
Returns:
Name Type DescriptionRunnerStorage
RunnerStorage
A runner storage compatible object
Source code inprefect/runner/storage.py
def create_storage_from_url(\n url: str, pull_interval: Optional[int] = 60\n) -> RunnerStorage:\n \"\"\"\n Creates a storage object from a URL.\n\n Args:\n url: The URL to create a storage object from. Supports git and `fsspec`\n URLs.\n pull_interval: The interval at which to pull contents from remote storage to\n local storage\n\n Returns:\n RunnerStorage: A runner storage compatible object\n \"\"\"\n parsed_url = urlparse(url)\n if parsed_url.scheme == \"git\" or parsed_url.path.endswith(\".git\"):\n return GitRepository(url=url, pull_interval=pull_interval)\n else:\n return RemoteStorage(url=url, pull_interval=pull_interval)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/utils/","title":"utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils","title":"prefect.runner.utils
","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.inject_schemas_into_openapi","title":"inject_schemas_into_openapi
","text":"Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.
Parameters:
Name Type Description Defaultwebserver
FastAPI
The FastAPI instance representing the webserver.
requiredschemas_to_inject
Dict[str, Any]
A dictionary of OpenAPI schemas to integrate.
requiredReturns:
Type DescriptionDict[str, Any]
The augmented OpenAPI schema dictionary.
Source code inprefect/runner/utils.py
def inject_schemas_into_openapi(\n webserver: FastAPI, schemas_to_inject: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.\n\n Args:\n webserver: The FastAPI instance representing the webserver.\n schemas_to_inject: A dictionary of OpenAPI schemas to integrate.\n\n Returns:\n The augmented OpenAPI schema dictionary.\n \"\"\"\n openapi_schema = get_openapi(\n title=\"FastAPI Prefect Runner\", version=PREFECT_VERSION, routes=webserver.routes\n )\n\n augmented_schema = merge_definitions(schemas_to_inject, openapi_schema)\n return update_refs_to_components(augmented_schema)\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.merge_definitions","title":"merge_definitions
","text":"Integrates definitions from injected schemas into the OpenAPI components.
Parameters:
Name Type Description Defaultinjected_schemas
Dict[str, Any]
A dictionary of deployment-specific schemas.
requiredopenapi_schema
Dict[str, Any]
The base OpenAPI schema to update.
required Source code in prefect/runner/utils.py
def merge_definitions(\n injected_schemas: Dict[str, Any], openapi_schema: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Integrates definitions from injected schemas into the OpenAPI components.\n\n Args:\n injected_schemas: A dictionary of deployment-specific schemas.\n openapi_schema: The base OpenAPI schema to update.\n \"\"\"\n openapi_schema_copy = deepcopy(openapi_schema)\n components = openapi_schema_copy.setdefault(\"components\", {}).setdefault(\n \"schemas\", {}\n )\n for definitions in injected_schemas.values():\n if \"definitions\" in definitions:\n for def_name, def_schema in definitions[\"definitions\"].items():\n def_schema_copy = deepcopy(def_schema)\n update_refs_in_schema(def_schema_copy, \"#/components/schemas/\")\n components[def_name] = def_schema_copy\n return openapi_schema_copy\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_in_schema","title":"update_refs_in_schema
","text":"Recursively replaces $ref
with a new reference base in a schema item.
Parameters:
Name Type Description Defaultschema_item
Any
A schema or part of a schema to update references in.
requirednew_ref
str
The new base string to replace in $ref
values.
prefect/runner/utils.py
def update_refs_in_schema(schema_item: Any, new_ref: str) -> None:\n \"\"\"\n Recursively replaces `$ref` with a new reference base in a schema item.\n\n Args:\n schema_item: A schema or part of a schema to update references in.\n new_ref: The new base string to replace in `$ref` values.\n \"\"\"\n if isinstance(schema_item, dict):\n if \"$ref\" in schema_item:\n schema_item[\"$ref\"] = schema_item[\"$ref\"].replace(\"#/definitions/\", new_ref)\n for value in schema_item.values():\n update_refs_in_schema(value, new_ref)\n elif isinstance(schema_item, list):\n for item in schema_item:\n update_refs_in_schema(item, new_ref)\n
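A small illustration of the in-place rewrite this performs (the schema is invented for demonstration):
schema = {\"items\": {\"$ref\": \"#/definitions/FlowParams\"}}\nupdate_refs_in_schema(schema, \"#/components/schemas/\")\n# schema is now {\"items\": {\"$ref\": \"#/components/schemas/FlowParams\"}}\n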
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_to_components","title":"update_refs_to_components
","text":"Updates all $ref
fields in the OpenAPI schema to reference the components section.
Parameters:
Name Type Description Defaultopenapi_schema
Dict[str, Any]
The OpenAPI schema to modify $ref
fields in.
prefect/runner/utils.py
def update_refs_to_components(openapi_schema: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Updates all `$ref` fields in the OpenAPI schema to reference the components section.\n\n Args:\n openapi_schema: The OpenAPI schema to modify `$ref` fields in.\n \"\"\"\n for path_item in openapi_schema.get(\"paths\", {}).values():\n for operation in path_item.values():\n schema = (\n operation.get(\"requestBody\", {})\n .get(\"content\", {})\n .get(\"application/json\", {})\n .get(\"schema\", {})\n )\n update_refs_in_schema(schema, \"#/components/schemas/\")\n\n for definition in openapi_schema.get(\"definitions\", {}).values():\n update_refs_in_schema(definition, \"#/components/schemas/\")\n\n return openapi_schema\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runtime/deployment/","title":"deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment
","text":"Access attributes of the current deployment run dynamically.
Note that if a deployment is not currently being run, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT
.
from prefect.runtime import deployment\n\ndef get_task_runner():\n task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n return DummyTaskRunner(task_runner_specs=task_runner_config)\n
Available attributes id
: the deployment's unique IDname
: the deployment's nameversion
: the deployment's versionflow_run_id
: the current flow run ID for this deploymentparameters
: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this runprefect.runtime.flow_run
","text":"Access attributes of the current flow run dynamically.
Note that if a flow run cannot be discovered, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN
.
id
: the flow run's unique IDtags
: the flow run's set of tagsscheduled_start_time
: the flow run's expected scheduled start time; defaults to now if not presentname
: the name of the flow runflow_name
: the name of the flowparameters
: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the runparent_flow_run_id
: the ID of the flow run that triggered this run, if anyparent_deployment_id
: the ID of the deployment that triggered this run, if anyrun_count
: the number of times this flow run has been runprefect.runtime.task_run
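For example, these attributes can be read inside a running flow (a minimal sketch; outside a flow run they fall back to empty values):
from prefect import flow, get_run_logger\nfrom prefect.runtime import flow_run\n\n@flow\ndef my_flow():\n logger = get_run_logger()\n logger.info(\"Running %s (%s)\", flow_run.name, flow_run.id)\n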
","text":"Access attributes of the current task run dynamically.
Note that if a task run cannot be discovered, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN
.
id
: the task run's unique IDname
: the name of the task runtags
: the task run's set of tagsparameters
: the parameters the task was called withrun_count
: the number of times this task run has been runtask_name
: the name of the taskprefect.utilities.annotations
","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation
","text":" Bases: namedtuple('BaseAnnotation', field_names='value')
, ABC
, Generic[T]
Base class for Prefect annotation types.
Inherits from namedtuple
for unpacking support in other tools.
prefect/utilities/annotations.py
class BaseAnnotation(\n namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n \"\"\"\n Base class for Prefect annotation types.\n\n Inherits from `namedtuple` for unpacking support in other tools.\n \"\"\"\n\n def unwrap(self) -> T:\n if sys.version_info < (3, 8):\n # cannot simply return self.value due to recursion error in Python 3.7\n # also _asdict does not follow convention; it's not an internal method\n # https://stackoverflow.com/a/26180604\n return self._asdict()[\"value\"]\n else:\n return self.value\n\n def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n return type(self)(value)\n\n def __eq__(self, other: object) -> bool:\n if not type(self) == type(other):\n return False\n return self.unwrap() == other.unwrap()\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}({self.value!r})\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet
","text":"Singleton to distinguish None
from a value that is not provided by the user.
prefect/utilities/annotations.py
class NotSet:\n \"\"\"\n Singleton to distinguish `None` from a value that is not provided by the user.\n \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure
","text":" Bases: BaseAnnotation[T]
Wrapper for states or futures.
Indicates that the upstream run for this input can be failed.
Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.
If the input is from a failed run, the attached exception will be passed to your function.
Source code in prefect/utilities/annotations.py
class allow_failure(BaseAnnotation[T]):\n \"\"\"\n Wrapper for states or futures.\n\n Indicates that the upstream run for this input can be failed.\n\n Generally, Prefect will not allow a downstream run to start if any of its inputs\n are failed. This annotation allows you to opt into receiving a failed input\n downstream.\n\n If the input is from a failed run, the attached exception will be passed to your\n function.\n \"\"\"\n
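A hedged usage sketch (task names are illustrative): wrapping the upstream future lets the downstream task run and receive the exception:
from prefect import flow, task\nfrom prefect.utilities.annotations import allow_failure\n\n@task\ndef extract():\n raise RuntimeError(\"upstream failed\")\n\n@task\ndef cleanup(upstream_result):\n # Receives the attached exception when the upstream run failed\n print(f\"upstream gave: {upstream_result!r}\")\n\n@flow\ndef pipeline():\n future = extract.submit()\n cleanup.submit(allow_failure(future))\n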
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote
","text":" Bases: BaseAnnotation[T]
Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.
quote will also instruct Prefect to ignore introspection of the wrapped object when passed as a flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, and each element needs to be visited. This will disable task dependency tracking for the wrapped object, but will likely increase performance.
@task\ndef my_task(df):\n ...\n\n@flow\ndef my_flow():\n my_task(quote(df))\n
Source code in prefect/utilities/annotations.py
class quote(BaseAnnotation[T]):\n \"\"\"\n Simple wrapper to mark an expression as a different type so it will not be coerced\n by Prefect. For example, if you want to return a state from a flow without having\n the flow assume that state.\n\n quote will also instruct prefect to ignore introspection of the wrapped object\n when passed as flow or task parameter. Parameter introspection can be a\n significant performance hit when the object is a large collection,\n e.g. a large dictionary or DataFrame, and each element needs to be visited. This\n will disable task dependency tracking for the wrapped object, but likely will\n increase performance.\n\n ```\n @task\n def my_task(df):\n ...\n\n @flow\n def my_flow():\n my_task(quote(df))\n ```\n \"\"\"\n\n def unquote(self) -> T:\n return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped
","text":" Bases: BaseAnnotation[T]
Wrapper for iterables.
Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.
Source code in prefect/utilities/annotations.py
class unmapped(BaseAnnotation[T]):\n \"\"\"\n Wrapper for iterables.\n\n Indicates that this input should be sent as-is to all runs created during a mapping\n operation instead of being split.\n \"\"\"\n\n def __getitem__(self, _) -> T:\n # Internally, this acts as an infinite array where all items are the same value\n return self.unwrap()\n
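A sketch of the typical pairing with Task.map, where one argument should be sent whole rather than iterated (names are illustrative):
from prefect import flow, task\nfrom prefect.utilities.annotations import unmapped\n\n@task\ndef add(x, offset):\n return x + offset\n\n@flow\ndef my_flow():\n # offset is sent as-is to every mapped run instead of being split\n return add.map([1, 2, 3], unmapped(10))\n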
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils
","text":"Utilities for interoperability with async functions and workers from various contexts.
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete
","text":" Bases: RuntimeError
Used to indicate retrieving gather results before completion
Source code in prefect/utilities/asyncutils.py
class GatherIncomplete(RuntimeError):\n \"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup
","text":" Bases: TaskGroup
A task group that gathers results.
AnyIO does not include support for gather
. This class extends the TaskGroup
interface to allow simple gathering.
See https://github.com/agronholm/anyio/issues/100
This class should be instantiated with create_gather_task_group
.
prefect/utilities/asyncutils.py
class GatherTaskGroup(anyio.abc.TaskGroup):\n \"\"\"\n A task group that gathers results.\n\n AnyIO does not include support for `gather`. This class extends the `TaskGroup`\n interface to allow simple gathering.\n\n See https://github.com/agronholm/anyio/issues/100\n\n This class should be instantiated with `create_gather_task_group`.\n \"\"\"\n\n def __init__(self, task_group: anyio.abc.TaskGroup):\n self._results: Dict[UUID, Any] = {}\n # The concrete task group implementation to use\n self._task_group: anyio.abc.TaskGroup = task_group\n\n async def _run_and_store(self, key, fn, args):\n self._results[key] = await fn(*args)\n\n def start_soon(self, fn, *args) -> UUID:\n key = uuid4()\n # Put a placeholder in-case the result is retrieved earlier\n self._results[key] = GatherIncomplete\n self._task_group.start_soon(self._run_and_store, key, fn, args)\n return key\n\n async def start(self, fn, *args):\n \"\"\"\n Since `start` returns the result of `task_status.started()` but here we must\n return the key instead, we just won't support this method for now.\n \"\"\"\n raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n def get_result(self, key: UUID) -> Any:\n result = self._results[key]\n if result is GatherIncomplete:\n raise GatherIncomplete(\n \"Task is not complete. \"\n \"Results should not be retrieved until the task group exits.\"\n )\n return result\n\n async def __aenter__(self):\n await self._task_group.__aenter__()\n return self\n\n async def __aexit__(self, *tb):\n try:\n retval = await self._task_group.__aexit__(*tb)\n return retval\n finally:\n del self._task_group\n
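A minimal usage sketch, assuming the create_gather_task_group factory documented below; fetch is a stand-in coroutine:
from prefect.utilities.asyncutils import create_gather_task_group\n\nasync def fetch(n):\n return n * 2\n\nasync def main():\n async with create_gather_task_group() as tg:\n keys = [tg.start_soon(fetch, n) for n in (1, 2, 3)]\n # Results are only safe to read after the task group exits\n return [tg.get_result(key) for key in keys]\n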
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start
async
","text":"Since start
returns the result of task_status.started()
but here we must return the key instead, we just won't support this method for now.
prefect/utilities/asyncutils.py
async def start(self, fn, *args):\n \"\"\"\n Since `start` returns the result of `task_status.started()` but here we must\n return the key instead, we just won't support this method for now.\n \"\"\"\n raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback
async
","text":"Adds a callback to the given callable on event loop closure. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down.
Requires use of asyncio.run()
which waits for async generator shutdown by default or explicit call of asyncio.shutdown_asyncgens()
. If the application is entered with asyncio.run_until_complete()
and the user calls asyncio.close()
without the generator shutdown call, this will not trigger callbacks.
asyncio does not provide any other way to clean up a resource when the event loop is about to close.
Source code in prefect/utilities/asyncutils.py
async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n \"\"\"\n Adds a callback to the given callable on event loop closure. The callable must be\n a coroutine function. It will be awaited when the current event loop is shutting\n down.\n\n Requires use of `asyncio.run()` which waits for async generator shutdown by\n default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n is entered with `asyncio.run_until_complete()` and the user calls\n `asyncio.close()` without the generator shutdown call, this will not trigger\n callbacks.\n\n asyncio does not provide _any_ other way to clean up a resource when the event\n loop is about to close.\n \"\"\"\n\n async def on_shutdown(key):\n # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n # We hold a reference to it so as to preserve it, at least for the lifetime of\n # this coroutine. See the issue below for the initial report/discussion:\n # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n _ = EVENT_LOOP_GC_REFS\n try:\n yield\n except GeneratorExit:\n await coroutine_fn()\n # Remove self from the garbage collection set\n EVENT_LOOP_GC_REFS.pop(key)\n\n # Create the iterator and store it in a global variable so it is not garbage\n # collected. If the iterator is garbage collected before the event loop closes, the\n # callback will not run. Since this function does not know the scope of the event\n # loop that is calling it, a reference with global scope is necessary to ensure\n # garbage collection does not occur until after event loop closure.\n key = id(on_shutdown)\n EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n # Begin iterating so it will be cleaned up as an incomplete generator\n try:\n await EVENT_LOOP_GC_REFS[key].__anext__()\n # There is a poorly understood edge case we've seen in CI where the key is\n # removed from the dict before we begin generator iteration.\n except KeyError:\n logger.warn(\"The event loop shutdown callback was not properly registered. \")\n pass\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group
","text":"Create a new task group that gathers results
Source code in prefect/utilities/asyncutils.py
def create_gather_task_group() -> GatherTaskGroup:\n \"\"\"Create a new task group that gathers results\"\"\"\n # This function matches the AnyIO API which uses callables since the concrete\n # task group class depends on the async library being used and cannot be\n # determined until runtime\n return GatherTaskGroup(anyio.create_task_group())\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather
async
","text":"Run calls concurrently and gather their results.
Unlike asyncio.gather
this expects to receive callables not coroutines. This matches anyio
semantics.
prefect/utilities/asyncutils.py
async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n \"\"\"\n Run calls concurrently and gather their results.\n\n Unlike `asyncio.gather` this expects to receive _callables_ not _coroutines_.\n This matches `anyio` semantics.\n \"\"\"\n keys = []\n async with create_gather_task_group() as tg:\n for call in calls:\n keys.append(tg.start_soon(call))\n return [tg.get_result(key) for key in keys]\n
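For example (functools.partial supplies the callables, since gather expects callables rather than coroutines):
from functools import partial\nfrom prefect.utilities.asyncutils import gather\n\nasync def double(n):\n return n * 2\n\nasync def main():\n # Each argument is a zero-argument callable returning a coroutine\n return await gather(partial(double, 1), partial(double, 2))\n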
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn
","text":"Returns True
if a function returns a coroutine.
See https://github.com/microsoft/pyright/issues/2142 for an example use
Source code in prefect/utilities/asyncutils.py
def is_async_fn(\n func: Union[Callable[P, R], Callable[P, Awaitable[R]]],\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n \"\"\"\n Returns `True` if a function returns a coroutine.\n\n See https://github.com/microsoft/pyright/issues/2142 for an example use\n \"\"\"\n while hasattr(func, \"__wrapped__\"):\n func = func.__wrapped__\n\n return inspect.iscoroutinefunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn
","text":"Returns True
if a function is an async generator.
prefect/utilities/asyncutils.py
def is_async_gen_fn(func):\n \"\"\"\n Returns `True` if a function is an async generator.\n \"\"\"\n while hasattr(func, \"__wrapped__\"):\n func = func.__wrapped__\n\n return inspect.isasyncgenfunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread
","text":"Raise an exception in a thread asynchronously.
This will not interrupt long-running system calls like sleep
or wait
.
prefect/utilities/asyncutils.py
def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n \"\"\"\n Raise an exception in a thread asynchronously.\n\n This will not interrupt long-running system calls like `sleep` or `wait`.\n \"\"\"\n ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n )\n if ret == 0:\n raise ValueError(\"Thread not found.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread
","text":"Runs an async function in the main thread's event loop, blocking the worker thread until completion
Source code in prefect/utilities/asyncutils.py
def run_async_from_worker_thread(\n __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs an async function in the main thread's event loop, blocking the worker\n thread until completion\n \"\"\"\n call = partial(__fn, *args, **kwargs)\n return anyio.from_thread.run(call)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread
async
","text":"Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked
Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep
.
prefect/utilities/asyncutils.py
async def run_sync_in_interruptible_worker_thread(\n __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs a sync function in a new interruptible worker thread so that the main\n thread's event loop is not blocked\n\n Unlike the anyio function, this performs best-effort cancellation of the\n thread using the C API. Cancellation will not interrupt system calls like\n `sleep`.\n \"\"\"\n\n class NotSet:\n pass\n\n thread: Thread = None\n result = NotSet\n event = asyncio.Event()\n loop = asyncio.get_running_loop()\n\n def capture_worker_thread_and_result():\n # Captures the worker thread that AnyIO is using to execute the function so\n # the main thread can perform actions on it\n nonlocal thread, result\n try:\n thread = threading.current_thread()\n result = __fn(*args, **kwargs)\n except BaseException as exc:\n result = exc\n raise\n finally:\n loop.call_soon_threadsafe(event.set)\n\n async def send_interrupt_to_thread():\n # This task waits until the result is returned from the thread, if cancellation\n # occurs during that time, we will raise the exception in the thread as well\n try:\n await event.wait()\n except anyio.get_cancelled_exc_class():\n # NOTE: We could send a SIGINT here which allow us to interrupt system\n # calls but the interrupt bubbles from the child thread into the main thread\n # and there is not a clear way to prevent it.\n raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n raise\n\n async with anyio.create_task_group() as tg:\n tg.start_soon(send_interrupt_to_thread)\n tg.start_soon(\n partial(\n anyio.to_thread.run_sync,\n capture_worker_thread_and_result,\n cancellable=True,\n limiter=get_thread_limiter(),\n )\n )\n\n assert result is not NotSet\n return result\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread
async
","text":"Runs a sync function in a new worker thread so that the main thread's event loop is not blocked
Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.
Note that cancellation of threads will not result in interrupted computation, the thread may continue running \u2014 the outcome will just be ignored.
Source code in prefect/utilities/asyncutils.py
async def run_sync_in_worker_thread(\n __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n \"\"\"\n Runs a sync function in a new worker thread so that the main thread's event loop\n is not blocked\n\n Unlike the anyio function, this defaults to a cancellable thread and does not allow\n passing arguments to the anyio function so users can pass kwargs to their function.\n\n Note that cancellation of threads will not result in interrupted computation, the\n thread may continue running \u2014 the outcome will just be ignored.\n \"\"\"\n call = partial(__fn, *args, **kwargs)\n return await anyio.to_thread.run_sync(\n call, cancellable=True, limiter=get_thread_limiter()\n )\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync
","text":"Call an async function from a synchronous context. Block until completion.
If in an asynchronous context, we will run the code in a separate loop instead of failing but a warning will be displayed since this is not recommended.
Source code in prefect/utilities/asyncutils.py
def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n \"\"\"\n Call an async function from a synchronous context. Block until completion.\n\n If in an asynchronous context, we will run the code in a separate loop instead of\n failing but a warning will be displayed since this is not recommended.\n \"\"\"\n if in_async_main_thread():\n warnings.warn(\n \"`sync` called from an asynchronous context; \"\n \"you should `await` the async function directly instead.\"\n )\n with anyio.start_blocking_portal() as portal:\n return portal.call(partial(__async_fn, *args, **kwargs))\n elif in_async_worker_thread():\n # In a sync context but we can access the event loop thread; send the async\n # call to the parent\n return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n else:\n # In a sync context and there is no event loop; just create an event loop\n # to run the async code then tear it down\n return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible
","text":"Converts an async function into a dual async and sync function.
When the returned function is called, we will attempt to determine the best way to enter the async function.
prefect/utilities/asyncutils.py
def sync_compatible(async_fn: T) -> T:\n \"\"\"\n Converts an async function into a dual async and sync function.\n\n When the returned function is called, we will attempt to determine the best way\n to enter the async function.\n\n - If in a thread with a running event loop, we will return the coroutine for the\n caller to await. This is normal async behavior.\n - If in a blocking worker thread with access to an event loop in another thread, we\n will submit the async method to the event loop.\n - If we cannot find an event loop, we will create a new one and run the async method\n then tear down the loop.\n \"\"\"\n\n @wraps(async_fn)\n def coroutine_wrapper(*args, **kwargs):\n from prefect._internal.concurrency.api import create_call, from_sync\n from prefect._internal.concurrency.calls import get_current_call, logger\n from prefect._internal.concurrency.event_loop import get_running_loop\n from prefect._internal.concurrency.threads import get_global_loop\n\n global_thread_portal = get_global_loop()\n current_thread = threading.current_thread()\n current_call = get_current_call()\n current_loop = get_running_loop()\n\n if current_thread.ident == global_thread_portal.thread.ident:\n logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n # In the prefect async context; return the coro for us to await\n return async_fn(*args, **kwargs)\n elif in_async_main_thread() and (\n not current_call or is_async_fn(current_call.fn)\n ):\n # In the main async context; return the coro for them to await\n logger.debug(f\"{async_fn} --> return coroutine for user await\")\n return async_fn(*args, **kwargs)\n elif in_async_worker_thread():\n # In a sync context but we can access the event loop thread; send the async\n # call to the parent\n return run_async_from_worker_thread(async_fn, *args, **kwargs)\n elif current_loop is not None:\n logger.debug(f\"{async_fn} --> run async in global loop portal\")\n # An event loop is already present but we are in a sync context, run the\n # call in Prefect's event loop thread\n return from_sync.call_soon_in_loop_thread(\n create_call(async_fn, *args, **kwargs)\n ).result()\n else:\n logger.debug(f\"{async_fn} --> run async in new loop\")\n # Run in a new event loop, but use a `Call` for nested context detection\n call = create_call(async_fn, *args, **kwargs)\n return call()\n\n # TODO: This is breaking type hints on the callable... mypy is behind the curve\n # on argument annotations. We can still fix this for editors though.\n if is_async_fn(async_fn):\n wrapper = coroutine_wrapper\n elif is_async_gen_fn(async_fn):\n raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n else:\n raise TypeError(\"The decorated function must be async.\")\n\n wrapper.aio = async_fn\n return wrapper\n
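A usage sketch under the semantics described above (the function is illustrative):
from prefect.utilities.asyncutils import sync_compatible\n\n@sync_compatible\nasync def fetch_data(url: str) -> str:\n return f\"data from {url}\"\n\n# From a sync context this blocks and returns the value;\n# from a running event loop it returns a coroutine to await\nresult = fetch_data(\"https://example.com\")\n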
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables
","text":"Utilities for working with Python callables.
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema
","text":" Bases: BaseModel
Simple data model corresponding to an OpenAPI Schema
.
prefect/utilities/callables.py
class ParameterSchema(pydantic.BaseModel):\n \"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n title: Literal[\"Parameters\"] = \"Parameters\"\n type: Literal[\"object\"] = \"object\"\n properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n required: List[str] = None\n definitions: Dict[str, Any] = None\n\n def dict(self, *args, **kwargs):\n \"\"\"Exclude `None` fields by default to comply with\n the OpenAPI spec.\n \"\"\"\n kwargs.setdefault(\"exclude_none\", True)\n return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict
","text":"Exclude None
fields by default to comply with the OpenAPI spec.
prefect/utilities/callables.py
def dict(self, *args, **kwargs):\n \"\"\"Exclude `None` fields by default to comply with\n the OpenAPI spec.\n \"\"\"\n kwargs.setdefault(\"exclude_none\", True)\n return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters
","text":"Call a function with parameters extracted with get_call_parameters
The function must have an identical signature to the original function or this will fail. If you need to send to a function with a different signature, extract the args/kwargs using parameters_to_positional_and_keyword
directly
prefect/utilities/callables.py
def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n \"\"\"\n Call a function with parameters extracted with `get_call_parameters`\n\n The function _must_ have an identical signature to the original function or this\n will fail. If you need to send to a function with a different signature, extract\n the args/kwargs using `parameters_to_positional_and_keyword` directly\n \"\"\"\n args, kwargs = parameters_to_args_kwargs(fn, parameters)\n return fn(*args, **kwargs)\n
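A small illustration (here the parameters dict is hand-built; in practice it would come from get_call_parameters):
def add(x, y=10):\n return x + y\n\nparameters = {\"x\": 1, \"y\": 2}\ncall_with_parameters(add, parameters)  # returns 3\n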
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call
","text":"Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value
This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process
and multiprocessing
) but may require a wider range of pickling support.
prefect/utilities/callables.py
def cloudpickle_wrapped_call(\n __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n \"\"\"\n Serializes a function call using cloudpickle then returns a callable which will\n execute that call and return a cloudpickle serialized return value\n\n This is particularly useful for sending calls to libraries that only use the Python\n built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n a wider range of pickling support.\n \"\"\"\n payload = cloudpickle.dumps((__fn, args, kwargs))\n return partial(_run_serialized_call, payload)\n
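A minimal sketch of the round trip; it assumes the serialized return value is decoded with cloudpickle on the receiving side:
```python\nimport cloudpickle\n\nfrom prefect.utilities.callables import cloudpickle_wrapped_call\n\ndef add(x, y):\n    return x + y\n\n# The wrapper is a zero-argument callable that runs the call and returns\n# the result as cloudpickle-serialized bytes\nwrapped = cloudpickle_wrapped_call(add, 1, y=2)\nassert cloudpickle.loads(wrapped()) == 3\n```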
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters
","text":"Given a parameter dictionary, move any parameters stored not present in the signature into the variadic keyword argument.
Example:
```python\ndef foo(a, b, **kwargs):\n pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n```\n
Source code in prefect/utilities/callables.py
def collapse_variadic_parameters(\n fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Given a parameter dictionary, move any parameters stored not present in the\n signature into the variadic keyword argument.\n\n Example:\n\n ```python\n def foo(a, b, **kwargs):\n pass\n\n parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n collapse_variadic_parameters(foo, parameters)\n # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n ```\n \"\"\"\n signature_parameters = inspect.signature(fn).parameters\n variadic_key = None\n for key, parameter in signature_parameters.items():\n if parameter.kind == parameter.VAR_KEYWORD:\n variadic_key = key\n break\n\n missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n if not variadic_key and missing_parameters:\n raise ValueError(\n f\"Signature for {fn} does not include any variadic keyword argument \"\n \"but parameters were given that are not present in the signature.\"\n )\n\n if variadic_key and not missing_parameters:\n # variadic key is present but no missing parameters, return parameters unchanged\n return parameters\n\n new_parameters = parameters.copy()\n if variadic_key:\n new_parameters[variadic_key] = {}\n\n for key in missing_parameters:\n new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter
","text":"Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.
Example:
```python\ndef foo(a, b, **kwargs):\n pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n```\n
Source code in prefect/utilities/callables.py
def explode_variadic_parameter(\n fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n \"\"\"\n Given a parameter dictionary, move any parameters stored in a variadic keyword\n argument parameter (i.e. **kwargs) into the top level.\n\n Example:\n\n ```python\n def foo(a, b, **kwargs):\n pass\n\n parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n explode_variadic_parameter(foo, parameters)\n # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n ```\n \"\"\"\n variadic_key = None\n for key, parameter in inspect.signature(fn).parameters.items():\n if parameter.kind == parameter.VAR_KEYWORD:\n variadic_key = key\n break\n\n if not variadic_key:\n return parameters\n\n new_parameters = parameters.copy()\n for key, value in new_parameters.pop(variadic_key, {}).items():\n new_parameters[key] = value\n\n return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters
","text":"Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden.
Raises a ParameterBindError if the arguments/kwargs are not valid for the function
Source code inprefect/utilities/callables.py
def get_call_parameters(\n fn: Callable,\n call_args: Tuple[Any, ...],\n call_kwargs: Dict[str, Any],\n apply_defaults: bool = True,\n) -> Dict[str, Any]:\n \"\"\"\n Bind a call to a function to get parameter/value mapping. Default values on the\n signature will be included if not overridden.\n\n Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n \"\"\"\n try:\n bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n except TypeError as exc:\n raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n if apply_defaults:\n bound_signature.apply_defaults()\n\n # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n # ordered dictionary to values during execution; this is the default behavior in\n # Python 3.9 anyway.\n return dict(bound_signature.arguments)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults
","text":"Get default parameter values for a callable.
Source code inprefect/utilities/callables.py
def get_parameter_defaults(\n fn: Callable,\n) -> Dict[str, Any]:\n \"\"\"\n Get default parameter values for a callable.\n \"\"\"\n signature = inspect.signature(fn)\n\n parameter_defaults = {}\n\n for name, param in signature.parameters.items():\n if param.default is not signature.empty:\n parameter_defaults[name] = param.default\n\n return parameter_defaults\n
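For example, with a hypothetical function (only parameters that declare defaults appear in the result):
```python\nfrom prefect.utilities.callables import get_parameter_defaults\n\ndef configure(host, port=5432, timeout=10.0):\n    ...\n\nassert get_parameter_defaults(configure) == {\"port\": 5432, \"timeout\": 10.0}\n```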
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings
","text":"Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstring.
Parameters:
Name Type Description Defaultdocstring
Optional[str]
The function's docstring.
requiredReturns:
Type DescriptionDict[str, str]
Mapping from parameter names to docstrings.
Source code inprefect/utilities/callables.py
def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n \"\"\"\n Given a docstring in Google docstring format, parse the parameter section\n and return a dictionary that maps parameter names to docstring.\n\n Args:\n docstring: The function's docstring.\n\n Returns:\n Mapping from parameter names to docstrings.\n \"\"\"\n param_docstrings = {}\n\n if not docstring:\n return param_docstrings\n\n with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n \"griffe.agents.nodes\"\n ):\n parsed = parse(Docstring(docstring), Parser.google)\n for section in parsed:\n if section.kind != DocstringSectionKind.parameters:\n continue\n param_docstrings = {\n parameter.name: parameter.description for parameter in section.value\n }\n\n return param_docstrings\n
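A small sketch with a made-up docstring (the exact descriptions come from griffe's Google-style parsing):
```python\nfrom prefect.utilities.callables import parameter_docstrings\n\ndocstring = \"\"\"\nAdd two numbers.\n\nArgs:\n    x: The first operand.\n    y: The second operand.\n\"\"\"\n\nassert parameter_docstrings(docstring) == {\n    \"x\": \"The first operand.\",\n    \"y\": \"The second operand.\",\n}\n```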
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema
","text":"Given a function, generates an OpenAPI-compatible description of the function's arguments, including: - name - typing information - whether it is required - a default value - additional constraints (like possible enum values)
Parameters:
Name Type Description Defaultfn
Callable
The function whose arguments will be serialized
requiredReturns:
Name Type DescriptionParameterSchema
ParameterSchema
the argument schema
Source code inprefect/utilities/callables.py
def parameter_schema(fn: Callable) -> ParameterSchema:\n \"\"\"Given a function, generates an OpenAPI-compatible description\n of the function's arguments, including:\n - name\n - typing information\n - whether it is required\n - a default value\n - additional constraints (like possible enum values)\n\n Args:\n fn (Callable): The function whose arguments will be serialized\n\n Returns:\n ParameterSchema: the argument schema\n \"\"\"\n try:\n signature = inspect.signature(fn, eval_str=True)\n except (NameError, TypeError):\n signature = inspect.signature(fn)\n\n model_fields = {}\n aliases = {}\n docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n class ModelConfig:\n arbitrary_types_allowed = True\n\n if HAS_PYDANTIC_V2 and has_v2_type_as_param(signature):\n create_schema = create_v2_schema\n process_params = process_v2_params\n else:\n create_schema = create_v1_schema\n process_params = process_v1_params\n\n for position, param in enumerate(signature.parameters.values()):\n name, type_, field = process_params(\n param, position=position, docstrings=docstrings, aliases=aliases\n )\n # Generate a Pydantic model at each step so we can check if this parameter\n # type supports schema generation\n try:\n create_schema(\n \"CheckParameter\", model_cfg=ModelConfig, **{name: (type_, field)}\n )\n except (ValueError, TypeError):\n # This field's type is not valid for schema creation, update it to `Any`\n type_ = Any\n model_fields[name] = (type_, field)\n\n # Generate the final model and schema\n schema = create_schema(\"Parameters\", model_cfg=ModelConfig, **model_fields)\n return ParameterSchema(**schema)\n
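For example, with a hypothetical function (the exact property contents vary with the Pydantic version in use):
```python\nfrom prefect.utilities.callables import parameter_schema\n\ndef my_flow(name: str, retries: int = 3):\n    ...\n\nschema = parameter_schema(my_flow).dict()\nassert schema[\"title\"] == \"Parameters\"\nassert schema[\"type\"] == \"object\"\nassert schema[\"required\"] == [\"name\"]\nassert set(schema[\"properties\"]) == {\"name\", \"retries\"}\n```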
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs
","text":"Convert a parameters
dictionary to positional and keyword arguments
The function must have an identical signature to the original function; parameters that do not appear in the function's signature will raise a SignatureMismatchError.
Source code inprefect/utilities/callables.py
def parameters_to_args_kwargs(\n fn: Callable,\n parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n \"\"\"\n Convert a `parameters` dictionary to positional and keyword arguments\n\n The function _must_ have an identical signature to the original function or this\n will return an empty tuple and dict.\n \"\"\"\n function_params = dict(inspect.signature(fn).parameters).keys()\n # Check for parameters that are not present in the function signature\n unknown_params = parameters.keys() - function_params\n if unknown_params:\n raise SignatureMismatchError.from_bad_params(\n list(function_params), list(parameters.keys())\n )\n bound_signature = inspect.signature(fn).bind_partial()\n bound_signature.arguments = parameters\n\n return bound_signature.args, bound_signature.kwargs\n
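A short sketch showing how the parameters are split (the function is illustrative):
```python\nfrom prefect.utilities.callables import parameters_to_args_kwargs\n\ndef emphasize(text, *, loud=False):\n    return text.upper() if loud else text\n\n# Positional-capable parameters land in `args`; keyword-only ones in `kwargs`\nargs, kwargs = parameters_to_args_kwargs(emphasize, {\"text\": \"hi\", \"loud\": True})\nassert (args, kwargs) == ((\"hi\",), {\"loud\": True})\nassert emphasize(*args, **kwargs) == \"HI\"\n```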
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments
","text":"Raise a ReservedArgumentError if fn
has any parameters that conflict with the names contained in reserved_arguments
.
prefect/utilities/callables.py
def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n    \"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n    with the names contained in `reserved_arguments`.\"\"\"\n    function_parameters = inspect.signature(fn).parameters\n\n    for argument in reserved_arguments:\n        if argument in function_parameters:\n            raise ReservedArgumentError(\n                f\"{argument!r} is a reserved argument name and cannot be used.\"\n            )\n
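For example (here state is an arbitrary reserved name chosen for illustration):
```python\nfrom prefect.utilities.callables import raise_for_reserved_arguments\n\ndef my_handler(state, payload):\n    ...\n\n# Raises ReservedArgumentError because `my_handler` defines `state`\nraise_for_reserved_arguments(my_handler, reserved_arguments=[\"state\"])\n```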
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections
","text":"Utilities for extensions of and operations on Python collections.
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum
","text":" Bases: str
, Enum
An enum class that automatically generates its values from variable names.
This guards against common errors where variable names are updated but values are not.
In addition, because AutoEnums inherit from str
, they are automatically JSON-serializable.
See https://docs.python.org/3/library/enum.html#using-automatic-values
Exampleclass MyEnum(AutoEnum):\n RED = AutoEnum.auto() # equivalent to RED = 'RED'\n BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
Source code in prefect/utilities/collections.py
class AutoEnum(str, Enum):\n \"\"\"\n An enum class that automatically generates value from variable names.\n\n This guards against common errors where variable names are updated but values are\n not.\n\n In addition, because AutoEnums inherit from `str`, they are automatically\n JSON-serializable.\n\n See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n Example:\n ```python\n class MyEnum(AutoEnum):\n RED = AutoEnum.auto() # equivalent to RED = 'RED'\n BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n ```\n \"\"\"\n\n def _generate_next_value_(name, start, count, last_values):\n return name\n\n @staticmethod\n def auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n\n def __repr__(self) -> str:\n return f\"{type(self).__name__}.{self.value}\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting
","text":" Bases: BaseException
A special exception used to stop recursive visits in visit_collection
.
When raised, the expression is returned without modification and recursive visits in that path will end.
Source code inprefect/utilities/collections.py
class StopVisiting(BaseException):\n \"\"\"\n A special exception used to stop recursive visits in `visit_collection`.\n\n When raised, the expression is returned without modification and recursive visits\n in that path will end.\n \"\"\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable
","text":"Yield batches of a certain size from an iterable
Parameters:
Name Type Description Defaultiterable
Iterable
An iterable
requiredsize
int
The batch size to return
requiredYields:
Name Type Descriptiontuple
T
A batch of the iterable
Source code inprefect/utilities/collections.py
def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n \"\"\"\n Yield batches of a certain size from an iterable\n\n Args:\n iterable (Iterable): An iterable\n size (int): The batch size to return\n\n Yields:\n tuple: A batch of the iterable\n \"\"\"\n it = iter(iterable)\n while True:\n batch = tuple(itertools.islice(it, size))\n if not batch:\n break\n yield batch\n
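For example:
```python\nfrom prefect.utilities.collections import batched_iterable\n\n# Yields tuples of at most `size` items; the final batch may be shorter\nassert list(batched_iterable(range(5), size=2)) == [(0, 1), (2, 3), (4,)]\n```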
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict
","text":"Converts a (nested) dictionary to a flattened representation.
Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.
Parameters:
Name Type Description Defaultdct
dict
The dictionary to flatten
required_parent
Tuple
The current parent for recursion
None
Returns:
Type DescriptionDict[Tuple[KT, ...], Any]
A flattened dict of the same type as dct
Source code inprefect/utilities/collections.py
def dict_to_flatdict(\n dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n \"\"\"Converts a (nested) dictionary to a flattened representation.\n\n Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n for the corresponding value.\n\n Args:\n dct (dict): The dictionary to flatten\n _parent (Tuple, optional): The current parent for recursion\n\n Returns:\n A flattened dict of the same type as dct\n \"\"\"\n typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n items: List[Tuple[Tuple[KT, ...], Any]] = []\n parent = _parent or tuple()\n\n for k, v in dct.items():\n k_parent = tuple(parent + (k,))\n # if v is a non-empty dict, recurse\n if isinstance(v, dict) and v:\n items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n else:\n items.append((k_parent, v))\n return typ(items)\n
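A quick sketch of the round trip with its inverse, flatdict_to_dict (documented below):
```python\nfrom prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict\n\nnested = {\"a\": {\"b\": 1, \"c\": {\"d\": 2}}}\nflat = dict_to_flatdict(nested)\nassert flat == {(\"a\", \"b\"): 1, (\"a\", \"c\", \"d\"): 2}\nassert flatdict_to_dict(flat) == nested\n```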
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances
","text":"Extract objects from a file and returns a dict of type -> instances
Parameters:
Name Type Description Defaultobjects
Iterable
An iterable of objects
requiredtypes
Union[Type[T], Tuple[Type[T], ...]]
A type or tuple of types to extract, defaults to all objects
object
Returns:
Type DescriptionUnion[List[T], Dict[Type[T], T]]
If a single type is given: a list of instances of that type
Union[List[T], Dict[Type[T], T]]
If a tuple of types is given: a mapping of type to a list of instances
Source code inprefect/utilities/collections.py
def extract_instances(\n objects: Iterable,\n types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n \"\"\"\n Extract objects from a file and returns a dict of type -> instances\n\n Args:\n objects: An iterable of objects\n types: A type or tuple of types to extract, defaults to all objects\n\n Returns:\n If a single type is given: a list of instances of that type\n If a tuple of types is given: a mapping of type to a list of instances\n \"\"\"\n types = ensure_iterable(types)\n\n # Create a mapping of type -> instance from the exec values\n ret = defaultdict(list)\n\n for o in objects:\n # We iterate here so that the key is the passed type rather than type(o)\n for type_ in types:\n if isinstance(o, type_):\n ret[type_].append(o)\n\n if len(types) == 1:\n return ret[types[0]]\n\n return ret\n
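For example (note the return shape differs for a single type versus a tuple of types):
```python\nfrom prefect.utilities.collections import extract_instances\n\nobjects = [1, \"a\", 2.5, \"b\"]\n\n# A single type yields a flat list of matching instances\nassert extract_instances(objects, types=str) == [\"a\", \"b\"]\n\n# A tuple of types yields a mapping from type to a list of instances\nby_type = extract_instances(objects, types=(int, str))\nassert by_type[int] == [1] and by_type[str] == [\"a\", \"b\"]\n```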
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict
","text":"Converts a flattened dictionary back to a nested dictionary.
Parameters:
Name Type Description Defaultdct
dict
The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict
Returns A nested dict of the same type as dct
Source code inprefect/utilities/collections.py
def flatdict_to_dict(\n dct: Dict[Tuple[KT, ...], VT],\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n \"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n Args:\n dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n as generated by `dict_to_flatdict`\n\n Returns\n A nested dict of the same type as dct\n \"\"\"\n typ = type(dct)\n result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n for key_tuple, value in dct.items():\n current_dict = result\n for prefix_key in key_tuple[:-1]:\n # Build nested dictionaries up for the current key tuple\n # Use `setdefault` in case the nested dict has already been created\n current_dict = current_dict.setdefault(prefix_key, typ()) # type: ignore\n # Set the value\n current_dict[key_tuple[-1]] = value\n\n return result\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict
","text":"Fetch a value from a nested dictionary or list using a sequence of keys.
This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.
Parameters:
Name Type Description Defaultdct
Dict
The nested dictionary or list from which to fetch the value.
requiredkeys
Union[str, List[str]]
The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets.
requireddefault
Any
The default value to return if the requested key path does not exist. Defaults to None.
None
Returns:
Type DescriptionAny
The fetched value if the key exists, or the default value if it does not.
Examples:
>>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')
'default'
Source code inprefect/utilities/collections.py
def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n \"\"\"\n Fetch a value from a nested dictionary or list using a sequence of keys.\n\n This function allows to fetch a value from a deeply nested structure\n of dictionaries and lists using either a dot-separated string or a list\n of keys. If a requested key does not exist, the function returns the\n provided default value.\n\n Args:\n dct: The nested dictionary or list from which to fetch the value.\n keys: The sequence of keys to use for access. Can be a\n dot-separated string or a list of keys. List indices can be included\n in the sequence as either integer keys or as string indices in square\n brackets.\n default: The default value to return if the requested key path does not\n exist. Defaults to None.\n\n Returns:\n The fetched value if the key exists, or the default value if it does not.\n\n Examples:\n >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n 2\n >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n 2\n >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n 'default'\n \"\"\"\n if isinstance(keys, str):\n keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n try:\n for key in keys:\n try:\n # Try to cast to int to handle list indices\n key = int(key)\n except ValueError:\n # If it's not an int, use the key as-is\n # for dict lookup\n pass\n dct = dct[key]\n return dct\n except (TypeError, KeyError, IndexError):\n return default\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable
","text":"Return a boolean indicating if an object is iterable.
Excludes types that are iterable but typically used as singletons: - str - bytes - IO objects
Source code inprefect/utilities/collections.py
def isiterable(obj: Any) -> bool:\n \"\"\"\n Return a boolean indicating if an object is iterable.\n\n Excludes types that are iterable but typically used as singletons:\n - str\n - bytes\n - IO objects\n \"\"\"\n try:\n iter(obj)\n except TypeError:\n return False\n else:\n return not isinstance(obj, (str, bytes, io.IOBase))\n
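For example:
```python\nfrom prefect.utilities.collections import isiterable\n\nassert isiterable([1, 2]) and isiterable({\"a\": 1})\n# Strings and bytes are iterable in Python but are treated as scalars here\nassert not isiterable(\"abc\") and not isiterable(b\"abc\")\n```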
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys
","text":"Recurses a dictionary returns a copy without all keys that match an entry in key_to_remove
. Return obj
unchanged if not a dictionary.
Parameters:
Name Type Description Defaultkeys_to_remove
List[Hashable]
A list of keys to remove from obj.
requiredReturns:
Type Descriptionobj
without keys matching an entry in keys_to_remove
if obj
is a dictionary. obj
if obj
is not a dictionary.
prefect/utilities/collections.py
def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n \"\"\"\n Recurses a dictionary returns a copy without all keys that match an entry in\n `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n Args:\n keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n from.\n\n Returns:\n `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n dictionary. `obj` if `obj` is not a dictionary.\n \"\"\"\n if not isinstance(obj, dict):\n return obj\n return {\n key: remove_nested_keys(keys_to_remove, value)\n for key, value in obj.items()\n if key not in keys_to_remove\n }\n
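For example (matching keys are removed at every nesting level):
```python\nfrom prefect.utilities.collections import remove_nested_keys\n\nconfig = {\"password\": \"x\", \"db\": {\"password\": \"y\", \"host\": \"localhost\"}}\nassert remove_nested_keys([\"password\"], config) == {\"db\": {\"host\": \"localhost\"}}\n```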
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection
","text":"This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn
will be called with the element. The return value of visit_fn
can be used to alter the element if return_data
is set.
Note that when using return_data
a copy of each collection is created to avoid mutating the original object. This may have significant performance penalties and should only be used if you intend to transform the collection.
Supported types: - List - Tuple - Set - Dict (note: keys are also visited recursively) - Dataclass - Pydantic model - Prefect annotations
Parameters:
Name Type Description Defaultexpr
Any
a Python object or expression
requiredvisit_fn
Callable[[Any], Awaitable[Any]]
a function that will be applied to every non-collection element of expr.
requiredreturn_data
bool
if True
, a copy of expr
containing data modified by visit_fn
will be returned. This is slower than return_data=False
(the default).
False
max_depth
int
Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited.
-1
context
Optional[dict]
An optional dictionary. If passed, the context will be sent to each call to the visit_fn
. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available \"up\" a level from a given expression.
The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation
type. This requires the caller to pass context={}
and will not be activated by default.
None
remove_annotations
bool
If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited.
False
Source code in prefect/utilities/collections.py
def visit_collection(\n expr,\n visit_fn: Callable[[Any], Any],\n return_data: bool = False,\n max_depth: int = -1,\n context: Optional[dict] = None,\n remove_annotations: bool = False,\n):\n \"\"\"\n This function visits every element of an arbitrary Python collection. If an element\n is a Python collection, it will be visited recursively. If an element is not a\n collection, `visit_fn` will be called with the element. The return value of\n `visit_fn` can be used to alter the element if `return_data` is set.\n\n Note that when using `return_data` a copy of each collection is created to avoid\n mutating the original object. This may have significant performance penalties and\n should only be used if you intend to transform the collection.\n\n Supported types:\n - List\n - Tuple\n - Set\n - Dict (note: keys are also visited recursively)\n - Dataclass\n - Pydantic model\n - Prefect annotations\n\n Args:\n expr (Any): a Python object or expression\n visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n will be applied to every non-collection element of expr.\n return_data (bool): if `True`, a copy of `expr` containing data modified\n by `visit_fn` will be returned. This is slower than `return_data=False`\n (the default).\n max_depth: Controls the depth of recursive visitation. If set to zero, no\n recursion will occur. If set to a positive integer N, visitation will only\n descend to N layers deep. If set to any negative integer, no limit will be\n enforced and recursion will continue until terminal items are reached. By\n default, recursion is unlimited.\n context: An optional dictionary. If passed, the context will be sent to each\n call to the `visit_fn`. The context can be mutated by each visitor and will\n be available for later visits to expressions at the given depth. Values\n will not be available \"up\" a level from a given expression.\n\n The context will be automatically populated with an 'annotation' key when\n visiting collections within a `BaseAnnotation` type. This requires the\n caller to pass `context={}` and will not be activated by default.\n remove_annotations: If set, annotations will be replaced by their contents. 
By\n default, annotations are preserved but their contents are visited.\n \"\"\"\n\n def visit_nested(expr):\n # Utility for a recursive call, preserving options and updating the depth.\n return visit_collection(\n expr,\n visit_fn=visit_fn,\n return_data=return_data,\n remove_annotations=remove_annotations,\n max_depth=max_depth - 1,\n # Copy the context on nested calls so it does not \"propagate up\"\n context=context.copy() if context is not None else None,\n )\n\n def visit_expression(expr):\n if context is not None:\n return visit_fn(expr, context)\n else:\n return visit_fn(expr)\n\n # Visit every expression\n try:\n result = visit_expression(expr)\n except StopVisiting:\n max_depth = 0\n result = expr\n\n if return_data:\n # Only mutate the expression while returning data, otherwise it could be null\n expr = result\n\n # Then, visit every child of the expression recursively\n\n # If we have reached the maximum depth, do not perform any recursion\n if max_depth == 0:\n return result if return_data else None\n\n # Get the expression type; treat iterators like lists\n typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n typ = cast(type, typ) # mypy treats this as 'object' otherwise and complains\n\n # Then visit every item in the expression if it is a collection\n if isinstance(expr, Mock):\n # Do not attempt to recurse into mock objects\n result = expr\n\n elif isinstance(expr, BaseAnnotation):\n if context is not None:\n context[\"annotation\"] = expr\n value = visit_nested(expr.unwrap())\n\n if remove_annotations:\n result = value if return_data else None\n else:\n result = expr.rewrap(value) if return_data else None\n\n elif typ in (list, tuple, set):\n items = [visit_nested(o) for o in expr]\n result = typ(items) if return_data else None\n\n elif typ in (dict, OrderedDict):\n assert isinstance(expr, (dict, OrderedDict)) # typecheck assertion\n items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n result = typ(items) if return_data else None\n\n elif is_dataclass(expr) and not isinstance(expr, type):\n values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n items = {field.name: value for field, value in zip(fields(expr), values)}\n result = typ(**items) if return_data else None\n\n elif isinstance(expr, pydantic.BaseModel):\n # NOTE: This implementation *does not* traverse private attributes\n # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n # as well to get all of the relevant attributes\n # Check for presence of attrs even if they're in the field set due to pydantic#4916\n model_fields = {\n f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n }\n items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n if return_data:\n # Collect fields with aliases so reconstruction can use the correct field name\n aliases = {\n key: value.alias\n for key, value in expr.__fields__.items()\n if value.has_alias\n }\n\n model_instance = typ(\n **{\n aliases.get(key) or key: value\n for key, value in zip(model_fields, items)\n }\n )\n\n # Private attributes are not included in `__fields_set__` but we do not want\n # to drop them from the model so we restore them after constructing a new\n # model\n for attr in expr.__private_attributes__:\n # Use `object.__setattr__` to avoid errors on immutable models\n object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n # Preserve data about which fields were explicitly set on the original model\n 
object.__setattr__(model_instance, \"__fields_set__\", expr.__fields_set__)\n result = model_instance\n else:\n result = None\n\n else:\n result = result if return_data else None\n\n return result\n
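A minimal sketch of a transforming visit (the doubling function is illustrative):
```python\nfrom prefect.utilities.collections import visit_collection\n\ndef double_ints(value):\n    # Transform integer leaves; return everything else unchanged\n    return value * 2 if isinstance(value, int) else value\n\ndata = {\"a\": 1, \"b\": [2, {\"c\": 3}]}\nresult = visit_collection(data, visit_fn=double_ints, return_data=True)\nassert result == {\"a\": 2, \"b\": [4, {\"c\": 6}]}\n```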
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/compat/","title":"compat","text":"","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/compat/#prefect.utilities.compat","title":"prefect.utilities.compat
","text":"Utilities for Python version compatibility
","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/context/","title":"context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context","title":"prefect.utilities.context
","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/dispatch/","title":"dispatch","text":"","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch","title":"prefect.utilities.dispatch
","text":"Provides methods for performing dynamic dispatch for actions on base type to one of its subtypes.
Example:
@register_base_type\nclass Base:\n @classmethod\n def __dispatch_key__(cls):\n return cls.__name__.lower()\n\n\nclass Foo(Base):\n ...\n\nkey = get_dispatch_key(Foo) # 'foo'\nlookup_type(Base, key) # Foo\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_dispatch_key","title":"get_dispatch_key
","text":"Retrieve the unique dispatch key for a class type or instance.
This key is defined at the __dispatch_key__
attribute. If it is a callable, it will be resolved.
If allow_missing
is False
, an exception will be raised if the attribute is not defined or the key is null. If True
, None
will be returned in these cases.
prefect/utilities/dispatch.py
def get_dispatch_key(\n cls_or_instance: Any, allow_missing: bool = False\n) -> Optional[str]:\n \"\"\"\n Retrieve the unique dispatch key for a class type or instance.\n\n This key is defined at the `__dispatch_key__` attribute. If it is a callable, it\n will be resolved.\n\n If `allow_missing` is `False`, an exception will be raised if the attribute is not\n defined or the key is null. If `True`, `None` will be returned in these cases.\n \"\"\"\n dispatch_key = getattr(cls_or_instance, \"__dispatch_key__\", None)\n\n type_name = (\n cls_or_instance.__name__\n if isinstance(cls_or_instance, type)\n else type(cls_or_instance).__name__\n )\n\n if dispatch_key is None:\n if allow_missing:\n return None\n raise ValueError(\n f\"Type {type_name!r} does not define a value for \"\n \"'__dispatch_key__' which is required for registry lookup.\"\n )\n\n if callable(dispatch_key):\n dispatch_key = dispatch_key()\n\n if allow_missing and dispatch_key is None:\n return None\n\n if not isinstance(dispatch_key, str):\n raise TypeError(\n f\"Type {type_name!r} has a '__dispatch_key__' of type \"\n f\"{type(dispatch_key).__name__} but a type of 'str' is required.\"\n )\n\n return dispatch_key\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_registry_for_type","title":"get_registry_for_type
","text":"Get the first matching registry for a class or any of its base classes.
If not found, None
is returned.
prefect/utilities/dispatch.py
def get_registry_for_type(cls: T) -> Optional[Dict[str, T]]:\n \"\"\"\n Get the first matching registry for a class or any of its base classes.\n\n If not found, `None` is returned.\n \"\"\"\n return next(\n filter(\n lambda registry: registry is not None,\n (_TYPE_REGISTRIES.get(cls) for cls in cls.mro()),\n ),\n None,\n )\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.lookup_type","title":"lookup_type
","text":"Look up a dispatch key in the type registry for the given class.
Source code inprefect/utilities/dispatch.py
def lookup_type(cls: T, dispatch_key: str) -> T:\n \"\"\"\n Look up a dispatch key in the type registry for the given class.\n \"\"\"\n # Get the first matching registry for the class or one of its bases\n registry = get_registry_for_type(cls)\n\n # Look up this type in the registry\n subcls = registry.get(dispatch_key)\n\n if subcls is None:\n raise KeyError(\n f\"No class found for dispatch key {dispatch_key!r} in registry for type \"\n f\"{cls.__name__!r}.\"\n )\n\n return subcls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_base_type","title":"register_base_type
","text":"Register a base type allowing child types to be registered for dispatch with register_type
.
The base class may or may not define a __dispatch_key__
to allow lookups of the base type.
prefect/utilities/dispatch.py
def register_base_type(cls: T) -> T:\n \"\"\"\n Register a base type allowing child types to be registered for dispatch with\n `register_type`.\n\n The base class may or may not define a `__dispatch_key__` to allow lookups of the\n base type.\n \"\"\"\n registry = _TYPE_REGISTRIES.setdefault(cls, {})\n base_key = get_dispatch_key(cls, allow_missing=True)\n if base_key is not None:\n registry[base_key] = cls\n\n # Add automatic subtype registration\n cls.__init_subclass_original__ = getattr(cls, \"__init_subclass__\")\n cls.__init_subclass__ = _register_subclass_of_base_type\n\n return cls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_type","title":"register_type
","text":"Register a type for lookup with dispatch.
The type or one of its parents must define a unique __dispatch_key__
.
One of the class's base types must be registered using register_base_type
.
prefect/utilities/dispatch.py
def register_type(cls: T) -> T:\n \"\"\"\n Register a type for lookup with dispatch.\n\n The type or one of its parents must define a unique `__dispatch_key__`.\n\n One of the classes base types must be registered using `register_base_type`.\n \"\"\"\n # Lookup the registry for this type\n registry = get_registry_for_type(cls)\n\n # Check if a base type is registered\n if registry is None:\n # Include a description of registered base types\n known = \", \".join(repr(base.__name__) for base in _TYPE_REGISTRIES)\n known_message = (\n f\" Did you mean to inherit from one of the following known types: {known}.\"\n if known\n else \"\"\n )\n\n # And a list of all base types for the type they tried to register\n bases = \", \".join(\n repr(base.__name__) for base in cls.mro() if base not in (object, cls)\n )\n\n raise ValueError(\n f\"No registry found for type {cls.__name__!r} with bases {bases}.\"\n + known_message\n )\n\n key = get_dispatch_key(cls)\n existing_value = registry.get(key)\n if existing_value is not None and id(existing_value) != id(cls):\n # Get line numbers for debugging\n file = inspect.getsourcefile(cls)\n line_number = inspect.getsourcelines(cls)[1]\n existing_file = inspect.getsourcefile(existing_value)\n existing_line_number = inspect.getsourcelines(existing_value)[1]\n warnings.warn(\n f\"Type {cls.__name__!r} at {file}:{line_number} has key {key!r} that \"\n f\"matches existing registered type {existing_value.__name__!r} from \"\n f\"{existing_file}:{existing_line_number}. The existing type will be \"\n \"overridden.\"\n )\n\n # Add to the registry\n registry[key] = cls\n\n return cls\n
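A short end-to-end sketch (the serializer classes are hypothetical):
```python\nfrom prefect.utilities.dispatch import (\n    get_dispatch_key,\n    lookup_type,\n    register_base_type,\n)\n\n@register_base_type\nclass Serializer:\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__name__.lower()\n\n# Subclasses of a registered base type are registered automatically\nclass JSONSerializer(Serializer):\n    ...\n\nassert get_dispatch_key(JSONSerializer) == \"jsonserializer\"\nassert lookup_type(Serializer, \"jsonserializer\") is JSONSerializer\n```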
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dockerutils/","title":"dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils","title":"prefect.utilities.dockerutils
","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.BuildError","title":"BuildError
","text":" Bases: Exception
Raised when a Docker build fails
Source code inprefect/utilities/dockerutils.py
class BuildError(Exception):\n \"\"\"Raised when a Docker build fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder","title":"ImageBuilder
","text":"An interface for preparing Docker build contexts and building images
Source code inprefect/utilities/dockerutils.py
class ImageBuilder:\n \"\"\"An interface for preparing Docker build contexts and building images\"\"\"\n\n base_directory: Path\n context: Optional[Path]\n platform: Optional[str]\n dockerfile_lines: List[str]\n\n def __init__(\n self,\n base_image: str,\n base_directory: Path = None,\n platform: str = None,\n context: Path = None,\n ):\n \"\"\"Create an ImageBuilder\n\n Args:\n base_image: the base image to use\n base_directory: the starting point on your host for relative file locations,\n defaulting to the current directory\n context: use this path as the build context (if not provided, will create a\n temporary directory for the context)\n\n Returns:\n The image ID\n \"\"\"\n self.base_directory = base_directory or context or Path().absolute()\n self.temporary_directory = None\n self.context = context\n self.platform = platform\n self.dockerfile_lines = []\n\n if self.context:\n dockerfile_path: Path = self.context / \"Dockerfile\"\n if dockerfile_path.exists():\n raise ValueError(f\"There is already a Dockerfile at {context}\")\n\n self.add_line(f\"FROM {base_image}\")\n\n def __enter__(self) -> Self:\n if self.context and not self.temporary_directory:\n return self\n\n self.temporary_directory = TemporaryDirectory()\n self.context = Path(self.temporary_directory.__enter__())\n return self\n\n def __exit__(\n self, exc: Type[BaseException], value: BaseException, traceback: TracebackType\n ) -> None:\n if not self.temporary_directory:\n return\n\n self.temporary_directory.__exit__(exc, value, traceback)\n self.temporary_directory = None\n self.context = None\n\n def add_line(self, line: str) -> None:\n \"\"\"Add a line to this image's Dockerfile\"\"\"\n self.add_lines([line])\n\n def add_lines(self, lines: Iterable[str]) -> None:\n \"\"\"Add lines to this image's Dockerfile\"\"\"\n self.dockerfile_lines.extend(lines)\n\n def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n \"\"\"Copy a file to this image\"\"\"\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n if not isinstance(source, Path):\n source = Path(source)\n\n if source.is_absolute():\n source = source.resolve().relative_to(self.base_directory)\n\n if self.temporary_directory:\n os.makedirs(self.context / source.parent, exist_ok=True)\n\n if source.is_dir():\n shutil.copytree(self.base_directory / source, self.context / source)\n else:\n shutil.copy2(self.base_directory / source, self.context / source)\n\n self.add_line(f\"COPY {source} {destination}\")\n\n def write_text(self, text: str, destination: Union[str, PurePosixPath]):\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n source_hash = hashlib.sha256(text.encode()).hexdigest()\n (self.context / f\".{source_hash}\").write_text(text)\n self.add_line(f\"COPY .{source_hash} {destination}\")\n\n def build(\n self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n ) -> str:\n \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n Args:\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n that will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n dockerfile_path: Path = self.context / \"Dockerfile\"\n\n with dockerfile_path.open(\"w\") as dockerfile:\n 
dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n try:\n return build_image(\n self.context,\n platform=self.platform,\n pull=pull,\n stream_progress_to=stream_progress_to,\n )\n finally:\n os.unlink(dockerfile_path)\n\n def assert_has_line(self, line: str) -> None:\n \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n all_lines = \"\\n\".join(\n [f\" {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n )\n message = (\n f\"Expected {line!r} not found in Dockerfile. Dockerfile:\\n{all_lines}\"\n )\n assert line in self.dockerfile_lines, message\n\n def assert_line_absent(self, line: str) -> None:\n \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n if line not in self.dockerfile_lines:\n return\n\n i = self.dockerfile_lines.index(line)\n\n surrounding_lines = \"\\n\".join(\n [\n f\" {i+1:>3}: {line}\"\n for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n ]\n )\n message = (\n f\"Unexpected {line!r} found in Dockerfile at line {i+1}. \"\n f\"Surrounding lines:\\n{surrounding_lines}\"\n )\n\n assert line not in self.dockerfile_lines, message\n\n def assert_line_before(self, first: str, second: str) -> None:\n \"\"\"Asserts that the first line appears before the second line\"\"\"\n self.assert_has_line(first)\n self.assert_has_line(second)\n\n first_index = self.dockerfile_lines.index(first)\n second_index = self.dockerfile_lines.index(second)\n\n surrounding_lines = \"\\n\".join(\n [\n f\" {i+1:>3}: {line}\"\n for i, line in enumerate(\n self.dockerfile_lines[second_index - 2 : first_index + 2]\n )\n ]\n )\n\n message = (\n f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n f\"{second_index+1}. Surrounding lines:\\n{surrounding_lines}\"\n )\n\n assert first_index < second_index, message\n\n def assert_line_after(self, second: str, first: str) -> None:\n \"\"\"Asserts that the second line appears after the first line\"\"\"\n self.assert_line_before(first, second)\n\n def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n \"\"\"Asserts that the given file or directory will be copied into the container\n at the given path\"\"\"\n if source.is_absolute():\n source = source.relative_to(self.base_directory)\n\n self.assert_has_line(f\"COPY {source} {container_path}\")\n
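A usage sketch (requires a running Docker daemon; the base image tag is illustrative):
```python\nfrom prefect.utilities.dockerutils import ImageBuilder\n\nwith ImageBuilder(\"prefecthq/prefect:2-latest\") as builder:\n    builder.add_lines([\"RUN pip install pandas\", \"WORKDIR /opt/prefect\"])\n    # The assert_* helpers are useful when testing generated Dockerfiles\n    builder.assert_line_before(\"RUN pip install pandas\", \"WORKDIR /opt/prefect\")\n    image_id = builder.build()\n```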
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_line","title":"add_line
","text":"Add a line to this image's Dockerfile
Source code inprefect/utilities/dockerutils.py
def add_line(self, line: str) -> None:\n \"\"\"Add a line to this image's Dockerfile\"\"\"\n self.add_lines([line])\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_lines","title":"add_lines
","text":"Add lines to this image's Dockerfile
Source code inprefect/utilities/dockerutils.py
def add_lines(self, lines: Iterable[str]) -> None:\n \"\"\"Add lines to this image's Dockerfile\"\"\"\n self.dockerfile_lines.extend(lines)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_file","title":"assert_has_file
","text":"Asserts that the given file or directory will be copied into the container at the given path
Source code inprefect/utilities/dockerutils.py
def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n \"\"\"Asserts that the given file or directory will be copied into the container\n at the given path\"\"\"\n if source.is_absolute():\n source = source.relative_to(self.base_directory)\n\n self.assert_has_line(f\"COPY {source} {container_path}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_line","title":"assert_has_line
","text":"Asserts that the given line is in the Dockerfile
Source code inprefect/utilities/dockerutils.py
def assert_has_line(self, line: str) -> None:\n \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n all_lines = \"\\n\".join(\n [f\" {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n )\n message = (\n f\"Expected {line!r} not found in Dockerfile. Dockerfile:\\n{all_lines}\"\n )\n assert line in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_absent","title":"assert_line_absent
","text":"Asserts that the given line is absent from the Dockerfile
Source code inprefect/utilities/dockerutils.py
def assert_line_absent(self, line: str) -> None:\n \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n if line not in self.dockerfile_lines:\n return\n\n i = self.dockerfile_lines.index(line)\n\n surrounding_lines = \"\\n\".join(\n [\n f\" {i+1:>3}: {line}\"\n for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n ]\n )\n message = (\n f\"Unexpected {line!r} found in Dockerfile at line {i+1}. \"\n f\"Surrounding lines:\\n{surrounding_lines}\"\n )\n\n assert line not in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_after","title":"assert_line_after
","text":"Asserts that the second line appears after the first line
Source code inprefect/utilities/dockerutils.py
def assert_line_after(self, second: str, first: str) -> None:\n \"\"\"Asserts that the second line appears after the first line\"\"\"\n self.assert_line_before(first, second)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_before","title":"assert_line_before
","text":"Asserts that the first line appears before the second line
Source code inprefect/utilities/dockerutils.py
def assert_line_before(self, first: str, second: str) -> None:\n \"\"\"Asserts that the first line appears before the second line\"\"\"\n self.assert_has_line(first)\n self.assert_has_line(second)\n\n first_index = self.dockerfile_lines.index(first)\n second_index = self.dockerfile_lines.index(second)\n\n surrounding_lines = \"\\n\".join(\n [\n f\" {i+1:>3}: {line}\"\n for i, line in enumerate(\n self.dockerfile_lines[second_index - 2 : first_index + 2]\n )\n ]\n )\n\n message = (\n f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n f\"{second_index+1}. Surrounding lines:\\n{surrounding_lines}\"\n )\n\n assert first_index < second_index, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.build","title":"build
","text":"Build the Docker image from the current state of the ImageBuilder
Parameters:
Name Type Description Defaultpull
bool
True to pull the base image during the build
False
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Descriptionstr
The image ID
Source code inprefect/utilities/dockerutils.py
def build(\n self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n) -> str:\n \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n Args:\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n that will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n dockerfile_path: Path = self.context / \"Dockerfile\"\n\n with dockerfile_path.open(\"w\") as dockerfile:\n dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n try:\n return build_image(\n self.context,\n platform=self.platform,\n pull=pull,\n stream_progress_to=stream_progress_to,\n )\n finally:\n os.unlink(dockerfile_path)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.copy","title":"copy
","text":"Copy a file to this image
Source code inprefect/utilities/dockerutils.py
def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n \"\"\"Copy a file to this image\"\"\"\n if not self.context:\n raise Exception(\"No context available\")\n\n if not isinstance(destination, PurePosixPath):\n destination = PurePosixPath(destination)\n\n if not isinstance(source, Path):\n source = Path(source)\n\n if source.is_absolute():\n source = source.resolve().relative_to(self.base_directory)\n\n if self.temporary_directory:\n os.makedirs(self.context / source.parent, exist_ok=True)\n\n if source.is_dir():\n shutil.copytree(self.base_directory / source, self.context / source)\n else:\n shutil.copy2(self.base_directory / source, self.context / source)\n\n self.add_line(f\"COPY {source} {destination}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.PushError","title":"PushError
","text":" Bases: Exception
Raised when a Docker image push fails
Source code inprefect/utilities/dockerutils.py
class PushError(Exception):\n \"\"\"Raised when a Docker image push fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.build_image","title":"build_image
","text":"Builds a Docker image, returning the image ID
Parameters:
Name Type Description Defaultcontext
Path
the root directory for the Docker build context
requireddockerfile
str
the path to the Dockerfile, relative to the context
'Dockerfile'
tag
Optional[str]
the tag to give this image
None
pull
bool
True to pull the base image during the build
False
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Descriptionstr
The image ID
Source code inprefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef build_image(\n context: Path,\n dockerfile: str = \"Dockerfile\",\n tag: Optional[str] = None,\n pull: bool = False,\n platform: str = None,\n stream_progress_to: Optional[TextIO] = None,\n **kwargs,\n) -> str:\n \"\"\"Builds a Docker image, returning the image ID\n\n Args:\n context: the root directory for the Docker build context\n dockerfile: the path to the Dockerfile, relative to the context\n tag: the tag to give this image\n pull: True to pull the base image during the build\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n will collect the build output as it is reported by Docker\n\n Returns:\n The image ID\n \"\"\"\n\n if not context:\n raise ValueError(\"context required to build an image\")\n\n if not Path(context).exists():\n raise ValueError(f\"Context path {context} does not exist\")\n\n kwargs = {key: kwargs[key] for key in kwargs if key not in [\"decode\", \"labels\"]}\n\n image_id = None\n with docker_client() as client:\n events = client.api.build(\n path=str(context),\n tag=tag,\n dockerfile=dockerfile,\n pull=pull,\n decode=True,\n labels=IMAGE_LABELS,\n platform=platform,\n **kwargs,\n )\n\n try:\n for event in events:\n if \"stream\" in event:\n if not stream_progress_to:\n continue\n stream_progress_to.write(event[\"stream\"])\n stream_progress_to.flush()\n elif \"aux\" in event:\n image_id = event[\"aux\"][\"ID\"]\n elif \"error\" in event:\n raise BuildError(event[\"error\"])\n elif \"message\" in event:\n raise BuildError(event[\"message\"])\n except docker.errors.APIError as e:\n raise BuildError(e.explanation) from e\n\n assert image_id, \"The Docker daemon did not return an image ID\"\n return image_id\n
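A sketch, assuming ./build-context contains a Dockerfile and a Docker daemon is available:
```python\nimport sys\nfrom pathlib import Path\n\nfrom prefect.utilities.dockerutils import build_image\n\nimage_id = build_image(\n    Path(\"build-context\"),\n    tag=\"my-image:dev\",\n    pull=True,\n    stream_progress_to=sys.stdout,  # echo Docker's build output as it arrives\n)\n```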
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.docker_client","title":"docker_client
","text":"Get the environmentally-configured Docker client
Source code in prefect/utilities/dockerutils.py
@contextmanager\ndef docker_client() -> Generator[\"DockerClient\", None, None]:\n \"\"\"Get the environmentally-configured Docker client\"\"\"\n client = None\n try:\n with silence_docker_warnings():\n client = docker.DockerClient.from_env()\n\n yield client\n except docker.errors.DockerException as exc:\n raise RuntimeError(\n \"This error is often thrown because Docker is not running. Please ensure Docker is running.\"\n ) from exc\n finally:\n client is not None and client.close()\n
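A minimal usage sketch, assuming a local Docker daemon is available:
from prefect.utilities.dockerutils import docker_client\n\n# Yields a docker.DockerClient configured from the environment (e.g. DOCKER_HOST)\n# and closes it when the block exits\nwith docker_client() as client:\n    print(client.version())\n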
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.format_outlier_version_name","title":"format_outlier_version_name
","text":"Formats outlier docker version names to pass packaging.version.parse
validation. The current cases are simple, but this creates a stub for more complicated formatting if it is eventually needed. Example outlier versions that throw a parsing exception: \"20.10.0-ce\" (a variant of the community edition label) and \"20.10.0-ee\" (a variant of the enterprise edition label).
Parameters:
Name Type Description Default
version
str
raw docker version value
required
Returns:
Name Type Description
str
value that can pass packaging.version.parse
validation
prefect/utilities/dockerutils.py
def format_outlier_version_name(version: str):\n \"\"\"\n Formats outlier docker version names to pass `packaging.version.parse` validation\n - Current cases are simple, but creates stub for more complicated formatting if eventually needed.\n - Example outlier versions that throw a parsing exception:\n - \"20.10.0-ce\" (variant of community edition label)\n - \"20.10.0-ee\" (variant of enterprise edition label)\n\n Args:\n version (str): raw docker version value\n\n Returns:\n str: value that can pass `packaging.version.parse` validation\n \"\"\"\n return version.replace(\"-ce\", \"\").replace(\"-ee\", \"\")\n
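For example, with the outlier versions named above:
from prefect.utilities.dockerutils import format_outlier_version_name\n\n# Edition suffixes are stripped so packaging.version.parse accepts the value\nassert format_outlier_version_name('20.10.0-ce') == '20.10.0'\nassert format_outlier_version_name('20.10.0-ee') == '20.10.0'\n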
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.generate_default_dockerfile","title":"generate_default_dockerfile
","text":"Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits.
Parameters:
Name Type Description Default
context
Optional[Path]
The context to use for the Dockerfile. Defaults to the current working directory.
None Source code in prefect/utilities/dockerutils.py
@contextmanager\ndef generate_default_dockerfile(context: Optional[Path] = None):\n \"\"\"\n Generates a default Dockerfile used for deploying flows. The Dockerfile is written\n to a temporary file and yielded. The temporary file is removed after the context\n manager exits.\n\n Args:\n - context: The context to use for the Dockerfile. Defaults to\n the current working directory.\n \"\"\"\n if not context:\n context = Path.cwd()\n lines = []\n base_image = get_prefect_image_name()\n lines.append(f\"FROM {base_image}\")\n dir_name = context.name\n\n if (context / \"requirements.txt\").exists():\n lines.append(f\"COPY requirements.txt /opt/prefect/{dir_name}/requirements.txt\")\n lines.append(\n f\"RUN python -m pip install -r /opt/prefect/{dir_name}/requirements.txt\"\n )\n\n lines.append(f\"COPY . /opt/prefect/{dir_name}/\")\n lines.append(f\"WORKDIR /opt/prefect/{dir_name}/\")\n\n temp_dockerfile = context / \"Dockerfile\"\n if Path(temp_dockerfile).exists():\n raise RuntimeError(\n \"Failed to generate Dockerfile. Dockerfile already exists in the\"\n \" current directory.\"\n )\n\n with Path(temp_dockerfile).open(\"w\") as f:\n f.writelines(line + \"\\n\" for line in lines)\n\n try:\n yield temp_dockerfile\n finally:\n temp_dockerfile.unlink()\n
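A sketch of how this pairs with build_image; the my-project directory is hypothetical:
from pathlib import Path\nfrom prefect.utilities.dockerutils import build_image, generate_default_dockerfile\n\ncontext = Path('my-project')  # hypothetical project directory without a Dockerfile\n\n# A temporary Dockerfile is written at the root of the context, used for the\n# build, and removed when the context manager exits\nwith generate_default_dockerfile(context=context) as dockerfile:\n    image_id = build_image(context=context, dockerfile=dockerfile.name)\n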
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.get_prefect_image_name","title":"get_prefect_image_name
","text":"Get the Prefect image name matching the current Prefect and Python versions.
Parameters:
Name Type Description Default
prefect_version
str
An optional override for the Prefect version.
None
python_version
str
An optional override for the Python version; must be at the minor level e.g. '3.9'.
None
flavor
str
An optional alternative image flavor to build, like 'conda'
None
Source code in prefect/utilities/dockerutils.py
def get_prefect_image_name(\n prefect_version: str = None, python_version: str = None, flavor: str = None\n) -> str:\n \"\"\"\n Get the Prefect image name matching the current Prefect and Python versions.\n\n Args:\n prefect_version: An optional override for the Prefect version.\n python_version: An optional override for the Python version; must be at the\n minor level e.g. '3.9'.\n flavor: An optional alternative image flavor to build, like 'conda'\n \"\"\"\n parsed_version = (prefect_version or prefect.__version__).split(\"+\")\n is_prod_build = len(parsed_version) == 1\n prefect_version = (\n parsed_version[0]\n if is_prod_build\n else \"sha-\" + prefect.__version_info__[\"full-revisionid\"][:7]\n )\n\n python_version = python_version or python_version_minor()\n\n tag = slugify(\n f\"{prefect_version}-python{python_version}\" + (f\"-{flavor}\" if flavor else \"\"),\n lowercase=False,\n max_length=128,\n # Docker allows these characters for tag names\n regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n )\n\n image = \"prefect\" if is_prod_build else \"prefect-dev\"\n return f\"prefecthq/{image}:{tag}\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.parse_image_tag","title":"parse_image_tag
","text":"Parse Docker Image String
Parameters:
Name Type Description Default
name
str
Name of Docker Image
required
Returns: a tuple of the image name and the tag (if present)
Source code in prefect/utilities/dockerutils.py
def parse_image_tag(name: str) -> Tuple[str, Optional[str]]:\n \"\"\"\n Parse Docker Image String\n\n - If a tag exists, this function parses and returns the image registry and tag,\n separately as a tuple.\n - Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')\n - Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')\n - Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0\n - Image building tools typically enforce this standard\n\n Args:\n name (str): Name of Docker Image\n\n Return:\n tuple: image registry, image tag\n \"\"\"\n tag = None\n name_parts = name.split(\"/\")\n # First handles the simplest image names (DockerHub-based, index-free, potentionally with a tag)\n # - Example: simplename:latest\n if len(name_parts) == 1:\n if \":\" in name_parts[0]:\n image_name, tag = name_parts[0].split(\":\")\n else:\n image_name = name_parts[0]\n else:\n # 1. Separates index (hostname.io or prefecthq) from path:tag (folder/subfolder:latest or prefect:latest)\n # 2. Separates path and tag (if tag exists)\n # 3. Reunites index and path (without tag) as image name\n index_name = name_parts[0]\n image_path = \"/\".join(name_parts[1:])\n if \":\" in image_path:\n image_path, tag = image_path.split(\":\")\n image_name = f\"{index_name}/{image_path}\"\n return image_name, tag\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.push_image","title":"push_image
","text":"Pushes a local image to a Docker registry, returning the registry-qualified tag for that image
This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate.
Parameters:
Name Type Description Default
image_id
str
a Docker image ID
required
registry_url
str
the URL of a Docker registry
required
name
str
the name of this image
required
tag
str
the tag to give this image (defaults to a short representation of the image's ID)
None
stream_progress_to
Optional[TextIO]
an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker
None
Returns:
Type Description
str
A registry-qualified tag, like my-registry.example.com/my-image:abcdefg
Source code in prefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef push_image(\n image_id: str,\n registry_url: str,\n name: str,\n tag: Optional[str] = None,\n stream_progress_to: Optional[TextIO] = None,\n) -> str:\n \"\"\"Pushes a local image to a Docker registry, returning the registry-qualified tag\n for that image\n\n This assumes that the environment's Docker daemon is already authenticated to the\n given registry, and currently makes no attempt to authenticate.\n\n Args:\n image_id (str): a Docker image ID\n registry_url (str): the URL of a Docker registry\n name (str): the name of this image\n tag (str): the tag to give this image (defaults to a short representation of\n the image's ID)\n stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n will collect the build output as it is reported by Docker\n\n Returns:\n A registry-qualified tag, like my-registry.example.com/my-image:abcdefg\n \"\"\"\n\n if not tag:\n tag = slugify(pendulum.now(\"utc\").isoformat())\n\n _, registry, _, _, _ = urlsplit(registry_url)\n repository = f\"{registry}/{name}\"\n\n with docker_client() as client:\n image: \"docker.Image\" = client.images.get(image_id)\n image.tag(repository, tag=tag)\n events = client.api.push(repository, tag=tag, stream=True, decode=True)\n try:\n for event in events:\n if \"status\" in event:\n if not stream_progress_to:\n continue\n stream_progress_to.write(event[\"status\"])\n if \"progress\" in event:\n stream_progress_to.write(\" \" + event[\"progress\"])\n stream_progress_to.write(\"\\n\")\n stream_progress_to.flush()\n elif \"error\" in event:\n raise PushError(event[\"error\"])\n finally:\n client.api.remove_image(f\"{repository}:{tag}\", noprune=True)\n\n return f\"{repository}:{tag}\"\n
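A sketch of a build-then-push flow; the registry URL and image name are hypothetical, and the Docker daemon must already be authenticated to that registry:
from pathlib import Path\nfrom prefect.utilities.dockerutils import build_image, push_image\n\nimage_id = build_image(context=Path('my-project'))\n\n# Tags the image under the registry, pushes it, then removes the local\n# registry-qualified tag again\nregistry_tag = push_image(\n    image_id,\n    registry_url='https://registry.example.com',\n    name='my-image',\n)\nprint(registry_tag)\n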
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.split_repository_path","title":"split_repository_path
","text":"Splits a Docker repository path into its namespace and repository components.
Parameters:
Name Type Description Default
repository_path
str
The Docker repository path to split.
required
Returns:
Type Description
Tuple[Optional[str], str]
Tuple[Optional[str], str]: A tuple containing the namespace and repository components. The namespace (Optional[str]) combines the registry and organization and is None if not present; the repository (str) is the repository name.
Source code in prefect/utilities/dockerutils.py
def split_repository_path(repository_path: str) -> Tuple[Optional[str], str]:\n \"\"\"\n Splits a Docker repository path into its namespace and repository components.\n\n Args:\n repository_path: The Docker repository path to split.\n\n Returns:\n Tuple[Optional[str], str]: A tuple containing the namespace and repository components.\n - namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present.\n - repository (Optionals[str]): The repository name.\n \"\"\"\n parts = repository_path.split(\"/\", 2)\n\n # Check if the path includes a registry and organization or just organization/repository\n if len(parts) == 3 or (len(parts) == 2 and (\".\" in parts[0] or \":\" in parts[0])):\n # Namespace includes registry and organization\n namespace = \"/\".join(parts[:-1])\n repository = parts[-1]\n elif len(parts) == 2:\n # Only organization/repository provided, so namespace is just the first part\n namespace = parts[0]\n repository = parts[1]\n else:\n # No namespace provided\n namespace = None\n repository = parts[0]\n\n return namespace, repository\n
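The three supported shapes, derived from the source above (paths are hypothetical):
from prefect.utilities.dockerutils import split_repository_path\n\n# registry + organization + repository\nassert split_repository_path('registry.example.com/my-org/my-repo') == ('registry.example.com/my-org', 'my-repo')\n# organization + repository\nassert split_repository_path('my-org/my-repo') == ('my-org', 'my-repo')\n# bare repository\nassert split_repository_path('my-repo') == (None, 'my-repo')\n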
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.to_run_command","title":"to_run_command
","text":"Convert a process-style list of command arguments to a single Dockerfile RUN instruction.
Source code in prefect/utilities/dockerutils.py
def to_run_command(command: List[str]) -> str:\n \"\"\"\n Convert a process-style list of command arguments to a single Dockerfile RUN\n instruction.\n \"\"\"\n if not command:\n return \"\"\n\n run_command = f\"RUN {command[0]}\"\n if len(command) > 1:\n run_command += \" \" + \" \".join([repr(arg) for arg in command[1:]])\n\n # TODO: Consider performing text-wrapping to improve readability of the generated\n # Dockerfile\n # return textwrap.wrap(\n # run_command,\n # subsequent_indent=\" \" * 4,\n # break_on_hyphens=False,\n # break_long_words=False\n # )\n\n return run_command\n
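For example, derived from the source above:
from prefect.utilities.dockerutils import to_run_command\n\n# Arguments after the first are quoted with repr()\nprint(to_run_command(['echo', 'hello world']))  # RUN echo 'hello world'\nprint(to_run_command([]))  # empty string for an empty command\n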
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem
","text":"Utilities for working with file systems
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file
","text":"Creates default ignore file in the provided path if one does not already exist; returns boolean specifying whether a file was created.
Source code in prefect/utilities/filesystem.py
def create_default_ignore_file(path: str) -> bool:\n \"\"\"\n Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n whether a file was created.\n \"\"\"\n path = pathlib.Path(path)\n ignore_file = path / \".prefectignore\"\n if ignore_file.exists():\n return False\n default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n with ignore_file.open(mode=\"w\") as f:\n f.write(default_file.read_text())\n return True\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename
","text":"Extract the file name from a path with remote file system support
Source code in prefect/utilities/filesystem.py
def filename(path: str) -> str:\n \"\"\"Extract the file name from a path with remote file system support\"\"\"\n try:\n of: OpenFile = fsspec.open(path)\n sep = of.fs.sep\n except (ImportError, AttributeError):\n sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n return path.split(sep)[-1]\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files
","text":"This function accepts a root directory path and a list of file patterns to ignore, and returns a list of files that excludes those that should be ignored.
The specification matches that of .gitignore files.
Source code in prefect/utilities/filesystem.py
def filter_files(\n root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n \"\"\"\n This function accepts a root directory path and a list of file patterns to ignore, and returns\n a list of files that excludes those that should be ignored.\n\n The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n \"\"\"\n if ignore_patterns is None:\n ignore_patterns = []\n spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n ignored_files = {p.path for p in spec.match_tree_entries(root)}\n if include_dirs:\n all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n else:\n all_files = set(pathspec.util.iter_tree_files(root))\n included_files = all_files - ignored_files\n return included_files\n
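A usage sketch with hypothetical ignore patterns:
from prefect.utilities.filesystem import filter_files\n\n# Every file and directory under the current directory except bytecode\n# caches and log files; patterns use .gitignore syntax\nincluded = filter_files(root='.', ignore_patterns=['__pycache__/', '*.log'])\nprint(sorted(included))\n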
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit
","text":"Get the maximum number of open files allowed for the current process
Source code in prefect/utilities/filesystem.py
def get_open_file_limit() -> int:\n \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n try:\n if os.name == \"nt\":\n import ctypes\n\n return ctypes.cdll.ucrtbase._getmaxstdio()\n else:\n import resource\n\n soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n return soft_limit\n except Exception:\n # Catch all exceptions, as ctypes can raise several errors\n # depending on what went wrong. Return a safe default if we\n # can't get the limit from the OS.\n return 200\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path
","text":"Check if the given path points to a local or remote file system
Source code in prefect/utilities/filesystem.py
def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n \"\"\"Check if the given path points to a local or remote file system\"\"\"\n if isinstance(path, str):\n try:\n of = fsspec.open(path)\n except ImportError:\n # The path is a remote file system that uses a lib that is not installed\n return False\n elif isinstance(path, pathlib.Path):\n return True\n elif isinstance(path, OpenFile):\n of = path\n else:\n raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n return type(of.fs) == LocalFileSystem\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform
","text":"Converts a relative path generated on any platform to a relative path for the current platform.
Source code in prefect/utilities/filesystem.py
def relative_path_to_current_platform(path_str: str) -> Path:\n \"\"\"\n Converts a relative path generated on any platform to a relative path for the\n current platform.\n \"\"\"\n\n return Path(PureWindowsPath(path_str).as_posix())\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir
","text":"Change current-working directories for the duration of the context
Source code in prefect/utilities/filesystem.py
@contextmanager\ndef tmpchdir(path: str):\n \"\"\"\n Change current-working directories for the duration of the context\n \"\"\"\n path = os.path.abspath(path)\n if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n path = os.path.dirname(path)\n\n owd = os.getcwd()\n\n with chdir_lock:\n try:\n os.chdir(path)\n yield path\n finally:\n os.chdir(owd)\n
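A usage sketch; the target directory is illustrative:
import os\nfrom prefect.utilities.filesystem import tmpchdir\n\n# The working directory changes only inside the block\nwith tmpchdir('/tmp'):\n    print(os.getcwd())  # the resolved absolute path of /tmp\nprint(os.getcwd())  # back to the original working directory\n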
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path
","text":"Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.
Source code in prefect/utilities/filesystem.py
def to_display_path(\n path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n \"\"\"\n Convert a path to a displayable path. The absolute path or relative path to the\n current (or given) directory will be returned, whichever is shorter.\n \"\"\"\n path, relative_to = (\n pathlib.Path(path).resolve(),\n pathlib.Path(relative_to or \".\").resolve(),\n )\n relative_path = str(path.relative_to(relative_to))\n absolute_path = str(path)\n return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
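For example, a file under the current directory renders as the shorter relative form (the path is hypothetical):
from prefect.utilities.filesystem import to_display_path\n\nprint(to_display_path('./flows/etl.py'))  # flows/etl.py\n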
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing
","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash
","text":"Given a path to a file, produces a stable hash of the file contents.
Parameters:
Name Type Description Default
path
str
the path to a file
required
hash_algo
Hash algorithm from hashlib to use.
_md5
Returns:
Name Type Description
str
str
a hash of the file contents
Source code in prefect/utilities/hashing.py
def file_hash(path: str, hash_algo=_md5) -> str:\n \"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n Args:\n path (str): the path to a file\n hash_algo: Hash algorithm from hashlib to use.\n\n Returns:\n str: a hash of the file contents\n \"\"\"\n contents = Path(path).read_bytes()\n return stable_hash(contents, hash_algo=hash_algo)\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects
","text":"Attempt to hash objects by dumping to JSON or serializing with cloudpickle. On failure of both, None
will be returned
prefect/utilities/hashing.py
def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n \"\"\"\n Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n On failure of both, `None` will be returned\n \"\"\"\n try:\n serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n except Exception:\n pass\n\n try:\n return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n except Exception:\n pass\n\n return None\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash
","text":"Given some arguments, produces a stable 64-bit hash of their contents.
Supports bytes and strings. Strings will be UTF-8 encoded.
Parameters:
Name Type Description Default
*args
Union[str, bytes]
Items to include in the hash.
()
hash_algo
Hash algorithm from hashlib to use.
_md5
Returns:
Type Description
str
A hex hash.
Source code in prefect/utilities/hashing.py
def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n \"\"\"Given some arguments, produces a stable 64-bit hash of their contents.\n\n Supports bytes and strings. Strings will be UTF-8 encoded.\n\n Args:\n *args: Items to include in the hash.\n hash_algo: Hash algorithm from hashlib to use.\n\n Returns:\n A hex hash.\n \"\"\"\n h = hash_algo()\n for a in args:\n if isinstance(a, str):\n a = a.encode()\n h.update(a)\n return h.hexdigest()\n
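For example, the arguments are concatenated before hashing, so these digests agree:
from prefect.utilities.hashing import stable_hash\n\n# Both calls hash the bytes b'foobar', so the results are identical\nassert stable_hash('foo', 'bar') == stable_hash(b'foob', 'ar')\nprint(stable_hash('foo', 'bar'))  # a hex digest (md5 by default)\n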
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/importtools/","title":"importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools","title":"prefect.utilities.importtools
","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleDefinition","title":"AliasedModuleDefinition
","text":" Bases: NamedTuple
A definition for the AliasedModuleFinder
.
Parameters:
Name Type Description Default
alias
The import name to create
required
real
The import name of the module to reference for the alias
required
callback
A function to call when the alias module is loaded
required
Source code in prefect/utilities/importtools.py
class AliasedModuleDefinition(NamedTuple):\n \"\"\"\n A definition for the `AliasedModuleFinder`.\n\n Args:\n alias: The import name to create\n real: The import name of the module to reference for the alias\n callback: A function to call when the alias module is loaded\n \"\"\"\n\n alias: str\n real: str\n callback: Optional[Callable[[str], None]]\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder","title":"AliasedModuleFinder
","text":" Bases: MetaPathFinder
prefect/utilities/importtools.py
class AliasedModuleFinder(MetaPathFinder):\n def __init__(self, aliases: Iterable[AliasedModuleDefinition]):\n \"\"\"\n See `AliasedModuleDefinition` for alias specification.\n\n Aliases apply to all modules nested within an alias.\n \"\"\"\n self.aliases = aliases\n\n def find_spec(\n self,\n fullname: str,\n path=None,\n target=None,\n ) -> Optional[ModuleSpec]:\n \"\"\"\n The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n create a new spec for \"phi.bar\" that points to \"foo.bar\".\n \"\"\"\n for alias, real, callback in self.aliases:\n if fullname.startswith(alias):\n # Retrieve the spec of the real module\n real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n # Create a new spec for the alias\n return ModuleSpec(\n fullname,\n AliasedModuleLoader(fullname, callback, real_spec),\n origin=real_spec.origin,\n is_package=real_spec.submodule_search_locations is not None,\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder.find_spec","title":"find_spec
","text":"The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\" for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and create a new spec for \"phi.bar\" that points to \"foo.bar\".
Source code in prefect/utilities/importtools.py
def find_spec(\n self,\n fullname: str,\n path=None,\n target=None,\n) -> Optional[ModuleSpec]:\n \"\"\"\n The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n create a new spec for \"phi.bar\" that points to \"foo.bar\".\n \"\"\"\n for alias, real, callback in self.aliases:\n if fullname.startswith(alias):\n # Retrieve the spec of the real module\n real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n # Create a new spec for the alias\n return ModuleSpec(\n fullname,\n AliasedModuleLoader(fullname, callback, real_spec),\n origin=real_spec.origin,\n is_package=real_spec.submodule_search_locations is not None,\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.DelayedImportErrorModule","title":"DelayedImportErrorModule
","text":" Bases: ModuleType
A fake module returned by lazy_import
when the module cannot be found. When any of the module's attributes are accessed, we will throw a ModuleNotFoundError
.
Adapted from lazy_loader
Source code in prefect/utilities/importtools.py
class DelayedImportErrorModule(ModuleType):\n \"\"\"\n A fake module returned by `lazy_import` when the module cannot be found. When any\n of the module's attributes are accessed, we will throw a `ModuleNotFoundError`.\n\n Adapted from [lazy_loader][1]\n\n [1]: https://github.com/scientific-python/lazy_loader\n \"\"\"\n\n def __init__(self, frame_data, help_message, *args, **kwargs):\n self.__frame_data = frame_data\n self.__help_message = (\n help_message or \"Import errors for this module are only reported when used.\"\n )\n super().__init__(*args, **kwargs)\n\n def __getattr__(self, attr):\n if attr in (\"__class__\", \"__file__\", \"__frame_data\", \"__help_message\"):\n super().__getattr__(attr)\n else:\n fd = self.__frame_data\n raise ModuleNotFoundError(\n f\"No module named '{fd['spec']}'\\n\\nThis module was originally imported\"\n f\" at:\\n File \\\"{fd['filename']}\\\", line {fd['lineno']}, in\"\n f\" {fd['function']}\\n\\n {''.join(fd['code_context']).strip()}\\n\"\n + self.__help_message\n )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.from_qualified_name","title":"from_qualified_name
","text":"Import an object given a fully-qualified name.
Parameters:
Name Type Description Default
name
str
The fully-qualified name of the object to import.
required
Returns:
Type Description
Any
the imported object
Examples:
>>> obj = from_qualified_name(\"random.randint\")\n>>> import random\n>>> obj == random.randint\nTrue\n
Source code in prefect/utilities/importtools.py
def from_qualified_name(name: str) -> Any:\n \"\"\"\n Import an object given a fully-qualified name.\n\n Args:\n name: The fully-qualified name of the object to import.\n\n Returns:\n the imported object\n\n Examples:\n >>> obj = from_qualified_name(\"random.randint\")\n >>> import random\n >>> obj == random.randint\n True\n \"\"\"\n # Try importing it first so we support \"module\" or \"module.sub_module\"\n try:\n module = importlib.import_module(name)\n return module\n except ImportError:\n # If no subitem was included raise the import error\n if \".\" not in name:\n raise\n\n # Otherwise, we'll try to load it as an attribute of a module\n mod_name, attr_name = name.rsplit(\".\", 1)\n module = importlib.import_module(mod_name)\n return getattr(module, attr_name)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.import_object","title":"import_object
","text":"Load an object from an import path.
Import paths can be formatted as one of:
- module.object
- module:object
- /path/to/script.py:object
This function is not thread safe as it modifies the 'sys' module during execution.
Source code in prefect/utilities/importtools.py
def import_object(import_path: str):\n \"\"\"\n Load an object from an import path.\n\n Import paths can be formatted as one of:\n - module.object\n - module:object\n - /path/to/script.py:object\n\n This function is not thread safe as it modifies the 'sys' module during execution.\n \"\"\"\n if \".py:\" in import_path:\n script_path, object_name = import_path.rsplit(\":\", 1)\n module = load_script_as_module(script_path)\n else:\n if \":\" in import_path:\n module_name, object_name = import_path.rsplit(\":\", 1)\n elif \".\" in import_path:\n module_name, object_name = import_path.rsplit(\".\", 1)\n else:\n raise ValueError(\n f\"Invalid format for object import. Received {import_path!r}.\"\n )\n\n module = load_module(module_name)\n\n return getattr(module, object_name)\n
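A sketch of the three accepted formats; the script path is hypothetical:
from prefect.utilities.importtools import import_object\n\njoin = import_object('os.path:join')  # module:object\njoin = import_object('os.path.join')  # module.object\n# obj = import_object('/path/to/script.py:my_object')  # script path (hypothetical)\n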
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.lazy_import","title":"lazy_import
","text":"Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed.
Adapted from the Python documentation and lazy_loader
Source code in prefect/utilities/importtools.py
def lazy_import(\n name: str, error_on_import: bool = False, help_message: str = \"\"\n) -> ModuleType:\n \"\"\"\n Create a lazily-imported module to use in place of the module of the given name.\n Use this to retain module-level imports for libraries that we don't want to\n actually import until they are needed.\n\n Adapted from the [Python documentation][1] and [lazy_loader][2]\n\n [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports\n [2]: https://github.com/scientific-python/lazy_loader\n \"\"\"\n\n try:\n return sys.modules[name]\n except KeyError:\n pass\n\n spec = importlib.util.find_spec(name)\n if spec is None:\n if error_on_import:\n raise ModuleNotFoundError(f\"No module named '{name}'.\\n{help_message}\")\n else:\n try:\n parent = inspect.stack()[1]\n frame_data = {\n \"spec\": name,\n \"filename\": parent.filename,\n \"lineno\": parent.lineno,\n \"function\": parent.function,\n \"code_context\": parent.code_context,\n }\n return DelayedImportErrorModule(\n frame_data, help_message, \"DelayedImportErrorModule\"\n )\n finally:\n del parent\n\n module = importlib.util.module_from_spec(spec)\n sys.modules[name] = module\n\n loader = importlib.util.LazyLoader(spec.loader)\n loader.exec_module(module)\n\n return module\n
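A usage sketch; numpy stands in for any heavyweight optional dependency:
from prefect.utilities.importtools import lazy_import\n\n# No import cost is paid until an attribute is first accessed; if the module\n# is missing, a DelayedImportErrorModule raises ModuleNotFoundError at use time\nnumpy = lazy_import('numpy', help_message='Install numpy to use this feature')\n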
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_module","title":"load_module
","text":"Import a module with support for relative imports within the module.
Source code in prefect/utilities/importtools.py
def load_module(module_name: str) -> ModuleType:\n \"\"\"\n Import a module with support for relative imports within the module.\n \"\"\"\n # Ensure relative imports within the imported module work if the user is in the\n # correct working directory\n working_directory = os.getcwd()\n sys.path.insert(0, working_directory)\n\n try:\n return importlib.import_module(module_name)\n finally:\n sys.path.remove(working_directory)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_script_as_module","title":"load_script_as_module
","text":"Execute a script at the given path.
Sets the module name to __prefect_loader__
.
If an exception occurs during execution of the script, a prefect.exceptions.ScriptError
is created to wrap the exception and raised.
For the duration of this function call, sys
is modified to support loading. These changes are reverted after completion, but this function is not thread safe and use of it in threaded contexts may result in undesirable behavior.
See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
Source code in prefect/utilities/importtools.py
def load_script_as_module(path: str) -> ModuleType:\n \"\"\"\n Execute a script at the given path.\n\n Sets the module name to `__prefect_loader__`.\n\n If an exception occurs during execution of the script, a\n `prefect.exceptions.ScriptError` is created to wrap the exception and raised.\n\n During the duration of this function call, `sys` is modified to support loading.\n These changes are reverted after completion, but this function is not thread safe\n and use of it in threaded contexts may result in undesirable behavior.\n\n See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly\n \"\"\"\n # We will add the parent directory to search locations to support relative imports\n # during execution of the script\n if not path.endswith(\".py\"):\n raise ValueError(f\"The provided path does not point to a python file: {path!r}\")\n\n parent_path = str(Path(path).resolve().parent)\n working_directory = os.getcwd()\n\n spec = importlib.util.spec_from_file_location(\n \"__prefect_loader__\",\n path,\n # Support explicit relative imports i.e. `from .foo import bar`\n submodule_search_locations=[parent_path, working_directory],\n )\n module = importlib.util.module_from_spec(spec)\n sys.modules[\"__prefect_loader__\"] = module\n\n # Support implicit relative imports i.e. `from foo import bar`\n sys.path.insert(0, working_directory)\n sys.path.insert(0, parent_path)\n try:\n spec.loader.exec_module(module)\n except Exception as exc:\n raise ScriptError(user_exc=exc, path=path) from exc\n finally:\n sys.modules.pop(\"__prefect_loader__\")\n sys.path.remove(parent_path)\n sys.path.remove(working_directory)\n\n return module\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.objects_from_script","title":"objects_from_script
","text":"Run a python script and return all the global variables
Supports remote paths by copying to a local temporary file.
WARNING: The Python documentation does not recommend using runpy for this pattern.
Furthermore, any functions and classes defined by the executed code are not guaranteed to work correctly after a runpy function has returned. If that limitation is not acceptable for a given use case, importlib is likely to be a more suitable choice than this module.
The function load_script_as_module
uses importlib instead and should be preferred for loading objects from scripts.
Parameters:
Name Type Description Default
path
str
The path to the script to run
required
text
Union[str, bytes]
Optionally, the text of the script. Skips loading the contents if given.
None
Returns:
Type Description
Dict[str, Any]
A dictionary mapping variable name to value
Raises:
Type Description
ScriptError
if the script raises an exception during execution
Source code in prefect/utilities/importtools.py
def objects_from_script(path: str, text: Union[str, bytes] = None) -> Dict[str, Any]:\n \"\"\"\n Run a python script and return all the global variables\n\n Supports remote paths by copying to a local temporary file.\n\n WARNING: The Python documentation does not recommend using runpy for this pattern.\n\n > Furthermore, any functions and classes defined by the executed code are not\n > guaranteed to work correctly after a runpy function has returned. If that\n > limitation is not acceptable for a given use case, importlib is likely to be a\n > more suitable choice than this module.\n\n The function `load_script_as_module` uses importlib instead and should be used\n instead for loading objects from scripts.\n\n Args:\n path: The path to the script to run\n text: Optionally, the text of the script. Skips loading the contents if given.\n\n Returns:\n A dictionary mapping variable name to value\n\n Raises:\n ScriptError: if the script raises an exception during execution\n \"\"\"\n\n def run_script(run_path: str):\n # Cast to an absolute path before changing directories to ensure relative paths\n # are not broken\n abs_run_path = os.path.abspath(run_path)\n with tmpchdir(run_path):\n try:\n return runpy.run_path(abs_run_path)\n except Exception as exc:\n raise ScriptError(user_exc=exc, path=path) from exc\n\n if text:\n with NamedTemporaryFile(\n mode=\"wt\" if isinstance(text, str) else \"wb\",\n prefix=f\"run-{filename(path)}\",\n suffix=\".py\",\n ) as tmpfile:\n tmpfile.write(text)\n tmpfile.flush()\n return run_script(tmpfile.name)\n\n else:\n if not is_local_path(path):\n # Remote paths need to be local to run\n with fsspec.open(path) as f:\n contents = f.read()\n return objects_from_script(path, contents)\n else:\n return run_script(path)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.to_qualified_name","title":"to_qualified_name
","text":"Given an object, returns its fully-qualified name: a string that represents its Python import path.
Parameters:
Name Type Description Default
obj
Any
an importable Python object
required
Returns:
Name Type Description
str
str
the qualified name
Source code in prefect/utilities/importtools.py
def to_qualified_name(obj: Any) -> str:\n \"\"\"\n Given an object, returns its fully-qualified name: a string that represents its\n Python import path.\n\n Args:\n obj (Any): an importable Python object\n\n Returns:\n str: the qualified name\n \"\"\"\n if sys.version_info < (3, 10):\n # These attributes are only available in Python 3.10+\n if isinstance(obj, (classmethod, staticmethod)):\n obj = obj.__func__\n return obj.__module__ + \".\" + obj.__qualname__\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/math/","title":"math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math","title":"prefect.utilities.math
","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.bounded_poisson_interval","title":"bounded_poisson_interval
","text":"Bounds Poisson \"inter-arrival times\" to a range.
Unlike clamped_poisson_interval
this does not take a target average interval. Instead, the interval is predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced.
prefect/utilities/math.py
def bounded_poisson_interval(lower_bound, upper_bound):\n \"\"\"\n Bounds Poisson \"inter-arrival times\" to a range.\n\n Unlike `clamped_poisson_interval` this does not take a target average interval.\n Instead, the interval is predetermined and the average is calculated as their\n midpoint. This allows Poisson intervals to be used in cases where a lower bound\n must be enforced.\n \"\"\"\n average = (float(lower_bound) + float(upper_bound)) / 2.0\n upper_rv = exponential_cdf(upper_bound, average)\n lower_rv = exponential_cdf(lower_bound, average)\n return poisson_interval(average, lower_rv, upper_rv)\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.clamped_poisson_interval","title":"clamped_poisson_interval
","text":"Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.
The upper bound for this random variate is: average_interval * (1 + clamping_factor). A lower bound is picked so that the average interval remains approximately fixed.
Source code in prefect/utilities/math.py
def clamped_poisson_interval(average_interval, clamping_factor=0.3):\n \"\"\"\n Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.\n\n The upper bound for this random variate is: average_interval * (1 + clamping_factor).\n A lower bound is picked so that the average interval remains approximately fixed.\n \"\"\"\n if clamping_factor <= 0:\n raise ValueError(\"`clamping_factor` must be >= 0.\")\n\n upper_clamp_multiple = 1 + clamping_factor\n upper_bound = average_interval * upper_clamp_multiple\n lower_bound = max(0, average_interval * lower_clamp_multiple(upper_clamp_multiple))\n\n upper_rv = exponential_cdf(upper_bound, average_interval)\n lower_rv = exponential_cdf(lower_bound, average_interval)\n return poisson_interval(average_interval, lower_rv, upper_rv)\n
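A sketch of the typical use: jittering a polling loop around a target average interval:
import time\nfrom prefect.utilities.math import clamped_poisson_interval\n\n# Sleep roughly 10 seconds on average, never longer than 13 (10 * (1 + 0.3))\nfor _ in range(3):\n    time.sleep(clamped_poisson_interval(10, clamping_factor=0.3))\n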
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.lower_clamp_multiple","title":"lower_clamp_multiple
","text":"Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution.
Given an upper clamp multiple k
(and corresponding upper bound k * average_interval), this function computes a lower clamp multiple c
(corresponding to a lower bound c * average_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound.
prefect/utilities/math.py
def lower_clamp_multiple(k):\n \"\"\"\n Computes a lower clamp multiple that can be used to bound a random variate drawn\n from an exponential distribution.\n\n Given an upper clamp multiple `k` (and corresponding upper bound k * average_interval),\n this function computes a lower clamp multiple `c` (corresponding to a lower bound\n c * average_interval) where the probability mass between the lower bound and the\n median is equal to the probability mass between the median and the upper bound.\n \"\"\"\n if k >= 50:\n # return 0 for large values of `k` to prevent numerical overflow\n return 0.0\n\n return math.log(max(2**k / (2**k - 1), 1e-10), 2)\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.poisson_interval","title":"poisson_interval
","text":"Generates an \"inter-arrival time\" for a Poisson process.
Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values.
Source code in prefect/utilities/math.py
def poisson_interval(average_interval, lower=0, upper=1):\n \"\"\"\n Generates an \"inter-arrival time\" for a Poisson process.\n\n Draws a random variable from an exponential distribution using the inverse-CDF\n method. Can optionally be passed a lower and upper bound between (0, 1] to clamp\n the potential output values.\n \"\"\"\n\n # note that we ensure the argument to the logarithm is stabilized to prevent\n # calling log(0), which results in a DomainError\n return -math.log(max(1 - random.uniform(lower, upper), 1e-10)) * average_interval\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/names/","title":"names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names","title":"prefect.utilities.names
","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.generate_slug","title":"generate_slug
","text":"Generates a random slug.
Parameters:
Name Type Description Default
n_words
int
the number of words in the slug
required Source code in prefect/utilities/names.py
def generate_slug(n_words: int) -> str:\n \"\"\"\n Generates a random slug.\n\n Args:\n - n_words (int): the number of words in the slug\n \"\"\"\n words = coolname.generate(n_words)\n\n # regenerate words if they include ignored words\n while IGNORE_LIST.intersection(words):\n words = coolname.generate(n_words)\n\n return \"-\".join(words)\n
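For example:
from prefect.utilities.names import generate_slug\n\nprint(generate_slug(2))  # e.g. 'daring-mongoose' (random on each call)\n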
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate","title":"obfuscate
","text":"Obfuscates any data type's string representation. See obfuscate_string
.
prefect/utilities/names.py
def obfuscate(s: Any, show_tail=False) -> str:\n \"\"\"\n Obfuscates any data type's string representation. See `obfuscate_string`.\n \"\"\"\n if s is None:\n return OBFUSCATED_PREFIX + \"*\" * 4\n\n return obfuscate_string(str(s), show_tail=show_tail)\n
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate_string","title":"obfuscate_string
","text":"Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are \"*\".
\"abc\" -> \"*\" \"abcdefgh\" -> \"*\" \"abcdefghijk\" -> \"*k\" \"abcdefghijklmnopqrs\" -> \"****pqrs\"
Source code in prefect/utilities/names.py
def obfuscate_string(s: str, show_tail=False) -> str:\n \"\"\"\n Obfuscates a string by returning a new string of 8 characters. If the input\n string is longer than 10 characters and show_tail is True, then up to 4 of\n its final characters will become final characters of the obfuscated string;\n all other characters are \"*\".\n\n \"abc\" -> \"********\"\n \"abcdefgh\" -> \"********\"\n \"abcdefghijk\" -> \"*******k\"\n \"abcdefghijklmnopqrs\" -> \"****pqrs\"\n \"\"\"\n result = OBFUSCATED_PREFIX + \"*\" * 4\n # take up to 4 characters, but only after the 10th character\n suffix = s[10:][-4:]\n if suffix and show_tail:\n result = f\"{result[:-len(suffix)]}{suffix}\"\n return result\n
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/processutils/","title":"processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils
","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler
","text":"Forward subsequent signum events (e.g. interrupts) to respective signums.
Source code in prefect/utilities/processutils.py
def forward_signal_handler(\n pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n \"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n current_signal, future_signals = signums[0], signums[1:]\n\n # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n if avoid_infinite_recursion:\n # store the vanilla handler so it can be temporarily restored below\n original_handler = signal.getsignal(current_signal)\n\n def handler(*args):\n print_fn(\n f\"Received {getattr(signum, 'name', signum)}. \"\n f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n f\" {process_name} (PID {pid})...\"\n )\n if avoid_infinite_recursion:\n signal.signal(current_signal, original_handler)\n os.kill(pid, current_signal)\n if future_signals:\n forward_signal_handler(\n pid,\n signum,\n *future_signals,\n process_name=process_name,\n print_fn=print_fn,\n )\n\n # register current and future signal handlers\n _register_signal(signum, handler)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process
async
","text":"Like anyio.open_process
but with: - Support for Windows command joining - Termination of the process on exception during yield - Forced cleanup of process resources during cancellation
prefect/utilities/processutils.py
@asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n \"\"\"\n Like `anyio.open_process` but with:\n - Support for Windows command joining\n - Termination of the process on exception during yield\n - Forced cleanup of process resources during cancellation\n \"\"\"\n # Passing a string to open_process is equivalent to shell=True which is\n # generally necessary for Unix-like commands on Windows but otherwise should\n # be avoided\n if not isinstance(command, list):\n raise TypeError(\n \"The command passed to open process must be a list. You passed the command\"\n f\"'{command}', which is type '{type(command)}'.\"\n )\n\n if sys.platform == \"win32\":\n command = \" \".join(command)\n process = await _open_anyio_process(command, **kwargs)\n else:\n process = await anyio.open_process(command, **kwargs)\n\n # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n # use SetConsoleCtrlHandler to handle CTRL-C\n win32_process_group = False\n if (\n sys.platform == \"win32\"\n and \"creationflags\" in kwargs\n and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n ):\n win32_process_group = True\n _windows_process_group_pids.add(process.pid)\n # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n # will not add a duplicate handler if _win32_ctrl_handler is\n # already registered.\n windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n try:\n async with process:\n yield process\n finally:\n try:\n process.terminate()\n if win32_process_group:\n _windows_process_group_pids.remove(process.pid)\n\n except OSError:\n # Occurs if the process is already terminated\n pass\n\n # Ensure the process resource is closed. If not shielded from cancellation,\n # this resource can be left open and the subprocess output can appear after\n # the parent process has exited.\n with anyio.CancelScope(shield=True):\n await process.aclose()\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process
async
","text":"Like anyio.run_process
but with:
open_process
utility to ensure resources are cleaned up
Simple stream_output
support to connect the subprocess to the parent stdout/err
Support for submission with TaskGroup.start
marking as 'started' after the process has been created. When used, the PID is returned to the task status.prefect/utilities/processutils.py
async def run_process(\n command: List[str],\n stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n **kwargs,\n):\n \"\"\"\n Like `anyio.run_process` but with:\n\n - Use of our `open_process` utility to ensure resources are cleaned up\n - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n - Support for submission with `TaskGroup.start` marking as 'started' after the\n process has been created. When used, the PID is returned to the task status.\n\n \"\"\"\n if stream_output is True:\n stream_output = (sys.stdout, sys.stderr)\n\n async with open_process(\n command,\n stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n **kwargs,\n ) as process:\n if task_status is not None:\n if not task_status_handler:\n\n def task_status_handler(process):\n return process.pid\n\n task_status.started(task_status_handler(process))\n\n if stream_output:\n await consume_process_output(\n process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n )\n\n await process.wait()\n\n return process\n
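A usage sketch; the command is hypothetical and Unix-style:
import anyio\nfrom prefect.utilities.processutils import run_process\n\nasync def main():\n    # Connect the subprocess output to this process's stdout/stderr and wait\n    process = await run_process(['echo', 'hello'], stream_output=True)\n    print(process.returncode)\n\nanyio.run(main)\n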
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent
","text":"Handle interrupts of the agent gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of the agent gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server
","text":"Handle interrupts of the server gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of the server gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # first interrupt: SIGTERM, second interrupt: SIGKILL\n setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker
","text":"Handle interrupts of workers gracefully.
Source code in prefect/utilities/processutils.py
def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n \"\"\"Handle interrupts of workers gracefully.\"\"\"\n setup_handler = partial(\n forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n )\n # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n if sys.platform == \"win32\":\n # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n # https://bugs.python.org/issue26350\n setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n else:\n # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/pydantic/","title":"pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic","title":"prefect.utilities.pydantic
","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.PartialModel","title":"PartialModel
","text":" Bases: Generic[M]
A utility for creating a Pydantic model in several steps.
Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned.
Pydantic validation does not occur until finalization.
Each field can only be set once and a ValueError
will be raised on assignment if a field already has a value.
Example:
>>> class MyModel(pydantic.BaseModel):\n>>>     x: int\n>>>     y: str\n>>>     z: float\n>>>\n>>> partial_model = PartialModel(MyModel, x=1)\n>>> partial_model.y = \"two\"\n>>> model = partial_model.finalize(z=3.0)\n
Source code in prefect/utilities/pydantic.py
class PartialModel(Generic[M]):\n \"\"\"\n A utility for creating a Pydantic model in several steps.\n\n Fields may be set at initialization, via attribute assignment, or at finalization\n when the concrete model is returned.\n\n Pydantic validation does not occur until finalization.\n\n Each field can only be set once and a `ValueError` will be raised on assignment if\n a field already has a value.\n\n Example:\n >>> class MyModel(pydantic.BaseModel):\n >>> x: int\n >>> y: str\n >>> z: float\n >>>\n >>> partial_model = PartialModel(MyModel, x=1)\n >>> partial_model.y = \"two\"\n >>> model = partial_model.finalize(z=3.0)\n \"\"\"\n\n def __init__(self, __model_cls: Type[M], **kwargs: Any) -> None:\n self.fields = kwargs\n # Set fields first to avoid issues if `fields` is also set on the `model_cls`\n # in our custom `setattr` implementation.\n self.model_cls = __model_cls\n\n for name in kwargs.keys():\n self.raise_if_not_in_model(name)\n\n def finalize(self, **kwargs: Any) -> M:\n for name in kwargs.keys():\n self.raise_if_already_set(name)\n self.raise_if_not_in_model(name)\n return self.model_cls(**self.fields, **kwargs)\n\n def raise_if_already_set(self, name):\n if name in self.fields:\n raise ValueError(f\"Field {name!r} has already been set.\")\n\n def raise_if_not_in_model(self, name):\n if name not in self.model_cls.__fields__:\n raise ValueError(f\"Field {name!r} is not present in the model.\")\n\n def __setattr__(self, __name: str, __value: Any) -> None:\n if __name in {\"fields\", \"model_cls\"}:\n return super().__setattr__(__name, __value)\n\n self.raise_if_already_set(__name)\n self.raise_if_not_in_model(__name)\n self.fields[__name] = __value\n\n def __repr__(self) -> str:\n dsp_fields = \", \".join(\n f\"{key}={repr(value)}\" for key, value in self.fields.items()\n )\n return f\"PartialModel(cls={self.model_cls.__name__}, {dsp_fields})\"\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_cloudpickle_reduction","title":"add_cloudpickle_reduction
","text":"Adds a __reducer__
to the given class that ensures it is cloudpickle compatible.
Workaround for issues with cloudpickle when using cythonized pydantic which throws exceptions when attempting to pickle the class which has \"compiled\" validator methods dynamically attached to it.
We cannot define this utility in the model class itself because the class is the type that contains unserializable methods.
Any model using some features of Pydantic (e.g. Path
validation) with a Cython compiled Pydantic installation may encounter pickling issues.
See related issue at https://github.com/cloudpipe/cloudpickle/issues/408
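For illustration, a minimal usage sketch (the model and its field are hypothetical); per the signature, the decorator may be applied bare or with keyword arguments that are forwarded to the reduction:

import pydantic

from prefect.utilities.pydantic import add_cloudpickle_reduction

@add_cloudpickle_reduction
class MyConfig(pydantic.BaseModel):
    # hypothetical model; the added reduction keeps instances picklable
    # by cloudpickle even under a Cython-compiled Pydantic
    path: str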
Source code in prefect/utilities/pydantic.py
def add_cloudpickle_reduction(__model_cls: Type[M] = None, **kwargs: Any):\n \"\"\"\n Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible.\n\n Workaround for issues with cloudpickle when using cythonized pydantic which\n throws exceptions when attempting to pickle the class which has \"compiled\"\n validator methods dynamically attached to it.\n\n We cannot define this utility in the model class itself because the class is the\n type that contains unserializable methods.\n\n Any model using some features of Pydantic (e.g. `Path` validation) with a Cython\n compiled Pydantic installation may encounter pickling issues.\n\n See related issue at https://github.com/cloudpipe/cloudpickle/issues/408\n \"\"\"\n if __model_cls:\n __model_cls.__reduce__ = _reduce_model\n __model_cls.__reduce_kwargs__ = kwargs\n return __model_cls\n else:\n return cast(\n Callable[[Type[M]], Type[M]],\n partial(\n add_cloudpickle_reduction,\n **kwargs,\n ),\n )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_type_dispatch","title":"add_type_dispatch
","text":"Extend a Pydantic model to add a 'type' field that is used a discriminator field to dynamically determine the subtype that when deserializing models.
This allows automatic resolution to subtypes of the decorated model.
If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key.
If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the __dispatch_key__
. The base class should define a __dispatch_key__
class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the __dispatch_key__
as a string literal.
The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in prefect.utilities.dispatch
should be used directly.
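As a hedged sketch of the dispatch behavior described above (class names are hypothetical; it assumes subclasses are registered automatically once the base type is registered, as register_base_type in prefect.utilities.dispatch does):

import pydantic

from prefect.utilities.pydantic import add_type_dispatch

@add_type_dispatch
class Shape(pydantic.BaseModel):
    # the base class derives a unique dispatch key for each subclass
    @classmethod
    def __dispatch_key__(cls):
        return cls.__name__.lower()

class Circle(Shape):
    radius: float

# deserializing via the base class resolves to the subclass named by 'type'
shape = Shape(type="circle", radius=1.0)
assert isinstance(shape, Circle)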
Source code in prefect/utilities/pydantic.py
def add_type_dispatch(model_cls: Type[M]) -> Type[M]:\n \"\"\"\n Extend a Pydantic model to add a 'type' field that is used a discriminator field\n to dynamically determine the subtype that when deserializing models.\n\n This allows automatic resolution to subtypes of the decorated model.\n\n If a type field already exists, it should be a string literal field that has a\n constant value for each subclass. The default value of this field will be used as\n the dispatch key.\n\n If a type field does not exist, one will be added. In this case, the value of the\n field will be set to the value of the `__dispatch_key__`. The base class should\n define a `__dispatch_key__` class method that is used to determine the unique key\n for each subclass. Alternatively, each subclass can define the `__dispatch_key__`\n as a string literal.\n\n The base class must not define a 'type' field. If it is not desirable to add a field\n to the model and the dispatch key can be tracked separately, the lower level\n utilities in `prefect.utilities.dispatch` should be used directly.\n \"\"\"\n defines_dispatch_key = hasattr(\n model_cls, \"__dispatch_key__\"\n ) or \"__dispatch_key__\" in getattr(model_cls, \"__annotations__\", {})\n\n defines_type_field = \"type\" in model_cls.__fields__\n\n if not defines_dispatch_key and not defines_type_field:\n raise ValueError(\n f\"Model class {model_cls.__name__!r} does not define a `__dispatch_key__` \"\n \"or a type field. One of these is required for dispatch.\"\n )\n\n elif defines_dispatch_key and not defines_type_field:\n # Add a type field to store the value of the dispatch key\n model_cls.__fields__[\"type\"] = pydantic.fields.ModelField(\n name=\"type\",\n type_=str,\n required=True,\n class_validators=None,\n model_config=model_cls.__config__,\n )\n\n elif not defines_dispatch_key and defines_type_field:\n field_type_annotation = model_cls.__fields__[\"type\"].type_\n if field_type_annotation != str:\n raise TypeError(\n f\"Model class {model_cls.__name__!r} defines a 'type' field with \"\n f\"type {field_type_annotation.__name__!r} but it must be 'str'.\"\n )\n\n # Set the dispatch key to retrieve the value from the type field\n @classmethod\n def dispatch_key_from_type_field(cls):\n return cls.__fields__[\"type\"].default\n\n model_cls.__dispatch_key__ = dispatch_key_from_type_field\n\n else:\n raise ValueError(\n f\"Model class {model_cls.__name__!r} defines a `__dispatch_key__` \"\n \"and a type field. Only one of these may be defined for dispatch.\"\n )\n\n cls_init = model_cls.__init__\n cls_new = model_cls.__new__\n\n def __init__(__pydantic_self__, **data: Any) -> None:\n type_string = (\n get_dispatch_key(__pydantic_self__)\n if type(__pydantic_self__) != model_cls\n else \"__base__\"\n )\n data.setdefault(\"type\", type_string)\n cls_init(__pydantic_self__, **data)\n\n def __new__(cls: Type[Self], **kwargs) -> Self:\n if \"type\" in kwargs:\n try:\n subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n except KeyError as exc:\n raise pydantic.ValidationError(errors=[exc], model=cls)\n return cls_new(subcls)\n else:\n return cls_new(cls)\n\n model_cls.__init__ = __init__\n model_cls.__new__ = __new__\n\n register_base_type(model_cls)\n\n return model_cls\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.get_class_fields_only","title":"get_class_fields_only
","text":"Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included.
Source code in prefect/utilities/pydantic.py
def get_class_fields_only(model: Type[pydantic.BaseModel]) -> set:\n \"\"\"\n Gets all the field names defined on the model class but not any parent classes.\n Any fields that are on the parent but redefined on the subclass are included.\n \"\"\"\n subclass_class_fields = set(model.__annotations__.keys())\n parent_class_fields = set()\n\n for base in model.__class__.__bases__:\n if issubclass(base, pydantic.BaseModel):\n parent_class_fields.update(base.__annotations__.keys())\n\n return (subclass_class_fields - parent_class_fields) | (\n subclass_class_fields & parent_class_fields\n )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/render_swagger/","title":"render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger","title":"prefect.utilities.render_swagger
","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger.swagger_lib","title":"swagger_lib
","text":"Provides the actual swagger library used
Source code in prefect/utilities/render_swagger.py
def swagger_lib(config) -> dict:\n \"\"\"\n Provides the actual swagger library used\n \"\"\"\n lib_swagger = {\n \"css\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui.css\",\n \"js\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js\",\n }\n\n extra_javascript = config.get(\"extra_javascript\", [])\n extra_css = config.get(\"extra_css\", [])\n for lib in extra_javascript:\n if os.path.basename(urllib.parse.urlparse(lib).path) == \"swagger-ui-bundle.js\":\n lib_swagger[\"js\"] = lib\n break\n\n for css in extra_css:\n if os.path.basename(urllib.parse.urlparse(css).path) == \"swagger-ui.css\":\n lib_swagger[\"css\"] = css\n break\n return lib_swagger\n
","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/services/","title":"services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services","title":"prefect.utilities.services
","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services.critical_service_loop","title":"critical_service_loop
async
","text":"Runs the given workload
function on the specified interval
, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of consecutive
errors occur, print a summary of up to memory
recent exceptions to printer
, then begin backoff.
The loop will exit after reaching the consecutive error limit backoff
times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset.
Parameters:
Name Type Description Defaultworkload
Callable[..., Coroutine]
the function to call
requiredinterval
float
how frequently to call it
requiredmemory
int
how many recent errors to remember
10
consecutive
int
how many consecutive errors must we see before we begin backoff
3
backoff
int
how many times we should allow consecutive errors before exiting
1
printer
Callable[..., None]
a print
-like function where errors will be reported
print
run_once
bool
if set, the loop will only run once then return
False
jitter_range
float
if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bound between interval * (1 - range) < rv < interval * (1 + range)
None
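For illustration, a hedged sketch of driving a polling coroutine with this loop; the workload shown is hypothetical:

import anyio

from prefect.utilities.services import critical_service_loop

async def poll_for_work():
    # hypothetical workload, e.g. an API poll; transient transport errors
    # and 5xx responses are tolerated as described above
    ...

async def main():
    # raises RuntimeError after `backoff` rounds of consecutive failures
    await critical_service_loop(workload=poll_for_work, interval=10)

anyio.run(main)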
Source code in prefect/utilities/services.py
async def critical_service_loop(\n workload: Callable[..., Coroutine],\n interval: float,\n memory: int = 10,\n consecutive: int = 3,\n backoff: int = 1,\n printer: Callable[..., None] = print,\n run_once: bool = False,\n jitter_range: float = None,\n):\n \"\"\"\n Runs the given `workload` function on the specified `interval`, while being\n forgiving of intermittent issues like temporary HTTP errors. If more than a certain\n number of `consecutive` errors occur, print a summary of up to `memory` recent\n exceptions to `printer`, then begin backoff.\n\n The loop will exit after reaching the consecutive error limit `backoff` times.\n On each backoff, the interval will be doubled. On a successful loop, the backoff\n will be reset.\n\n Args:\n workload: the function to call\n interval: how frequently to call it\n memory: how many recent errors to remember\n consecutive: how many consecutive errors must we see before we begin backoff\n backoff: how many times we should allow consecutive errors before exiting\n printer: a `print`-like function where errors will be reported\n run_once: if set, the loop will only run once then return\n jitter_range: if set, the interval will be a random variable (rv) drawn from\n a clamped Poisson distribution where lambda = interval and the rv is bound\n between `interval * (1 - range) < rv < interval * (1 + range)`\n \"\"\"\n\n track_record: Deque[bool] = deque([True] * consecutive, maxlen=consecutive)\n failures: Deque[Tuple[Exception, TracebackType]] = deque(maxlen=memory)\n backoff_count = 0\n\n while True:\n try:\n workload_display_name = (\n workload.__name__ if hasattr(workload, \"__name__\") else workload\n )\n logger.debug(f\"Starting run of {workload_display_name!r}\")\n await workload()\n\n # Reset the backoff count on success; we may want to consider resetting\n # this only if the track record is _all_ successful to avoid ending backoff\n # prematurely\n if backoff_count > 0:\n printer(\"Resetting backoff due to successful run.\")\n backoff_count = 0\n\n track_record.append(True)\n except httpx.TransportError as exc:\n # httpx.TransportError is the base class for any kind of communications\n # error, like timeouts, connection failures, etc. This does _not_ cover\n # routine HTTP error codes (even 5xx errors like 502/503) so this\n # handler should not be attempting to cover cases where the Prefect server\n # or Prefect Cloud is having an outage (which will be covered by the\n # exception clause below)\n track_record.append(False)\n failures.append((exc, sys.exc_info()[-1]))\n logger.debug(\n f\"Run of {workload!r} failed with TransportError\", exc_info=exc\n )\n except httpx.HTTPStatusError as exc:\n if exc.response.status_code >= 500:\n # 5XX codes indicate a potential outage of the Prefect API which is\n # likely to be temporary and transient. Don't quit over these unless\n # it is prolonged.\n track_record.append(False)\n failures.append((exc, sys.exc_info()[-1]))\n logger.debug(\n f\"Run of {workload!r} failed with HTTPStatusError\", exc_info=exc\n )\n else:\n raise\n\n # Decide whether to exit now based on recent history.\n #\n # Given some typical background error rate of, say, 1%, we may still observe\n # quite a few errors in our recent samples, but this is not necessarily a cause\n # for concern.\n #\n # Imagine two distributions that could reflect our situation at any time: the\n # everything-is-fine distribution of errors, and the everything-is-on-fire\n # distribution of errors. 
We are trying to determine which of those two worlds\n # we are currently experiencing. We compare the likelihood that we'd draw N\n # consecutive errors from each. In the everything-is-fine distribution, that\n # would be a very low-probability occurrence, but in the everything-is-on-fire\n # distribution, that is a high-probability occurrence.\n #\n # Remarkably, we only need to look back for a small number of consecutive\n # errors to have reasonable confidence that this is indeed an anomaly.\n # @anticorrelator and @chrisguidry estimated that we should only need to look\n # back for 3 consecutive errors.\n if not any(track_record):\n # We've failed enough times to be sure something is wrong, the writing is\n # on the wall. Let's explain what we've seen and exit.\n printer(\n f\"\\nFailed the last {consecutive} attempts. \"\n \"Please check your environment and configuration.\"\n )\n\n printer(\"Examples of recent errors:\\n\")\n\n failures_by_type = distinct(\n reversed(failures),\n key=lambda pair: type(pair[0]), # Group by the type of exception\n )\n for exception, traceback in failures_by_type:\n printer(\"\".join(format_exception(None, exception, traceback)))\n printer()\n\n backoff_count += 1\n\n if backoff_count >= backoff:\n raise RuntimeError(\"Service exceeded error threshold.\")\n\n # Reset the track record\n track_record.extend([True] * consecutive)\n failures.clear()\n printer(\n \"Backing off due to consecutive errors, using increased interval of \"\n f\" {interval * 2**backoff_count}s.\"\n )\n\n if run_once:\n return\n\n if jitter_range is not None:\n sleep = clamped_poisson_interval(interval, clamping_factor=jitter_range)\n else:\n sleep = interval * 2**backoff_count\n\n await anyio.sleep(sleep)\n
","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/slugify/","title":"slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/slugify/#prefect.utilities.slugify","title":"prefect.utilities.slugify
","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/templating/","title":"templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating","title":"prefect.utilities.templating
","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.apply_values","title":"apply_values
","text":"Replaces placeholders in a template with values from a supplied dictionary.
Will recursively replace placeholders in dictionaries and lists.
If a value has no placeholders, it will be returned unchanged.
If a template contains only a single placeholder, the placeholder will be fully replaced with the value.
If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values.
If a template contains a placeholder that is not in values
, NotSet will be returned to signify that no placeholder replacement occurred. If template
is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless remove_notset
is set to False.
Parameters:
Name Type Description Defaulttemplate
T
template to discover and replace values in
requiredvalues
Dict[str, Any]
The values to apply to placeholders in the template
requiredremove_notset
bool
If True, remove keys with an unset value
True
Returns:
Type DescriptionUnion[T, Type[NotSet]]
The template with the values applied
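A brief sketch of these substitution rules (the template and values are made up):

from prefect.utilities.templating import apply_values

template = {"image": "{{ registry }}/flow:latest", "env": {"MODE": "{{ mode }}"}}
values = {"registry": "ghcr.io/example", "mode": "prod"}

apply_values(template, values)
# {'image': 'ghcr.io/example/flow:latest', 'env': {'MODE': 'prod'}}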
Source code in prefect/utilities/templating.py
def apply_values(\n template: T, values: Dict[str, Any], remove_notset: bool = True\n) -> Union[T, Type[NotSet]]:\n \"\"\"\n Replaces placeholders in a template with values from a supplied dictionary.\n\n Will recursively replace placeholders in dictionaries and lists.\n\n If a value has no placeholders, it will be returned unchanged.\n\n If a template contains only a single placeholder, the placeholder will be\n fully replaced with the value.\n\n If a template contains text before or after a placeholder or there are\n multiple placeholders, the placeholders will be replaced with the\n corresponding variable values.\n\n If a template contains a placeholder that is not in `values`, NotSet will\n be returned to signify that no placeholder replacement occurred. If\n `template` is a dictionary that contains a key with a value of NotSet,\n the key will be removed in the return value unless `remove_notset` is set to False.\n\n Args:\n template: template to discover and replace values in\n values: The values to apply to placeholders in the template\n remove_notset: If True, remove keys with an unset value\n\n Returns:\n The template with the values applied\n \"\"\"\n if isinstance(template, (int, float, bool, type(NotSet), type(None))):\n return template\n if isinstance(template, str):\n placeholders = find_placeholders(template)\n if not placeholders:\n # If there are no values, we can just use the template\n return template\n elif (\n len(placeholders) == 1\n and list(placeholders)[0].full_match == template\n and list(placeholders)[0].type is PlaceholderType.STANDARD\n ):\n # If there is only one variable with no surrounding text,\n # we can replace it. If there is no variable value, we\n # return NotSet to indicate that the value should not be included.\n return get_from_dict(values, list(placeholders)[0].name, NotSet)\n else:\n for full_match, name, placeholder_type in placeholders:\n if placeholder_type is PlaceholderType.STANDARD:\n value = get_from_dict(values, name, NotSet)\n elif placeholder_type is PlaceholderType.ENV_VAR:\n name = name.lstrip(ENV_VAR_PLACEHOLDER_PREFIX)\n value = os.environ.get(name, NotSet)\n else:\n continue\n\n if value is NotSet and not remove_notset:\n continue\n elif value is NotSet:\n template = template.replace(full_match, \"\")\n else:\n template = template.replace(full_match, str(value))\n\n return template\n elif isinstance(template, dict):\n updated_template = {}\n for key, value in template.items():\n updated_value = apply_values(value, values, remove_notset=remove_notset)\n if updated_value is not NotSet:\n updated_template[key] = updated_value\n elif not remove_notset:\n updated_template[key] = value\n\n return updated_template\n elif isinstance(template, list):\n updated_list = []\n for value in template:\n updated_value = apply_values(value, values, remove_notset=remove_notset)\n if updated_value is not NotSet:\n updated_list.append(updated_value)\n return updated_list\n else:\n raise ValueError(f\"Unexpected template type {type(template).__name__!r}\")\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.determine_placeholder_type","title":"determine_placeholder_type
","text":"Determines the type of a placeholder based on its name.
Parameters:
Name Type Description Defaultname
str
The name of the placeholder
requiredReturns:
Type DescriptionPlaceholderType
The type of the placeholder
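Assuming the placeholder prefixes used elsewhere in this module (prefect.blocks. for block documents, prefect.variables. for variables), the mapping looks roughly like:

from prefect.utilities.templating import PlaceholderType, determine_placeholder_type

determine_placeholder_type("prefect.blocks.secret.my-key")  # PlaceholderType.BLOCK_DOCUMENT
determine_placeholder_type("prefect.variables.api_url")  # PlaceholderType.VARIABLE
determine_placeholder_type("flow_run_name")  # PlaceholderType.STANDARD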
Source code in prefect/utilities/templating.py
def determine_placeholder_type(name: str) -> PlaceholderType:\n \"\"\"\n Determines the type of a placeholder based on its name.\n\n Args:\n name: The name of the placeholder\n\n Returns:\n The type of the placeholder\n \"\"\"\n if name.startswith(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX):\n return PlaceholderType.BLOCK_DOCUMENT\n elif name.startswith(VARIABLE_PLACEHOLDER_PREFIX):\n return PlaceholderType.VARIABLE\n elif name.startswith(ENV_VAR_PLACEHOLDER_PREFIX):\n return PlaceholderType.ENV_VAR\n else:\n return PlaceholderType.STANDARD\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.find_placeholders","title":"find_placeholders
","text":"Finds all placeholders in a template.
Parameters:
Name Type Description Defaulttemplate
T
template to discover placeholders in
requiredReturns:
Type DescriptionSet[Placeholder]
A set of all placeholders in the template
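For example, with a made-up template:

from prefect.utilities.templating import find_placeholders

placeholders = find_placeholders({"command": "run {{ target }}", "args": ["{{ arg }}", 1]})
{p.name for p in placeholders}  # {'target', 'arg'}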
Source code in prefect/utilities/templating.py
def find_placeholders(template: T) -> Set[Placeholder]:\n \"\"\"\n Finds all placeholders in a template.\n\n Args:\n template: template to discover placeholders in\n\n Returns:\n A set of all placeholders in the template\n \"\"\"\n if isinstance(template, (int, float, bool)):\n return set()\n if isinstance(template, str):\n result = PLACEHOLDER_CAPTURE_REGEX.findall(template)\n return {\n Placeholder(full_match, name, determine_placeholder_type(name))\n for full_match, name in result\n }\n elif isinstance(template, dict):\n return set().union(\n *[find_placeholders(value) for key, value in template.items()]\n )\n elif isinstance(template, list):\n return set().union(*[find_placeholders(item) for item in template])\n else:\n raise ValueError(f\"Unexpected type: {type(template)}\")\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references","title":"resolve_block_document_references
async
","text":"Resolve block document references in a template by replacing each reference with the data of the block document.
Recursively searches for block document references in dictionaries and lists.
Identifies block document references as dictionaries with the following structure:
{\n \"$ref\": {\n \"block_document_id\": <block_document_id>\n }\n}\n
where <block_document_id>
is the ID of the block document to resolve. Once the block document is retrieved from the API, the data of the block document is used to replace the reference.
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--accessing-values","title":"Accessing Values:","text":"To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.
For a block document with the structure:
{\n \"value\": {\n \"key\": {\n \"nested-key\": \"nested-value\"\n },\n \"list\": [\n {\"list-key\": \"list-value\"},\n 1,\n 2\n ]\n }\n}\n
examples of value resolution are as follows:
1. Accessing a nested dictionary: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key Example: Returns {\"nested-key\": \"nested-value\"}
2. Accessing a specific nested value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key Example: Returns \"nested-value\"
3. Accessing a list element's key-value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key Example: Returns \"list-value\"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--default-resolution-for-system-blocks","title":"Default Resolution for System Blocks:","text":"
For system blocks, which only contain a value
attribute, this attribute is resolved by default.
Parameters:
Name Type Description Defaulttemplate
T
The template to resolve block documents in
requiredReturns:
Type DescriptionUnion[T, Dict[str, Any]]
The template with block documents resolved
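As a hedged usage sketch (the block type slug and document name are hypothetical, and the Prefect client is injected automatically when called in an async context):

from prefect.utilities.templating import resolve_block_document_references

template = {"api_key": "{{ prefect.blocks.secret.my-api-key }}"}
resolved = await resolve_block_document_references(template)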
Source code in prefect/utilities/templating.py
@inject_client\nasync def resolve_block_document_references(\n template: T, client: \"PrefectClient\" = None\n) -> Union[T, Dict[str, Any]]:\n \"\"\"\n Resolve block document references in a template by replacing each reference with\n the data of the block document.\n\n Recursively searches for block document references in dictionaries and lists.\n\n Identifies block document references by the as dictionary with the following\n structure:\n ```\n {\n \"$ref\": {\n \"block_document_id\": <block_document_id>\n }\n }\n ```\n where `<block_document_id>` is the ID of the block document to resolve.\n\n Once the block document is retrieved from the API, the data of the block document\n is used to replace the reference.\n\n Accessing Values:\n -----------------\n To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.\n\n For a block document with the structure:\n ```json\n {\n \"value\": {\n \"key\": {\n \"nested-key\": \"nested-value\"\n },\n \"list\": [\n {\"list-key\": \"list-value\"},\n 1,\n 2\n ]\n }\n }\n ```\n examples of value resolution are as follows:\n\n 1. Accessing a nested dictionary:\n Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key\n Example: Returns {\"nested-key\": \"nested-value\"}\n\n 2. Accessing a specific nested value:\n Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key\n Example: Returns \"nested-value\"\n\n 3. Accessing a list element's key-value:\n Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key\n Example: Returns \"list-value\"\n\n Default Resolution for System Blocks:\n -------------------------------------\n For system blocks, which only contain a `value` attribute, this attribute is resolved by default.\n\n Args:\n template: The template to resolve block documents in\n\n Returns:\n The template with block documents resolved\n \"\"\"\n if isinstance(template, dict):\n block_document_id = template.get(\"$ref\", {}).get(\"block_document_id\")\n if block_document_id:\n block_document = await client.read_block_document(block_document_id)\n return block_document.data\n updated_template = {}\n for key, value in template.items():\n updated_value = await resolve_block_document_references(\n value, client=client\n )\n updated_template[key] = updated_value\n return updated_template\n elif isinstance(template, list):\n return [\n await resolve_block_document_references(item, client=client)\n for item in template\n ]\n elif isinstance(template, str):\n placeholders = find_placeholders(template)\n has_block_document_placeholder = any(\n placeholder.type is PlaceholderType.BLOCK_DOCUMENT\n for placeholder in placeholders\n )\n if len(placeholders) == 0 or not has_block_document_placeholder:\n return template\n elif (\n len(placeholders) == 1\n and list(placeholders)[0].full_match == template\n and list(placeholders)[0].type is PlaceholderType.BLOCK_DOCUMENT\n ):\n # value_keypath will be a list containing a dot path if additional\n # attributes are accessed and an empty list otherwise.\n block_type_slug, block_document_name, *value_keypath = (\n list(placeholders)[0]\n .name.replace(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX, \"\")\n .split(\".\", 2)\n )\n block_document = await client.read_block_document_by_name(\n name=block_document_name, block_type_slug=block_type_slug\n )\n value = block_document.data\n\n # resolving system blocks to their data for backwards compatibility\n if len(value) == 1 and \"value\" in value:\n 
# only resolve the value if the keypath is not already pointing to \"value\"\n if len(value_keypath) == 0 or value_keypath[0][:5] != \"value\":\n value = value[\"value\"]\n\n # resolving keypath/block attributes\n if len(value_keypath) > 0:\n value_keypath: str = value_keypath[0]\n value = get_from_dict(value, value_keypath, default=NotSet)\n if value is NotSet:\n raise ValueError(\n f\"Invalid template: {template!r}. Could not resolve the\"\n \" keypath in the block document data.\"\n )\n\n return value\n else:\n raise ValueError(\n f\"Invalid template: {template!r}. Only a single block placeholder is\"\n \" allowed in a string and no surrounding text is allowed.\"\n )\n\n return template\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_variables","title":"resolve_variables
async
","text":"Resolve variables in a template by replacing each variable placeholder with the value of the variable.
Recursively searches for variable placeholders in dictionaries and lists.
Strips variable placeholders if the variable is not found.
Parameters:
Name Type Description Defaulttemplate
T
The template to resolve variables in
requiredReturns:
Type DescriptionThe template with variables resolved
Source code inprefect/utilities/templating.py
@inject_client\nasync def resolve_variables(template: T, client: \"PrefectClient\" = None):\n \"\"\"\n Resolve variables in a template by replacing each variable placeholder with the\n value of the variable.\n\n Recursively searches for variable placeholders in dictionaries and lists.\n\n Strips variable placeholders if the variable is not found.\n\n Args:\n template: The template to resolve variables in\n\n Returns:\n The template with variables resolved\n \"\"\"\n if isinstance(template, str):\n placeholders = find_placeholders(template)\n has_variable_placeholder = any(\n placeholder.type is PlaceholderType.VARIABLE for placeholder in placeholders\n )\n if not placeholders or not has_variable_placeholder:\n # If there are no values, we can just use the template\n return template\n elif (\n len(placeholders) == 1\n and list(placeholders)[0].full_match == template\n and list(placeholders)[0].type is PlaceholderType.VARIABLE\n ):\n variable_name = list(placeholders)[0].name.replace(\n VARIABLE_PLACEHOLDER_PREFIX, \"\"\n )\n variable = await client.read_variable_by_name(name=variable_name)\n if variable is None:\n return \"\"\n else:\n return variable.value\n else:\n for full_match, name, placeholder_type in placeholders:\n if placeholder_type is PlaceholderType.VARIABLE:\n variable_name = name.replace(VARIABLE_PLACEHOLDER_PREFIX, \"\")\n variable = await client.read_variable_by_name(name=variable_name)\n if variable is None:\n template = template.replace(full_match, \"\")\n else:\n template = template.replace(full_match, variable.value)\n return template\n elif isinstance(template, dict):\n return {\n key: await resolve_variables(value, client=client)\n for key, value in template.items()\n }\n elif isinstance(template, list):\n return [await resolve_variables(item, client=client) for item in template]\n else:\n return template\n
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/text/","title":"text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/text/#prefect.utilities.text","title":"prefect.utilities.text
","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/validation/","title":"validation","text":"","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation","title":"prefect.utilities.validation
","text":"","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation.validate_schema","title":"validate_schema
","text":"Validate that the provided schema is a valid json schema.
Parameters:
Name Type Description Defaultschema
dict
The schema to validate.
requiredRaises:
Type DescriptionValueError
If the provided schema is not a valid json schema.
Source code inprefect/utilities/validation.py
def validate_schema(schema: dict):\n \"\"\"\n Validate that the provided schema is a valid json schema.\n\n Args:\n schema: The schema to validate.\n\n Raises:\n ValueError: If the provided schema is not a valid json schema.\n\n \"\"\"\n try:\n if schema is not None:\n # Most closely matches the schemas generated by pydantic\n jsonschema.Draft4Validator.check_schema(schema)\n except jsonschema.SchemaError as exc:\n raise ValueError(\n \"The provided schema is not a valid json schema. Schema error:\"\n f\" {exc.message}\"\n ) from exc\n
","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/validation/#prefect.utilities.validation.validate_values_conform_to_schema","title":"validate_values_conform_to_schema
","text":"Validate that the provided values conform to the provided json schema.
Parameters:
Name Type Description Defaultvalues
dict
The values to validate.
requiredschema
dict
The schema to validate against.
requiredignore_required
bool
Whether to ignore the required fields in the schema. Should be used when a partial set of values is acceptable.
False
Raises:
Type DescriptionValueError
If the parameters do not conform to the schema.
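For instance:

from prefect.utilities.validation import validate_values_conform_to_schema

schema = {
    "type": "object",
    "properties": {"x": {"type": "integer"}},
    "required": ["x"],
}
validate_values_conform_to_schema({"x": 1}, schema)  # passes
validate_values_conform_to_schema({}, schema, ignore_required=True)  # passes
validate_values_conform_to_schema({"x": "nope"}, schema)  # raises ValueError for field 'x'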
Source code in prefect/utilities/validation.py
def validate_values_conform_to_schema(\n values: dict, schema: dict, ignore_required: bool = False\n):\n \"\"\"\n Validate that the provided values conform to the provided json schema.\n\n Args:\n values: The values to validate.\n schema: The schema to validate against.\n ignore_required: Whether to ignore the required fields in the schema. Should be\n used when a partial set of values is acceptable.\n\n Raises:\n ValueError: If the parameters do not conform to the schema.\n\n \"\"\"\n if ignore_required:\n schema = remove_nested_keys([\"required\"], schema)\n\n try:\n if schema is not None and values is not None:\n jsonschema.validate(values, schema)\n except jsonschema.ValidationError as exc:\n if exc.json_path == \"$\":\n error_message = \"Validation failed.\"\n else:\n error_message = (\n f\"Validation failed for field {exc.json_path.replace('$.', '')!r}.\"\n )\n error_message += f\" Failure reason: {exc.message}\"\n raise ValueError(error_message) from exc\n except jsonschema.SchemaError as exc:\n raise ValueError(\n \"The provided schema is not a valid json schema. Schema error:\"\n f\" {exc.message}\"\n ) from exc\n
","tags":["Python API","validation"]},{"location":"api-ref/prefect/utilities/visualization/","title":"visualization","text":"","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization","title":"prefect.utilities.visualization
","text":"Utilities for working with Flow.visualize()
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker","title":"TaskVizTracker
","text":"Source code in prefect/utilities/visualization.py
class TaskVizTracker:\n def __init__(self):\n self.tasks = []\n self.dynamic_task_counter = {}\n self.object_id_to_task = {}\n\n def add_task(self, task: VizTask):\n if task.name not in self.dynamic_task_counter:\n self.dynamic_task_counter[task.name] = 0\n else:\n self.dynamic_task_counter[task.name] += 1\n\n task.name = f\"{task.name}-{self.dynamic_task_counter[task.name]}\"\n self.tasks.append(task)\n\n def __enter__(self):\n TaskVizTrackerState.current = self\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n TaskVizTrackerState.current = None\n\n def link_viz_return_value_to_viz_task(\n self, viz_return_value: Any, viz_task: VizTask\n ) -> None:\n \"\"\"\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n \"\"\"\n from prefect.engine import UNTRACKABLE_TYPES\n\n if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n ):\n return\n self.object_id_to_task[id(viz_return_value)] = viz_task\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker.link_viz_return_value_to_viz_task","title":"link_viz_return_value_to_viz_task
","text":"We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons.
Source code in prefect/utilities/visualization.py
def link_viz_return_value_to_viz_task(\n self, viz_return_value: Any, viz_task: VizTask\n) -> None:\n \"\"\"\n We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n because they are singletons.\n \"\"\"\n from prefect.engine import UNTRACKABLE_TYPES\n\n if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n ):\n return\n self.object_id_to_task[id(viz_return_value)] = viz_task\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.build_task_dependencies","title":"build_task_dependencies
","text":"Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker.
Parameters:
- task_run_tracker (TaskVizTracker): An object containing tasks and their dependencies.

Returns:
- graphviz.Digraph: A directed graph object depicting the relationships and dependencies between tasks.

Raises:
- GraphvizImportError: If there's an ImportError related to graphviz.
- FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

Source code in prefect/utilities/visualization.py
def build_task_dependencies(task_run_tracker: TaskVizTracker):\n \"\"\"\n Constructs a Graphviz directed graph object that represents the dependencies\n between tasks in the given TaskVizTracker.\n\n Parameters:\n - task_run_tracker (TaskVizTracker): An object containing tasks and their\n dependencies.\n\n Returns:\n - graphviz.Digraph: A directed graph object depicting the relationships and\n dependencies between tasks.\n\n Raises:\n - GraphvizImportError: If there's an ImportError related to graphviz.\n - FlowVisualizationError: If there's any other error during the visualization\n process or if return values of tasks are directly accessed without\n specifying a `viz_return_value`.\n \"\"\"\n try:\n g = graphviz.Digraph()\n for task in task_run_tracker.tasks:\n g.node(task.name)\n for upstream in task.upstream_tasks:\n g.edge(upstream.name, task.name)\n return g\n except ImportError as exc:\n raise GraphvizImportError from exc\n except Exception:\n raise FlowVisualizationError(\n \"Something went wrong building the flow's visualization.\"\n \" If you're interacting with the return value of a task\"\n \" directly inside of your flow, you must set a set a `viz_return_value`\"\n \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n )\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.track_viz_task","title":"track_viz_task
","text":"Return a result if sync otherwise return a coroutine that returns the result
Source code in prefect/utilities/visualization.py
def track_viz_task(\n is_async: bool,\n task_name: str,\n parameters: dict,\n viz_return_value: Optional[Any] = None,\n):\n \"\"\"Return a result if sync otherwise return a coroutine that returns the result\"\"\"\n if is_async:\n return from_async.wait_for_call_in_loop_thread(\n partial(_track_viz_task, task_name, parameters, viz_return_value)\n )\n else:\n return _track_viz_task(task_name, parameters, viz_return_value)\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.visualize_task_dependencies","title":"visualize_task_dependencies
","text":"Renders and displays a Graphviz directed graph representing task dependencies.
The graph is rendered in PNG format and saved with the name specified by flow_run_name. After rendering, the visualization is opened and displayed.
Parameters:
- graph (graphviz.Digraph): The directed graph object to visualize.
- flow_run_name (str): The name to use when saving the rendered graph image.

Raises:
- GraphvizExecutableNotFoundError: If Graphviz isn't found on the system.
- FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

Source code in prefect/utilities/visualization.py
def visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str):\n \"\"\"\n Renders and displays a Graphviz directed graph representing task dependencies.\n\n The graph is rendered in PNG format and saved with the name specified by\n flow_run_name. After rendering, the visualization is opened and displayed.\n\n Parameters:\n - graph (graphviz.Digraph): The directed graph object to visualize.\n - flow_run_name (str): The name to use when saving the rendered graph image.\n\n Raises:\n - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system.\n - FlowVisualizationError: If there's any other error during the visualization\n process or if return values of tasks are directly accessed without\n specifying a `viz_return_value`.\n \"\"\"\n try:\n graph.render(filename=flow_run_name, view=True, format=\"png\", cleanup=True)\n except graphviz.backend.ExecutableNotFound as exc:\n msg = (\n \"It appears you do not have Graphviz installed, or it is not on your \"\n \"PATH. Please install Graphviz from http://www.graphviz.org/download/. \"\n \"Note: Just installing the `graphviz` python package is not \"\n \"sufficient.\"\n )\n raise GraphvizExecutableNotFoundError(msg) from exc\n except Exception:\n raise FlowVisualizationError(\n \"Something went wrong building the flow's visualization.\"\n \" If you're interacting with the return value of a task\"\n \" directly inside of your flow, you must set a set a `viz_return_value`\"\n \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n )\n
","tags":["Python API","visualization"]},{"location":"api-ref/prefect/workers/base/","title":"base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base
","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration
","text":" Bases: BaseModel
Source code in prefect/workers/base.py
class BaseJobConfiguration(BaseModel):\n command: Optional[str] = Field(\n default=None,\n description=(\n \"The command to use when starting a flow run. \"\n \"In most cases, this should be left blank and the command \"\n \"will be automatically generated by the worker.\"\n ),\n )\n env: Dict[str, Optional[str]] = Field(\n default_factory=dict,\n title=\"Environment Variables\",\n description=\"Environment variables to set when starting a flow run.\",\n )\n labels: Dict[str, str] = Field(\n default_factory=dict,\n description=(\n \"Labels applied to infrastructure created by the worker using \"\n \"this job configuration.\"\n ),\n )\n name: Optional[str] = Field(\n default=None,\n description=(\n \"Name given to infrastructure created by the worker using this \"\n \"job configuration.\"\n ),\n )\n\n _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n @property\n def is_using_a_runner(self):\n return self.command is not None and \"prefect flow-run execute\" in self.command\n\n @validator(\"command\")\n def _coerce_command(cls, v):\n \"\"\"Make sure that empty strings are treated as None\"\"\"\n if not v:\n return None\n return v\n\n @staticmethod\n def _get_base_config_defaults(variables: dict) -> dict:\n \"\"\"Get default values from base config for all variables that have them.\"\"\"\n defaults = dict()\n for variable_name, attrs in variables.items():\n if \"default\" in attrs:\n defaults[variable_name] = attrs[\"default\"]\n\n return defaults\n\n @classmethod\n @inject_client\n async def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n ):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n populated_configuration = await resolve_block_document_references(\n template=populated_configuration, client=client\n )\n populated_configuration = await resolve_variables(\n template=populated_configuration, client=client\n )\n return cls(**populated_configuration)\n\n @classmethod\n def json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n \"\"\"\n Prepare the job configuration for a flow run.\n\n This method is called by the worker before starting a flow run. 
It\n should be used to set any configuration values that are dependent on\n the flow run.\n\n Args:\n flow_run: The flow run to be executed.\n deployment: The deployment that the flow run is associated with.\n flow: The flow that the flow run is associated with.\n \"\"\"\n\n self._related_objects = {\n \"deployment\": deployment,\n \"flow\": flow,\n \"flow-run\": flow_run,\n }\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n env = {\n **self._base_environment(),\n **self._base_flow_run_environment(flow_run),\n **self.env,\n }\n self.env = {key: value for key, value in env.items() if value is not None}\n self.labels = {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n }\n self.name = self.name or flow_run.name\n self.command = self.command or self._base_flow_run_command()\n\n @staticmethod\n def _base_flow_run_command() -> str:\n \"\"\"\n Generate a command for a flow run job.\n \"\"\"\n if experiment_enabled(\"enhanced_cancellation\"):\n if (\n PREFECT_EXPERIMENTAL_WARN\n and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n ):\n warnings.warn(\n EXPERIMENTAL_WARNING.format(\n feature=\"Enhanced flow run cancellation\",\n group=\"enhanced_cancellation\",\n help=\"\",\n ),\n ExperimentalFeature,\n stacklevel=3,\n )\n return \"prefect flow-run execute\"\n return \"python -m prefect.engine\"\n\n @staticmethod\n def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of labels for a flow run job.\n \"\"\"\n return {\n \"prefect.io/flow-run-id\": str(flow_run.id),\n \"prefect.io/flow-run-name\": flow_run.name,\n \"prefect.io/version\": prefect.__version__,\n }\n\n @classmethod\n def _base_environment(cls) -> Dict[str, str]:\n \"\"\"\n Environment variables that should be passed to all created infrastructure.\n\n These values should be overridable with the `env` field.\n \"\"\"\n return get_current_settings().to_environment_variables(exclude_unset=True)\n\n @staticmethod\n def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n \"\"\"\n Generate a dictionary of environment variables for a flow run job.\n \"\"\"\n return {\"PREFECT__FLOW_RUN_ID\": str(flow_run.id)}\n\n @staticmethod\n def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n labels = {\n \"prefect.io/deployment-id\": str(deployment.id),\n \"prefect.io/deployment-name\": deployment.name,\n }\n if deployment.updated is not None:\n labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n \"utc\"\n ).to_iso8601_string()\n return labels\n\n @staticmethod\n def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n return {\n \"prefect.io/flow-id\": str(flow.id),\n \"prefect.io/flow-name\": flow.name,\n }\n\n def _related_resources(self) -> List[RelatedResource]:\n tags = set()\n related = []\n\n for kind, obj in self._related_objects.items():\n if obj is None:\n continue\n if hasattr(obj, \"tags\"):\n tags.update(obj.tags)\n related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n return related + tags_as_related_resources(tags)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values
async
classmethod
","text":"Creates a valid worker configuration object from the provided base configuration and overrides.
Important: this method expects that the base_job_template was already validated server-side.
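A hedged sketch of building a configuration from a work pool's template; the work_pool object is assumed to come from the Prefect API, and the overridden variable must exist in the template:

from prefect.workers.base import BaseJobConfiguration

config = await BaseJobConfiguration.from_template_and_values(
    base_job_template=work_pool.base_job_template,
    values={"name": "custom-infrastructure-name"},
)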
Source code in prefect/workers/base.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n populated_configuration = await resolve_block_document_references(\n template=populated_configuration, client=client\n )\n populated_configuration = await resolve_variables(\n template=populated_configuration, client=client\n )\n return cls(**populated_configuration)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template
classmethod
","text":"Returns a dict with job configuration as keys and the corresponding templates as values
Defaults to using the job configuration parameter name as the template variable name.
e.g.

{
    key1: '{{ key1 }}',  # default variable template
    key2: '{{ template2 }}',  # `template2` specifically provided as template
}

Source code in prefect/workers/base.py
@classmethod\ndef json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run
","text":"Prepare the job configuration for a flow run.
This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.
Parameters:
Name Type Description Defaultflow_run
FlowRun
The flow run to be executed.
requireddeployment
Optional[DeploymentResponse]
The deployment that the flow run is associated with.
None
flow
Optional[Flow]
The flow that the flow run is associated with.
None
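A worker typically calls this after building the configuration, e.g. (the flow_run, deployment, and flow objects are hypothetical, fetched from the API):

config.prepare_for_flow_run(flow_run, deployment=deployment, flow=flow)
# config.env, config.labels, config.name, and config.command are now populated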
Source code in prefect/workers/base.py
def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n):\n \"\"\"\n Prepare the job configuration for a flow run.\n\n This method is called by the worker before starting a flow run. It\n should be used to set any configuration values that are dependent on\n the flow run.\n\n Args:\n flow_run: The flow run to be executed.\n deployment: The deployment that the flow run is associated with.\n flow: The flow that the flow run is associated with.\n \"\"\"\n\n self._related_objects = {\n \"deployment\": deployment,\n \"flow\": flow,\n \"flow-run\": flow_run,\n }\n if deployment is not None:\n deployment_labels = self._base_deployment_labels(deployment)\n else:\n deployment_labels = {}\n\n if flow is not None:\n flow_labels = self._base_flow_labels(flow)\n else:\n flow_labels = {}\n\n env = {\n **self._base_environment(),\n **self._base_flow_run_environment(flow_run),\n **self.env,\n }\n self.env = {key: value for key, value in env.items() if value is not None}\n self.labels = {\n **self._base_flow_run_labels(flow_run),\n **deployment_labels,\n **flow_labels,\n **self.labels,\n }\n self.name = self.name or flow_run.name\n self.command = self.command or self._base_flow_run_command()\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker
","text":" Bases: ABC
Source code in prefect/workers/base.py
@register_base_type\nclass BaseWorker(abc.ABC):\n type: str\n job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n _documentation_url = \"\"\n _logo_url = \"\"\n _description = \"\"\n\n def __init__(\n self,\n work_pool_name: str,\n work_queues: Optional[List[str]] = None,\n name: Optional[str] = None,\n prefetch_seconds: Optional[float] = None,\n create_pool_if_not_found: bool = True,\n limit: Optional[int] = None,\n heartbeat_interval_seconds: Optional[int] = None,\n *,\n base_job_template: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"\n Base class for all Prefect workers.\n\n Args:\n name: The name of the worker. If not provided, a random one\n will be generated. If provided, it cannot contain '/' or '%'.\n The name is used to identify the worker in the UI; if two\n processes have the same name, they will be treated as the same\n worker.\n work_pool_name: The name of the work pool to poll.\n work_queues: A list of work queues to poll. If not provided, all\n work queue in the work pool will be polled.\n prefetch_seconds: The number of seconds to prefetch flow runs for.\n create_pool_if_not_found: Whether to create the work pool\n if it is not found. Defaults to `True`, but can be set to `False` to\n ensure that work pools are not created accidentally.\n limit: The maximum number of flow runs this worker should be running at\n a given time.\n base_job_template: If creating the work pool, provide the base job\n template to use. Logs a warning if the pool already exists.\n \"\"\"\n if name and (\"/\" in name or \"%\" in name):\n raise ValueError(\"Worker name cannot contain '/' or '%'\")\n self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n self.is_setup = False\n self._create_pool_if_not_found = create_pool_if_not_found\n self._base_job_template = base_job_template\n self._work_pool_name = work_pool_name\n self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n self._prefetch_seconds: float = (\n prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n )\n self.heartbeat_interval_seconds = (\n heartbeat_interval_seconds or PREFECT_WORKER_HEARTBEAT_SECONDS.value()\n )\n\n self._work_pool: Optional[WorkPool] = None\n self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n self._client: Optional[PrefectClient] = None\n self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n self._limit = limit\n self._limiter: Optional[anyio.CapacityLimiter] = None\n self._submitting_flow_run_ids = set()\n self._cancelling_flow_run_ids = set()\n self._scheduled_task_scopes = set()\n\n @classmethod\n def get_documentation_url(cls) -> str:\n return cls._documentation_url\n\n @classmethod\n def get_logo_url(cls) -> str:\n return cls._logo_url\n\n @classmethod\n def get_description(cls) -> str:\n return cls._description\n\n @classmethod\n def get_default_base_job_template(cls) -> Dict:\n if cls.job_configuration_variables is None:\n schema = cls.job_configuration.schema()\n # remove \"template\" key from all dicts in schema['properties'] because it is not a\n # relevant field\n for key, value in schema[\"properties\"].items():\n if isinstance(value, dict):\n schema[\"properties\"][key].pop(\"template\", None)\n variables_schema = schema\n else:\n variables_schema = cls.job_configuration_variables.schema()\n variables_schema.pop(\"title\", None)\n return {\n \"job_configuration\": 
cls.job_configuration.json_template(),\n \"variables\": variables_schema,\n }\n\n @staticmethod\n def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n \"\"\"\n Returns the worker class for a given worker type. If the worker type\n is not recognized, returns None.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return worker_registry.get(type)\n\n @staticmethod\n def get_all_available_worker_types() -> List[str]:\n \"\"\"\n Returns all worker types available in the local registry.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return list(worker_registry.keys())\n return []\n\n def get_name_slug(self):\n return slugify(self.name)\n\n def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n return flow_run_logger(flow_run=flow_run).getChild(\n \"worker\",\n extra={\n \"worker_name\": self.name,\n \"work_pool_name\": (\n self._work_pool_name if self._work_pool else \"<unknown>\"\n ),\n \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n },\n )\n\n @abc.abstractmethod\n async def run(\n self,\n flow_run: \"FlowRun\",\n configuration: BaseJobConfiguration,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> BaseWorkerResult:\n \"\"\"\n Runs a given flow run on the current worker.\n \"\"\"\n raise NotImplementedError(\n \"Workers must implement a method for running submitted flow runs\"\n )\n\n async def kill_infrastructure(\n self,\n infrastructure_pid: str,\n configuration: BaseJobConfiguration,\n grace_seconds: int = 30,\n ):\n \"\"\"\n Method for killing infrastructure created by a worker. Should be implemented by\n individual workers if they support killing infrastructure.\n \"\"\"\n raise NotImplementedError(\n \"This worker does not support killing infrastructure.\"\n )\n\n @classmethod\n def __dispatch_key__(cls):\n if cls.__name__ == \"BaseWorker\":\n return None # The base class is abstract\n return cls.type\n\n async def setup(self):\n \"\"\"Prepares the worker to run.\"\"\"\n self._logger.debug(\"Setting up worker...\")\n self._runs_task_group = anyio.create_task_group()\n self._limiter = (\n anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n )\n self._client = get_client()\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.is_setup = True\n\n async def teardown(self, *exc_info):\n \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n self._logger.debug(\"Tearing down worker...\")\n self.is_setup = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n self._runs_task_group = None\n self._client = None\n\n def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n \"\"\"\n This method is invoked by a webserver healthcheck handler\n and returns a boolean indicating if the worker has recorded a\n scheduled flow run poll within a variable amount of time.\n\n The `query_interval_seconds` is the same value that is used by\n the loop services - we will evaluate if the _last_polled_time\n was within that interval x 30 (so 10s -> 5m)\n\n The instance property `self._last_polled_time`\n is currently set/updated in `get_and_submit_flow_runs()`\n \"\"\"\n threshold_seconds = query_interval_seconds * 30\n\n seconds_since_last_poll = 
(\n pendulum.now(\"utc\") - self._last_polled_time\n ).in_seconds()\n\n is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n if not is_still_polling:\n self._logger.error(\n f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n \"and should be restarted\"\n )\n\n return is_still_polling\n\n async def get_and_submit_flow_runs(self):\n runs_response = await self._get_scheduled_flow_runs()\n\n self._last_polled_time = pendulum.now(\"utc\")\n\n return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n async def check_for_cancelled_flow_runs(self):\n if not self.is_setup:\n raise RuntimeError(\n \"Worker is not set up. Please make sure you are running this worker \"\n \"as an async context manager.\"\n )\n\n self._logger.debug(\"Checking for cancelled flow runs...\")\n\n work_queue_filter = (\n WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n if self._work_queues\n else None\n )\n\n named_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n ),\n work_pool_filter=WorkPoolFilter(\n name=WorkPoolFilterName(any_=[self._work_pool_name])\n ),\n work_queue_filter=work_queue_filter,\n )\n\n typed_cancelling_flow_runs = await self._client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=FlowRunFilterState(\n type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n ),\n # Avoid duplicate cancellation calls\n id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n ),\n work_pool_filter=WorkPoolFilter(\n name=WorkPoolFilterName(any_=[self._work_pool_name])\n ),\n work_queue_filter=work_queue_filter,\n )\n\n cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n if cancelling_flow_runs:\n self._logger.info(\n f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n )\n\n for flow_run in cancelling_flow_runs:\n self._cancelling_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n return cancelling_flow_runs\n\n async def cancel_run(self, flow_run: \"FlowRun\"):\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n configuration = await self._get_configuration(flow_run)\n if configuration.is_using_a_runner:\n self._logger.info(\n f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n \" using enhanced cancellation. A dedicated runner will handle\"\n \" cancellation.\"\n )\n return\n except ObjectNotFound:\n self._logger.warning(\n f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n )\n\n if not flow_run.infrastructure_pid:\n run_logger.error(\n f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n \" attached. 
Cancellation cannot be guaranteed.\"\n )\n await self._mark_flow_run_as_cancelled(\n flow_run,\n state_updates={\n \"message\": (\n \"This flow run is missing infrastructure tracking information\"\n \" and cancellation cannot be guaranteed.\"\n )\n },\n )\n return\n\n try:\n await self.kill_infrastructure(\n infrastructure_pid=flow_run.infrastructure_pid,\n configuration=configuration,\n )\n except NotImplementedError:\n self._logger.error(\n f\"Worker type {self.type!r} does not support killing created \"\n \"infrastructure. Cancellation cannot be guaranteed.\"\n )\n except InfrastructureNotFound as exc:\n self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n await self._mark_flow_run_as_cancelled(flow_run)\n except InfrastructureNotAvailable as exc:\n self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n except Exception:\n run_logger.exception(\n \"Encountered exception while killing infrastructure for flow run \"\n f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n )\n # We will try again on generic exceptions\n self._cancelling_flow_run_ids.remove(flow_run.id)\n return\n else:\n self._emit_flow_run_cancelled_event(\n flow_run=flow_run, configuration=configuration\n )\n await self._mark_flow_run_as_cancelled(flow_run)\n run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n async def _update_local_work_pool_info(self):\n try:\n work_pool = await self._client.read_work_pool(\n work_pool_name=self._work_pool_name\n )\n except ObjectNotFound:\n if self._create_pool_if_not_found:\n wp = WorkPoolCreate(\n name=self._work_pool_name,\n type=self.type,\n )\n if self._base_job_template is not None:\n wp.base_job_template = self._base_job_template\n\n work_pool = await self._client.create_work_pool(work_pool=wp)\n self._logger.info(f\"Work pool {self._work_pool_name!r} created.\")\n else:\n self._logger.warning(f\"Work pool {self._work_pool_name!r} not found!\")\n if self._base_job_template is not None:\n self._logger.warning(\n \"Ignoring supplied base job template because the work pool\"\n \" already exists\"\n )\n return\n\n # if the remote config type changes (or if it's being loaded for the\n # first time), check if it matches the local type and warn if not\n if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n if work_pool.type != self.__class__.type:\n self._logger.warning(\n \"Worker type mismatch! This worker process expects type \"\n f\"{self.type!r} but received {work_pool.type!r}\"\n \" from the server. Unexpected behavior may occur.\"\n )\n\n # once the work pool is loaded, verify that it has a `base_job_template` and\n # set it if not\n if not work_pool.base_job_template:\n job_template = self.__class__.get_default_base_job_template()\n await self._set_work_pool_template(work_pool, job_template)\n work_pool.base_job_template = job_template\n\n self._work_pool = work_pool\n\n async def _send_worker_heartbeat(self):\n if self._work_pool:\n await self._client.send_worker_heartbeat(\n work_pool_name=self._work_pool_name,\n worker_name=self.name,\n heartbeat_interval_seconds=self.heartbeat_interval_seconds,\n )\n\n async def sync_with_backend(self):\n \"\"\"\n Updates the worker's local information about it's current work pool and\n queues. 
Sends a worker heartbeat to the API.\n \"\"\"\n await self._update_local_work_pool_info()\n\n await self._send_worker_heartbeat()\n\n self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n async def _get_scheduled_flow_runs(\n self,\n ) -> List[\"WorkerFlowRunResponse\"]:\n \"\"\"\n Retrieve scheduled flow runs from the work pool's queues.\n \"\"\"\n scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n self._logger.debug(\n f\"Querying for flow runs scheduled before {scheduled_before}\"\n )\n try:\n scheduled_flow_runs = (\n await self._client.get_scheduled_flow_runs_for_work_pool(\n work_pool_name=self._work_pool_name,\n scheduled_before=scheduled_before,\n work_queue_names=list(self._work_queues),\n )\n )\n self._logger.debug(\n f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n )\n return scheduled_flow_runs\n except ObjectNotFound:\n # the pool doesn't exist; it will be created on the next\n # heartbeat (or an appropriate warning will be logged)\n return []\n\n async def _submit_scheduled_flow_runs(\n self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n ) -> List[\"FlowRun\"]:\n \"\"\"\n Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n for execution by the worker.\n \"\"\"\n submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n for flow_run in submittable_flow_runs:\n if flow_run.id in self._submitting_flow_run_ids:\n continue\n\n try:\n if self._limiter:\n self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n except anyio.WouldBlock:\n self._logger.info(\n f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n \" in progress.\"\n )\n break\n else:\n run_logger = self.get_flow_run_logger(flow_run)\n run_logger.info(\n f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n )\n self._submitting_flow_run_ids.add(flow_run.id)\n self._runs_task_group.start_soon(\n self._submit_run,\n flow_run,\n )\n\n return list(\n filter(\n lambda run: run.id in self._submitting_flow_run_ids,\n submittable_flow_runs,\n )\n )\n\n async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n \"\"\"\n Performs a check on a submitted flow run to warn the user if the flow run\n was created from a deployment with a storage block.\n \"\"\"\n if flow_run.deployment_id:\n deployment = await self._client.read_deployment(flow_run.deployment_id)\n if deployment.storage_document_id:\n raise ValueError(\n f\"Flow run {flow_run.id!r} was created from deployment\"\n f\" {deployment.name!r} which is configured with a storage block.\"\n \" Please use an\"\n \" agent to execute this flow run.\"\n )\n\n async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n \"\"\"\n Submits a given flow run for execution by the worker.\n \"\"\"\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n await self._check_flow_run(flow_run)\n except (ValueError, ObjectNotFound):\n self._logger.exception(\n (\n \"Flow run %s did not pass checks and will not be submitted for\"\n \" execution\"\n ),\n flow_run.id,\n )\n self._submitting_flow_run_ids.remove(flow_run.id)\n return\n\n ready_to_submit = await self._propose_pending_state(flow_run)\n\n if ready_to_submit:\n readiness_result = await self._runs_task_group.start(\n self._submit_run_and_capture_errors, flow_run\n )\n\n if readiness_result and not isinstance(readiness_result, Exception):\n try:\n await self._client.update_flow_run(\n 
flow_run_id=flow_run.id,\n infrastructure_pid=str(readiness_result),\n )\n except Exception:\n run_logger.exception(\n \"An error occurred while setting the `infrastructure_pid` on \"\n f\"flow run {flow_run.id!r}. The flow run will \"\n \"not be cancellable.\"\n )\n\n run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n else:\n # If the run is not ready to submit, release the concurrency slot\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run.id)\n\n self._submitting_flow_run_ids.remove(flow_run.id)\n\n async def _submit_run_and_capture_errors(\n self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n ) -> Union[BaseWorkerResult, Exception]:\n run_logger = self.get_flow_run_logger(flow_run)\n\n try:\n configuration = await self._get_configuration(flow_run)\n submitted_event = self._emit_flow_run_submitted_event(configuration)\n result = await self.run(\n flow_run=flow_run,\n task_status=task_status,\n configuration=configuration,\n )\n except Exception as exc:\n if not task_status._future.done():\n # This flow run was being submitted and did not start successfully\n run_logger.exception(\n f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started(exc)\n await self._propose_crashed_state(\n flow_run, \"Flow run could not be submitted to infrastructure\"\n )\n else:\n run_logger.exception(\n f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n \"The flow run will not be marked as failed, but an issue may have \"\n \"occurred.\"\n )\n return exc\n finally:\n if self._limiter:\n self._limiter.release_on_behalf_of(flow_run.id)\n\n if not task_status._future.done():\n run_logger.error(\n f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n \"as started or raising an error. This behavior is not expected and \"\n \"generally indicates improper implementation of infrastructure. 
The \"\n \"flow run will not be marked as failed, but an issue may have occurred.\"\n )\n # Mark the task as started to prevent agent crash\n task_status.started()\n\n if result.status_code != 0:\n await self._propose_crashed_state(\n flow_run,\n (\n \"Flow run infrastructure exited with non-zero status code\"\n f\" {result.status_code}.\"\n ),\n )\n\n self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n return result\n\n def get_status(self):\n \"\"\"\n Retrieves the status of the current worker including its name, current worker\n pool, the work pool queues it is polling, and its local settings.\n \"\"\"\n return {\n \"name\": self.name,\n \"work_pool\": (\n self._work_pool.dict(json_compatible=True)\n if self._work_pool is not None\n else None\n ),\n \"settings\": {\n \"prefetch_seconds\": self._prefetch_seconds,\n },\n }\n\n async def _get_configuration(\n self,\n flow_run: \"FlowRun\",\n ) -> BaseJobConfiguration:\n deployment = await self._client.read_deployment(flow_run.deployment_id)\n flow = await self._client.read_flow(flow_run.flow_id)\n\n deployment_vars = deployment.infra_overrides or {}\n flow_run_vars = flow_run.job_variables or {}\n job_variables = {**deployment_vars, **flow_run_vars}\n\n configuration = await self.job_configuration.from_template_and_values(\n base_job_template=self._work_pool.base_job_template,\n values=job_variables,\n client=self._client,\n )\n configuration.prepare_for_flow_run(\n flow_run=flow_run, deployment=deployment, flow=flow\n )\n return configuration\n\n async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n run_logger = self.get_flow_run_logger(flow_run)\n state = flow_run.state\n try:\n state = await propose_state(\n self._client, Pending(), flow_run_id=flow_run.id\n )\n except Abort as exc:\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n f\"Server sent an abort signal: {exc}\"\n ),\n )\n return False\n except Exception:\n run_logger.exception(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n )\n return False\n\n if not state.is_pending():\n run_logger.info(\n (\n f\"Aborted submission of flow run '{flow_run.id}': \"\n f\"Server returned a non-pending state {state.type.value!r}\"\n ),\n )\n return False\n\n return True\n\n async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n run_logger = self.get_flow_run_logger(flow_run)\n try:\n await propose_state(\n self._client,\n await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # We've already failed, no need to note the abort but we don't want it to\n # raise in the agent process\n pass\n except Exception:\n run_logger.error(\n f\"Failed to update state of flow run '{flow_run.id}'\",\n exc_info=True,\n )\n\n async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n run_logger = self.get_flow_run_logger(flow_run)\n try:\n state = await propose_state(\n self._client,\n Crashed(message=message),\n flow_run_id=flow_run.id,\n )\n except Abort:\n # Flow run already marked as failed\n pass\n except Exception:\n run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n else:\n if state.is_crashed():\n run_logger.info(\n f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n )\n\n async def _mark_flow_run_as_cancelled(\n self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n ) -> None:\n state_updates = state_updates or {}\n state_updates.setdefault(\"name\", \"Cancelled\")\n state_updates.setdefault(\"type\", StateType.CANCELLED)\n state = flow_run.state.copy(update=state_updates)\n\n await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n # Do not remove the flow run from the cancelling set immediately because\n # the API caches responses for the `read_flow_runs` and we do not want to\n # duplicate cancellations.\n await self._schedule_task(\n 60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n )\n\n async def _set_work_pool_template(self, work_pool, job_template):\n \"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n await self._client.update_work_pool(\n work_pool_name=work_pool.name,\n work_pool=WorkPoolUpdate(\n base_job_template=job_template,\n ),\n )\n\n async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n \"\"\"\n Schedule a background task to start after some time.\n\n These tasks will be run immediately when the worker exits instead of waiting.\n\n The function may be async or sync. 
Async functions will be awaited.\n \"\"\"\n\n async def wrapper(task_status):\n # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n # time or shutdown\n if self.is_setup:\n with anyio.CancelScope() as scope:\n self._scheduled_task_scopes.add(scope)\n task_status.started()\n await anyio.sleep(__in_seconds)\n\n self._scheduled_task_scopes.remove(scope)\n else:\n task_status.started()\n\n result = fn(*args, **kwargs)\n if inspect.iscoroutine(result):\n await result\n\n await self._runs_task_group.start(wrapper)\n\n async def __aenter__(self):\n self._logger.debug(\"Entering worker context...\")\n await self.setup()\n return self\n\n async def __aexit__(self, *exc_info):\n self._logger.debug(\"Exiting worker context...\")\n await self.teardown(*exc_info)\n\n def __repr__(self):\n return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n def _event_resource(self):\n return {\n \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n \"prefect.resource.name\": self.name,\n \"prefect.version\": prefect.__version__,\n \"prefect.worker-type\": self.type,\n }\n\n def _event_related_resources(\n self,\n configuration: Optional[BaseJobConfiguration] = None,\n include_self: bool = False,\n ) -> List[RelatedResource]:\n related = []\n if configuration:\n related += configuration._related_resources()\n\n if self._work_pool:\n related.append(\n object_as_related_resource(\n kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n )\n )\n\n if include_self:\n worker_resource = self._event_resource()\n worker_resource[\"prefect.resource.role\"] = \"worker\"\n related.append(RelatedResource(__root__=worker_resource))\n\n return related\n\n def _emit_flow_run_submitted_event(\n self, configuration: BaseJobConfiguration\n ) -> Event:\n return emit_event(\n event=\"prefect.worker.submitted-flow-run\",\n resource=self._event_resource(),\n related=self._event_related_resources(configuration=configuration),\n )\n\n def _emit_flow_run_executed_event(\n self,\n result: BaseWorkerResult,\n configuration: BaseJobConfiguration,\n submitted_event: Event,\n ):\n related = self._event_related_resources(configuration=configuration)\n\n for resource in related:\n if resource.role == \"flow-run\":\n resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n emit_event(\n event=\"prefect.worker.executed-flow-run\",\n resource=self._event_resource(),\n related=related,\n follows=submitted_event,\n )\n\n async def _emit_worker_started_event(self) -> Event:\n return emit_event(\n \"prefect.worker.started\",\n resource=self._event_resource(),\n related=self._event_related_resources(),\n )\n\n async def _emit_worker_stopped_event(self, started_event: Event):\n emit_event(\n \"prefect.worker.stopped\",\n resource=self._event_resource(),\n related=self._event_related_resources(),\n follows=started_event,\n )\n\n def _emit_flow_run_cancelled_event(\n self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n ):\n related = self._event_related_resources(configuration=configuration)\n\n for resource in related:\n if resource.role == \"flow-run\":\n resource[\"prefect.infrastructure.identifier\"] = str(\n flow_run.infrastructure_pid\n )\n\n emit_event(\n event=\"prefect.worker.cancelled-flow-run\",\n resource=self._event_resource(),\n related=related,\n )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types
staticmethod
","text":"Returns all worker types available in the local registry.
Source code inprefect/workers/base.py
@staticmethod\ndef get_all_available_worker_types() -> List[str]:\n \"\"\"\n Returns all worker types available in the local registry.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return list(worker_registry.keys())\n return []\n
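For example, a quick way to see which worker types your environment exposes (the output depends on which Prefect collections are installed):
from prefect.workers.base import BaseWorker\n\n# Discovers installed Prefect collections, then lists the registered worker\n# types, e.g. [\"process\", ...]\nprint(BaseWorker.get_all_available_worker_types())\n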
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status
","text":"Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings.
Source code inprefect/workers/base.py
def get_status(self):\n \"\"\"\n Retrieves the status of the current worker including its name, current worker\n pool, the work pool queues it is polling, and its local settings.\n \"\"\"\n return {\n \"name\": self.name,\n \"work_pool\": (\n self._work_pool.dict(json_compatible=True)\n if self._work_pool is not None\n else None\n ),\n \"settings\": {\n \"prefetch_seconds\": self._prefetch_seconds,\n },\n }\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type
staticmethod
","text":"Returns the worker class for a given worker type. If the worker type is not recognized, returns None.
Source code inprefect/workers/base.py
@staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n \"\"\"\n Returns the worker class for a given worker type. If the worker type\n is not recognized, returns None.\n \"\"\"\n load_prefect_collections()\n worker_registry = get_registry_for_type(BaseWorker)\n if worker_registry is not None:\n return worker_registry.get(type)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling
","text":"This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.
The query_interval_seconds
is the same value used by the loop services; the worker checks whether the _last_polled_time falls within that interval multiplied by 30 (so a 10-second interval yields a 5-minute threshold).
The instance property self._last_polled_time
is currently set/updated in get_and_submit_flow_runs()
prefect/workers/base.py
def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n \"\"\"\n This method is invoked by a webserver healthcheck handler\n and returns a boolean indicating if the worker has recorded a\n scheduled flow run poll within a variable amount of time.\n\n The `query_interval_seconds` is the same value that is used by\n the loop services - we will evaluate if the _last_polled_time\n was within that interval x 30 (so 10s -> 5m)\n\n The instance property `self._last_polled_time`\n is currently set/updated in `get_and_submit_flow_runs()`\n \"\"\"\n threshold_seconds = query_interval_seconds * 30\n\n seconds_since_last_poll = (\n pendulum.now(\"utc\") - self._last_polled_time\n ).in_seconds()\n\n is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n if not is_still_polling:\n self._logger.error(\n f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n \"and should be restarted\"\n )\n\n return is_still_polling\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure
async
","text":"Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.
Source code inprefect/workers/base.py
async def kill_infrastructure(\n self,\n infrastructure_pid: str,\n configuration: BaseJobConfiguration,\n grace_seconds: int = 30,\n):\n \"\"\"\n Method for killing infrastructure created by a worker. Should be implemented by\n individual workers if they support killing infrastructure.\n \"\"\"\n raise NotImplementedError(\n \"This worker does not support killing infrastructure.\"\n )\n
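Workers that can tear down the infrastructure they create override this method. A minimal sketch, in which MyWorker and its self._jobs registry are hypothetical illustrations rather than part of the base class:
from prefect.exceptions import InfrastructureNotFound\nfrom prefect.workers.base import BaseWorker\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n\n    async def run(self, flow_run, configuration, task_status=None):\n        ...  # create infrastructure and return a BaseWorkerResult\n\n    async def kill_infrastructure(\n        self, infrastructure_pid, configuration, grace_seconds=30\n    ):\n        # `self._jobs` is a hypothetical registry that run() would populate.\n        job = self._jobs.get(infrastructure_pid)\n        if job is None:\n            raise InfrastructureNotFound(f\"Job {infrastructure_pid!r} was not found.\")\n        job.terminate()\n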
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run
abstractmethod
async
","text":"Runs a given flow run on the current worker.
Source code inprefect/workers/base.py
@abc.abstractmethod\nasync def run(\n self,\n flow_run: \"FlowRun\",\n configuration: BaseJobConfiguration,\n task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n \"\"\"\n Runs a given flow run on the current worker.\n \"\"\"\n raise NotImplementedError(\n \"Workers must implement a method for running submitted flow runs\"\n )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup
async
","text":"Prepares the worker to run.
Source code inprefect/workers/base.py
async def setup(self):\n \"\"\"Prepares the worker to run.\"\"\"\n self._logger.debug(\"Setting up worker...\")\n self._runs_task_group = anyio.create_task_group()\n self._limiter = (\n anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n )\n self._client = get_client()\n await self._client.__aenter__()\n await self._runs_task_group.__aenter__()\n\n self.is_setup = True\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend
async
","text":"Updates the worker's local information about it's current work pool and queues. Sends a worker heartbeat to the API.
Source code inprefect/workers/base.py
async def sync_with_backend(self):\n    \"\"\"\n    Updates the worker's local information about its current work pool and\n    queues. Sends a worker heartbeat to the API.\n    \"\"\"\n    await self._update_local_work_pool_info()\n\n    await self._send_worker_heartbeat()\n\n    self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown
async
","text":"Cleans up resources after the worker is stopped.
Source code inprefect/workers/base.py
async def teardown(self, *exc_info):\n \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n self._logger.debug(\"Tearing down worker...\")\n self.is_setup = False\n for scope in self._scheduled_task_scopes:\n scope.cancel()\n if self._runs_task_group:\n await self._runs_task_group.__aexit__(*exc_info)\n if self._client:\n await self._client.__aexit__(*exc_info)\n self._runs_task_group = None\n self._client = None\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/block/","title":"block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block","title":"prefect.workers.block
","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration","title":"BlockWorkerJobConfiguration
","text":" Bases: BaseModel
prefect/workers/block.py
class BlockWorkerJobConfiguration(BaseModel):\n block: Block = Field(\n default=..., description=\"The infrastructure block to use for job creation.\"\n )\n\n @validator(\"block\")\n def _validate_block_is_infrastructure(cls, v):\n print(\"v: \", v)\n if not isinstance(v, Infrastructure):\n raise TypeError(\"Provided block is not a valid infrastructure block.\")\n\n return v\n\n _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n @property\n def is_using_a_runner(self):\n return (\n self.block.command is not None\n and \"prefect flow-run execute\" in shlex.join(self.block.command)\n )\n\n @staticmethod\n def _get_base_config_defaults(variables: dict) -> dict:\n \"\"\"Get default values from base config for all variables that have them.\"\"\"\n defaults = dict()\n for variable_name, attrs in variables.items():\n if \"default\" in attrs:\n defaults[variable_name] = attrs[\"default\"]\n\n return defaults\n\n @classmethod\n @inject_client\n async def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n ):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n\n block_document_id = get_from_dict(\n populated_configuration, \"block.$ref.block_document_id\"\n )\n if not block_document_id:\n raise ValueError(\n \"Base job template is invalid for this worker type because it does not\"\n \" contain a block_document_id after variable resolution.\"\n )\n\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n infrastructure_block = Block._from_block_document(block_document)\n\n populated_configuration[\"block\"] = infrastructure_block\n\n return cls(**populated_configuration)\n\n @classmethod\n def json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n\n def _related_resources(self) -> List[RelatedResource]:\n tags = set()\n related = []\n\n for kind, obj in self._related_objects.items():\n if obj is None:\n continue\n if hasattr(obj, \"tags\"):\n tags.update(obj.tags)\n related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n return related + tags_as_related_resources(tags)\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n self.block = self.block.prepare_for_flow_run(\n flow_run=flow_run, deployment=deployment, flow=flow\n )\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.from_template_and_values","title":"from_template_and_values
async
classmethod
","text":"Creates a valid worker configuration object from the provided base configuration and overrides.
Important: this method expects that the base_job_template was already validated server-side.
Source code inprefect/workers/block.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n \"\"\"Creates a valid worker configuration object from the provided base\n configuration and overrides.\n\n Important: this method expects that the base_job_template was already\n validated server-side.\n \"\"\"\n job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n variables_schema = base_job_template[\"variables\"]\n variables = cls._get_base_config_defaults(\n variables_schema.get(\"properties\", {})\n )\n variables.update(values)\n\n populated_configuration = apply_values(template=job_config, values=variables)\n\n block_document_id = get_from_dict(\n populated_configuration, \"block.$ref.block_document_id\"\n )\n if not block_document_id:\n raise ValueError(\n \"Base job template is invalid for this worker type because it does not\"\n \" contain a block_document_id after variable resolution.\"\n )\n\n block_document = await client.read_block_document(\n block_document_id=block_document_id\n )\n infrastructure_block = Block._from_block_document(block_document)\n\n populated_configuration[\"block\"] = infrastructure_block\n\n return cls(**populated_configuration)\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.json_template","title":"json_template
classmethod
","text":"Returns a dict with job configuration as keys and the corresponding templates as values
Defaults to using the job configuration parameter name as the template variable name.
e.g. { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # template2
specifically provided as the template }
prefect/workers/block.py
@classmethod\ndef json_template(cls) -> dict:\n \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n Defaults to using the job configuration parameter name as the template variable name.\n\n e.g.\n {\n key1: '{{ key1 }}', # default variable template\n key2: '{{ template2 }}', # `template2` specifically provide as template\n }\n \"\"\"\n configuration = {}\n properties = cls.schema()[\"properties\"]\n for k, v in properties.items():\n if v.get(\"template\"):\n template = v[\"template\"]\n else:\n template = \"{{ \" + k + \" }}\"\n configuration[k] = template\n\n return configuration\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerResult","title":"BlockWorkerResult
","text":" Bases: BaseWorkerResult
Result of a block worker job
Source code inprefect/workers/block.py
class BlockWorkerResult(BaseWorkerResult):\n \"\"\"Result of a block worker job\"\"\"\n
","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/process/","title":"process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process
","text":"Module containing the Process worker used for executing flow runs as subprocesses.
To start a Process worker, run the following command:
prefect worker start --pool 'my-work-pool' --type process\n
Replace my-work-pool
with the name of the work pool you want the worker to poll for flow runs.
For more information about work pools and workers, check out the Prefect docs.
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration
","text":" Bases: BaseJobConfiguration
prefect/workers/process.py
class ProcessJobConfiguration(BaseJobConfiguration):\n stream_output: bool = Field(default=True)\n working_dir: Optional[Path] = Field(default=None)\n\n @validator(\"working_dir\")\n def validate_command(cls, v):\n \"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n if v:\n return relative_path_to_current_platform(v)\n return v\n\n def prepare_for_flow_run(\n self,\n flow_run: \"FlowRun\",\n deployment: Optional[\"DeploymentResponse\"] = None,\n flow: Optional[\"Flow\"] = None,\n ):\n super().prepare_for_flow_run(flow_run, deployment, flow)\n\n self.env = {**os.environ, **self.env}\n self.command = (\n f\"{get_sys_executable()} -m prefect.engine\"\n if self.command == self._base_flow_run_command()\n else self.command\n )\n\n def _base_flow_run_command(self) -> str:\n \"\"\"\n Override the base flow run command because enhanced cancellation doesn't\n work with the process worker.\n \"\"\"\n return \"python -m prefect.engine\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration.validate_command","title":"validate_command
","text":"Make sure that the working directory is formatted for the current platform.
Source code inprefect/workers/process.py
@validator(\"working_dir\")\ndef validate_command(cls, v):\n \"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n if v:\n return relative_path_to_current_platform(v)\n return v\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult
","text":" Bases: BaseWorkerResult
Contains information about the final state of a completed process
Source code inprefect/workers/process.py
class ProcessWorkerResult(BaseWorkerResult):\n \"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/server/","title":"server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server","title":"prefect.workers.server
","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server.start_healthcheck_server","title":"start_healthcheck_server
","text":"Run a healthcheck FastAPI server for a worker.
Parameters:
Name Type Description Default
worker
BaseWorker | ProcessWorker
the worker whose health we will check
required
log_level
str
the log level to use for the server
'error'
Source code in prefect/workers/server.py
def start_healthcheck_server(\n worker: Union[BaseWorker, ProcessWorker],\n query_interval_seconds: int,\n log_level: str = \"error\",\n) -> None:\n \"\"\"\n Run a healthcheck FastAPI server for a worker.\n\n Args:\n worker (BaseWorker | ProcessWorker): the worker whose health we will check\n log_level (str): the log level to use for the server\n \"\"\"\n webserver = FastAPI()\n router = APIRouter()\n\n def perform_health_check():\n did_recently_poll = worker.is_worker_still_polling(\n query_interval_seconds=query_interval_seconds\n )\n\n if not did_recently_poll:\n return JSONResponse(\n status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n content={\"message\": \"Worker may be unresponsive at this time\"},\n )\n return JSONResponse(status_code=status.HTTP_200_OK, content={\"message\": \"OK\"})\n\n router.add_api_route(\"/health\", perform_health_check, methods=[\"GET\"])\n\n webserver.include_router(router)\n\n uvicorn.run(\n webserver,\n host=PREFECT_WORKER_WEBSERVER_HOST.value(),\n port=PREFECT_WORKER_WEBSERVER_PORT.value(),\n log_level=log_level,\n )\n
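Once a worker's healthcheck server is running, any HTTP client can probe the /health route. A minimal sketch, assuming the webserver is reachable at localhost on port 8080 (the actual host and port come from the PREFECT_WORKER_WEBSERVER_HOST and PREFECT_WORKER_WEBSERVER_PORT settings used above):
import requests\n\n# 200 means the worker polled recently; 503 means it may be unresponsive.\nresponse = requests.get(\"http://localhost:8080/health\")\nprint(response.status_code, response.json())\n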
","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/utilities/","title":"utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/prefect/workers/utilities/#prefect.workers.utilities","title":"prefect.workers.utilities
","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/python/","title":"Python SDK","text":"The Prefect Python SDK is used to build, test, and execute workflows against the Prefect API.
Explore the modules in the navigation bar to the left to learn more.
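For example, a minimal workflow built with the SDK:
from prefect import flow, task\n\n@task\ndef say_hello(name: str) -> None:\n    print(f\"Hello, {name}!\")\n\n@flow\ndef hello_flow(name: str = \"world\") -> None:\n    say_hello(name)\n\nif __name__ == \"__main__\":\n    hello_flow()\n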
","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.
Prefect Cloud and a locally hosted Prefect server each provide a REST API.
http://localhost:4200/docs
or the /docs
endpoint of the PREFECT_API_URL you have configured to access the server. You must have the server running with prefect server start
to access the interactive documentation. You have many options to interact with the Prefect REST API:
PrefectClient
This example uses PrefectClient
with a locally hosted Prefect server:
import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n    client = get_client()\n    r = await client.read_flows(limit=5)\n    return r\n\nif __name__ == \"__main__\":\n    r = asyncio.run(get_flows())\n\n    for flow in r:\n        print(flow.name, flow.id)\n
Output:
cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"This example uses the Requests library with Prefect Cloud to return the five newest artifacts.
import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n \"sort\": \"CREATED_DESC\",\n \"limit\": 5,\n \"artifacts\": {\n \"key\": {\n \"exists_\": True\n }\n }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n print(artifact)\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"This example uses curl with Prefect Cloud to create a flow run:
ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n --header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n --data-raw \"{}\"\n
Note that in this example --data-raw \"{}\"
is required and is where you can specify other aspects of the flow run such as the state. Windows users substitute ^
for \\
for multi-line commands.
When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile
. This command will also display your Prefect API key, as shown below:
PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n
Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here
.
The REST APIs adhere to the following guidelines:
/flows
or /runs
).GET /flows/:id
.GET /task_runs
./task_runs
with a flow run filter instead of accessing /flow_runs/:id/task_runs
./api/:version
prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples.POST
requests where applicable.limit
and offset
.sort
parameter.GET
, PUT
and DELETE
requests are always idempotent. POST
and PATCH
are not guaranteed to be idempotent.GET
requests cannot receive information from the request body.POST
requests can receive information from the request body.POST /collection
creates a new member of the collection.GET /collection
lists all members of the collection.GET /collection/:id
gets a specific member of the collection by ID.DELETE /collection/:id
deletes a specific member of the collection.PUT /collection/:id
creates or replaces a specific member of the collection.PATCH /collection/:id
partially updates a specific member of the collection.POST /collection/action
is how we implement non-CRUD actions. For example, to set a flow run's state, we use POST /flow_runs/:id/set_state
.POST /collection/action
may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via GET
). Examples include POST /flow_runs/filter
, POST /flow_runs/count
, and POST /flow_runs/history
.Objects can be filtered by providing filter criteria in the body of a POST
request. When multiple criteria are specified, logical AND will be applied to the criteria.
Filter criteria are structured as follows:
{\n \"objects\": {\n \"object_field\": {\n \"field_operator_\": <field_value>\n }\n }\n}\n
In this example, objects
is the name of the collection to filter over (for example, flows
). The collection can be either the object being queried for (flows
for POST /flows/filter
) or a related object (flow_runs
for POST /flows/filter
).
object_field
is the name of the field over which to filter (name
for flows
). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}
.
field_operator_
is the operator to apply to a field when filtering. Common examples include:
any_
: return objects where this field matches any of the following values.is_null_
: return objects where this field is or is not null.eq_
: return objects where this field is equal to the following value.all_
: return objects where this field matches all of the following values.before_
: return objects where this datetime field is less than or equal to the following value.after_
: return objects where this datetime field is greater than or equal to the following value.For example, to query for flows with the tag \"database\"
and failed flow runs, POST /flows/filter
with the following request body:
{\n \"flows\": {\n \"tags\": {\n \"all_\": [\"database\"]\n }\n },\n \"flow_runs\": {\n \"state\": {\n \"type\": {\n \"any_\": [\"FAILED\"]\n }\n }\n }\n}\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.
To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:
from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n
This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.
Select links in the left navigation menu to explore.
","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin
","text":"Routes for admin-level interactions with the Prefect REST API.
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database
async
","text":"Clear all database tables without dropping them.
Source code inprefect/server/api/admin.py
@router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Clear all database tables without dropping them.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n async with db.session_context(begin_transaction=True) as session:\n # work pool has a circular dependency on pool queue; delete it first\n await session.execute(db.WorkPool.__table__.delete())\n for table in reversed(db.Base.metadata.sorted_tables):\n await session.execute(table.delete())\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database
async
","text":"Create all database objects.
Source code inprefect/server/api/admin.py
@router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Create all database objects.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n\n await db.create_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database
async
","text":"Drop all database objects.
Source code inprefect/server/api/admin.py
@router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n db: PrefectDBInterface = Depends(provide_database_interface),\n confirm: bool = Body(\n False,\n embed=True,\n description=\"Pass confirm=True to confirm you want to modify the database.\",\n ),\n response: Response = None,\n):\n \"\"\"Drop all database objects.\"\"\"\n if not confirm:\n response.status_code = status.HTTP_400_BAD_REQUEST\n return\n\n await db.drop_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings
async
","text":"Get the current Prefect REST API settings.
Secret setting values will be obfuscated.
Source code inprefect/server/api/admin.py
@router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n \"\"\"\n Get the current Prefect REST API settings.\n\n Secret setting values will be obfuscated.\n \"\"\"\n return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version
async
","text":"Returns the Prefect version number
Source code inprefect/server/api/admin.py
@router.get(\"/version\")\nasync def read_version() -> str:\n \"\"\"Returns the Prefect version number\"\"\"\n return prefect.__version__\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies
","text":"Utilities for injecting FastAPI dependencies.
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion
","text":"FastAPI Dependency used to check compatibility between the version of the api and a given request.
Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version.
Source code inprefect/server/api/dependencies.py
class EnforceMinimumAPIVersion:\n \"\"\"\n FastAPI Dependency used to check compatibility between the version of the api\n and a given request.\n\n Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n to the api's version. Rejects requests that are lower than the minimum version.\n \"\"\"\n\n def __init__(self, minimum_api_version: str, logger: logging.Logger):\n self.minimum_api_version = minimum_api_version\n versions = [int(v) for v in minimum_api_version.split(\".\")]\n self.api_major = versions[0]\n self.api_minor = versions[1]\n self.api_patch = versions[2]\n self.logger = logger\n\n async def __call__(\n self,\n x_prefect_api_version: str = Header(None),\n ):\n request_version = x_prefect_api_version\n\n # if no version header, assume latest and continue\n if not request_version:\n return\n\n # parse version\n try:\n major, minor, patch = [int(v) for v in request_version.split(\".\")]\n except ValueError:\n await self._notify_of_invalid_value(request_version)\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=(\n \"Invalid X-PREFECT-API-VERSION header format.\"\n f\"Expected header in format 'x.y.z' but received {request_version}\"\n ),\n )\n\n if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n await self._notify_of_outdated_version(request_version)\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=(\n f\"The request specified API version {request_version} but this \"\n f\"server requires version {self.minimum_api_version} or higher.\"\n ),\n )\n\n async def _notify_of_invalid_value(self, request_version: str):\n self.logger.error(\n f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n )\n\n async def _notify_of_outdated_version(self, request_version: str):\n self.logger.error(\n f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n f\"but minimum allowed version is '{self.minimum_api_version}'\"\n )\n
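A minimal sketch of wiring this dependency into a FastAPI app so that every request must carry a compatible X-PREFECT-API-VERSION header (the version string and logger name are illustrative):
import logging\n\nfrom fastapi import Depends, FastAPI\nfrom prefect.server.api.dependencies import EnforceMinimumAPIVersion\n\nenforce_version = EnforceMinimumAPIVersion(\n    minimum_api_version=\"0.8.4\", logger=logging.getLogger(\"api\")\n)\napp = FastAPI(dependencies=[Depends(enforce_version)])\n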
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody
","text":"A fastapi.Depends
factory for pulling a limit: int
parameter from the request body while determining the default from the current settings.
Source code inprefect/server/api/dependencies.py
def LimitBody() -> Depends:\n \"\"\"\n A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n request body while determining the default from the current settings.\n \"\"\"\n\n def get_limit(\n limit: int = Body(\n None,\n description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n ),\n ):\n default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n limit = limit if limit is not None else default_limit\n if not limit >= 0:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"Invalid limit: must be greater than or equal to 0.\",\n )\n if limit > default_limit:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n )\n return limit\n\n return Depends(get_limit)\n
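For example, a sketch, not from the Prefect source, of how a client interacts with a filter endpoint that uses this dependency; a limit above PREFECT_API_DEFAULT_LIMIT is rejected with a 422:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server address\n\n# A limit within [0, PREFECT_API_DEFAULT_LIMIT] is accepted...\nok = httpx.post(f\"{API_URL}/flows/filter\", json={\"limit\": 5})\n\n# ...while an oversized or negative limit yields 422 UNPROCESSABLE_ENTITY.\ntoo_big = httpx.post(f\"{API_URL}/flows/filter\", json={\"limit\": 10_000})\nprint(ok.status_code, too_big.status_code)  # 200 422 (with default settings)\n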
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments
","text":"Routes for interacting with Deployment objects.
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments
async
","text":"Count deployments.
Source code inprefect/server/api/deployments.py
@router.post(\"/count\")\nasync def count_deployments(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Count deployments.\n \"\"\"\n async with db.session_context() as session:\n return await models.deployments.count_deployments(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment
async
","text":"Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.
If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.
Source code inprefect/server/api/deployments.py
@router.post(\"/\")\nasync def create_deployment(\n deployment: schemas.actions.DeploymentCreate,\n response: Response,\n worker_lookups: WorkerLookups = Depends(WorkerLookups),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Gracefully creates a new deployment from the provided schema. If a deployment with\n the same name and flow_id already exists, the deployment is updated.\n\n If the deployment has an active schedule, flow runs will be scheduled.\n When upserting, any scheduled runs from the existing deployment will be deleted.\n \"\"\"\n\n data = deployment.dict(exclude_unset=True)\n\n async with db.session_context(begin_transaction=True) as session:\n if (\n deployment.work_pool_name\n and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n ):\n # Make sure that deployment is valid before beginning creation process\n work_pool = await models.workers.read_work_pool_by_name(\n session=session, work_pool_name=deployment.work_pool_name\n )\n if work_pool is None:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND,\n detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n )\n try:\n deployment.check_valid_configuration(work_pool.base_job_template)\n except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=f\"Error creating deployment: {exc!r}\",\n )\n\n # hydrate the input model into a full model\n deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n if deployment.work_pool_name and deployment.work_queue_name:\n # If a specific pool name/queue name combination was provided, get the\n # ID for that work pool queue.\n deployment_dict[\n \"work_queue_id\"\n ] = await worker_lookups._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n work_queue_name=deployment.work_queue_name,\n create_queue_if_not_found=True,\n )\n elif deployment.work_pool_name:\n # If just a pool name was provided, get the ID for its default\n # work pool queue.\n deployment_dict[\n \"work_queue_id\"\n ] = await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n )\n elif deployment.work_queue_name:\n # If just a queue name was provided, ensure that the queue exists and\n # get its ID.\n work_queue = await models.work_queues._ensure_work_queue_exists(\n session=session, name=deployment.work_queue_name\n )\n deployment_dict[\"work_queue_id\"] = work_queue.id\n\n deployment = schemas.core.Deployment(**deployment_dict)\n # check to see if relevant blocks exist, allowing us throw a useful error message\n # for debugging\n if deployment.infrastructure_document_id is not None:\n infrastructure_block = (\n await models.block_documents.read_block_document_by_id(\n session=session,\n block_document_id=deployment.infrastructure_document_id,\n )\n )\n if not infrastructure_block:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=(\n \"Error creating deployment. Could not find infrastructure\"\n f\" block with id: {deployment.infrastructure_document_id}. 
This\"\n \" usually occurs when applying a deployment specification that\"\n \" was built against a different Prefect database / workspace.\"\n ),\n )\n\n if deployment.storage_document_id is not None:\n infrastructure_block = (\n await models.block_documents.read_block_document_by_id(\n session=session,\n block_document_id=deployment.storage_document_id,\n )\n )\n if not infrastructure_block:\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=(\n \"Error creating deployment. Could not find storage block with\"\n f\" id: {deployment.storage_document_id}. This usually occurs\"\n \" when applying a deployment specification that was built\"\n \" against a different Prefect database / workspace.\"\n ),\n )\n\n # Ensure that `paused` and `is_schedule_active` are consistent.\n if \"paused\" in data:\n deployment.is_schedule_active = not data[\"paused\"]\n elif \"is_schedule_active\" in data:\n deployment.paused = not data[\"is_schedule_active\"]\n\n now = pendulum.now(\"UTC\")\n model = await models.deployments.create_deployment(\n session=session, deployment=deployment\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return schemas.responses.DeploymentResponse.from_orm(model)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment
async
","text":"Create a flow run from a deployment.
Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.
If no state is provided, the flow run will be created in a SCHEDULED state.
Source code inprefect/server/api/deployments.py
@router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n flow_run: schemas.actions.DeploymentFlowRunCreate,\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n worker_lookups: WorkerLookups = Depends(WorkerLookups),\n response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Create a flow run from a deployment.\n\n Any parameters not provided will be inferred from the deployment's parameters.\n If tags are not provided, the deployment's tags will be used.\n\n If no state is provided, the flow run will be created in a SCHEDULED state.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n # get relevant info from the deployment\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n dehydrated_params = deployment.parameters\n dehydrated_params.update(flow_run.parameters or {})\n ctx = await HydrationContext.build(session=session, raise_on_error=True)\n parameters = hydrate(dehydrated_params, ctx)\n except HydrationError as exc:\n raise HTTPException(\n status.HTTP_400_BAD_REQUEST,\n detail=f\"Error hydrating flow run parameters: {exc}\",\n )\n else:\n parameters = deployment.parameters\n parameters.update(flow_run.parameters or {})\n\n if deployment.enforce_parameter_schema:\n if not isinstance(deployment.parameter_openapi_schema, dict):\n raise HTTPException(\n status.HTTP_409_CONFLICT,\n detail=(\n \"Error updating deployment: Cannot update parameters because\"\n \" parameter schema enforcement is enabled and the deployment\"\n \" does not have a valid parameter schema.\"\n ),\n )\n try:\n validate(\n parameters, deployment.parameter_openapi_schema, raise_on_error=True\n )\n except ValidationError as exc:\n raise HTTPException(\n status.HTTP_409_CONFLICT,\n detail=f\"Error creating flow run: {exc}\",\n )\n except CircularSchemaRefError:\n raise HTTPException(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"Invalid schema: Unable to validate schema with circular references.\",\n )\n\n if PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES:\n validate_job_variables_for_flow_run(flow_run, deployment)\n\n work_queue_name = deployment.work_queue_name\n work_queue_id = deployment.work_queue_id\n\n if flow_run.work_queue_name:\n # can't mutate the ORM model or else it will commit the changes back\n work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_queue.work_pool.name,\n work_queue_name=flow_run.work_queue_name,\n create_queue_if_not_found=True,\n )\n work_queue_name = flow_run.work_queue_name\n\n # hydrate the input model into a full flow run / state model\n flow_run = schemas.core.FlowRun(\n **flow_run.dict(\n exclude={\n \"parameters\",\n \"tags\",\n \"infrastructure_document_id\",\n \"work_queue_name\",\n }\n ),\n flow_id=deployment.flow_id,\n deployment_id=deployment.id,\n parameters=parameters,\n tags=set(deployment.tags).union(flow_run.tags),\n infrastructure_document_id=(\n flow_run.infrastructure_document_id\n or deployment.infrastructure_document_id\n ),\n work_queue_name=work_queue_name,\n work_queue_id=work_queue_id,\n )\n\n if not flow_run.state:\n flow_run.state = 
schemas.states.Scheduled()\n\n now = pendulum.now(\"UTC\")\n model = await models.flow_runs.create_flow_run(\n session=session, flow_run=flow_run\n )\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n return schemas.responses.FlowRunResponse.from_orm(model)\n
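A minimal sketch, not from the Prefect source, of triggering a run from an existing deployment; fields that are omitted fall back to the deployment's values as described above:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server address\ndeployment_id = \"00000000-0000-0000-0000-000000000000\"  # placeholder UUID\n\n# Provided parameters merge over the deployment defaults; tags are unioned.\npayload = {\"parameters\": {\"name\": \"Marvin\"}, \"tags\": [\"ad-hoc\"]}\nresponse = httpx.post(\n    f\"{API_URL}/deployments/{deployment_id}/create_flow_run\", json=payload\n)\nresponse.raise_for_status()\nprint(response.json()[\"state\"][\"type\"])  # SCHEDULED when no state is given\n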
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment
async
","text":"Delete a deployment by id.
Source code inprefect/server/api/deployments.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a deployment by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.deployments.delete_deployment(\n session=session, deployment_id=deployment_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.get_scheduled_flow_runs_for_deployments","title":"get_scheduled_flow_runs_for_deployments
async
","text":"Get scheduled runs for a set of deployments. Used by a runner to poll for work.
Source code inprefect/server/api/deployments.py
@router.post(\"/get_scheduled_flow_runs\")\nasync def get_scheduled_flow_runs_for_deployments(\n deployment_ids: List[UUID] = Body(\n default=..., description=\"The deployment IDs to get scheduled runs for\"\n ),\n scheduled_before: DateTimeTZ = Body(\n None, description=\"The maximum time to look for scheduled flow runs\"\n ),\n limit: int = dependencies.LimitBody(),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n \"\"\"\n Get scheduled runs for a set of deployments. Used by a runner to poll for work.\n \"\"\"\n async with db.session_context() as session:\n orm_flow_runs = await models.flow_runs.read_flow_runs(\n session=session,\n limit=limit,\n deployment_filter=schemas.filters.DeploymentFilter(\n id=schemas.filters.DeploymentFilterId(any_=deployment_ids),\n ),\n flow_run_filter=schemas.filters.FlowRunFilter(\n next_scheduled_start_time=schemas.filters.FlowRunFilterNextScheduledStartTime(\n before_=scheduled_before\n ),\n state=schemas.filters.FlowRunFilterState(\n type=schemas.filters.FlowRunFilterStateType(\n any_=[schemas.states.StateType.SCHEDULED]\n )\n ),\n ),\n sort=schemas.sorting.FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,\n )\n\n flow_run_responses = [\n schemas.responses.FlowRunResponse.from_orm(orm_flow_run=orm_flow_run)\n for orm_flow_run in orm_flow_runs\n ]\n\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n await models.deployments._update_deployment_last_polled(\n session=session, deployment_ids=deployment_ids\n )\n\n return flow_run_responses\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment
async
","text":"Get a deployment by id.
Source code inprefect/server/api/deployments.py
@router.get(\"/{id}\")\nasync def read_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Get a deployment by id.\n \"\"\"\n async with db.session_context() as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Get a deployment using the name of the flow and the deployment.
Source code inprefect/server/api/deployments.py
@router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n flow_name: str = Path(..., description=\"The name of the flow\"),\n deployment_name: str = Path(..., description=\"The name of the deployment\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n \"\"\"\n Get a deployment using the name of the flow and the deployment.\n \"\"\"\n async with db.session_context() as session:\n deployment = await models.deployments.read_deployment_by_name(\n session=session, name=deployment_name, flow_name=flow_name\n )\n if not deployment:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments
async
","text":"Query for deployments.
Source code inprefect/server/api/deployments.py
@router.post(\"/filter\")\nasync def read_deployments(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n sort: schemas.sorting.DeploymentSort = Body(\n schemas.sorting.DeploymentSort.NAME_ASC\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n \"\"\"\n Query for deployments.\n \"\"\"\n async with db.session_context() as session:\n response = await models.deployments.read_deployments(\n session=session,\n offset=offset,\n sort=sort,\n limit=limit,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n return [\n schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n for deployment in response\n ]\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment
async
","text":"Schedule runs for a deployment. For backfills, provide start/end times in the past.
This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.
- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
Source code in prefect/server/api/deployments.py
@router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n min_time: datetime.timedelta = Body(\n None,\n description=(\n \"Runs will be scheduled until at least this long after the `start_time`\"\n ),\n ),\n min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n This function will generate the minimum number of runs that satisfy the min\n and max times, and the min and max counts. Specifically, the following order\n will be respected.\n\n - Runs will be generated starting on or after the `start_time`\n - No more than `max_runs` runs will be generated\n - No runs will be generated after `end_time` is reached\n - At least `min_runs` runs will be generated\n - Runs will be generated until at least `start_time + min_time` is reached\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n await models.deployments.schedule_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=start_time,\n min_time=min_time,\n end_time=end_time,\n min_runs=min_runs,\n max_runs=max_runs,\n )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_active","title":"set_schedule_active
async
","text":"Set a deployment schedule to active. Runs will be scheduled immediately.
Source code inprefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_active\")\nasync def set_schedule_active(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Set a deployment schedule to active. Runs will be scheduled immediately.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n deployment.is_schedule_active = True\n deployment.paused = False\n\n # Ensure that we're updating the replicated schedule's `active` field,\n # if there is only a single schedule. This is support for legacy\n # clients.\n\n number_of_schedules = len(deployment.schedules)\n\n if number_of_schedules == 1:\n deployment.schedules[0].active = True\n elif number_of_schedules > 1:\n raise _multiple_schedules_error(deployment_id)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_inactive","title":"set_schedule_inactive
async
","text":"Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.
Source code inprefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_inactive\")\nasync def set_schedule_inactive(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n \"\"\"\n Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n state will be deleted.\n \"\"\"\n async with db.session_context(begin_transaction=False) as session:\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=deployment_id\n )\n if not deployment:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n deployment.is_schedule_active = False\n deployment.paused = False\n\n # Ensure that we're updating the replicated schedule's `active` field,\n # if there is only a single schedule. This is support for legacy\n # clients.\n\n number_of_schedules = len(deployment.schedules)\n\n if number_of_schedules == 1:\n deployment.schedules[0].active = False\n elif number_of_schedules > 1:\n raise _multiple_schedules_error(deployment_id)\n\n # commit here to make the inactive schedule \"visible\" to the scheduler service\n await session.commit()\n\n # delete any auto scheduled runs\n await models.deployments._delete_scheduled_runs(\n session=session,\n deployment_id=deployment_id,\n db=db,\n auto_scheduled_only=True,\n )\n\n await session.commit()\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment
async
","text":"Get list of work-queues that are able to pick up the specified deployment.
This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.
Source code inprefect/server/api/deployments.py
@router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n \"\"\"\n Get list of work-queues that are able to pick up the specified deployment.\n\n This endpoint is intended to be used by the UI to provide users warnings\n about deployments that are unable to be executed because there are no work\n queues that will pick up their runs, based on existing filter criteria. It\n may be deprecated in the future because there is not a strict relationship\n between work queues and deployments.\n \"\"\"\n try:\n async with db.session_context() as session:\n work_queues = await models.deployments.check_work_queues_for_deployment(\n session=session, deployment_id=deployment_id\n )\n except ObjectNotFoundError:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n )\n return work_queues\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states
","text":"Routes for interacting with flow run state objects.
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state
async
","text":"Get a flow run state by id.
Source code inprefect/server/api/flow_run_states.py
@router.get(\"/{id}\")\nasync def read_flow_run_state(\n flow_run_state_id: UUID = Path(\n ..., description=\"The flow run state id\", alias=\"id\"\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n \"\"\"\n Get a flow run state by id.\n \"\"\"\n async with db.session_context() as session:\n flow_run_state = await models.flow_run_states.read_flow_run_state(\n session=session, flow_run_state_id=flow_run_state_id\n )\n if not flow_run_state:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n )\n return flow_run_state\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states
async
","text":"Get states associated with a flow run.
Source code inprefect/server/api/flow_run_states.py
@router.get(\"/\")\nasync def read_flow_run_states(\n flow_run_id: UUID,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n \"\"\"\n Get states associated with a flow run.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_run_states.read_flow_run_states(\n session=session, flow_run_id=flow_run_id\n )\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs
","text":"Routes for interacting with flow run objects.
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness
async
","text":"Query for average flow-run lateness in seconds.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n flows: Optional[schemas.filters.FlowFilter] = None,\n flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n deployments: Optional[schemas.filters.DeploymentFilter] = None,\n work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n \"\"\"\n Query for average flow-run lateness in seconds.\n \"\"\"\n async with db.session_context() as session:\n if db.dialect.name == \"sqlite\":\n # Since we want an _average_ of the lateness we're unable to use\n # the existing FlowRun.expected_start_time_delta property as it\n # returns a timedelta and SQLite is unable to properly deal with it\n # and always returns 1970.0 as the average. This copies the same\n # logic but ensures that it returns the number of seconds instead\n # so it's compatible with SQLite.\n base_query = sa.case(\n (\n db.FlowRun.start_time > db.FlowRun.expected_start_time,\n sa.func.strftime(\"%s\", db.FlowRun.start_time)\n - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n ),\n (\n db.FlowRun.start_time.is_(None)\n & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n ),\n else_=0,\n )\n else:\n base_query = db.FlowRun.estimated_start_time_delta\n\n query = await models.flow_runs._apply_flow_run_filters(\n sa.select(sa.func.avg(base_query)),\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n result = await session.execute(query)\n\n avg_lateness = result.scalar()\n\n if avg_lateness is None:\n return None\n elif isinstance(avg_lateness, datetime.timedelta):\n return avg_lateness.total_seconds()\n else:\n return avg_lateness\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs
async
","text":"Query for flow runs.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/count\")\nasync def count_flow_runs(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Query for flow runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_runs.count_flow_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run
async
","text":"Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.
If no state is provided, the flow run will be created in a PENDING state.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/\")\nasync def create_flow_run(\n flow_run: schemas.actions.FlowRunCreate,\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Create a flow run. If a flow run with the same flow_id and\n idempotency key already exists, the existing flow run will be returned.\n\n If no state is provided, the flow run will be created in a PENDING state.\n \"\"\"\n # hydrate the input model into a full flow run / state model\n flow_run = schemas.core.FlowRun(**flow_run.dict())\n\n # pass the request version to the orchestration engine to support compatibility code\n orchestration_parameters.update({\"api-version\": api_version})\n\n if not flow_run.state:\n flow_run.state = schemas.states.Pending()\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.flow_runs.create_flow_run(\n session=session,\n flow_run=flow_run,\n orchestration_parameters=orchestration_parameters,\n )\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run_input","title":"create_flow_run_input
async
","text":"Create a key/value input for a flow run.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/{id}/input\", status_code=status.HTTP_201_CREATED)\nasync def create_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Body(..., description=\"The input key\"),\n value: bytes = Body(..., description=\"The value of the input\"),\n sender: Optional[str] = Body(None, description=\"The sender of the input\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Create a key/value input for a flow run.\n \"\"\"\n async with db.session_context() as session:\n try:\n await models.flow_run_input.create_flow_run_input(\n session=session,\n flow_run_input=schemas.core.FlowRunInput(\n flow_run_id=flow_run_id,\n key=key,\n sender=sender,\n value=value.decode(),\n ),\n )\n await session.commit()\n\n except IntegrityError as exc:\n if \"unique constraint\" in str(exc).lower():\n raise HTTPException(\n status_code=status.HTTP_409_CONFLICT,\n detail=\"A flow run input with this key already exists.\",\n )\n else:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by id.
Source code inprefect/server/api/flow_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow run by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flow_runs.delete_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run_input","title":"delete_flow_run_input
async
","text":"Delete a flow run input
Source code inprefect/server/api/flow_runs.py
@router.delete(\"/{id}/input/{key}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Path(..., description=\"The input key\", alias=\"key\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow run input\n \"\"\"\n\n async with db.session_context() as session:\n deleted = await models.flow_run_input.delete_flow_run_input(\n session=session, flow_run_id=flow_run_id, key=key\n )\n await session.commit()\n\n if not deleted:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.filter_flow_run_input","title":"filter_flow_run_input
async
","text":"Filter flow run inputs by key prefix
Source code inprefect/server/api/flow_runs.py
@router.post(\"/{id}/input/filter\")\nasync def filter_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n prefix: str = Body(..., description=\"The input key prefix\", embed=True),\n limit: int = Body(\n 1, description=\"The maximum number of results to return\", embed=True\n ),\n exclude_keys: List[str] = Body(\n [], description=\"Exclude inputs with these keys\", embed=True\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.FlowRunInput]:\n \"\"\"\n Filter flow run inputs by key prefix\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_run_input.filter_flow_run_input(\n session=session,\n flow_run_id=flow_run_id,\n prefix=prefix,\n limit=limit,\n exclude_keys=exclude_keys,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history
async
","text":"Query for flow run history data across a given range and interval.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/history\")\nasync def flow_run_history(\n history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n history_interval: datetime.timedelta = Body(\n ...,\n description=(\n \"The size of each history interval, in seconds. Must be at least 1 second.\"\n ),\n alias=\"history_interval_seconds\",\n ),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n \"\"\"\n Query for flow run history data across a given range and interval.\n \"\"\"\n if history_interval < datetime.timedelta(seconds=1):\n raise HTTPException(\n status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"History interval must not be less than 1 second.\",\n )\n\n async with db.session_context() as session:\n return await run_history(\n session=session,\n run_type=\"flow_run\",\n history_start=history_start,\n history_end=history_end,\n history_interval=history_interval,\n flows=flows,\n flow_runs=flow_runs,\n task_runs=task_runs,\n deployments=deployments,\n work_pools=work_pools,\n work_queues=work_queues,\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run
async
","text":"Get a flow run by id.
Source code inprefect/server/api/flow_runs.py
@router.get(\"/{id}\")\nasync def read_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n \"\"\"\n Get a flow run by id.\n \"\"\"\n async with db.session_context() as session:\n flow_run = await models.flow_runs.read_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not flow_run:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v1","title":"read_flow_run_graph_v1
async
","text":"Get a task run dependency map for a given flow run.
Source code inprefect/server/api/flow_runs.py
@router.get(\"/{id}/graph\")\nasync def read_flow_run_graph_v1(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n \"\"\"\n Get a task run dependency map for a given flow run.\n \"\"\"\n async with db.session_context() as session:\n return await models.flow_runs.read_task_run_dependencies(\n session=session, flow_run_id=flow_run_id\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v2","title":"read_flow_run_graph_v2
async
","text":"Get a graph of the tasks and subflow runs for the given flow run
Source code inprefect/server/api/flow_runs.py
@router.get(\"/{id:uuid}/graph-v2\")\nasync def read_flow_run_graph_v2(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n since: datetime.datetime = Query(\n datetime.datetime.min,\n description=\"Only include runs that start or end after this time.\",\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Graph:\n \"\"\"\n Get a graph of the tasks and subflow runs for the given flow run\n \"\"\"\n async with db.session_context() as session:\n try:\n return await read_flow_run_graph(\n session=session,\n flow_run_id=flow_run_id,\n since=since,\n )\n except FlowRunGraphTooLarge as e:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=str(e),\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_input","title":"read_flow_run_input
async
","text":"Create a value from a flow run input
Source code inprefect/server/api/flow_runs.py
@router.get(\"/{id}/input/{key}\")\nasync def read_flow_run_input(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n key: str = Path(..., description=\"The input key\", alias=\"key\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> PlainTextResponse:\n \"\"\"\n Create a value from a flow run input\n \"\"\"\n\n async with db.session_context() as session:\n flow_run_input = await models.flow_run_input.read_flow_run_input(\n session=session, flow_run_id=flow_run_id, key=key\n )\n\n if flow_run_input:\n return PlainTextResponse(flow_run_input.value)\n else:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs
async
","text":"Query for flow runs.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n work_pool_queues: schemas.filters.WorkQueueFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n \"\"\"\n Query for flow runs.\n \"\"\"\n async with db.session_context() as session:\n db_flow_runs = await models.flow_runs.read_flow_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n work_queue_filter=work_pool_queues,\n offset=offset,\n limit=limit,\n sort=sort,\n )\n\n # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n # response to JSON, we do so more efficiently ourselves.\n # In particular, the FastAPI encoder is very slow for large, nested objects.\n # See: https://github.com/tiangolo/fastapi/issues/1224\n encoded = [\n schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n for fr in db_flow_runs\n ]\n return ORJSONResponse(content=encoded)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run
async
","text":"Resume a paused flow run.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n run_input: Optional[Dict] = Body(default=None, embed=True),\n response: Response = None,\n flow_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_flow_policy\n ),\n task_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_task_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n \"\"\"\n Resume a paused flow run.\n \"\"\"\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n state = flow_run.state\n\n if state is None or state.type != schemas.states.StateType.PAUSED:\n result = OrchestrationResult(\n state=None,\n status=schemas.responses.SetStateStatus.ABORT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Cannot resume a flow run that is not paused.\"\n ),\n )\n return result\n\n orchestration_parameters.update({\"api-version\": api_version})\n\n keyset = state.state_details.run_input_keyset\n\n if keyset:\n run_input = run_input or {}\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n hydration_context = await schema_tools.HydrationContext.build(\n session=session, raise_on_error=True\n )\n run_input = schema_tools.hydrate(run_input, hydration_context) or {}\n except schema_tools.HydrationError as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Error hydrating run input: {exc}\",\n ),\n )\n\n schema_json = await models.flow_run_input.read_flow_run_input(\n session=session, flow_run_id=flow_run.id, key=keyset[\"schema\"]\n )\n\n if schema_json is None:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Run input schema not found.\"\n ),\n )\n\n try:\n schema = orjson.loads(schema_json.value)\n except orjson.JSONDecodeError:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Run input schema is not valid JSON.\"\n ),\n )\n\n if experiment_enabled(\"enhanced_deployment_parameters\"):\n try:\n schema_tools.validate(run_input, schema, raise_on_error=True)\n except schema_tools.ValidationError as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Reason: {exc}\"\n ),\n )\n except schema_tools.CircularSchemaRefError:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=\"Invalid schema: Unable to validate schema with circular references.\",\n ),\n )\n else:\n try:\n jsonschema.validate(run_input, schema)\n except (jsonschema.ValidationError, jsonschema.SchemaError) as exc:\n return OrchestrationResult(\n state=state,\n status=schemas.responses.SetStateStatus.REJECT,\n details=schemas.responses.StateAbortDetails(\n reason=f\"Run input validation failed: {exc.message}\"\n ),\n )\n\n if 
state.state_details.pause_reschedule:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n state=schemas.states.Scheduled(\n name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n ),\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n else:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n state=schemas.states.Running(),\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n if (\n keyset\n and run_input\n and orchestration_result.status == schemas.responses.SetStateStatus.ACCEPT\n ):\n # The state change is accepted, go ahead and store the validated\n # run input.\n await models.flow_run_input.create_flow_run_input(\n session=session,\n flow_run_input=schemas.core.FlowRunInput(\n flow_run_id=flow_run_id,\n key=keyset[\"response\"],\n value=orjson.dumps(run_input).decode(\"utf-8\"),\n ),\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
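As a sketch, not from the Prefect source: resuming a paused run that is waiting on input posts the values under the embedded run_input key, which is validated against the run's stored input schema:
import httpx\n\nAPI_URL = \"http://127.0.0.1:4200/api\"  # assumed local server address\nflow_run_id = \"00000000-0000-0000-0000-000000000000\"  # placeholder UUID\n\nresponse = httpx.post(\n    f\"{API_URL}/flow_runs/{flow_run_id}/resume\",\n    json={\"run_input\": {\"approved\": True}},  # hypothetical input values\n)\nprint(response.json()[\"status\"])  # ACCEPT, REJECT, or ABORT\n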
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state
async
","text":"Set a flow run state, invoking any orchestration rules.
Source code inprefect/server/api/flow_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n force: bool = Body(\n False,\n description=(\n \"If false, orchestration rules will be applied that may alter or prevent\"\n \" the state transition. If True, orchestration rules are not applied.\"\n ),\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n flow_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_flow_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_flow_orchestration_parameters\n ),\n api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n \"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n # pass the request version to the orchestration engine to support compatibility code\n orchestration_parameters.update({\"api-version\": api_version})\n\n now = pendulum.now(\"UTC\")\n\n # create the state\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n orchestration_result = await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run_id,\n # convert to a full State object\n state=schemas.states.State.parse_obj(state),\n force=force,\n flow_policy=flow_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run
async
","text":"Updates a flow run.
Source code inprefect/server/api/flow_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n flow_run: schemas.actions.FlowRunUpdate,\n flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a flow run.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n if PREFECT_EXPERIMENTAL_ENABLE_FLOW_RUN_INFRA_OVERRIDES:\n if flow_run.job_variables is not None:\n this_run = await models.flow_runs.read_flow_run(\n session, flow_run_id=flow_run_id\n )\n if this_run is None:\n raise HTTPException(\n status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n )\n if this_run.state.type != schemas.states.StateType.SCHEDULED:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=f\"Job variables for a flow run in state {this_run.state.type.name} cannot be updated\",\n )\n if this_run.deployment_id is None:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=\"A deployment for the flow run could not be found\",\n )\n\n deployment = await models.deployments.read_deployment(\n session=session, deployment_id=this_run.deployment_id\n )\n if deployment is None:\n raise HTTPException(\n status_code=status.HTTP_400_BAD_REQUEST,\n detail=\"A deployment for the flow run could not be found\",\n )\n\n validate_job_variables_for_flow_run(flow_run, deployment)\n\n result = await models.flow_runs.update_flow_run(\n session=session, flow_run=flow_run, flow_run_id=flow_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows
","text":"Routes for interacting with flow objects.
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows
async
","text":"Count flows.
Source code inprefect/server/api/flows.py
@router.post(\"/count\")\nasync def count_flows(\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n \"\"\"\n Count flows.\n \"\"\"\n async with db.session_context() as session:\n return await models.flows.count_flows(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow
async
","text":"Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.
Source code inprefect/server/api/flows.py
@router.post(\"/\")\nasync def create_flow(\n flow: schemas.actions.FlowCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n same name already exists, the existing flow is returned.\n \"\"\"\n # hydrate the input model into a full flow model\n flow = schemas.core.Flow(**flow.dict())\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.flows.create_flow(session=session, flow=flow)\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n return model\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow
async
","text":"Delete a flow by id.
Source code inprefect/server/api/flows.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a flow by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow
async
","text":"Get a flow by id.
Source code inprefect/server/api/flows.py
@router.get(\"/{id}\")\nasync def read_flow(\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"\n Get a flow by id.\n \"\"\"\n async with db.session_context() as session:\n flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n if not flow:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name
async
","text":"Get a flow by name.
Source code inprefect/server/api/flows.py
@router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n name: str = Path(..., description=\"The name of the flow\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n \"\"\"\n Get a flow by name.\n \"\"\"\n async with db.session_context() as session:\n flow = await models.flows.read_flow_by_name(session=session, name=name)\n if not flow:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows
async
","text":"Query for flows.
Source code inprefect/server/api/flows.py
@router.post(\"/filter\")\nasync def read_flows(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n work_pools: schemas.filters.WorkPoolFilter = None,\n sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n \"\"\"\n Query for flows.\n \"\"\"\n async with db.session_context() as session:\n return await models.flows.read_flows(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n work_pool_filter=work_pools,\n sort=sort,\n offset=offset,\n limit=limit,\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow
async
","text":"Updates a flow.
Source code inprefect/server/api/flows.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n flow: schemas.actions.FlowUpdate,\n flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a flow.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.flows.update_flow(\n session=session, flow=flow, flow_id=flow_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history
","text":"Utilities for querying flow and task run history.
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history
async
","text":"Produce a history of runs aggregated by interval and state
Source code inprefect/server/api/run_history.py
@inject_db\nasync def run_history(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    run_type: Literal[\"flow_run\", \"task_run\"],\n    history_start: DateTimeTZ,\n    history_end: DateTimeTZ,\n    history_interval: datetime.timedelta,\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Produce a history of runs aggregated by interval and state\n    \"\"\"\n\n    # SQLite has issues with very small intervals\n    # (by 0.001 seconds it stops incrementing the interval)\n    if history_interval < datetime.timedelta(seconds=1):\n        raise ValueError(\"History interval must not be less than 1 second.\")\n\n    # prepare run-specific models\n    if run_type == \"flow_run\":\n        run_model = db.FlowRun\n        run_filter_function = models.flow_runs._apply_flow_run_filters\n    elif run_type == \"task_run\":\n        run_model = db.TaskRun\n        run_filter_function = models.task_runs._apply_task_run_filters\n    else:\n        raise ValueError(\n            f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n        )\n\n    # create a CTE for timestamp intervals\n    intervals = db.make_timestamp_intervals(\n        history_start,\n        history_end,\n        history_interval,\n    ).cte(\"intervals\")\n\n    # apply filters to the flow runs (and related states)\n    runs = (\n        await run_filter_function(\n            sa.select(\n                run_model.id,\n                run_model.expected_start_time,\n                run_model.estimated_run_time,\n                run_model.estimated_start_time_delta,\n                run_model.state_type,\n                run_model.state_name,\n            ).select_from(run_model),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_queues,\n        )\n    ).alias(\"runs\")\n    # outer join intervals to the filtered runs to create a dataset composed of\n    # every interval and the aggregate of all its runs. The runs aggregate is represented\n    # by a descriptive JSON object\n    counts = (\n        sa.select(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            # build a JSON object, ignoring the case where the count of runs is 0\n            sa.case(\n                (sa.func.count(runs.c.id) == 0, None),\n                else_=db.build_json_object(\n                    \"state_type\",\n                    runs.c.state_type,\n                    \"state_name\",\n                    runs.c.state_name,\n                    \"count_runs\",\n                    sa.func.count(runs.c.id),\n                    # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n                    \"sum_estimated_run_time\",\n                    sa.func.sum(\n                        db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n                    ),\n                    # estimated lateness is the sum of any positive start time deltas\n                    \"sum_estimated_lateness\",\n                    sa.func.sum(\n                        db.greatest(\n                            0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n                        )\n                    ),\n                ),\n            ).label(\"state_agg\"),\n        )\n        .select_from(intervals)\n        .join(\n            runs,\n            sa.and_(\n                runs.c.expected_start_time >= intervals.c.interval_start,\n                runs.c.expected_start_time < intervals.c.interval_end,\n            ),\n            isouter=True,\n        )\n        .group_by(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            runs.c.state_type,\n            runs.c.state_name,\n        )\n    ).alias(\"counts\")\n\n    # aggregate all state-aggregate objects into a single array for each interval,\n    # ensuring that intervals with no runs have an empty array\n    query = (\n        sa.select(\n            counts.c.interval_start,\n            counts.c.interval_end,\n            sa.func.coalesce(\n                db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n                    counts.c.state_agg.is_not(None)\n                ),\n                sa.text(\"'[]'\"),\n            ).label(\"states\"),\n        )\n        .group_by(counts.c.interval_start, counts.c.interval_end)\n        .order_by(counts.c.interval_start)\n        # return no more than 500 bars\n        .limit(500)\n    )\n\n    # issue the query\n    result = await session.execute(query)\n    records = result.mappings()\n\n    # load and parse the record if the database returns JSON as strings\n    if db.uses_json_strings:\n        records = [dict(r) for r in records]\n        for r in records:\n            r[\"states\"] = json.loads(r[\"states\"])\n\n    return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
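The flow and task run history routes call this helper inside a database session. A minimal sketch of a direct invocation, assuming a configured server database (the one-hour window and one-minute buckets are arbitrary; the @inject_db decorator supplies the db argument):

import datetime

import pendulum

from prefect.server.api.run_history import run_history
from prefect.server.database.dependencies import provide_database_interface

async def last_hour_of_flow_runs():
    db = provide_database_interface()
    async with db.session_context() as session:
        end = pendulum.now("UTC")
        # one aggregation bucket per minute over the last hour
        return await run_history(
            session=session,
            run_type="flow_run",
            history_start=end.subtract(hours=1),
            history_end=end,
            history_interval=datetime.timedelta(minutes=1),
        )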
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches
","text":"Routes for interacting with saved search objects.
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search
async
","text":"Gracefully creates a new saved search from the provided schema.
If a saved search with the same name already exists, the saved search's fields are replaced.
Source code inprefect/server/api/saved_searches.py
@router.put(\"/\")\nasync def create_saved_search(\n saved_search: schemas.actions.SavedSearchCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n \"\"\"Gracefully creates a new saved search from the provided schema.\n\n If a saved search with the same name already exists, the saved search's fields are\n replaced.\n \"\"\"\n\n # hydrate the input model into a full model\n saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.saved_searches.create_saved_search(\n session=session, saved_search=saved_search\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n return model\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search
async
","text":"Delete a saved search by id.
Source code inprefect/server/api/saved_searches.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a saved search by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.saved_searches.delete_saved_search(\n session=session, saved_search_id=saved_search_id\n )\n if not result:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search
async
","text":"Get a saved search by id.
Source code inprefect/server/api/saved_searches.py
@router.get(\"/{id}\")\nasync def read_saved_search(\n saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n \"\"\"\n Get a saved search by id.\n \"\"\"\n async with db.session_context() as session:\n saved_search = await models.saved_searches.read_saved_search(\n session=session, saved_search_id=saved_search_id\n )\n if not saved_search:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n )\n return saved_search\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches
async
","text":"Query for saved searches.
Source code inprefect/server/api/saved_searches.py
@router.post(\"/filter\")\nasync def read_saved_searches(\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n \"\"\"\n Query for saved searches.\n \"\"\"\n async with db.session_context() as session:\n return await models.saved_searches.read_saved_searches(\n session=session,\n offset=offset,\n limit=limit,\n )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server
","text":"Defines the Prefect REST API FastAPI app.
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware
","text":"A middleware that limits the number of concurrent requests handled by the API.
This is a blunt tool for limiting concurrent SQLite writes, which will cause failures at high volume. Ideally, we would only apply the limit to routes that perform writes.
Source code inprefect/server/api/server.py
class RequestLimitMiddleware:\n \"\"\"\n A middleware that limits the number of concurrent requests handled by the API.\n\n This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n at high volume. Ideally, we would only apply the limit to routes that perform\n writes.\n \"\"\"\n\n def __init__(self, app, limit: float):\n self.app = app\n self._limiter = anyio.CapacityLimiter(limit)\n\n async def __call__(self, scope, receive, send) -> None:\n async with self._limiter:\n await self.app(scope, receive, send)\n
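Because it is a plain ASGI wrapper, the middleware can be attached to any ASGI app. A minimal sketch (the limit of 100 mirrors what create_app applies for SQLite):

from fastapi import FastAPI

from prefect.server.api.server import RequestLimitMiddleware

app = FastAPI()

# at most 100 requests are serviced at once; the rest queue on the
# internal anyio.CapacityLimiter until a slot frees up
app.add_middleware(RequestLimitMiddleware, limit=100)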
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles
","text":" Bases: StaticFiles
Implementation of StaticFiles
for serving single page applications.
Adds get_response
handling to ensure that when a resource isn't found the application still returns the index.
prefect/server/api/server.py
class SPAStaticFiles(StaticFiles):\n \"\"\"\n Implementation of `StaticFiles` for serving single page applications.\n\n Adds `get_response` handling to ensure that when a resource isn't found the\n application still returns the index.\n \"\"\"\n\n async def get_response(self, path: str, scope):\n try:\n return await super().get_response(path, scope)\n except HTTPException:\n return await super().get_response(\"./index.html\", scope)\n
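Because unknown paths fall back to index.html, client-side routes in the single page app survive a hard refresh. A minimal mounting sketch (the ui-build directory is an assumption):

from fastapi import FastAPI

from prefect.server.api.server import SPAStaticFiles

app = FastAPI()

# any path the static lookup cannot resolve is answered with ./index.html,
# so the SPA's client-side router can handle it in the browser
app.mount("/", SPAStaticFiles(directory="ui-build", html=True), name="ui")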
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app
","text":"Create a FastAPI app that includes the Prefect REST API
Parameters:
Name Type Description Defaultrouter_prefix
Optional[str]
a prefix to apply to all included routers
''
dependencies
Optional[List[Depends]]
a list of global dependencies to add to each Prefect REST API router
None
health_check_path
str
the health check route path
'/health'
fast_api_app_kwargs
dict
kwargs to pass to the FastAPI constructor
None
router_overrides
Mapping[str, Optional[APIRouter]]
a mapping of route prefixes (i.e. \"/admin\") to new routers allowing the caller to override the default routers. If None
is provided as a value, the default router will be dropped from the application.
None
Returns:
Type DescriptionFastAPI
a FastAPI app that serves the Prefect REST API
Source code inprefect/server/api/server.py
def create_api_app(\n router_prefix: Optional[str] = \"\",\n dependencies: Optional[List[Depends]] = None,\n health_check_path: str = \"/health\",\n version_check_path: str = \"/version\",\n fast_api_app_kwargs: dict = None,\n router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n \"\"\"\n Create a FastAPI app that includes the Prefect REST API\n\n Args:\n router_prefix: a prefix to apply to all included routers\n dependencies: a list of global dependencies to add to each Prefect REST API router\n health_check_path: the health check route path\n fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n allowing the caller to override the default routers. If `None` is provided\n as a value, the default router will be dropped from the application.\n\n Returns:\n a FastAPI app that serves the Prefect REST API\n \"\"\"\n fast_api_app_kwargs = fast_api_app_kwargs or {}\n api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n api_app.add_middleware(GZipMiddleware)\n\n @api_app.get(health_check_path, tags=[\"Root\"])\n async def health_check():\n return True\n\n @api_app.get(version_check_path, tags=[\"Root\"])\n async def orion_info():\n return SERVER_API_VERSION\n\n # always include version checking\n if dependencies is None:\n dependencies = [Depends(enforce_minimum_version)]\n else:\n dependencies.append(Depends(enforce_minimum_version))\n\n routers = {router.prefix: router for router in API_ROUTERS}\n\n if router_overrides:\n for prefix, router in router_overrides.items():\n # We may want to allow this behavior in the future to inject new routes, but\n # for now this will be treated an as an exception\n if prefix not in routers:\n raise KeyError(\n \"Router override provided for prefix that does not exist:\"\n f\" {prefix!r}\"\n )\n\n # Drop the existing router\n existing_router = routers.pop(prefix)\n\n # Replace it with a new router if provided\n if router is not None:\n if prefix != router.prefix:\n # We may want to allow this behavior in the future, but it will\n # break expectations without additional routing and is banned for\n # now\n raise ValueError(\n f\"Router override for {prefix!r} defines a different prefix \"\n f\"{router.prefix!r}.\"\n )\n\n existing_paths = method_paths_from_routes(existing_router.routes)\n new_paths = method_paths_from_routes(router.routes)\n if not existing_paths.issubset(new_paths):\n raise ValueError(\n f\"Router override for {prefix!r} is missing paths defined by \"\n f\"the original router: {existing_paths.difference(new_paths)}\"\n )\n\n routers[prefix] = router\n\n for router in routers.values():\n api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n return api_app\n
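For example, a default router can be dropped entirely by overriding its prefix with None; the docstring's "/admin" prefix is reused here for illustration:

from prefect.server.api.server import create_api_app

# build the REST API without the default /admin router; passing None
# as an override value removes that router from the application
api_app = create_api_app(router_overrides={"/admin": None})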
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app
","text":"Create an FastAPI app that includes the Prefect REST API and UI
Parameters:
Name Type Description Defaultsettings
Settings
The settings to use to create the app. If not set, settings are pulled from the context.
None
ignore_cache
bool
If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.
False
ephemeral
bool
If set, the application will be treated as ephemeral. The UI and services will be disabled.
False
Source code in prefect/server/api/server.py
def create_app(\n    settings: prefect.settings.Settings = None,\n    ephemeral: bool = False,\n    ignore_cache: bool = False,\n) -> FastAPI:\n    \"\"\"\n    Create a FastAPI app that includes the Prefect REST API and UI\n\n    Args:\n        settings: The settings to use to create the app. If not set, settings are pulled\n            from the context.\n        ignore_cache: If set, a new application will be created even if the settings\n            match. Otherwise, an application is returned from the cache.\n        ephemeral: If set, the application will be treated as ephemeral. The UI\n            and services will be disabled.\n    \"\"\"\n    settings = settings or prefect.settings.get_current_settings()\n    cache_key = (settings.hash_key(), ephemeral)\n\n    if cache_key in APP_CACHE and not ignore_cache:\n        return APP_CACHE[cache_key]\n\n    # TODO: Move these startup functions out of this closure into the top-level or\n    #       another dedicated location\n    async def run_migrations():\n        \"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n        if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n            from prefect.server.database.dependencies import provide_database_interface\n\n            db = provide_database_interface()\n            await db.create_db()\n\n    @_memoize_block_auto_registration\n    async def add_block_types():\n        \"\"\"Add all registered blocks to the database\"\"\"\n        if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n            return\n\n        from prefect.server.database.dependencies import provide_database_interface\n        from prefect.server.models.block_registration import run_block_auto_registration\n\n        db = provide_database_interface()\n        session = await db.session()\n\n        async with session:\n            await run_block_auto_registration(session=session)\n\n    async def start_services():\n        \"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n        if ephemeral:\n            app.state.services = None\n            return\n\n        service_instances = []\n\n        if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n            service_instances.append(services.scheduler.Scheduler())\n            service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n        if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n            service_instances.append(services.late_runs.MarkLateRuns())\n\n        if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n            service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n        if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n            service_instances.append(\n                services.cancellation_cleanup.CancellationCleanup()\n            )\n\n        if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n            service_instances.append(services.telemetry.Telemetry())\n\n        if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n            service_instances.append(\n                services.flow_run_notifications.FlowRunNotifications()\n            )\n\n        if prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n            service_instances.append(services.task_scheduling.TaskSchedulingTimeouts())\n\n        loop = asyncio.get_running_loop()\n\n        app.state.services = {\n            service: loop.create_task(service.start()) for service in service_instances\n        }\n\n        for service, task in app.state.services.items():\n            logger.info(f\"{service.name} service scheduled to start in-app\")\n            task.add_done_callback(partial(on_service_exit, service))\n\n    async def stop_services():\n        \"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n        if hasattr(app.state, \"services\") and app.state.services:\n            await asyncio.gather(*[service.stop() for service in app.state.services])\n            try:\n                await asyncio.gather(\n                    *[task.stop() for task in app.state.services.values()]\n                )\n            except Exception:\n                # `on_service_exit` should handle logging exceptions on exit\n                pass\n\n    @asynccontextmanager\n    async def lifespan(app):\n        try:\n            await run_migrations()\n            await add_block_types()\n            await start_services()\n            yield\n        finally:\n            await stop_services()\n\n    def on_service_exit(service, task):\n        \"\"\"\n        Added as a callback for completion of services to log exit\n        \"\"\"\n        try:\n            # Retrieving the result will raise the exception\n            task.result()\n        except Exception:\n            logger.error(f\"{service.name} service failed!\", exc_info=True)\n        else:\n            logger.info(f\"{service.name} service stopped!\")\n\n    app = FastAPI(\n        title=TITLE,\n        version=API_VERSION,\n        lifespan=lifespan,\n    )\n    api_app = create_api_app(\n        fast_api_app_kwargs={\n            \"exception_handlers\": {\n                # NOTE: FastAPI special cases the generic `Exception` handler and\n                #       registers it as a separate middleware from the others\n                Exception: custom_internal_exception_handler,\n                RequestValidationError: validation_exception_handler,\n                sa.exc.IntegrityError: integrity_exception_handler,\n                ObjectNotFoundError: prefect_object_not_found_exception_handler,\n            }\n        },\n    )\n    ui_app = create_ui_app(ephemeral)\n\n    # middleware\n    app.add_middleware(\n        CORSMiddleware,\n        allow_origins=[\"*\"],\n        allow_methods=[\"*\"],\n        allow_headers=[\"*\"],\n    )\n\n    # Limit the number of concurrent requests when using a SQLite database to reduce\n    # chance of errors where the database cannot be opened due to a high number of\n    # concurrent writes\n    if (\n        get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        == \"sqlite\"\n    ):\n        app.add_middleware(RequestLimitMiddleware, limit=100)\n\n    api_app.mount(\n        \"/static\",\n        StaticFiles(\n            directory=os.path.join(\n                os.path.dirname(os.path.realpath(__file__)), \"static\"\n            )\n        ),\n        name=\"static\",\n    )\n    app.api_app = api_app\n    app.mount(\"/api\", app=api_app, name=\"api\")\n    app.mount(\"/\", app=ui_app, name=\"ui\")\n\n    def openapi():\n        \"\"\"\n        Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n        This method is attached to the global public app for easy access.\n        \"\"\"\n        partial_schema = get_openapi(\n            title=API_TITLE,\n            version=API_VERSION,\n            routes=api_app.routes,\n        )\n        new_schema = partial_schema.copy()\n        new_schema[\"paths\"] = {}\n        for path, value in partial_schema[\"paths\"].items():\n            new_schema[\"paths\"][f\"/api{path}\"] = value\n\n        new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n        return new_schema\n\n    app.openapi = openapi\n\n    APP_CACHE[cache_key] = app\n    return app\n
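A minimal sketch of building and serving the combined app (uvicorn and the port are assumptions, not part of this module):

import uvicorn

from prefect.server.api.server import create_app

# an ephemeral app skips the UI and background services; repeated calls
# with the same settings return the cached application object
app = create_app(ephemeral=True)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=4200)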
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler
async
","text":"Log a detailed exception for internal server errors before returning.
Send 503 for errors clients can retry on.
Source code inprefect/server/api/server.py
async def custom_internal_exception_handler(request: Request, exc: Exception):\n \"\"\"\n Log a detailed exception for internal server errors before returning.\n\n Send 503 for errors clients can retry on.\n \"\"\"\n logger.error(\"Encountered exception in request:\", exc_info=True)\n\n if is_client_retryable_exception(exc):\n return JSONResponse(\n content={\"exception_message\": \"Service Unavailable\"},\n status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n )\n\n return JSONResponse(\n content={\"exception_message\": \"Internal Server Error\"},\n status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n )\n
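The handler can be registered on any FastAPI app in the usual way; create_app wires it in through fast_api_app_kwargs as shown above. A minimal sketch:

from fastapi import FastAPI

from prefect.server.api.server import custom_internal_exception_handler

app = FastAPI()

# unhandled exceptions surface as 503 when retryable, otherwise 500
app.add_exception_handler(Exception, custom_internal_exception_handler)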
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler
async
","text":"Capture database integrity errors.
Source code inprefect/server/api/server.py
async def integrity_exception_handler(request: Request, exc: Exception):\n \"\"\"Capture database integrity errors.\"\"\"\n logger.error(\"Encountered exception in request:\", exc_info=True)\n return JSONResponse(\n content={\n \"detail\": (\n \"Data integrity conflict. This usually means a \"\n \"unique or foreign key constraint was violated. \"\n \"See server logs for details.\"\n )\n },\n status_code=status.HTTP_409_CONFLICT,\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler
async
","text":"Return 404 status code on object not found exceptions.
Source code inprefect/server/api/server.py
async def prefect_object_not_found_exception_handler(\n request: Request, exc: ObjectNotFoundError\n):\n \"\"\"Return 404 status code on object not found exceptions.\"\"\"\n return JSONResponse(\n content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.replace_placeholder_string_in_files","title":"replace_placeholder_string_in_files
","text":"Recursively loops through all files in the given directory and replaces a placeholder string.
Source code inprefect/server/api/server.py
def replace_placeholder_string_in_files(\n    directory, placeholder, replacement, allowed_extensions=None\n):\n    \"\"\"\n    Recursively loops through all files in the given directory and replaces\n    a placeholder string.\n    \"\"\"\n    if allowed_extensions is None:\n        allowed_extensions = [\".txt\", \".html\", \".css\", \".js\", \".json\"]\n\n    for root, dirs, files in os.walk(directory):\n        for file in files:\n            if any(file.endswith(ext) for ext in allowed_extensions):\n                file_path = os.path.join(root, file)\n\n                with open(file_path, \"r\", encoding=\"utf-8\") as file:\n                    file_data = file.read()\n\n                file_data = file_data.replace(placeholder, replacement)\n\n                with open(file_path, \"w\", encoding=\"utf-8\") as file:\n                    file.write(file_data)\n
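A minimal usage sketch; the directory and placeholder token are assumptions for illustration:

from prefect.server.api.server import replace_placeholder_string_in_files

# rewrite every matching text asset under the build directory, e.g. to
# inject the API URL a pre-built static UI should talk to at runtime
replace_placeholder_string_in_files(
    directory="ui-build",            # assumed build output directory
    placeholder="__API_URL__",       # hypothetical placeholder token
    replacement="http://127.0.0.1:4200/api",
)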
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler
async
","text":"Provide a detailed message for request validation errors.
Source code inprefect/server/api/server.py
async def validation_exception_handler(request: Request, exc: RequestValidationError):\n \"\"\"Provide a detailed message for request validation errors.\"\"\"\n return JSONResponse(\n status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n content=jsonable_encoder(\n {\n \"exception_message\": \"Invalid request received.\",\n \"exception_detail\": exc.errors(),\n \"request_body\": exc.body,\n }\n ),\n )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states
","text":"Routes for interacting with task run state objects.
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state
async
","text":"Get a task run state by id.
Source code inprefect/server/api/task_run_states.py
@router.get(\"/{id}\")\nasync def read_task_run_state(\n task_run_state_id: UUID = Path(\n ..., description=\"The task run state id\", alias=\"id\"\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n \"\"\"\n Get a task run state by id.\n \"\"\"\n async with db.session_context() as session:\n task_run_state = await models.task_run_states.read_task_run_state(\n session=session, task_run_state_id=task_run_state_id\n )\n if not task_run_state:\n raise HTTPException(\n status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n )\n return task_run_state\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states
async
","text":"Get states associated with a task run.
Source code inprefect/server/api/task_run_states.py
@router.get(\"/\")\nasync def read_task_run_states(\n task_run_id: UUID,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n \"\"\"\n Get states associated with a task run.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_run_states.read_task_run_states(\n session=session, task_run_id=task_run_id\n )\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs
","text":"Routes for interacting with task run objects.
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs
async
","text":"Count task runs.
Source code inprefect/server/api/task_runs.py
@router.post(\"/count\")\nasync def count_task_runs(\n db: PrefectDBInterface = Depends(provide_database_interface),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n \"\"\"\n Count task runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_runs.count_task_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run
async
","text":"Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.
If no state is provided, the task run will be created in a PENDING state.
Source code inprefect/server/api/task_runs.py
@router.post(\"/\")\nasync def create_task_run(\n task_run: schemas.actions.TaskRunCreate,\n response: Response,\n db: PrefectDBInterface = Depends(provide_database_interface),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_task_orchestration_parameters\n ),\n) -> schemas.core.TaskRun:\n \"\"\"\n Create a task run. If a task run with the same flow_run_id,\n task_key, and dynamic_key already exists, the existing task\n run will be returned.\n\n If no state is provided, the task run will be created in a PENDING state.\n \"\"\"\n # hydrate the input model into a full task run / state model\n task_run = schemas.core.TaskRun(**task_run.dict())\n\n if not task_run.state:\n task_run.state = schemas.states.Pending()\n\n now = pendulum.now(\"UTC\")\n\n async with db.session_context(begin_transaction=True) as session:\n model = await models.task_runs.create_task_run(\n session=session,\n task_run=task_run,\n orchestration_parameters=orchestration_parameters,\n )\n\n if model.created >= now:\n response.status_code = status.HTTP_201_CREATED\n\n new_task_run: schemas.core.TaskRun = schemas.core.TaskRun.from_orm(model)\n\n return new_task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Source code inprefect/server/api/task_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Delete a task run by id.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.task_runs.delete_task_run(\n session=session, task_run_id=task_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run
async
","text":"Get a task run by id.
Source code inprefect/server/api/task_runs.py
@router.get(\"/{id}\")\nasync def read_task_run(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n \"\"\"\n Get a task run by id.\n \"\"\"\n async with db.session_context() as session:\n task_run = await models.task_runs.read_task_run(\n session=session, task_run_id=task_run_id\n )\n if not task_run:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n return task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs
async
","text":"Query for task runs.
Source code inprefect/server/api/task_runs.py
@router.post(\"/filter\")\nasync def read_task_runs(\n sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n limit: int = dependencies.LimitBody(),\n offset: int = Body(0, ge=0),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n \"\"\"\n Query for task runs.\n \"\"\"\n async with db.session_context() as session:\n return await models.task_runs.read_task_runs(\n session=session,\n flow_filter=flows,\n flow_run_filter=flow_runs,\n task_run_filter=task_runs,\n deployment_filter=deployments,\n offset=offset,\n limit=limit,\n sort=sort,\n )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state
async
","text":"Set a task run state, invoking any orchestration rules.
Source code inprefect/server/api/task_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n force: bool = Body(\n False,\n description=(\n \"If false, orchestration rules will be applied that may alter or prevent\"\n \" the state transition. If True, orchestration rules are not applied.\"\n ),\n ),\n db: PrefectDBInterface = Depends(provide_database_interface),\n response: Response = None,\n task_policy: BaseOrchestrationPolicy = Depends(\n orchestration_dependencies.provide_task_policy\n ),\n orchestration_parameters: dict = Depends(\n orchestration_dependencies.provide_task_orchestration_parameters\n ),\n) -> OrchestrationResult:\n \"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n now = pendulum.now(\"UTC\")\n\n # create the state\n async with db.session_context(\n begin_transaction=True, with_for_update=True\n ) as session:\n orchestration_result = await models.task_runs.set_task_run_state(\n session=session,\n task_run_id=task_run_id,\n state=schemas.states.State.parse_obj(\n state\n ), # convert to a full State object\n force=force,\n task_policy=task_policy,\n orchestration_parameters=orchestration_parameters,\n )\n\n # set the 201 if a new state was created\n if orchestration_result.state and orchestration_result.state.timestamp >= now:\n response.status_code = status.HTTP_201_CREATED\n else:\n response.status_code = status.HTTP_200_OK\n\n return orchestration_result\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history
async
","text":"Query for task run history data across a given range and interval.
Source code inprefect/server/api/task_runs.py
@router.post(\"/history\")\nasync def task_run_history(\n history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n history_interval: datetime.timedelta = Body(\n ...,\n description=(\n \"The size of each history interval, in seconds. Must be at least 1 second.\"\n ),\n alias=\"history_interval_seconds\",\n ),\n flows: schemas.filters.FlowFilter = None,\n flow_runs: schemas.filters.FlowRunFilter = None,\n task_runs: schemas.filters.TaskRunFilter = None,\n deployments: schemas.filters.DeploymentFilter = None,\n db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n \"\"\"\n Query for task run history data across a given range and interval.\n \"\"\"\n if history_interval < datetime.timedelta(seconds=1):\n raise HTTPException(\n status.HTTP_422_UNPROCESSABLE_ENTITY,\n detail=\"History interval must not be less than 1 second.\",\n )\n\n async with db.session_context() as session:\n return await run_history(\n session=session,\n run_type=\"task_run\",\n history_start=history_start,\n history_end=history_end,\n history_interval=history_interval,\n flows=flows,\n flow_runs=flow_runs,\n task_runs=task_runs,\n deployments=deployments,\n )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run
async
","text":"Updates a task run.
Source code inprefect/server/api/task_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n task_run: schemas.actions.TaskRunUpdate,\n task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n db: PrefectDBInterface = Depends(provide_database_interface),\n):\n \"\"\"\n Updates a task run.\n \"\"\"\n async with db.session_context(begin_transaction=True) as session:\n result = await models.task_runs.update_task_run(\n session=session, task_run=task_run, task_run_id=task_run_id\n )\n if not result:\n raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments
","text":"Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment
async
","text":"Get work queues that can pick up the specified deployment.
Work queues will pick up a deployment when all of the following are met: the deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags); the work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs; and the work queue's specified flow runners match the deployment's flow runner, or the work queue does NOT have a specified flow runner.
Notes on the query: our database currently allows both \"null\" and empty lists as null values in filters, so we need to catch both cases with \"or\", and
json_contains(A, B)
should be interpreted as \"True if A contains B\".Returns:
Type DescriptionList[WorkQueue]
List[db.WorkQueue]: WorkQueues
Source code inprefect/server/models/deployments.py
@inject_db\nasync def check_work_queues_for_deployment(\n db: PrefectDBInterface, session: sa.orm.Session, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n \"\"\"\n Get work queues that can pick up the specified deployment.\n\n Work queues will pick up a deployment when all of the following are met.\n\n - The deployment has ALL tags that the work queue has (i.e. the work\n queue's tags must be a subset of the deployment's tags).\n - The work queue's specified deployment IDs match the deployment's ID,\n or the work queue does NOT have specified deployment IDs.\n - The work queue's specified flow runners match the deployment's flow\n runner or the work queue does NOT have a specified flow runner.\n\n Notes on the query:\n\n - Our database currently allows either \"null\" and empty lists as\n null values in filters, so we need to catch both cases with \"or\".\n - `json_contains(A, B)` should be interpreted as \"True if A\n contains B\".\n\n Returns:\n List[db.WorkQueue]: WorkQueues\n \"\"\"\n deployment = await session.get(db.Deployment, deployment_id)\n if not deployment:\n raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n query = (\n select(db.WorkQueue)\n # work queue tags are a subset of deployment tags\n .filter(\n or_(\n json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n json_contains([], db.WorkQueue.filter[\"tags\"]),\n json_contains(None, db.WorkQueue.filter[\"tags\"]),\n )\n )\n # deployment_ids is null or contains the deployment's ID\n .filter(\n or_(\n json_contains(\n db.WorkQueue.filter[\"deployment_ids\"],\n str(deployment.id),\n ),\n json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n )\n )\n )\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
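The tag condition above is a subset check. An illustrative restatement in plain Python (not the SQL the function actually issues):

def queue_tags_match(queue_tags, deployment_tags) -> bool:
    # a work queue picks up a deployment only when every queue tag is
    # also on the deployment; an empty or null tag filter matches all
    return not queue_tags or set(queue_tags) <= set(deployment_tags)

assert queue_tags_match([], ["etl", "prod"])
assert queue_tags_match(["prod"], ["etl", "prod"])
assert not queue_tags_match(["dev"], ["etl", "prod"])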
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments
async
","text":"Count deployments.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only count deployments whose flows match these criteria
None
flow_run_filter
FlowRunFilter
only count deployments whose flow runs match these criteria
None
task_run_filter
TaskRunFilter
only count deployments whose task runs match these criteria
None
deployment_filter
DeploymentFilter
only count deployments that match these filters
None
work_pool_filter
WorkPoolFilter
only count deployments that match these work pool filters
None
work_queue_filter
WorkQueueFilter
only count deployments that match these work pool queue filters
None
Returns:
Name Type Descriptionint
int
the number of deployments matching filters
Source code inprefect/server/models/deployments.py
@inject_db\nasync def count_deployments(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n \"\"\"\n Count deployments.\n\n Args:\n session: A database session\n flow_filter: only count deployments whose flows match these criteria\n flow_run_filter: only count deployments whose flow runs match these criteria\n task_run_filter: only count deployments whose task runs match these criteria\n deployment_filter: only count deployment that match these filters\n work_pool_filter: only count deployments that match these work pool filters\n work_queue_filter: only count deployments that match these work pool queue filters\n\n Returns:\n int: the number of deployments matching filters\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n query = await _apply_deployment_filters(\n query=query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment
async
","text":"Upserts a deployment.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requireddeployment
Deployment
a deployment model
requiredReturns:
Type Descriptiondb.Deployment: the newly-created or updated deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def create_deployment(\n session: AsyncSession,\n deployment: schemas.core.Deployment,\n db: PrefectDBInterface,\n):\n \"\"\"Upserts a deployment.\n\n Args:\n session: a database session\n deployment: a deployment model\n\n Returns:\n db.Deployment: the newly-created or updated deployment\n\n \"\"\"\n\n # set `updated` manually\n # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n deployment.updated = pendulum.now(\"UTC\")\n\n schedules = deployment.schedules\n insert_values = deployment.dict(\n shallow=True, exclude_unset=True, exclude={\"schedules\"}\n )\n\n insert_stmt = (\n (await db.insert(db.Deployment))\n .values(**insert_values)\n .on_conflict_do_update(\n index_elements=db.deployment_unique_upsert_columns,\n set_={\n **deployment.dict(\n shallow=True,\n exclude_unset=True,\n exclude={\"id\", \"created\", \"created_by\", \"schedules\"},\n ),\n },\n )\n )\n\n await session.execute(insert_stmt)\n\n # Get the id of the deployment we just created or updated\n result = await session.execute(\n sa.select(db.Deployment.id).where(\n sa.and_(\n db.Deployment.flow_id == deployment.flow_id,\n db.Deployment.name == deployment.name,\n )\n )\n )\n deployment_id = result.scalar_one_or_none()\n\n if not deployment_id:\n return None\n\n # Because this was possibly an upsert, we need to delete any existing\n # schedules and any runs from the old deployment.\n\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n )\n\n await delete_schedules_for_deployment(session=session, deployment_id=deployment_id)\n\n if schedules:\n await create_deployment_schedules(\n session=session,\n deployment_id=deployment_id,\n schedules=[\n schemas.actions.DeploymentScheduleCreate(\n schedule=schedule.schedule,\n active=schedule.active, # type: ignore[call-arg]\n )\n for schedule in schedules\n ],\n )\n\n query = (\n sa.select(db.Deployment)\n .where(\n sa.and_(\n db.Deployment.flow_id == deployment.flow_id,\n db.Deployment.name == deployment.name,\n )\n )\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment_schedules","title":"create_deployment_schedules
async
","text":"Creates a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
requiredschedules
List[DeploymentScheduleCreate]
a list of deployment schedule create actions
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def create_deployment_schedules(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n schedules: List[schemas.actions.DeploymentScheduleCreate],\n) -> List[schemas.core.DeploymentSchedule]:\n \"\"\"\n Creates a deployment's schedules.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n schedules: a list of deployment schedule create actions\n \"\"\"\n\n schedules_with_deployment_id = []\n for schedule in schedules:\n data = schedule.dict()\n data[\"deployment_id\"] = deployment_id\n schedules_with_deployment_id.append(data)\n\n models = [\n db.DeploymentSchedule(**schedule) for schedule in schedules_with_deployment_id\n ]\n session.add_all(models)\n await session.flush()\n\n return [schemas.core.DeploymentSchedule.from_orm(m) for m in models]\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment
async
","text":"Delete a deployment by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the deployment was deleted
Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_deployment(\n session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a deployment by id.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n bool: whether or not the deployment was deleted\n \"\"\"\n\n # delete scheduled runs, both auto- and user- created.\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, auto_scheduled_only=False\n )\n\n result = await session.execute(\n delete(db.Deployment).where(db.Deployment.id == deployment_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment_schedule","title":"delete_deployment_schedule
async
","text":"Deletes a deployment schedule.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_schedule_id
UUID
a deployment schedule id
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_deployment_schedule(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_id: UUID,\n) -> bool:\n \"\"\"\n Deletes a deployment schedule.\n\n Args:\n session: A database session\n deployment_schedule_id: a deployment schedule id\n \"\"\"\n\n result = await session.execute(\n sa.delete(db.DeploymentSchedule).where(\n sa.and_(\n db.DeploymentSchedule.id == deployment_schedule_id,\n db.DeploymentSchedule.deployment_id == deployment_id,\n )\n )\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_schedules_for_deployment","title":"delete_schedules_for_deployment
async
","text":"Deletes a deployment schedule.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def delete_schedules_for_deployment(\n db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n \"\"\"\n Deletes a deployment schedule.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n \"\"\"\n\n result = await session.execute(\n sa.delete(db.DeploymentSchedule).where(\n db.DeploymentSchedule.deployment_id == deployment_id\n )\n )\n\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment
async
","text":"Reads a deployment by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Type Descriptiondb.Deployment: the deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment(\n session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n):\n \"\"\"Reads a deployment by id.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n db.Deployment: the deployment\n \"\"\"\n\n return await session.get(db.Deployment, deployment_id)\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name
async
","text":"Reads a deployment by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a deployment name
requiredflow_name
str
the name of the flow the deployment belongs to
requiredReturns:
Type Descriptiondb.Deployment: the deployment
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment_by_name(\n session: sa.orm.Session, name: str, flow_name: str, db: PrefectDBInterface\n):\n \"\"\"Reads a deployment by name.\n\n Args:\n session: A database session\n name: a deployment name\n flow_name: the name of the flow the deployment belongs to\n\n Returns:\n db.Deployment: the deployment\n \"\"\"\n\n result = await session.execute(\n select(db.Deployment)\n .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n .where(\n sa.and_(\n db.Flow.name == flow_name,\n db.Deployment.name == name,\n )\n )\n .limit(1)\n )\n return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_schedules","title":"read_deployment_schedules
async
","text":"Reads a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_id
UUID
a deployment id
requiredReturns:
Type DescriptionList[DeploymentSchedule]
list[schemas.core.DeploymentSchedule]: the deployment's schedules
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployment_schedules(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_filter: Optional[\n schemas.filters.DeploymentScheduleFilter\n ] = None,\n) -> List[schemas.core.DeploymentSchedule]:\n \"\"\"\n Reads a deployment's schedules.\n\n Args:\n session: A database session\n deployment_id: a deployment id\n\n Returns:\n list[schemas.core.DeploymentSchedule]: the deployment's schedules\n \"\"\"\n\n query = (\n sa.select(db.DeploymentSchedule)\n .where(db.DeploymentSchedule.deployment_id == deployment_id)\n .order_by(db.DeploymentSchedule.updated.desc())\n )\n\n if deployment_schedule_filter:\n query = query.where(deployment_schedule_filter.as_sql_filter(db))\n\n result = await session.execute(query)\n\n return [schemas.core.DeploymentSchedule.from_orm(s) for s in result.scalars().all()]\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments
async
","text":"Read deployments.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredoffset
int
Query offset
None
limit
int
Query limit
None
flow_filter
FlowFilter
only select deployments whose flows match these criteria
None
flow_run_filter
FlowRunFilter
only select deployments whose flow runs match these criteria
None
task_run_filter
TaskRunFilter
only select deployments whose task runs match these criteria
None
deployment_filter
DeploymentFilter
only select deployments that match these filters
None
work_pool_filter
WorkPoolFilter
only select deployments whose work pools match these criteria
None
work_queue_filter
WorkQueueFilter
only select deployments whose work pool queues match these criteria
None
sort
DeploymentSort
the sort criteria for selected deployments. Defaults to name
ASC.
NAME_ASC
Returns:
Type DescriptionList[db.Deployment]: deployments
Source code inprefect/server/models/deployments.py
@inject_db\nasync def read_deployments(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n offset: int = None,\n limit: int = None,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n):\n \"\"\"\n Read deployments.\n\n Args:\n session: A database session\n offset: Query offset\n limit: Query limit\n flow_filter: only select deployments whose flows match these criteria\n flow_run_filter: only select deployments whose flow runs match these criteria\n task_run_filter: only select deployments whose task runs match these criteria\n deployment_filter: only select deployment that match these filters\n work_pool_filter: only select deployments whose work pools match these criteria\n work_queue_filter: only select deployments whose work pool queues match these criteria\n sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n Returns:\n List[db.Deployment]: deployments\n \"\"\"\n\n query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n query = await _apply_deployment_filters(\n query=query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs
async
","text":"Schedule flow runs for a deployment
Parameters:
Name Type Description Defaultsession
Session
a database session
requireddeployment_id
UUID
the id of the deployment to schedule
requiredstart_time
datetime
the time from which to start scheduling runs
None
end_time
datetime
runs will be scheduled until at most this time
None
min_time
timedelta
runs will be scheduled until at least this far in the future
None
min_runs
int
a minimum number of runs to schedule
None
max_runs
int
a maximum number of runs to schedule
None
This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.
- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time` + `min_time` is reached\n
Returns:
Type DescriptionList[UUID]
a list of flow run ids scheduled for the deployment
Source code inprefect/server/models/deployments.py
async def schedule_runs(\n session: sa.orm.Session,\n deployment_id: UUID,\n start_time: datetime.datetime = None,\n end_time: datetime.datetime = None,\n min_time: datetime.timedelta = None,\n min_runs: int = None,\n max_runs: int = None,\n auto_scheduled: bool = True,\n) -> List[UUID]:\n \"\"\"\n Schedule flow runs for a deployment\n\n Args:\n session: a database session\n deployment_id: the id of the deployment to schedule\n start_time: the time from which to start scheduling runs\n end_time: runs will be scheduled until at most this time\n min_time: runs will be scheduled until at least this far in the future\n min_runs: a minimum amount of runs to schedule\n max_runs: a maximum amount of runs to schedule\n\n This function will generate the minimum number of runs that satisfy the min\n and max times, and the min and max counts. Specifically, the following order\n will be respected.\n\n - Runs will be generated starting on or after the `start_time`\n - No more than `max_runs` runs will be generated\n - No runs will be generated after `end_time` is reached\n - At least `min_runs` runs will be generated\n - Runs will be generated until at least `start_time` + `min_time` is reached\n\n Returns:\n a list of flow run ids scheduled for the deployment\n \"\"\"\n if min_runs is None:\n min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n if max_runs is None:\n max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n if start_time is None:\n start_time = pendulum.now(\"UTC\")\n if end_time is None:\n end_time = start_time + (\n PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n )\n if min_time is None:\n min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n start_time = pendulum.instance(start_time)\n end_time = pendulum.instance(end_time)\n\n runs = await _generate_scheduled_flow_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=start_time,\n end_time=end_time,\n min_time=min_time,\n min_runs=min_runs,\n max_runs=max_runs,\n auto_scheduled=auto_scheduled,\n )\n return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
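A minimal sketch of constraining the scheduling window (the helper and its values are illustrative; unset parameters fall back to the PREFECT_API_SERVICES_SCHEDULER_* settings shown above):

import datetime
import pendulum
from prefect.server import models

async def schedule_next_day(session, deployment_id):
    # Schedule at most 10 runs over the next 24 hours; fewer are created
    # if the deployment's schedule yields fewer occurrences in the window.
    start = pendulum.now("UTC")
    return await models.deployments.schedule_runs(
        session=session,
        deployment_id=deployment_id,
        start_time=start,
        end_time=start + datetime.timedelta(days=1),
        max_runs=10,
    )  # returns the scheduled flow run UUIDs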
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment
async
","text":"Updates a deployment.
Parameters:
Name Type Description Defaultsession
Session
a database session
requireddeployment_id
UUID
the ID of the deployment to modify
requireddeployment
DeploymentUpdate
changes to a deployment model
requiredReturns:
Name Type Descriptionbool
bool
whether the deployment was updated
Source code inprefect/server/models/deployments.py
@inject_db\nasync def update_deployment(\n session: sa.orm.Session,\n deployment_id: UUID,\n deployment: schemas.actions.DeploymentUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"Updates a deployment.\n\n Args:\n session: a database session\n deployment_id: the ID of the deployment to modify\n deployment: changes to a deployment model\n\n Returns:\n bool: whether the deployment was updated\n\n \"\"\"\n\n schedules = deployment.schedules\n\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n update_data = deployment.dict(\n shallow=True,\n exclude_unset=True,\n exclude={\"work_pool_name\"},\n )\n\n should_update_schedules = update_data.pop(\"schedules\", None) is not None\n\n if deployment.work_pool_name and deployment.work_queue_name:\n # If a specific pool name/queue name combination was provided, get the\n # ID for that work pool queue.\n update_data[\n \"work_queue_id\"\n ] = await WorkerLookups()._get_work_queue_id_from_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n work_queue_name=deployment.work_queue_name,\n create_queue_if_not_found=True,\n )\n elif deployment.work_pool_name:\n # If just a pool name was provided, get the ID for its default\n # work pool queue.\n update_data[\n \"work_queue_id\"\n ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n session=session,\n work_pool_name=deployment.work_pool_name,\n )\n elif deployment.work_queue_name:\n # If just a queue name was provided, ensure the queue exists and\n # get its ID.\n work_queue = await models.work_queues._ensure_work_queue_exists(\n session=session, name=update_data[\"work_queue_name\"], db=db\n )\n update_data[\"work_queue_id\"] = work_queue.id\n\n if \"is_schedule_active\" in update_data:\n update_data[\"paused\"] = not update_data[\"is_schedule_active\"]\n\n update_stmt = (\n sa.update(db.Deployment)\n .where(db.Deployment.id == deployment_id)\n .values(**update_data)\n )\n result = await session.execute(update_stmt)\n\n # delete any auto scheduled runs that would have reflected the old deployment config\n await _delete_scheduled_runs(\n session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n )\n\n if should_update_schedules:\n # If schedules were provided, remove the existing schedules and\n # replace them with the new ones.\n await delete_schedules_for_deployment(\n session=session, deployment_id=deployment_id\n )\n await create_deployment_schedules(\n session=session,\n deployment_id=deployment_id,\n schedules=[\n schemas.actions.DeploymentScheduleCreate(\n schedule=schedule.schedule,\n active=schedule.active, # type: ignore[call-arg]\n )\n for schedule in schedules\n ],\n )\n\n return result.rowcount > 0\n
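Because the update model is serialized with exclude_unset=True, a sparse update touches only the fields you explicitly set. A hedged sketch (the field choice is illustrative):

from prefect.server import models, schemas

async def set_description(session, deployment_id):
    # Only `description` is written; unset fields keep their stored values.
    update = schemas.actions.DeploymentUpdate(description="Nightly ETL, v2")
    return await models.deployments.update_deployment(
        session=session,
        deployment_id=deployment_id,
        deployment=update,
    )  # True if a matching row was updated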
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment_schedule","title":"update_deployment_schedule
async
","text":"Updates a deployment's schedules.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requireddeployment_schedule_id
UUID
a deployment schedule id
requiredschedule
DeploymentScheduleUpdate
a deployment schedule update action
required Source code inprefect/server/models/deployments.py
@inject_db\nasync def update_deployment_schedule(\n db: PrefectDBInterface,\n session: AsyncSession,\n deployment_id: UUID,\n deployment_schedule_id: UUID,\n schedule: schemas.actions.DeploymentScheduleUpdate,\n) -> bool:\n \"\"\"\n Updates a deployment's schedules.\n\n Args:\n session: A database session\n deployment_schedule_id: a deployment schedule id\n schedule: a deployment schedule update action\n \"\"\"\n\n result = await session.execute(\n sa.update(db.DeploymentSchedule)\n .where(\n sa.and_(\n db.DeploymentSchedule.id == deployment_schedule_id,\n db.DeploymentSchedule.deployment_id == deployment_id,\n )\n )\n .values(**schedule.dict(exclude_none=True))\n )\n\n return result.rowcount > 0\n
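A minimal sketch of deactivating one schedule; note that only non-None fields on the update action are applied, since the model is serialized with exclude_none=True (the helper is hypothetical):

from prefect.server import models, schemas

async def pause_schedule(session, deployment_id, schedule_id):
    # Flip `active` to False without touching the schedule definition itself.
    return await models.deployments.update_deployment_schedule(
        session=session,
        deployment_id=deployment_id,
        deployment_schedule_id=schedule_id,
        schedule=schemas.actions.DeploymentScheduleUpdate(active=False),
    )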
"},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states
","text":"Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state
async
","text":"Delete a flow run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_state_id
UUID
a flow run state id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow run state was deleted
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def delete_flow_run_state(\n session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow run state by id.\n\n Args:\n session: A database session\n flow_run_state_id: a flow run state id\n\n Returns:\n bool: whether or not the flow run state was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state
async
","text":"Reads a flow run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_state_id
UUID
a flow run state id
requiredReturns:
Type Descriptiondb.FlowRunState: the flow state
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_state(\n session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a flow run state by id.\n\n Args:\n session: A database session\n flow_run_state_id: a flow run state id\n\n Returns:\n db.FlowRunState: the flow state\n \"\"\"\n\n return await session.get(db.FlowRunState, flow_run_state_id)\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states
async
","text":"Reads flow runs states for a flow run.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_run_id
UUID
the flow run id
requiredReturns:
Type DescriptionList[db.FlowRunState]: the flow run states
Source code inprefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_states(\n session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads flow runs states for a flow run.\n\n Args:\n session: A database session\n flow_run_id: the flow run id\n\n Returns:\n List[db.FlowRunState]: the flow run states\n \"\"\"\n\n query = (\n select(db.FlowRunState)\n .filter_by(flow_run_id=flow_run_id)\n .order_by(db.FlowRunState.timestamp)\n )\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs
","text":"Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs
async
","text":"Count flow runs.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_filter
FlowFilter
only count flow runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only count flow runs that match these filters
None
task_run_filter
TaskRunFilter
only count flow runs whose task runs match these filters
None
deployment_filter
DeploymentFilter
only count flow runs whose deployments match these filters
None
Returns:
Name Type Descriptionint
int
count of flow runs
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def count_flow_runs(\n session: AsyncSession,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n \"\"\"\n Count flow runs.\n\n Args:\n session: a database session\n flow_filter: only count flow runs whose flows match these filters\n flow_run_filter: only count flow runs that match these filters\n task_run_filter: only count flow runs whose task runs match these filters\n deployment_filter: only count flow runs whose deployments match these filters\n\n Returns:\n int: count of flow runs\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n query = await _apply_flow_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run
async
","text":"Creates a new flow run.
If the provided flow run has a state attached, it will also be created.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run
FlowRun
a flow run model
requiredReturns:
Type Descriptiondb.FlowRun: the newly-created flow run
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def create_flow_run(\n session: AsyncSession,\n flow_run: schemas.core.FlowRun,\n db: PrefectDBInterface,\n orchestration_parameters: Optional[dict] = None,\n):\n \"\"\"Creates a new flow run.\n\n If the provided flow run has a state attached, it will also be created.\n\n Args:\n session: a database session\n flow_run: a flow run model\n\n Returns:\n db.FlowRun: the newly-created flow run\n \"\"\"\n now = pendulum.now(\"UTC\")\n\n flow_run_dict = dict(\n **flow_run.dict(\n shallow=True,\n exclude={\n \"created\",\n \"state\",\n \"estimated_run_time\",\n \"estimated_start_time_delta\",\n },\n exclude_unset=True,\n ),\n created=now,\n )\n\n # if no idempotency key was provided, create the run directly\n if not flow_run.idempotency_key:\n model = db.FlowRun(**flow_run_dict)\n session.add(model)\n await session.flush()\n\n # otherwise let the database take care of enforcing idempotency\n else:\n insert_stmt = (\n (await db.insert(db.FlowRun))\n .values(**flow_run_dict)\n .on_conflict_do_nothing(\n index_elements=db.flow_run_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n # read the run to see if idempotency was applied or not\n query = (\n sa.select(db.FlowRun)\n .where(\n sa.and_(\n db.FlowRun.flow_id == flow_run.flow_id,\n db.FlowRun.idempotency_key == flow_run.idempotency_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n result = await session.execute(query)\n model = result.scalar()\n\n # if the flow run was created in this function call then we need to set the\n # state. If it was created idempotently, the created time won't match.\n if model.created == now and flow_run.state:\n await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=model.id,\n state=flow_run.state,\n force=True,\n orchestration_parameters=orchestration_parameters,\n )\n return model\n
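The idempotency behavior can be sketched as follows (the key value is illustrative): two calls with the same flow_id and idempotency_key return the same run rather than creating a duplicate.

from prefect.server import models, schemas

async def create_nightly_run(session, flow_id):
    # Re-running this with the same key reads back the originally
    # created run instead of inserting a second row.
    flow_run = schemas.core.FlowRun(
        flow_id=flow_id,
        idempotency_key="nightly-2024-01-01",  # illustrative key
    )
    return await models.flow_runs.create_flow_run(session=session, flow_run=flow_run)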
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run
async
","text":"Delete a flow run by flow_run_id.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requiredflow_run_id
UUID
a flow run id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow run was deleted
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def delete_flow_run(\n session: AsyncSession, flow_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow run by flow_run_id.\n\n Args:\n session: A database session\n flow_run_id: a flow run id\n\n Returns:\n bool: whether or not the flow run was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run
async
","text":"Reads a flow run by id.
Parameters:
Name Type Description Defaultsession
AsyncSession
A database session
requiredflow_run_id
UUID
a flow run id
requiredReturns:
Type Descriptiondb.FlowRun: the flow run
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run(\n session: AsyncSession,\n flow_run_id: UUID,\n db: PrefectDBInterface,\n for_update: bool = False,\n):\n \"\"\"\n Reads a flow run by id.\n\n Args:\n session: A database session\n flow_run_id: a flow run id\n\n Returns:\n db.FlowRun: the flow run\n \"\"\"\n select = (\n sa.select(db.FlowRun)\n .where(db.FlowRun.id == flow_run_id)\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n\n if for_update:\n select = select.with_for_update()\n\n result = await session.execute(select)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run_graph","title":"read_flow_run_graph
async
","text":"Given a flow run, return the graph of it's task and subflow runs. If a since
datetime is provided, only return items that may have changed since that time.
prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run_graph(\n db: PrefectDBInterface,\n session: AsyncSession,\n flow_run_id: UUID,\n since: datetime.datetime = datetime.datetime.min,\n) -> Graph:\n \"\"\"Given a flow run, return the graph of it's task and subflow runs. If a `since`\n datetime is provided, only return items that may have changed since that time.\"\"\"\n return await db.queries.flow_run_graph_v2(\n db=db,\n session=session,\n flow_run_id=flow_run_id,\n since=since,\n max_nodes=PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES.value(),\n max_artifacts=PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS.value(),\n )\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs
async
","text":"Read flow runs.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredcolumns
List
a list of the flow run ORM columns to load, for performance
None
flow_filter
FlowFilter
only select flow runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only select flow runs that match these filters
None
task_run_filter
TaskRunFilter
only select flow runs whose task runs match these filters
None
deployment_filter
DeploymentFilter
only select flow runs whose deployments match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
sort
FlowRunSort
Query sort
ID_DESC
Returns:
Type DescriptionList[db.FlowRun]: flow runs
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_runs(\n session: AsyncSession,\n db: PrefectDBInterface,\n columns: List = None,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n work_queue_filter: schemas.filters.WorkQueueFilter = None,\n offset: int = None,\n limit: int = None,\n sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n):\n \"\"\"\n Read flow runs.\n\n Args:\n session: a database session\n columns: a list of the flow run ORM columns to load, for performance\n flow_filter: only select flow runs whose flows match these filters\n flow_run_filter: only select flow runs match these filters\n task_run_filter: only select flow runs whose task runs match these filters\n deployment_filter: only select flow runs whose deployments match these filters\n offset: Query offset\n limit: Query limit\n sort: Query sort\n\n Returns:\n List[db.FlowRun]: flow runs\n \"\"\"\n query = (\n select(db.FlowRun)\n .order_by(sort.as_sql_sort(db))\n .options(\n selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n )\n )\n\n if columns:\n query = query.options(load_only(*columns))\n\n query = await _apply_flow_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n work_queue_filter=work_queue_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
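A hedged sketch of filtering by state (the FAILED filter below is illustrative; the nested filter classes are assumed from schemas.filters):

from prefect.server import models, schemas

async def recent_failed_runs(session):
    # Select flow runs whose current state type is FAILED, newest first
    # (ID_DESC is the default sort).
    flow_run_filter = schemas.filters.FlowRunFilter(
        state=schemas.filters.FlowRunFilterState(
            type=schemas.filters.FlowRunFilterStateType(any_=["FAILED"])
        )
    )
    return await models.flow_runs.read_flow_runs(
        session=session,
        flow_run_filter=flow_run_filter,
        limit=50,
    )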
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies
async
","text":"Get a task run dependency map for a given flow run.
Source code inprefect/server/models/flow_runs.py
async def read_task_run_dependencies(\n session: AsyncSession,\n flow_run_id: UUID,\n) -> List[DependencyResult]:\n \"\"\"\n Get a task run dependency map for a given flow run.\n \"\"\"\n flow_run = await models.flow_runs.read_flow_run(\n session=session, flow_run_id=flow_run_id\n )\n if not flow_run:\n raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n task_runs = await models.task_runs.read_task_runs(\n session=session,\n flow_run_filter=schemas.filters.FlowRunFilter(\n id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n ),\n )\n\n dependency_graph = []\n\n for task_run in task_runs:\n inputs = list(set(chain(*task_run.task_inputs.values())))\n untrackable_result_status = (\n False\n if task_run.state is None\n else task_run.state.state_details.untrackable_result\n )\n dependency_graph.append(\n {\n \"id\": task_run.id,\n \"upstream_dependencies\": inputs,\n \"state\": task_run.state,\n \"expected_start_time\": task_run.expected_start_time,\n \"name\": task_run.name,\n \"start_time\": task_run.start_time,\n \"end_time\": task_run.end_time,\n \"total_run_time\": task_run.total_run_time,\n \"estimated_run_time\": task_run.estimated_run_time,\n \"untrackable_result\": untrackable_result_status,\n }\n )\n\n return dependency_graph\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state
async
","text":"Creates a new orchestrated flow run state.
Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead triggers orchestration rules to govern the proposed state
input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A force
flag is supplied to bypass a subset of orchestration logic.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run_id
UUID
the flow run id
requiredstate
State
a flow run state model
requiredforce
bool
if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.
False
Returns:
Type DescriptionOrchestrationResult
OrchestrationResult object
Source code inprefect/server/models/flow_runs.py
async def set_flow_run_state(\n session: AsyncSession,\n flow_run_id: UUID,\n state: schemas.states.State,\n force: bool = False,\n flow_policy: BaseOrchestrationPolicy = None,\n orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n \"\"\"\n Creates a new orchestrated flow run state.\n\n Setting a new state on a run is the one of the principal actions that is governed by\n Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n but instead trigger orchestration rules to govern the proposed `state` input. If\n the state is considered valid, it will be written to the database. Otherwise, a\n it's possible a different state, or no state, will be created. A `force` flag is\n supplied to bypass a subset of orchestration logic.\n\n Args:\n session: a database session\n flow_run_id: the flow run id\n state: a flow run state model\n force: if False, orchestration rules will be applied that may alter or prevent\n the state transition. If True, orchestration rules are not applied.\n\n Returns:\n OrchestrationResult object\n \"\"\"\n\n # load the flow run\n run = await models.flow_runs.read_flow_run(\n session=session,\n flow_run_id=flow_run_id,\n # Lock the row to prevent orchestration race conditions\n for_update=True,\n )\n\n if not run:\n raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n initial_state = run.state.as_state() if run.state else None\n initial_state_type = initial_state.type if initial_state else None\n proposed_state_type = state.type if state else None\n intended_transition = (initial_state_type, proposed_state_type)\n\n if force or flow_policy is None:\n flow_policy = MinimalFlowPolicy\n\n orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n context = FlowOrchestrationContext(\n session=session,\n run=run,\n initial_state=initial_state,\n proposed_state=state,\n )\n\n if orchestration_parameters is not None:\n context.parameters = orchestration_parameters\n\n # apply orchestration rules and create the new flow run state\n async with contextlib.AsyncExitStack() as stack:\n for rule in orchestration_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n for rule in global_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n await context.validate_proposed_state()\n\n if context.orchestration_error is not None:\n raise context.orchestration_error\n\n result = OrchestrationResult(\n state=context.validated_state,\n status=context.response_status,\n details=context.response_details,\n )\n\n # if a new state is being set (either ACCEPTED from user or REJECTED\n # and set by the server), check for any notification policies\n if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n await models.flow_run_notification_policies.queue_flow_run_notifications(\n session=session, flow_run=run\n )\n\n return result\n
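A sketch of proposing a transition and inspecting the orchestration result (the state constructor is assumed from prefect.server.schemas.states):

from prefect.server import models
from prefect.server.schemas.states import Cancelled

async def cancel_run(session, flow_run_id):
    # Propose a Cancelled state; with force=False the core policy may
    # accept, reject, or rewrite the transition.
    result = await models.flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=Cancelled(),
        force=False,
    )
    return result.status  # e.g. SetStateStatus.ACCEPT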
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run
async
","text":"Updates a flow run.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredflow_run_id
UUID
the flow run id to update
requiredflow_run
FlowRunUpdate
a flow run model
requiredReturns:
Name Type Descriptionbool
bool
whether or not matching rows were found to update
Source code inprefect/server/models/flow_runs.py
@inject_db\nasync def update_flow_run(\n session: AsyncSession,\n flow_run_id: UUID,\n flow_run: schemas.actions.FlowRunUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"\n Updates a flow run.\n\n Args:\n session: a database session\n flow_run_id: the flow run id to update\n flow_run: a flow run model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.FlowRun)\n .where(db.FlowRun.id == flow_run_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**flow_run.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows
","text":"Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows
async
","text":"Count flows.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only count flows that match these filters
None
flow_run_filter
FlowRunFilter
only count flows whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only count flows whose task runs match these filters
None
deployment_filter
DeploymentFilter
only count flows whose deployments match these filters
None
work_pool_filter
WorkPoolFilter
only count flows whose work pools match these filters
None
Returns:
Name Type Descriptionint
int
count of flows
Source code inprefect/server/models/flows.py
@inject_db\nasync def count_flows(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n) -> int:\n \"\"\"\n Count flows.\n\n Args:\n session: A database session\n flow_filter: only count flows that match these filters\n flow_run_filter: only count flows whose flow runs match these filters\n task_run_filter: only count flows whose task runs match these filters\n deployment_filter: only count flows whose deployments match these filters\n work_pool_filter: only count flows whose work pools match these filters\n\n Returns:\n int: count of flows\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n query = await _apply_flow_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow
async
","text":"Creates a new flow.
If a flow with the same name already exists, the existing flow is returned.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow
Flow
a flow model
requiredReturns:
Type Descriptiondb.Flow: the newly-created or existing flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def create_flow(\n session: sa.orm.Session, flow: schemas.core.Flow, db: PrefectDBInterface\n):\n \"\"\"\n Creates a new flow.\n\n If a flow with the same name already exists, the existing flow is returned.\n\n Args:\n session: a database session\n flow: a flow model\n\n Returns:\n db.Flow: the newly-created or existing flow\n \"\"\"\n\n insert_stmt = (\n (await db.insert(db.Flow))\n .values(**flow.dict(shallow=True, exclude_unset=True))\n .on_conflict_do_nothing(\n index_elements=db.flow_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.Flow)\n .where(\n db.Flow.name == flow.name,\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n return model\n
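Because flow names are unique and the insert uses ON CONFLICT DO NOTHING, this call behaves as a get-or-create. A minimal sketch (the flow name is illustrative):

from prefect.server import models, schemas

async def get_or_create_flow(session):
    # The first call inserts the row; later calls with the same name
    # return the existing flow unchanged.
    flow = schemas.core.Flow(name="etl")
    return await models.flows.create_flow(session=session, flow=flow)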
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow
async
","text":"Delete a flow by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_id
UUID
a flow id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the flow was deleted
Source code inprefect/server/models/flows.py
@inject_db\nasync def delete_flow(\n session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a flow by id.\n\n Args:\n session: A database session\n flow_id: a flow id\n\n Returns:\n bool: whether or not the flow was deleted\n \"\"\"\n\n result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow
async
","text":"Reads a flow by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_id
UUID
a flow id
requiredReturns:
Type Descriptiondb.Flow: the flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flow(session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface):\n \"\"\"\n Reads a flow by id.\n\n Args:\n session: A database session\n flow_id: a flow id\n\n Returns:\n db.Flow: the flow\n \"\"\"\n return await session.get(db.Flow, flow_id)\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name
async
","text":"Reads a flow by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a flow name
requiredReturns:
Type Descriptiondb.Flow: the flow
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flow_by_name(session: sa.orm.Session, name: str, db: PrefectDBInterface):\n \"\"\"\n Reads a flow by name.\n\n Args:\n session: A database session\n name: a flow name\n\n Returns:\n db.Flow: the flow\n \"\"\"\n\n result = await session.execute(select(db.Flow).filter_by(name=name))\n return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows
async
","text":"Read multiple flows.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredflow_filter
FlowFilter
only select flows that match these filters
None
flow_run_filter
FlowRunFilter
only select flows whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only select flows whose task runs match these filters
None
deployment_filter
DeploymentFilter
only select flows whose deployments match these filters
None
work_pool_filter
WorkPoolFilter
only select flows whose work pools match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
Returns:
Type DescriptionList[db.Flow]: flows
Source code inprefect/server/models/flows.py
@inject_db\nasync def read_flows(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n work_pool_filter: schemas.filters.WorkPoolFilter = None,\n sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n offset: int = None,\n limit: int = None,\n):\n \"\"\"\n Read multiple flows.\n\n Args:\n session: A database session\n flow_filter: only select flows that match these filters\n flow_run_filter: only select flows whose flow runs match these filters\n task_run_filter: only select flows whose task runs match these filters\n deployment_filter: only select flows whose deployments match these filters\n work_pool_filter: only select flows whose work pools match these filters\n offset: Query offset\n limit: Query limit\n\n Returns:\n List[db.Flow]: flows\n \"\"\"\n\n query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n query = await _apply_flow_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n work_pool_filter=work_pool_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow
async
","text":"Updates a flow.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_id
UUID
the flow id to update
requiredflow
FlowUpdate
a flow update model
requiredReturns:
Name Type Descriptionbool
whether or not matching rows were found to update
Source code inprefect/server/models/flows.py
@inject_db\nasync def update_flow(\n session: sa.orm.Session,\n flow_id: UUID,\n flow: schemas.actions.FlowUpdate,\n db: PrefectDBInterface,\n):\n \"\"\"\n Updates a flow.\n\n Args:\n session: a database session\n flow_id: the flow id to update\n flow: a flow update model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.Flow)\n .where(db.Flow.id == flow_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**flow.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches
","text":"Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search
async
","text":"Upserts a SavedSearch.
If a SavedSearch with the same name exists, all properties will be updated.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredsaved_search
SavedSearch
a SavedSearch model
requiredReturns:
Type Descriptiondb.SavedSearch: the newly-created or updated SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def create_saved_search(\n session: sa.orm.Session,\n saved_search: schemas.core.SavedSearch,\n db: PrefectDBInterface,\n):\n \"\"\"\n Upserts a SavedSearch.\n\n If a SavedSearch with the same name exists, all properties will be updated.\n\n Args:\n session (sa.orm.Session): a database session\n saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n Returns:\n db.SavedSearch: the newly-created or updated SavedSearch\n\n \"\"\"\n\n insert_stmt = (\n (await db.insert(db.SavedSearch))\n .values(**saved_search.dict(shallow=True, exclude_unset=True))\n .on_conflict_do_update(\n index_elements=db.saved_search_unique_upsert_columns,\n set_=saved_search.dict(shallow=True, include={\"filters\"}),\n )\n )\n\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.SavedSearch)\n .where(\n db.SavedSearch.name == saved_search.name,\n )\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n\n return model\n
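In contrast to create_flow, the conflict clause here is DO UPDATE, so re-posting a search with the same name overwrites its filters. A hedged sketch (the filters payload is illustrative):

from prefect.server import models, schemas

async def save_search(session):
    # Upsert by name: an existing "my-failed-runs" search has its filters
    # replaced; otherwise a new row is created.
    search = schemas.core.SavedSearch(
        name="my-failed-runs",
        filters=[],  # illustrative; normally a list of saved filter entries
    )
    return await models.saved_searches.create_saved_search(
        session=session, saved_search=search
    )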
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search
async
","text":"Delete a SavedSearch by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredsaved_search_id
str
a SavedSearch id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the SavedSearch was deleted
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def delete_saved_search(\n session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a SavedSearch by id.\n\n Args:\n session (sa.orm.Session): A database session\n saved_search_id (str): a SavedSearch id\n\n Returns:\n bool: whether or not the SavedSearch was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search
async
","text":"Reads a SavedSearch by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredsaved_search_id
str
a SavedSearch id
requiredReturns:
Type Descriptiondb.SavedSearch: the SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search(\n session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a SavedSearch by id.\n\n Args:\n session (sa.orm.Session): A database session\n saved_search_id (str): a SavedSearch id\n\n Returns:\n db.SavedSearch: the SavedSearch\n \"\"\"\n\n return await session.get(db.SavedSearch, saved_search_id)\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name
async
","text":"Reads a SavedSearch by name.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredname
str
a SavedSearch name
requiredReturns:
Type Descriptiondb.SavedSearch: the SavedSearch
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search_by_name(\n session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n \"\"\"\n Reads a SavedSearch by name.\n\n Args:\n session (sa.orm.Session): A database session\n name (str): a SavedSearch name\n\n Returns:\n db.SavedSearch: the SavedSearch\n \"\"\"\n result = await session.execute(\n select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n )\n return result.scalar()\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches
async
","text":"Read SavedSearches.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredoffset
int
Query offset
None
limit
int
Query limit
None
Returns:
Type DescriptionList[db.SavedSearch]: SavedSearches
Source code inprefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_searches(\n db: PrefectDBInterface,\n session: sa.orm.Session,\n offset: int = None,\n limit: int = None,\n):\n \"\"\"\n Read SavedSearches.\n\n Args:\n session (sa.orm.Session): A database session\n offset (int): Query offset\n limit(int): Query limit\n\n Returns:\n List[db.SavedSearch]: SavedSearches\n \"\"\"\n\n query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n if offset is not None:\n query = query.offset(offset)\n if limit is not None:\n query = query.limit(limit)\n\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states
","text":"Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state
async
","text":"Delete a task run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_state_id
UUID
a task run state id
requiredReturns:
Name Type Descriptionbool
bool
whether or not the task run state was deleted
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def delete_task_run_state(\n session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a task run state by id.\n\n Args:\n session: A database session\n task_run_state_id: a task run state id\n\n Returns:\n bool: whether or not the task run state was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state
async
","text":"Reads a task run state by id.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_state_id
UUID
a task run state id
requiredReturns:
Type Descriptiondb.TaskRunState: the task state
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_state(\n session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads a task run state by id.\n\n Args:\n session: A database session\n task_run_state_id: a task run state id\n\n Returns:\n db.TaskRunState: the task state\n \"\"\"\n\n return await session.get(db.TaskRunState, task_run_state_id)\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states
async
","text":"Reads task runs states for a task run.
Parameters:
Name Type Description Defaultsession
Session
A database session
requiredtask_run_id
UUID
the task run id
requiredReturns:
Type DescriptionList[db.TaskRunState]: the task run states
Source code inprefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_states(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Reads task runs states for a task run.\n\n Args:\n session: A database session\n task_run_id: the task run id\n\n Returns:\n List[db.TaskRunState]: the task run states\n \"\"\"\n\n query = (\n select(db.TaskRunState)\n .filter_by(task_run_id=task_run_id)\n .order_by(db.TaskRunState.timestamp)\n )\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs
","text":"Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs
async
","text":"Count task runs.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_filter
FlowFilter
only count task runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only count task runs whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only count task runs that match these filters
None
deployment_filter
DeploymentFilter
only count task runs whose deployments match these filters
None
Returns:
Type Description
int
count of task runs
Source code in prefect/server/models/task_runs.py
@inject_db\nasync def count_task_runs(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n \"\"\"\n Count task runs.\n\n Args:\n session: a database session\n flow_filter: only count task runs whose flows match these filters\n flow_run_filter: only count task runs whose flow runs match these filters\n task_run_filter: only count task runs that match these filters\n deployment_filter: only count task runs whose deployments match these filters\n Returns:\n int: count of task runs\n \"\"\"\n\n query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n query = await _apply_task_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n db=db,\n )\n\n result = await session.execute(query)\n return result.scalar()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run
async
","text":"Creates a new task run.
If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run
TaskRun
a task run model
requiredReturns:
Type Descriptiondb.TaskRun: the newly-created or existing task run
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def create_task_run(\n session: sa.orm.Session,\n task_run: schemas.core.TaskRun,\n db: PrefectDBInterface,\n orchestration_parameters: dict = None,\n):\n \"\"\"\n Creates a new task run.\n\n If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n the existing task run will be returned. If the provided task run has a state\n attached, it will also be created.\n\n Args:\n session: a database session\n task_run: a task run model\n\n Returns:\n db.TaskRun: the newly-created or existing task run\n \"\"\"\n\n now = pendulum.now(\"UTC\")\n\n # if a dynamic key exists, we need to guard against conflicts\n if task_run.flow_run_id:\n insert_stmt = (\n (await db.insert(db.TaskRun))\n .values(\n created=now,\n **task_run.dict(\n shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n ),\n )\n .on_conflict_do_nothing(\n index_elements=db.task_run_unique_upsert_columns,\n )\n )\n await session.execute(insert_stmt)\n\n query = (\n sa.select(db.TaskRun)\n .where(\n sa.and_(\n db.TaskRun.flow_run_id == task_run.flow_run_id,\n db.TaskRun.task_key == task_run.task_key,\n db.TaskRun.dynamic_key == task_run.dynamic_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n result = await session.execute(query)\n model = result.scalar()\n else:\n # Upsert on (task_key, dynamic_key) application logic.\n query = (\n sa.select(db.TaskRun)\n .where(\n sa.and_(\n db.TaskRun.flow_run_id.is_(None),\n db.TaskRun.task_key == task_run.task_key,\n db.TaskRun.dynamic_key == task_run.dynamic_key,\n )\n )\n .limit(1)\n .execution_options(populate_existing=True)\n )\n\n result = await session.execute(query)\n model = result.scalar()\n\n if model is None:\n model = db.TaskRun(\n created=now,\n **task_run.dict(\n shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n ),\n state=None,\n )\n session.add(model)\n await session.flush()\n\n if model.created == now and task_run.state:\n await models.task_runs.set_task_run_state(\n session=session,\n task_run_id=model.id,\n state=task_run.state,\n force=True,\n orchestration_parameters=orchestration_parameters,\n )\n return model\n
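The (flow_run_id, task_key, dynamic_key) triple acts as the uniqueness key, which a short sketch can make concrete (the key values are illustrative):

from prefect.server import models, schemas

async def record_task_run(session, flow_run_id):
    # A second call with the same task_key and dynamic_key returns the
    # existing row rather than creating a duplicate.
    task_run = schemas.core.TaskRun(
        flow_run_id=flow_run_id,
        task_key="my_task",  # illustrative
        dynamic_key="0",     # e.g. the index of a mapped call
    )
    return await models.task_runs.create_task_run(session=session, task_run=task_run)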
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run
async
","text":"Delete a task run by id.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id to delete
requiredReturns:
Name Type Descriptionbool
bool
whether or not the task run was deleted
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def delete_task_run(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n \"\"\"\n Delete a task run by id.\n\n Args:\n session: a database session\n task_run_id: the task run id to delete\n\n Returns:\n bool: whether or not the task run was deleted\n \"\"\"\n\n result = await session.execute(\n delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n )\n return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run
async
","text":"Read a task run by id.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id
requiredReturns:
Type Descriptiondb.TaskRun: the task run
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def read_task_run(\n session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n \"\"\"\n Read a task run by id.\n\n Args:\n session: a database session\n task_run_id: the task run id\n\n Returns:\n db.TaskRun: the task run\n \"\"\"\n\n model = await session.get(db.TaskRun, task_run_id)\n return model\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs
async
","text":"Read task runs.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredflow_filter
FlowFilter
only select task runs whose flows match these filters
None
flow_run_filter
FlowRunFilter
only select task runs whose flow runs match these filters
None
task_run_filter
TaskRunFilter
only select task runs that match these filters
None
deployment_filter
DeploymentFilter
only select task runs whose deployments match these filters
None
offset
int
Query offset
None
limit
int
Query limit
None
sort
TaskRunSort
Query sort
ID_DESC
Returns:
Type DescriptionList[db.TaskRun]: the task runs
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def read_task_runs(\n session: sa.orm.Session,\n db: PrefectDBInterface,\n flow_filter: schemas.filters.FlowFilter = None,\n flow_run_filter: schemas.filters.FlowRunFilter = None,\n task_run_filter: schemas.filters.TaskRunFilter = None,\n deployment_filter: schemas.filters.DeploymentFilter = None,\n offset: int = None,\n limit: int = None,\n sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n \"\"\"\n Read task runs.\n\n Args:\n session: a database session\n flow_filter: only select task runs whose flows match these filters\n flow_run_filter: only select task runs whose flow runs match these filters\n task_run_filter: only select task runs that match these filters\n deployment_filter: only select task runs whose deployments match these filters\n offset: Query offset\n limit: Query limit\n sort: Query sort\n\n Returns:\n List[db.TaskRun]: the task runs\n \"\"\"\n\n query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n query = await _apply_task_run_filters(\n query,\n flow_filter=flow_filter,\n flow_run_filter=flow_run_filter,\n task_run_filter=task_run_filter,\n deployment_filter=deployment_filter,\n db=db,\n )\n\n if offset is not None:\n query = query.offset(offset)\n\n if limit is not None:\n query = query.limit(limit)\n\n logger.debug(f\"In read_task_runs, query generated is:\\n{query}\")\n result = await session.execute(query)\n return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state
async
","text":"Creates a new orchestrated task run state.
Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead triggers orchestration rules to govern the proposed state
input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A force
flag is supplied to bypass a subset of orchestration logic.
Parameters:
Name Type Description Defaultsession
Session
a database session
requiredtask_run_id
UUID
the task run id
requiredstate
State
a task run state model
requiredforce
bool
if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.
False
Returns:
Type DescriptionOrchestrationResult
OrchestrationResult object
Source code inprefect/server/models/task_runs.py
async def set_task_run_state(\n session: sa.orm.Session,\n task_run_id: UUID,\n state: schemas.states.State,\n force: bool = False,\n task_policy: BaseOrchestrationPolicy = None,\n orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n \"\"\"\n Creates a new orchestrated task run state.\n\n Setting a new state on a run is the one of the principal actions that is governed by\n Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n but instead trigger orchestration rules to govern the proposed `state` input. If\n the state is considered valid, it will be written to the database. Otherwise, a\n it's possible a different state, or no state, will be created. A `force` flag is\n supplied to bypass a subset of orchestration logic.\n\n Args:\n session: a database session\n task_run_id: the task run id\n state: a task run state model\n force: if False, orchestration rules will be applied that may alter or prevent\n the state transition. If True, orchestration rules are not applied.\n\n Returns:\n OrchestrationResult object\n \"\"\"\n\n # load the task run\n run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n if not run:\n raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n initial_state = run.state.as_state() if run.state else None\n initial_state_type = initial_state.type if initial_state else None\n proposed_state_type = state.type if state else None\n intended_transition = (initial_state_type, proposed_state_type)\n\n if run.flow_run_id is None:\n task_policy = AutonomousTaskPolicy # CoreTaskPolicy + prevent `Running` -> `Running` transition\n elif force or task_policy is None:\n task_policy = MinimalTaskPolicy\n\n orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n context = TaskOrchestrationContext(\n session=session,\n run=run,\n initial_state=initial_state,\n proposed_state=state,\n )\n\n if orchestration_parameters is not None:\n context.parameters = orchestration_parameters\n\n # apply orchestration rules and create the new task run state\n async with contextlib.AsyncExitStack() as stack:\n for rule in orchestration_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n for rule in global_rules:\n context = await stack.enter_async_context(\n rule(context, *intended_transition)\n )\n\n await context.validate_proposed_state()\n\n if context.orchestration_error is not None:\n raise context.orchestration_error\n\n result = OrchestrationResult(\n state=context.validated_state,\n status=context.response_status,\n details=context.response_details,\n )\n\n return result\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run
async
","text":"Updates a task run.
Parameters:
Name Type Description Defaultsession
AsyncSession
a database session
requiredtask_run_id
UUID
the task run id to update
requiredtask_run
TaskRunUpdate
a task run model
requiredReturns:
Name Type Descriptionbool
bool
whether or not matching rows were found to update
Source code inprefect/server/models/task_runs.py
@inject_db\nasync def update_task_run(\n session: AsyncSession,\n task_run_id: UUID,\n task_run: schemas.actions.TaskRunUpdate,\n db: PrefectDBInterface,\n) -> bool:\n \"\"\"\n Updates a task run.\n\n Args:\n session: a database session\n task_run_id: the task run id to update\n task_run: a task run model\n\n Returns:\n bool: whether or not matching rows were found to update\n \"\"\"\n update_stmt = (\n sa.update(db.TaskRun)\n .where(db.TaskRun.id == task_run_id)\n # exclude_unset=True allows us to only update values provided by\n # the user, ignoring any defaults on the model\n .values(**task_run.dict(shallow=True, exclude_unset=True))\n )\n result = await session.execute(update_stmt)\n return result.rowcount > 0\n
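A hedged sketch of calling update_task_run: because the update uses exclude_unset=True, only fields explicitly set on the TaskRunUpdate are written (the new name is illustrative; the db argument is injected by the @inject_db decorator):

from prefect.server import models, schemas

async def rename_task_run(session, task_run_id):
    updated = await models.task_runs.update_task_run(
        session=session,
        task_run_id=task_run_id,
        task_run=schemas.actions.TaskRunUpdate(name="renamed-task-run"),
    )
    # False if no matching row was found to update
    return updated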
"},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy
","text":"Orchestration logic that fires on state transitions.
CoreFlowPolicy
and CoreTaskPolicy
contain all default orchestration rules that Prefect enforces on a state transition.
CoreFlowPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against flow-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class CoreFlowPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against flow-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n PreventDuplicateTransitions,\n HandleFlowTerminalStateTransitions,\n EnforceCancellingToCancelledTransition,\n BypassCancellingScheduledFlowRuns,\n PreventPendingTransitions,\n EnsureOnlyScheduledFlowsMarkedLate,\n HandlePausingFlows,\n HandleResumingPausedFlows,\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedFlows,\n ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against task-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class CoreTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against task-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n CacheRetrieval,\n HandleTaskTerminalStateTransitions,\n PreventRunningTasksFromStoppedFlows,\n SecureTaskConcurrencySlots, # retrieve cached states even if slots are full\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedTasks,\n RenameReruns,\n UpdateFlowRunTrackerOnTasks,\n CacheInsertion,\n ReleaseTaskConcurrencySlots,\n ]\n
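For illustration, a custom policy has the same shape as CoreTaskPolicy: a priority() method returning rule classes in the order they should wrap a transition. This trimmed policy is a sketch, not one Prefect ships:

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.core_policy import (
    CacheRetrieval,
    RetryFailedTasks,
)

class TrimmedTaskPolicy(BaseOrchestrationPolicy):
    def priority():
        # Earlier rules form the outermost orchestration contexts
        return [
            CacheRetrieval,
            RetryFailedTasks,
        ]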
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AutonomousTaskPolicy","title":"AutonomousTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Orchestration rules that run against task-run-state transitions in priority order.
Source code inprefect/server/orchestration/core_policy.py
class AutonomousTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Orchestration rules that run against task-run-state transitions in priority order.\n \"\"\"\n\n def priority():\n return [\n PreventPendingTransitions,\n CacheRetrieval,\n HandleTaskTerminalStateTransitions,\n SecureTaskConcurrencySlots, # retrieve cached states even if slots are full\n CopyScheduledTime,\n WaitForScheduledTime,\n RetryFailedTasks,\n RenameReruns,\n UpdateFlowRunTrackerOnTasks,\n CacheInsertion,\n ReleaseTaskConcurrencySlots,\n EnqueueScheduledTasks,\n ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots
","text":" Bases: BaseOrchestrationRule
Checks that relevant concurrency slots are available before entering a Running state.
This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.
Source code inprefect/server/orchestration/core_policy.py
class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n \"\"\"\n Checks relevant concurrency slots are available before entering a Running state.\n\n This rule checks if concurrency limits have been set on the tags associated with a\n TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n before being allowed to transition into a running state. If a concurrency limit has\n been reached, the client will be instructed to delay the transition for the duration\n specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting\n before trying again. If the concurrency limit set on a tag is 0, the transition will\n be aborted to prevent deadlocks.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n self._applied_limits = []\n filtered_limits = (\n await concurrency_limits.filter_concurrency_limits_for_orchestration(\n context.session, tags=context.run.tags\n )\n )\n run_limits = {limit.tag: limit for limit in filtered_limits}\n for tag, cl in run_limits.items():\n limit = cl.concurrency_limit\n if limit == 0:\n # limits of 0 will deadlock, and the transition needs to abort\n for stale_tag in self._applied_limits:\n stale_limit = run_limits.get(stale_tag, None)\n active_slots = set(stale_limit.active_slots)\n active_slots.discard(str(context.run.id))\n stale_limit.active_slots = list(active_slots)\n\n await self.abort_transition(\n reason=(\n f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n \" if the task tries to run again.\"\n ),\n )\n elif len(cl.active_slots) >= limit:\n # if the limit has already been reached, delay the transition\n for stale_tag in self._applied_limits:\n stale_limit = run_limits.get(stale_tag, None)\n active_slots = set(stale_limit.active_slots)\n active_slots.discard(str(context.run.id))\n stale_limit.active_slots = list(active_slots)\n\n await self.delay_transition(\n PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS.value(),\n f\"Concurrency limit for the {tag} tag has been reached\",\n )\n else:\n # log the TaskRun ID to active_slots\n self._applied_limits.append(tag)\n active_slots = set(cl.active_slots)\n active_slots.add(str(context.run.id))\n cl.active_slots = list(active_slots)\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n for tag in self._applied_limits:\n cl = await concurrency_limits.read_concurrency_limit_by_tag(\n context.session, tag\n )\n active_slots = set(cl.active_slots)\n active_slots.discard(str(context.run.id))\n cl.active_slots = list(active_slots)\n
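The tag concurrency limits this rule enforces can be created ahead of time with the client API; a hedged example (the tag name and limit value are illustrative):

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        # Tasks tagged "database" will hold at most 2 concurrent slots
        await client.create_concurrency_limit(
            tag="database", concurrency_limit=2
        )

asyncio.run(main())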
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots
","text":" Bases: BaseUniversalTransform
Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.
Source code inprefect/server/orchestration/core_policy.py
class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n \"\"\"\n Releases any concurrency slots held by a run upon exiting a Running or\n Cancelling state.\n \"\"\"\n\n async def after_transition(\n self,\n context: OrchestrationContext,\n ):\n if self.nullified_transition():\n return\n\n if context.validated_state and context.validated_state.type not in [\n states.StateType.RUNNING,\n states.StateType.CANCELLING,\n ]:\n filtered_limits = (\n await concurrency_limits.filter_concurrency_limits_for_orchestration(\n context.session, tags=context.run.tags\n )\n )\n run_limits = {limit.tag: limit for limit in filtered_limits}\n for tag, cl in run_limits.items():\n active_slots = set(cl.active_slots)\n active_slots.discard(str(context.run.id))\n cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AddUnknownResult","title":"AddUnknownResult
","text":" Bases: BaseOrchestrationRule
Assign an \"unknown\" result to runs that are forced to complete from a failed or crashed state, if the previous state used a persisted result.
When we retry a flow run, we retry any task runs that were in a failed or crashed state, but we also retry completed task runs that didn't use a persisted result. This means that without a sentinel value for unknown results, a task run forced into Completed state will always get rerun if the flow run retries because the task run lacks a persisted result. The \"unknown\" sentinel ensures that when we see a completed task run with an unknown result, we know that it was forced to complete and we shouldn't rerun it.
Flow runs forced into a Completed state have a similar problem: without a sentinel value, attempting to refer to the flow run's result will raise an exception because the flow run has no result. The sentinel ensures that we can distinguish between a flow run that has no result and a flow run that has an unknown result.
Source code inprefect/server/orchestration/core_policy.py
class AddUnknownResult(BaseOrchestrationRule):\n \"\"\"\n Assign an \"unknown\" result to runs that are forced to complete from a\n failed or crashed state, if the previous state used a persisted result.\n\n When we retry a flow run, we retry any task runs that were in a failed or\n crashed state, but we also retry completed task runs that didn't use a\n persisted result. This means that without a sentinel value for unknown\n results, a task run forced into Completed state will always get rerun if the\n flow run retries because the task run lacks a persisted result. The\n \"unknown\" sentinel ensures that when we see a completed task run with an\n unknown result, we know that it was forced to complete and we shouldn't\n rerun it.\n\n Flow runs forced into a Completed state have a similar problem: without a\n sentinel value, attempting to refer to the flow run's result will raise an\n exception because the flow run has no result. The sentinel ensures that we\n can distinguish between a flow run that has no result and a flow run that\n has an unknown result.\n \"\"\"\n\n FROM_STATES = [StateType.FAILED, StateType.CRASHED]\n TO_STATES = [StateType.COMPLETED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if (\n initial_state\n and initial_state.data\n and initial_state.data.get(\"type\") == \"reference\"\n ):\n unknown_result = await UnknownResult.create()\n self.context.proposed_state.data = unknown_result.dict()\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion
","text":" Bases: BaseOrchestrationRule
Caches completed states with cache keys after they are validated.
Source code inprefect/server/orchestration/core_policy.py
class CacheInsertion(BaseOrchestrationRule):\n \"\"\"\n Caches completed states with cache keys after they are validated.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.COMPLETED]\n\n @inject_db\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n db: PrefectDBInterface,\n ) -> None:\n if not validated_state or not context.session:\n return\n\n cache_key = validated_state.state_details.cache_key\n if cache_key:\n new_cache_item = db.TaskRunStateCache(\n cache_key=cache_key,\n cache_expiration=validated_state.state_details.cache_expiration,\n task_run_state_id=validated_state.id,\n )\n context.session.add(new_cache_item)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval
","text":" Bases: BaseOrchestrationRule
Rejects running states if a completed state has been cached.
This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.
Source code inprefect/server/orchestration/core_policy.py
class CacheRetrieval(BaseOrchestrationRule):\n \"\"\"\n Rejects running states if a completed state has been cached.\n\n This rule rejects transitions into a running state with a cache key if the key\n has already been associated with a completed state in the cache table. The client\n will be instructed to transition into the cached completed state instead.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n @inject_db\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n db: PrefectDBInterface,\n ) -> None:\n cache_key = proposed_state.state_details.cache_key\n if cache_key and not proposed_state.state_details.refresh_cache:\n # Check for cached states matching the cache key\n cached_state_id = (\n select(db.TaskRunStateCache.task_run_state_id)\n .where(\n sa.and_(\n db.TaskRunStateCache.cache_key == cache_key,\n sa.or_(\n db.TaskRunStateCache.cache_expiration.is_(None),\n db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n ),\n ),\n )\n .order_by(db.TaskRunStateCache.created.desc())\n .limit(1)\n ).scalar_subquery()\n query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n cached_state = (await context.session.execute(query)).scalar()\n if cached_state:\n new_state = cached_state.as_state().copy(reset_fields=True)\n new_state.name = \"Cached\"\n await self.reject_transition(\n state=new_state, reason=\"Retrieved state from cache\"\n )\n
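These cache rules fire for tasks that set a cache key; a small example using the built-in task_input_hash (the expiration window is illustrative):

from datetime import timedelta
from prefect import task
from prefect.tasks import task_input_hash

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def expensive(x: int) -> int:
    # A repeat call with the same inputs inside the expiration window
    # is rejected into the cached Completed state by CacheRetrieval.
    return x * 2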
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows
","text":" Bases: BaseOrchestrationRule
Rejects failed states and schedules a retry if the retry limit has not been reached.
This rule rejects transitions into a failed state if retries
has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.
prefect/server/orchestration/core_policy.py
class RetryFailedFlows(BaseOrchestrationRule):\n \"\"\"\n Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n This rule rejects transitions into a failed state if `retries` has been\n set and the run count has not reached the specified limit. The client will be\n instructed to transition into a scheduled state to retry flow execution.\n \"\"\"\n\n FROM_STATES = [StateType.RUNNING]\n TO_STATES = [StateType.FAILED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n run_settings = context.run_settings\n run_count = context.run.run_count\n\n if run_settings.retries is None or run_count > run_settings.retries:\n return # Retry count exceeded, allow transition to failed\n\n scheduled_start_time = pendulum.now(\"UTC\").add(\n seconds=run_settings.retry_delay or 0\n )\n\n # support old-style flow run retries for older clients\n # older flow retries require us to loop over failed tasks to update their state\n # this is not required after API version 0.8.3\n api_version = context.parameters.get(\"api-version\", None)\n if api_version and api_version < Version(\"0.8.3\"):\n failed_task_runs = await models.task_runs.read_task_runs(\n context.session,\n flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n task_run_filter=filters.TaskRunFilter(\n state={\"type\": {\"any_\": [\"FAILED\"]}}\n ),\n )\n for run in failed_task_runs:\n await models.task_runs.set_task_run_state(\n context.session,\n run.id,\n state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n force=True,\n )\n # Reset the run count so that the task run retries still work correctly\n run.run_count = 0\n\n # Reset pause metadata on retry\n # Pauses as a concept only exist after API version 0.8.4\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n updated_policy[\"pause_keys\"] = set()\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n # Generate a new state for the flow\n retry_state = states.AwaitingRetry(\n scheduled_time=scheduled_start_time,\n message=proposed_state.message,\n data=proposed_state.data,\n )\n await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
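Client-side, this rule backs the retry settings on a flow; a minimal example (the values are illustrative):

from prefect import flow

@flow(retries=2, retry_delay_seconds=30)
def my_flow():
    # A Failed proposal is rejected into AwaitingRetry until the
    # run count exceeds `retries`.
    ...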
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks
","text":" Bases: BaseOrchestrationRule
Rejects failed states and schedules a retry if the retry limit has not been reached.
This rule rejects transitions into a failed state if retries
has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution.
prefect/server/orchestration/core_policy.py
class RetryFailedTasks(BaseOrchestrationRule):\n \"\"\"\n Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n This rule rejects transitions into a failed state if `retries` has been\n set, the run count has not reached the specified limit, and the client\n asserts it is a retriable task run. The client will be instructed to\n transition into a scheduled state to retry task execution.\n \"\"\"\n\n FROM_STATES = [StateType.RUNNING]\n TO_STATES = [StateType.FAILED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n run_settings = context.run_settings\n run_count = context.run.run_count\n delay = run_settings.retry_delay\n\n if isinstance(delay, list):\n base_delay = delay[min(run_count - 1, len(delay) - 1)]\n else:\n base_delay = run_settings.retry_delay or 0\n\n # guard against negative relative jitter inputs\n if run_settings.retry_jitter_factor:\n delay = clamped_poisson_interval(\n base_delay, clamping_factor=run_settings.retry_jitter_factor\n )\n else:\n delay = base_delay\n\n # set by user to conditionally retry a task using @task(retry_condition_fn=...)\n if getattr(proposed_state.state_details, \"retriable\", True) is False:\n return\n\n if run_settings.retries is not None and run_count <= run_settings.retries:\n retry_state = states.AwaitingRetry(\n scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n message=proposed_state.message,\n data=proposed_state.data,\n )\n await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
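Correspondingly, task-level retry settings drive this rule; a hedged example showing a per-attempt delay list and jitter (the values are illustrative):

from prefect import task

@task(retries=3, retry_delay_seconds=[2, 10, 60], retry_jitter_factor=0.5)
def flaky_task():
    # Each retry waits for the matching entry in the delay list,
    # jittered by the clamping factor.
    ...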
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnqueueScheduledTasks","title":"EnqueueScheduledTasks
","text":" Bases: BaseOrchestrationRule
Enqueues autonomous task runs when they are scheduled
Source code inprefect/server/orchestration/core_policy.py
class EnqueueScheduledTasks(BaseOrchestrationRule):\n \"\"\"\n Enqueues autonomous task runs when they are scheduled\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.SCHEDULED]\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n # Only if task scheduling is enabled\n return\n\n if not validated_state:\n # Only if the transition was valid\n return\n\n if context.run.flow_run_id:\n # Only for autonomous tasks\n return\n\n task_run: TaskRun = TaskRun.from_orm(context.run)\n queue = TaskQueue.for_key(task_run.task_key)\n\n if validated_state.name == \"AwaitingRetry\":\n await queue.retry(task_run)\n else:\n await queue.enqueue(task_run)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns
","text":" Bases: BaseOrchestrationRule
Renames the proposed state if the run has executed more than once.
In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.
Source code inprefect/server/orchestration/core_policy.py
class RenameReruns(BaseOrchestrationRule):\n \"\"\"\n Name the states if they have run more than once.\n\n In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n the proposed state will be renamed to \"Retrying\" instead.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n run_count = context.run.run_count\n if run_count > 0:\n if initial_state.name == \"AwaitingRetry\":\n await self.rename_state(\"Retrying\")\n else:\n await self.rename_state(\"Rerunning\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime
","text":" Bases: BaseOrchestrationRule
Ensures scheduled time is copied from scheduled states to pending states.
If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.
Source code inprefect/server/orchestration/core_policy.py
class CopyScheduledTime(BaseOrchestrationRule):\n \"\"\"\n Ensures scheduled time is copied from scheduled states to pending states.\n\n If a new scheduled time has been proposed on the pending state, the scheduled time\n on the scheduled state will be ignored.\n \"\"\"\n\n FROM_STATES = [StateType.SCHEDULED]\n TO_STATES = [StateType.PENDING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if not proposed_state.state_details.scheduled_time:\n proposed_state.state_details.scheduled_time = (\n initial_state.state_details.scheduled_time\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime
","text":" Bases: BaseOrchestrationRule
Prevents transitions to running states from happening too early.
This rule enforces that scheduled states only begin running according to the machine clock used by the Prefect REST API instance. Transitions from scheduled states that arrive too early are nullified: no state is written to the database, and the client is sent an instruction to wait for delay_seconds
before attempting the transition again.
prefect/server/orchestration/core_policy.py
class WaitForScheduledTime(BaseOrchestrationRule):\n \"\"\"\n Prevents transitions to running states from happening to early.\n\n This rule enforces that all scheduled states will only start with the machine clock\n used by the Prefect REST API instance. This rule will identify transitions from scheduled\n states that are too early and nullify them. Instead, no state will be written to the\n database and the client will be sent an instruction to wait for `delay_seconds`\n before attempting the transition again.\n \"\"\"\n\n FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n scheduled_time = initial_state.state_details.scheduled_time\n if not scheduled_time:\n return\n\n # At this moment, we round delay to the nearest second as the API schema\n # specifies an integer return value.\n delay = scheduled_time - pendulum.now(\"UTC\")\n delay_seconds = delay.in_seconds()\n delay_seconds += round(delay.microseconds / 1e6)\n if delay_seconds > 0:\n await self.delay_transition(\n delay_seconds, reason=\"Scheduled time is in the future\"\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows
","text":" Bases: BaseOrchestrationRule
Governs runs attempting to enter a Paused/Suspended state
Source code inprefect/server/orchestration/core_policy.py
class HandlePausingFlows(BaseOrchestrationRule):\n \"\"\"\n Governs runs attempting to enter a Paused/Suspended state\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.PAUSED]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n if initial_state is None:\n await self.abort_transition(f\"Cannot {verb} flows with no state.\")\n return\n\n if not initial_state.is_running():\n await self.reject_transition(\n state=None,\n reason=f\"Cannot {verb} flows that are not currently running.\",\n )\n return\n\n self.key = proposed_state.state_details.pause_key\n if self.key is None:\n # if no pause key is provided, default to a UUID\n self.key = str(uuid4())\n\n if self.key in context.run.empirical_policy.pause_keys:\n await self.reject_transition(\n state=None, reason=f\"This {verb} has already fired.\"\n )\n return\n\n if proposed_state.state_details.pause_reschedule:\n if context.run.parent_task_run_id:\n await self.abort_transition(\n reason=f\"Cannot {verb} subflows.\",\n )\n return\n\n if context.run.deployment_id is None:\n await self.abort_transition(\n reason=f\"Cannot {verb} flows without a deployment.\",\n )\n return\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"pause_keys\"].add(self.key)\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
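This rule governs the Paused state proposed by pause_flow_run; a minimal sketch, assuming the function is importable from the top-level prefect package as in recent releases (the timeout is illustrative):

from prefect import flow, pause_flow_run

@flow
def my_flow():
    # Proposes a Paused state; HandlePausingFlows verifies the run is
    # currently Running and that this pause key has not already fired.
    pause_flow_run(timeout=600)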
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows
","text":" Bases: BaseOrchestrationRule
Governs runs attempting to leave a Paused state
Source code inprefect/server/orchestration/core_policy.py
class HandleResumingPausedFlows(BaseOrchestrationRule):\n \"\"\"\n Governs runs attempting to leave a Paused state\n \"\"\"\n\n FROM_STATES = [StateType.PAUSED]\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if not (\n proposed_state.is_running()\n or proposed_state.is_scheduled()\n or proposed_state.is_final()\n ):\n await self.reject_transition(\n state=None,\n reason=(\n f\"This run cannot transition to the {proposed_state.type} state\"\n f\" from the {initial_state.type} state.\"\n ),\n )\n return\n\n verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n if initial_state.state_details.pause_reschedule:\n if not context.run.deployment_id:\n await self.reject_transition(\n state=None,\n reason=(\n f\"Cannot reschedule a {proposed_state.name.lower()} flow run\"\n \" without a deployment.\"\n ),\n )\n return\n pause_timeout = initial_state.state_details.pause_timeout\n if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n pause_timeout_failure = states.Failed(\n message=(\n f\"The flow was {proposed_state.name.lower()} and never resumed.\"\n ),\n )\n await self.reject_transition(\n state=pause_timeout_failure,\n reason=f\"The flow run {verb} has timed out and can no longer resume.\",\n )\n return\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = True\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
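Resuming is typically initiated client-side with resume_flow_run; a hedged one-liner (the flow run id is illustrative):

from prefect import resume_flow_run

# Proposes a transition out of Paused; this rule checks the pause
# timeout and reschedule requirements before allowing it.
resume_flow_run("a-flow-run-id")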
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks
","text":" Bases: BaseOrchestrationRule
Tracks the flow run attempt a task run state is associated with.
Source code inprefect/server/orchestration/core_policy.py
class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n \"\"\"\n Tracks the flow run attempt a task run state is associated with.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n if context.run.flow_run_id is not None:\n self.flow_run = await context.flow_run()\n if self.flow_run:\n context.run.flow_run_run_count = self.flow_run.run_count\n else:\n raise ObjectNotFoundError(\n (\n \"Unable to read flow run associated with task run:\"\n f\" {context.run.id}, this flow run might have been deleted\"\n ),\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions
","text":" Bases: BaseOrchestrationRule
We do not allow tasks to leave terminal states if:
- The task is completed and has a persisted result
- The task is going to CANCELLING / PAUSED / CRASHED
We reset the run count when a task leaves a terminal state for a non-terminal state, which resets task run retries; this is particularly relevant for flow run retries.
Source code inprefect/server/orchestration/core_policy.py
class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n \"\"\"\n We do not allow tasks to leave terminal states if:\n - The task is completed and has a persisted result\n - The task is going to CANCELLING / PAUSED / CRASHED\n\n We reset the run count when a task leaves a terminal state for a non-terminal state\n which resets task run retries; this is particularly relevant for flow run retries.\n \"\"\"\n\n FROM_STATES = TERMINAL_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n self.original_run_count = context.run.run_count\n\n # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n if proposed_state.type in {\n StateType.CANCELLING,\n StateType.PAUSED,\n StateType.CRASHED,\n }:\n await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n return\n\n # Only allow departure from a happily completed state if the result is not persisted\n if (\n initial_state.is_completed()\n and initial_state.data\n and initial_state.data.get(\"type\") != \"unpersisted\"\n ):\n await self.reject_transition(None, \"This run is already completed.\")\n return\n\n if not proposed_state.is_final():\n # Reset run count to reset retries\n context.run.run_count = 0\n\n # Change the name of the state to retrying if its a flow run retry\n if proposed_state.is_running() and context.run.flow_run_id is not None:\n self.flow_run = await context.flow_run()\n flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n if flow_retrying:\n await self.rename_state(\"Retrying\")\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ):\n # reset run count\n context.run.run_count = self.original_run_count\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions
","text":" Bases: BaseOrchestrationRule
We do not allow flows to leave terminal states if:
- The flow is completed and has a persisted result
- The flow is going to CANCELLING / PAUSED / CRASHED
- The flow is going to scheduled and has no deployment
We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.
Source code inprefect/server/orchestration/core_policy.py
class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n \"\"\"\n We do not allow flows to leave terminal states if:\n - The flow is completed and has a persisted result\n - The flow is going to CANCELLING / PAUSED / CRASHED\n - The flow is going to scheduled and has no deployment\n\n We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n state. This resets pause behavior during manual flow run retries.\n \"\"\"\n\n FROM_STATES = TERMINAL_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n self.original_flow_policy = context.run.empirical_policy.dict()\n\n # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n if proposed_state.type in {\n StateType.CANCELLING,\n StateType.PAUSED,\n StateType.CRASHED,\n }:\n await self.abort_transition(\n f\"Run is already in terminal state {initial_state.type.value}.\"\n )\n return\n\n # Only allow departure from a happily completed state if the result is not\n # persisted and the a rerun is being proposed\n if (\n initial_state.is_completed()\n and not proposed_state.is_final()\n and initial_state.data\n and initial_state.data.get(\"type\") != \"unpersisted\"\n ):\n await self.reject_transition(None, \"Run is already COMPLETED.\")\n return\n\n # Do not allows runs to be rescheduled without a deployment\n if proposed_state.is_scheduled() and not context.run.deployment_id:\n await self.abort_transition(\n \"Cannot reschedule a run without an associated deployment.\"\n )\n return\n\n if not proposed_state.is_final():\n # Reset pause metadata when leaving a terminal state\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n updated_policy[\"pause_keys\"] = set()\n context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ):\n context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions
","text":" Bases: BaseOrchestrationRule
Prevents transitions to PENDING.
This rule is only used for flow runs.
This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.
Similarly, if the execution environment starts quickly, the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state, a worker should not attempt to submit it again unless it has moved into a terminal state.
CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.
Source code inprefect/server/orchestration/core_policy.py
class PreventPendingTransitions(BaseOrchestrationRule):\n \"\"\"\n Prevents transitions to PENDING.\n\n This rule is only used for flow runs.\n\n This is intended to prevent race conditions during duplicate submissions of runs.\n Before a run is submitted to its execution environment, it should be placed in a\n PENDING state. If two workers attempt to submit the same run, one of them should\n encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n Similarly, if the execution environment starts quickly the run may be in a RUNNING\n state when the second worker attempts the PENDING transition. We deny these state\n changes as well to prevent duplicate submission. If a run has transitioned to a\n RUNNING state a worker should not attempt to submit it again unless it has moved\n into a terminal state.\n\n CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n For re-runs of deployed runs, they should transition to SCHEDULED first.\n For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n \"\"\"\n\n FROM_STATES = [\n StateType.PENDING,\n StateType.CANCELLING,\n StateType.RUNNING,\n StateType.CANCELLED,\n ]\n TO_STATES = [StateType.PENDING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n await self.abort_transition(\n reason=(\n f\"This run is in a {initial_state.type.name} state and cannot\"\n \" transition to a PENDING state.\"\n )\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows
","text":" Bases: BaseOrchestrationRule
Prevents running tasks from stopped flows.
A running state implies execution, and the converse holds as well: execution implies a running state. This rule ensures that a flow's tasks cannot be run unless the flow is also running.
Source code inprefect/server/orchestration/core_policy.py
class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n \"\"\"\n Prevents running tasks from stopped flows.\n\n A running state implies execution, but also the converse. This rule ensures that a\n flow's tasks cannot be run unless the flow is also running.\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = [StateType.RUNNING]\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n flow_run = await context.flow_run()\n if flow_run is not None:\n if flow_run.state is None:\n await self.abort_transition(\n reason=\"The enclosing flow must be running to begin task execution.\"\n )\n elif flow_run.state.type == StateType.PAUSED:\n # Use the flow run's Paused state details to preserve data like\n # timeouts.\n paused_state = states.Paused(\n name=\"NotReady\",\n pause_expiration_time=flow_run.state.state_details.pause_timeout,\n reschedule=flow_run.state.state_details.pause_reschedule,\n )\n await self.reject_transition(\n state=paused_state,\n reason=(\n \"The flow is paused, new tasks can execute after resuming flow\"\n f\" run: {flow_run.id}.\"\n ),\n )\n elif not flow_run.state.type == StateType.RUNNING:\n # task runners should abort task run execution\n await self.abort_transition(\n reason=(\n \"The enclosing flow must be running to begin task execution.\"\n ),\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition
","text":" Bases: BaseOrchestrationRule
Rejects transitions from Cancelling to any terminal state except for Cancelled.
Source code inprefect/server/orchestration/core_policy.py
class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n \"\"\"\n Rejects transitions from Cancelling to any terminal state except for Cancelled.\n \"\"\"\n\n FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: TaskOrchestrationContext,\n ) -> None:\n await self.reject_transition(\n state=None,\n reason=(\n \"Cannot transition flows that are cancelling to a state other \"\n \"than Cancelled.\"\n ),\n )\n return\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingScheduledFlowRuns","title":"BypassCancellingScheduledFlowRuns
","text":" Bases: BaseOrchestrationRule
Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID.
The Cancelling
state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled
. Runs that are AwaitingRetry
are a Scheduled
state that may have associated infrastructure.
prefect/server/orchestration/core_policy.py
class BypassCancellingScheduledFlowRuns(BaseOrchestrationRule):\n \"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n if the flow run has no associated infrastructure process ID.\n\n The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n a `Scheduled` state that may have associated infrastructure.\n \"\"\"\n\n FROM_STATES = {StateType.SCHEDULED}\n TO_STATES = {StateType.CANCELLING}\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: FlowOrchestrationContext,\n ) -> None:\n if not context.run.infrastructure_pid:\n await self.reject_transition(\n state=states.Cancelled(),\n reason=\"Scheduled flow run has no infrastructure to terminate.\",\n )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventDuplicateTransitions","title":"PreventDuplicateTransitions
","text":" Bases: BaseOrchestrationRule
Prevent duplicate transitions from being made right after one another.
This rule allows clients to set an optional transition_id on a state. If the run's next transition has the same transition_id, the transition will be rejected and the existing state will be returned.
This allows clients to make state transition requests without worrying about the following case:
- A client makes a state transition request
- The server accepts and commits the transition
- The client fails to receive the response and retries the request
Source code inprefect/server/orchestration/core_policy.py
class PreventDuplicateTransitions(BaseOrchestrationRule):\n \"\"\"\n Prevent duplicate transitions from being made right after one another.\n\n This rule allows for clients to set an optional transition_id on a state. If the\n run's next transition has the same transition_id, the transition will be\n rejected and the existing state will be returned.\n\n This allows for clients to make state transition requests without worrying about\n the following case:\n - A client making a state transition request\n - The server accepts transition and commits the transition\n - The client is unable to receive the response and retries the request\n \"\"\"\n\n FROM_STATES = ALL_ORCHESTRATION_STATES\n TO_STATES = ALL_ORCHESTRATION_STATES\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n if (\n initial_state is None\n or proposed_state is None\n or initial_state.state_details is None\n or proposed_state.state_details is None\n ):\n return\n\n initial_transition_id = getattr(\n initial_state.state_details, \"transition_id\", None\n )\n proposed_transition_id = getattr(\n proposed_state.state_details, \"transition_id\", None\n )\n if (\n initial_transition_id is not None\n and proposed_transition_id is not None\n and initial_transition_id == proposed_transition_id\n ):\n await self.reject_transition(\n # state=None will return the initial (current) state\n state=None,\n reason=\"This run has already made this state transition.\",\n )\n
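A hedged sketch of the client-side idea: reuse the same transition_id when retrying a state-transition request so a replayed request is rejected rather than applied twice. The field assignment mirrors the getattr lookups in the rule above; whether transition_id is a declared StateDetails field may vary by version:

from uuid import uuid4
from prefect.server.schemas import states

state = states.Completed()
# Reusing this exact id on a retried request causes the second
# transition to be rejected, returning the existing state.
state.state_details.transition_id = uuid4()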
"},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy
","text":"Bookkeeping logic that fires on every state transition.
For clarity, GlobalFlowPolicy
and GlobalTaskPolicy
contain all transition logic implemented using BaseUniversalTransform
. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.
GlobalFlowPolicy
","text":" Bases: BaseOrchestrationPolicy
Global transforms that run against flow-run-state transitions in priority order.
These transforms are intended to run immediately before and after a state transition is validated.
Source code inprefect/server/orchestration/global_policy.py
class GlobalFlowPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Global transforms that run against flow-run-state transitions in priority order.\n\n These transforms are intended to run immediately before and after a state transition\n is validated.\n \"\"\"\n\n def priority():\n return COMMON_GLOBAL_TRANSFORMS() + [\n UpdateSubflowParentTask,\n UpdateSubflowStateDetails,\n IncrementFlowRunCount,\n RemoveResumingIndicator,\n ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy
","text":" Bases: BaseOrchestrationPolicy
Global transforms that run against task-run-state transitions in priority order.
These transforms are intended to run immediately before and after a state transition is validated.
Source code inprefect/server/orchestration/global_policy.py
class GlobalTaskPolicy(BaseOrchestrationPolicy):\n \"\"\"\n Global transforms that run against task-run-state transitions in priority order.\n\n These transforms are intended to run immediately before and after a state transition\n is validated.\n \"\"\"\n\n def priority():\n return COMMON_GLOBAL_TRANSFORMS() + [\n IncrementTaskRunCount,\n ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType
","text":" Bases: BaseUniversalTransform
Updates the state type of a run on a state transition.
Source code inprefect/server/orchestration/global_policy.py
class SetRunStateType(BaseUniversalTransform):\n \"\"\"\n Updates the state type of a run on a state transition.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's type\n context.run.state_type = context.proposed_state.type\n
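The transforms in this module all share the shape shown above; a sketch of a custom BaseUniversalTransform that only observes transitions (illustrative only, not a transform Prefect ships):

import logging
from prefect.server.orchestration.rules import (
    BaseUniversalTransform,
    OrchestrationContext,
)

logger = logging.getLogger(__name__)

class LogTransitions(BaseUniversalTransform):
    async def before_transition(self, context: OrchestrationContext) -> None:
        if self.nullified_transition():
            # a nullified transition writes no state, so skip bookkeeping
            return
        logger.debug(
            "Run %s proposing state %s",
            context.run.id,
            context.proposed_state.type,
        )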
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName
","text":" Bases: BaseUniversalTransform
Updates the state name of a run on a state transition.
Source code inprefect/server/orchestration/global_policy.py
class SetRunStateName(BaseUniversalTransform):\n \"\"\"\n Updates the state name of a run on a state transition.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's name\n context.run.state_name = context.proposed_state.name\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime
","text":" Bases: BaseUniversalTransform
Records the time a run enters a running state for the first time.
Source code inprefect/server/orchestration/global_policy.py
class SetStartTime(BaseUniversalTransform):\n \"\"\"\n Records the time a run enters a running state for the first time.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state and no start time is set...\n if context.proposed_state.is_running() and context.run.start_time is None:\n # set the start time\n context.run.start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp
","text":" Bases: BaseUniversalTransform
Records the time a run changes states.
Source code inprefect/server/orchestration/global_policy.py
class SetRunStateTimestamp(BaseUniversalTransform):\n \"\"\"\n Records the time a run changes states.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # record the new state's timestamp\n context.run.state_timestamp = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime
","text":" Bases: BaseUniversalTransform
Records the time a run enters a terminal state.
With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. While leaving a terminal state, the end time will be unset.
Source code inprefect/server/orchestration/global_policy.py
class SetEndTime(BaseUniversalTransform):\n \"\"\"\n Records the time a run enters a terminal state.\n\n With normal client usage, a run will not transition out of a terminal state.\n However, it's possible to force these transitions manually via the API. While\n leaving a terminal state, the end time will be unset.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if exiting a final state for a non-final state...\n if (\n context.initial_state\n and context.initial_state.is_final()\n and not context.proposed_state.is_final()\n ):\n # clear the end time\n context.run.end_time = None\n\n # if entering a final state...\n if context.proposed_state.is_final():\n # if the run has a start time and no end time, give it one\n if context.run.start_time and not context.run.end_time:\n context.run.end_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime
","text":" Bases: BaseUniversalTransform
Records the amount of time a run spends in the running state.
Source code inprefect/server/orchestration/global_policy.py
class IncrementRunTime(BaseUniversalTransform):\n \"\"\"\n Records the amount of time a run spends in the running state.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if exiting a running state...\n if context.initial_state and context.initial_state.is_running():\n # increment the run time by the time spent in the previous state\n context.run.total_run_time += (\n context.proposed_state.timestamp - context.initial_state.timestamp\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount
","text":" Bases: BaseUniversalTransform
Records the number of times a run enters a running state. For use with retries.
Source code inprefect/server/orchestration/global_policy.py
class IncrementFlowRunCount(BaseUniversalTransform):\n \"\"\"\n Records the number of times a run enters a running state. For use with retries.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state...\n if context.proposed_state.is_running():\n # do not increment the run count if resuming a paused flow\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n if context.run.empirical_policy.resuming:\n return\n\n # increment the run count\n context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator
","text":" Bases: BaseUniversalTransform
Removes the indicator on a flow run that marks it as resuming.
Source code inprefect/server/orchestration/global_policy.py
class RemoveResumingIndicator(BaseUniversalTransform):\n \"\"\"\n Removes the indicator on a flow run that marks it as resuming.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n proposed_state = context.proposed_state\n\n api_version = context.parameters.get(\"api-version\", None)\n if api_version is None or api_version >= Version(\"0.8.4\"):\n if proposed_state.is_running() or proposed_state.is_final():\n if context.run.empirical_policy.resuming:\n updated_policy = context.run.empirical_policy.dict()\n updated_policy[\"resuming\"] = False\n context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount
","text":" Bases: BaseUniversalTransform
Records the number of times a run enters a running state. For use with retries.
Source code inprefect/server/orchestration/global_policy.py
class IncrementTaskRunCount(BaseUniversalTransform):\n \"\"\"\n Records the number of times a run enters a running state. For use with retries.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # if entering a running state...\n if context.proposed_state.is_running():\n # increment the run count\n context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime
","text":" Bases: BaseUniversalTransform
Estimates the time a state is expected to start running if not set.
For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.
Source code inprefect/server/orchestration/global_policy.py
class SetExpectedStartTime(BaseUniversalTransform):\n \"\"\"\n Estimates the time a state is expected to start running if not set.\n\n For scheduled states, this estimate is simply the scheduled time. For other states,\n this is set to the time the proposed state was created by Prefect.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # set expected start time if this is the first state\n if not context.run.expected_start_time:\n if context.proposed_state.is_scheduled():\n context.run.expected_start_time = (\n context.proposed_state.state_details.scheduled_time\n )\n else:\n context.run.expected_start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime
","text":" Bases: BaseUniversalTransform
Records the scheduled time on a run.
When a run enters a scheduled state, run.next_scheduled_start_time
is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time
is unset.
Source code in prefect/server/orchestration/global_policy.py
class SetNextScheduledStartTime(BaseUniversalTransform):\n \"\"\"\n Records the scheduled time on a run.\n\n When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n the state's scheduled time. When leaving a scheduled state,\n `run.next_scheduled_start_time` is unset.\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # remove the next scheduled start time if exiting a scheduled state\n if context.initial_state and context.initial_state.is_scheduled():\n context.run.next_scheduled_start_time = None\n\n # set next scheduled start time if entering a scheduled state\n if context.proposed_state.is_scheduled():\n context.run.next_scheduled_start_time = (\n context.proposed_state.state_details.scheduled_time\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask
","text":" Bases: BaseUniversalTransform
Whenever a subflow changes state, it must update its parent task run's state.
Source code in prefect/server/orchestration/global_policy.py
class UpdateSubflowParentTask(BaseUniversalTransform):\n \"\"\"\n Whenever a subflow changes state, it must update its parent task run's state.\n \"\"\"\n\n async def after_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # only applies to flow runs with a parent task run id\n if context.run.parent_task_run_id is not None:\n # avoid mutation of the flow run state\n subflow_parent_task_state = context.validated_state.copy(\n reset_fields=True,\n include={\n \"type\",\n \"timestamp\",\n \"name\",\n \"message\",\n \"state_details\",\n \"data\",\n },\n )\n\n # set the task's \"child flow run id\" to be the subflow run id\n subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n await models.task_runs.set_task_run_state(\n session=context.session,\n task_run_id=context.run.parent_task_run_id,\n state=subflow_parent_task_state,\n force=True,\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails
","text":" Bases: BaseUniversalTransform
Updates a child subflow state's references to the corresponding tracking task run id in the parent flow run.
Source code in prefect/server/orchestration/global_policy.py
class UpdateSubflowStateDetails(BaseUniversalTransform):\n \"\"\"\n Update a child subflow state's references to a corresponding tracking task run id\n in the parent flow run\n \"\"\"\n\n async def before_transition(self, context: OrchestrationContext) -> None:\n if self.nullified_transition():\n return\n\n # only applies to flow runs with a parent task run id\n if context.run.parent_task_run_id is not None:\n context.proposed_state.state_details.task_run_id = (\n context.run.parent_task_run_id\n )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails
","text":" Bases: BaseUniversalTransform
Updates a state's references to the corresponding flow or task run.
Source code in prefect/server/orchestration/global_policy.py
class UpdateStateDetails(BaseUniversalTransform):\n \"\"\"\n Update a state's references to a corresponding flow- or task- run.\n \"\"\"\n\n async def before_transition(\n self,\n context: OrchestrationContext,\n ) -> None:\n if self.nullified_transition():\n return\n\n if isinstance(context, FlowOrchestrationContext):\n flow_run = await context.flow_run()\n context.proposed_state.state_details.flow_run_id = flow_run.id\n\n elif isinstance(context, TaskOrchestrationContext):\n task_run = await context.task_run()\n context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n context.proposed_state.state_details.task_run_id = task_run.id\n
"},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies
","text":"Policies are collections of orchestration rules and transforms.
Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process.
While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy
","text":" Bases: ABC
An abstract base class used to organize orchestration rules in priority order.
Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.
Source code in prefect/server/orchestration/policies.py
class BaseOrchestrationPolicy(ABC):\n \"\"\"\n An abstract base class used to organize orchestration rules in priority order.\n\n Different collections of orchestration rules might be used to govern various kinds\n of transitions. For example, flow-run states and task-run states might require\n different orchestration logic.\n \"\"\"\n\n @staticmethod\n @abstractmethod\n def priority():\n \"\"\"\n A list of orchestration rules in priority order.\n \"\"\"\n\n return []\n\n @classmethod\n def compile_transition_rules(cls, from_state=None, to_state=None):\n \"\"\"\n Returns rules in policy that are valid for the specified state transition.\n \"\"\"\n\n transition_rules = []\n for rule in cls.priority():\n if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n transition_rules.append(rule)\n return transition_rules\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority
abstractmethod
staticmethod
","text":"A list of orchestration rules in priority order.
Source code in prefect/server/orchestration/policies.py
@staticmethod\n@abstractmethod\ndef priority():\n \"\"\"\n A list of orchestration rules in priority order.\n \"\"\"\n\n return []\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules
classmethod
","text":"Returns rules in policy that are valid for the specified state transition.
Source code in prefect/server/orchestration/policies.py
@classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n \"\"\"\n Returns rules in policy that are valid for the specified state transition.\n \"\"\"\n\n transition_rules = []\n for rule in cls.priority():\n if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n transition_rules.append(rule)\n return transition_rules\n
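As a hedged sketch of how a policy is assembled (all Hypothetical* names below are illustrations, not Prefect's core policies or rules), a subclass lists its rules in priority() and compile_transition_rules then filters them for a single transition:

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.rules import BaseOrchestrationRule
from prefect.server.schemas.states import StateType

class HypotheticalCacheRule(BaseOrchestrationRule):
    FROM_STATES = [StateType.PENDING]
    TO_STATES = [StateType.RUNNING]

class HypotheticalConcurrencyRule(BaseOrchestrationRule):
    FROM_STATES = [StateType.PENDING]
    TO_STATES = [StateType.RUNNING]

class HypotheticalTaskPolicy(BaseOrchestrationPolicy):
    @staticmethod
    def priority():
        # list order is priority order: cache logic fires before concurrency logic
        return [HypotheticalCacheRule, HypotheticalConcurrencyRule]

# keep only the rules whose FROM_STATES and TO_STATES cover this transition
rules = HypotheticalTaskPolicy.compile_transition_rules(
    from_state=StateType.PENDING,
    to_state=StateType.RUNNING,
)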
"},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules
","text":"Prefect's flow and task-run orchestration machinery.
This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These concepts correspond to the intuitive points at which Prefect can observe a flow or task executing user code and intervene, if necessary. A detailed description of states can be found in our concept documentation.
Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate the change in state and record metadata about the run. Because every attempt to run user code is checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.
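Concretely, a client never moves a run into a new state on its own: it proposes a state and then obeys the server's response. A rough client-side sketch of that contract, assuming Prefect 2.x's get_client and PrefectClient.set_flow_run_state (the propose_running function itself is illustrative):

from prefect import get_client
from prefect.states import Running

async def propose_running(flow_run_id):
    async with get_client() as client:
        # the orchestration engine may accept, reject, delay, or abort this proposal
        result = await client.set_flow_run_state(
            flow_run_id=flow_run_id, state=Running()
        )
    return result.status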
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext
","text":" Bases: PrefectBaseModel
A container for a state transition, governed by orchestration rules.
Note: An OrchestrationContext
should not be instantiated directly; instead, use the flow- or task-specific subclasses, FlowOrchestrationContext
and TaskOrchestrationContext
.
When a flow or task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext
, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule
ABC.
OrchestrationContext
introduces the concept of a state being None
in the context of an intended state transition. An initial state can be None
if a run is attempting to set a state for the first time. The proposed state might be None
if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
session (Optional[Union[Session, AsyncSession]]): a SQLAlchemy database session
initial_state (Optional[State]): the initial state of a run
proposed_state (Optional[State]): the proposed state a run is transitioning into
validated_state (Optional[State]): a proposed state that has been committed to the database
rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
response_status (SetStateStatus): a SetStateStatus object used to build the API response
response_details (StateResponseDetails): a StateResponseDetails object used to build the API response
Parameters:
session: a SQLAlchemy database session (required)
initial_state: the initial state of a run (required)
proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class OrchestrationContext(PrefectBaseModel):\n \"\"\"\n A container for a state transition, governed by orchestration rules.\n\n Note:\n An `OrchestrationContext` should not be instantiated directly, instead\n use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n `TaskOrchestrationContext`.\n\n When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n to decide whether this transition can proceed. All the relevant information\n associated with the state transition is stored in an `OrchestrationContext`,\n which is subsequently governed by nested orchestration rules implemented using\n the `BaseOrchestrationRule` ABC.\n\n `OrchestrationContext` introduces the concept of a state being `None` in the\n context of an intended state transition. An initial state can be `None` if a run\n is is attempting to set a state for the first time. The proposed state might be\n `None` if a rule governing the transition determines that no state change\n should occur at all and nothing is written to the database.\n\n Attributes:\n session: a SQLAlchemy database session\n initial_state: the initial state of a run\n proposed_state: the proposed state a run is transitioning into\n validated_state: a proposed state that has committed to the database\n rule_signature: a record of rules that have fired on entry into a\n managed context, currently only used for debugging purposes\n finalization_signature: a record of rules that have fired on exit from a\n managed context, currently only used for debugging purposes\n response_status: a SetStateStatus object used to build the API response\n response_details:a StateResponseDetails object use to build the API response\n\n Args:\n session: a SQLAlchemy database session\n initial_state: the initial state of a run\n proposed_state: the proposed state a run is transitioning into\n \"\"\"\n\n class Config:\n arbitrary_types_allowed = True\n\n session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n initial_state: Optional[states.State] = ...\n proposed_state: Optional[states.State] = ...\n validated_state: Optional[states.State]\n rule_signature: List[str] = Field(default_factory=list)\n finalization_signature: List[str] = Field(default_factory=list)\n response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n orchestration_error: Optional[Exception] = Field(default=None)\n parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n @property\n def initial_state_type(self) -> Optional[states.StateType]:\n \"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n return self.initial_state.type if self.initial_state else None\n\n @property\n def proposed_state_type(self) -> Optional[states.StateType]:\n \"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n return self.proposed_state.type if self.proposed_state else None\n\n @property\n def validated_state_type(self) -> Optional[states.StateType]:\n \"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n return self.validated_state.type if self.validated_state else None\n\n def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. 
To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Returns:\n A mutation-safe copy of the `OrchestrationContext`\n \"\"\"\n\n safe_copy = self.copy()\n\n safe_copy.initial_state = (\n self.initial_state.copy() if self.initial_state else None\n )\n safe_copy.proposed_state = (\n self.proposed_state.copy() if self.proposed_state else None\n )\n safe_copy.validated_state = (\n self.validated_state.copy() if self.validated_state else None\n )\n safe_copy.parameters = self.parameters.copy()\n return safe_copy\n\n def entry_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks before a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n def exit_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks after a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.validated_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType]
property
","text":"The state type of self.initial_state
if it exists.
proposed_state_type: Optional[states.StateType]
property
","text":"The state type of self.proposed_state
if it exists.
validated_state_type: Optional[states.StateType]
property
","text":"The state type of self.validated_state
if it exists.
safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext
. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy
can be used to pass information to orchestration rules without risking mutation.
Returns:
A mutation-safe copy of the OrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Returns:\n A mutation-safe copy of the `OrchestrationContext`\n \"\"\"\n\n safe_copy = self.copy()\n\n safe_copy.initial_state = (\n self.initial_state.copy() if self.initial_state else None\n )\n safe_copy.proposed_state = (\n self.proposed_state.copy() if self.proposed_state else None\n )\n safe_copy.validated_state = (\n self.validated_state.copy() if self.validated_state else None\n )\n safe_copy.parameters = self.parameters.copy()\n return safe_copy\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context
","text":"A convenience method that generates input parameters for orchestration rules.
An OrchestrationContext
defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.
prefect/server/orchestration/rules.py
def entry_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks before a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.proposed_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context
","text":"A convenience method that generates input parameters for orchestration rules.
An OrchestrationContext
defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.
prefect/server/orchestration/rules.py
def exit_context(self):\n \"\"\"\n A convenience method that generates input parameters for orchestration rules.\n\n An `OrchestrationContext` defines a state transition that is managed by\n orchestration rules which can fire hooks after a transition has been committed\n to the database. These hooks have a consistent interface which can be generated\n with this method.\n \"\"\"\n\n safe_context = self.safe_copy()\n return safe_context.initial_state, safe_context.validated_state, safe_context\n
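Both helpers return the same three-item tuple, which is why every rule hook shares a single signature; the difference is that entry_context carries the proposed state while exit_context carries the validated state. An illustrative sketch:

# before the transition is committed to the database
initial_state, proposed_state, ctx = context.entry_context()

# after the transition is committed to the database
initial_state, validated_state, ctx = context.exit_context()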
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext
","text":" Bases: OrchestrationContext
A container for a flow run state transition, governed by orchestration rules.
When a flow run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext
, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule
ABC.
FlowOrchestrationContext
introduces the concept of a state being None
in the context of an intended state transition. An initial state can be None
if a run is attempting to set a state for the first time. The proposed state might be None
if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
session: a SQLAlchemy database session
run (Any): the flow run attempting to change state
initial_state (Any): the initial state of the run
proposed_state (Any): the proposed state the run is transitioning into
validated_state (Any): a proposed state that has been committed to the database
rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
response_status (Any): a SetStateStatus object used to build the API response
response_details (Any): a StateResponseDetails object used to build the API response
Parameters:
session: a SQLAlchemy database session (required)
run: the flow run attempting to change state (required)
initial_state: the initial state of a run (required)
proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class FlowOrchestrationContext(OrchestrationContext):\n \"\"\"\n A container for a flow run state transition, governed by orchestration rules.\n\n When a flow- run attempts to change state, Prefect REST API has an opportunity\n to decide whether this transition can proceed. All the relevant information\n associated with the state transition is stored in an `OrchestrationContext`,\n which is subsequently governed by nested orchestration rules implemented using\n the `BaseOrchestrationRule` ABC.\n\n `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n context of an intended state transition. An initial state can be `None` if a run\n is is attempting to set a state for the first time. The proposed state might be\n `None` if a rule governing the transition determines that no state change\n should occur at all and nothing is written to the database.\n\n Attributes:\n session: a SQLAlchemy database session\n run: the flow run attempting to change state\n initial_state: the initial state of the run\n proposed_state: the proposed state the run is transitioning into\n validated_state: a proposed state that has committed to the database\n rule_signature: a record of rules that have fired on entry into a\n managed context, currently only used for debugging purposes\n finalization_signature: a record of rules that have fired on exit from a\n managed context, currently only used for debugging purposes\n response_status: a SetStateStatus object used to build the API response\n response_details:a StateResponseDetails object use to build the API response\n\n Args:\n session: a SQLAlchemy database session\n run: the flow run attempting to change state\n initial_state: the initial state of a run\n proposed_state: the proposed state a run is transitioning into\n \"\"\"\n\n # run: db.FlowRun = ...\n run: Any = ...\n\n @inject_db\n async def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `FlowOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. 
The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n\n @inject_db\n async def _validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n if self.proposed_state is None:\n validated_orm_state = self.run.state\n # We cannot access `self.run.state.data` directly for unknown reasons\n state_data = (\n (\n await artifacts.read_artifact(\n self.session, self.run.state.result_artifact_id\n )\n ).data\n if self.run.state.result_artifact_id\n else None\n )\n else:\n state_payload = self.proposed_state.dict(shallow=True)\n state_data = state_payload.pop(\"data\", None)\n\n if state_data is not None:\n state_result_artifact = core.Artifact.from_result(state_data)\n state_result_artifact.flow_run_id = self.run.id\n await artifacts.create_artifact(self.session, state_result_artifact)\n state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n validated_orm_state = db.FlowRunState(\n flow_run_id=self.run.id,\n **state_payload,\n )\n\n self.session.add(validated_orm_state)\n self.run.set_state(validated_orm_state)\n\n await self.session.flush()\n if validated_orm_state:\n self.validated_state = states.State.from_orm_without_result(\n validated_orm_state, with_data=state_data\n )\n else:\n self.validated_state = None\n\n def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `FlowOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n\n @property\n def run_settings(self) -> Dict:\n \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n return self.run.empirical_policy\n\n async def task_run(self):\n return None\n\n async def flow_run(self):\n return self.run\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict
property
","text":"Run-level settings used to orchestrate the state transition.
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state
async
","text":"Validates a proposed state by committing it to the database.
After the FlowOrchestrationContext
is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state
is set to the flushed state. The state on the run is set to the validated state as well.
If the proposed state is None
when this method is called, no state will be written and self.validated_state
will be set to the run's current state.
Returns:
None
Source code in prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `FlowOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n
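In rough terms, validation is the final step once every governing rule has been entered. A simplified, hypothetical driver (not the server's actual entry point) shows where validate_proposed_state sits:

import contextlib

async def drive_transition(policy, context, from_state_type, to_state_type):
    async with contextlib.AsyncExitStack() as stack:
        # enter each governing rule's managed context in priority order
        for rule in policy.compile_transition_rules(from_state_type, to_state_type):
            context = await stack.enter_async_context(
                rule(context, from_state_type, to_state_type)
            )
        # commit whatever proposed state survived the rules (possibly None)
        await context.validate_proposed_state()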
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext
. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy
can be used to pass information to orchestration rules without risking mutation.
Note: self.run is an ORM model, and even when copied is unsafe to mutate.
Returns:
A mutation-safe copy of FlowOrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `FlowOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext
","text":" Bases: OrchestrationContext
A container for a task run state transition, governed by orchestration rules.
When a task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext
, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule
ABC.
TaskOrchestrationContext
introduces the concept of a state being None
in the context of an intended state transition. An initial state can be None
if a run is attempting to set a state for the first time. The proposed state might be None
if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
Attributes:
session: a SQLAlchemy database session
run (Any): the task run attempting to change state
initial_state (Any): the initial state of the run
proposed_state (Any): the proposed state the run is transitioning into
validated_state (Any): a proposed state that has been committed to the database
rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
response_status (Any): a SetStateStatus object used to build the API response
response_details (Any): a StateResponseDetails object used to build the API response
Parameters:
session: a SQLAlchemy database session (required)
run: the task run attempting to change state (required)
initial_state: the initial state of a run (required)
proposed_state: the proposed state a run is transitioning into (required)
Source code in prefect/server/orchestration/rules.py
class TaskOrchestrationContext(OrchestrationContext):\n \"\"\"\n A container for a task run state transition, governed by orchestration rules.\n\n When a task- run attempts to change state, Prefect REST API has an opportunity\n to decide whether this transition can proceed. All the relevant information\n associated with the state transition is stored in an `OrchestrationContext`,\n which is subsequently governed by nested orchestration rules implemented using\n the `BaseOrchestrationRule` ABC.\n\n `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n context of an intended state transition. An initial state can be `None` if a run\n is is attempting to set a state for the first time. The proposed state might be\n `None` if a rule governing the transition determines that no state change\n should occur at all and nothing is written to the database.\n\n Attributes:\n session: a SQLAlchemy database session\n run: the task run attempting to change state\n initial_state: the initial state of the run\n proposed_state: the proposed state the run is transitioning into\n validated_state: a proposed state that has committed to the database\n rule_signature: a record of rules that have fired on entry into a\n managed context, currently only used for debugging purposes\n finalization_signature: a record of rules that have fired on exit from a\n managed context, currently only used for debugging purposes\n response_status: a SetStateStatus object used to build the API response\n response_details:a StateResponseDetails object use to build the API response\n\n Args:\n session: a SQLAlchemy database session\n run: the task run attempting to change state\n initial_state: the initial state of a run\n proposed_state: the proposed state a run is transitioning into\n \"\"\"\n\n # run: db.TaskRun = ...\n run: Any = ...\n\n @inject_db\n async def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `TaskOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. 
The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n\n @inject_db\n async def _validate_proposed_state(\n self,\n db: PrefectDBInterface,\n ):\n if self.proposed_state is None:\n validated_orm_state = self.run.state\n # We cannot access `self.run.state.data` directly for unknown reasons\n state_data = (\n (\n await artifacts.read_artifact(\n self.session, self.run.state.result_artifact_id\n )\n ).data\n if self.run.state.result_artifact_id\n else None\n )\n else:\n state_payload = self.proposed_state.dict(shallow=True)\n state_data = state_payload.pop(\"data\", None)\n\n if state_data is not None:\n state_result_artifact = core.Artifact.from_result(state_data)\n state_result_artifact.task_run_id = self.run.id\n\n if self.run.flow_run_id is not None:\n flow_run = await self.flow_run()\n state_result_artifact.flow_run_id = flow_run.id\n\n await artifacts.create_artifact(self.session, state_result_artifact)\n state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n validated_orm_state = db.TaskRunState(\n task_run_id=self.run.id,\n **state_payload,\n )\n\n self.session.add(validated_orm_state)\n self.run.set_state(validated_orm_state)\n\n await self.session.flush()\n if validated_orm_state:\n self.validated_state = states.State.from_orm_without_result(\n validated_orm_state, with_data=state_data\n )\n else:\n self.validated_state = None\n\n def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `TaskOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n\n @property\n def run_settings(self) -> Dict:\n \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n return self.run.empirical_policy\n\n async def task_run(self):\n return self.run\n\n async def flow_run(self):\n return await flow_runs.read_flow_run(\n session=self.session,\n flow_run_id=self.run.flow_run_id,\n )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict
property
","text":"Run-level settings used to orchestrate the state transition.
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state
async
","text":"Validates a proposed state by committing it to the database.
After the TaskOrchestrationContext
is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state
is set to the flushed state. The state on the run is set to the validated state as well.
If the proposed state is None
when this method is called, no state will be written and self.validated_state
will be set to the run's current state.
Returns:
None
Source code in prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n self,\n db: PrefectDBInterface,\n):\n \"\"\"\n Validates a proposed state by committing it to the database.\n\n After the `TaskOrchestrationContext` is governed by orchestration rules, the\n proposed state can be validated: the proposed state is added to the current\n SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n state. The state on the run is set to the validated state as well.\n\n If the proposed state is `None` when this method is called, no state will be\n written and `self.validated_state` will be set to the run's current state.\n\n Returns:\n None\n \"\"\"\n # (circular import)\n from prefect.server.api.server import is_client_retryable_exception\n\n try:\n await self._validate_proposed_state()\n return\n except Exception as exc:\n logger.exception(\"Encountered error during state validation\")\n self.proposed_state = None\n\n if is_client_retryable_exception(exc):\n # Do not capture retryable database exceptions, this exception will be\n # raised as a 503 in the API layer\n raise\n\n reason = f\"Error validating state: {exc!r}\"\n self.response_status = SetStateStatus.ABORT\n self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy
","text":"Creates a mostly-mutation-safe copy for use in orchestration rules.
Orchestration rules govern state transitions using information stored in an OrchestrationContext
. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy
can be used to pass information to orchestration rules without risking mutation.
Note: self.run is an ORM model, and even when copied is unsafe to mutate.
Returns:
A mutation-safe copy of TaskOrchestrationContext
Source code in prefect/server/orchestration/rules.py
def safe_copy(self):\n \"\"\"\n Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n Orchestration rules govern state transitions using information stored in\n an `OrchestrationContext`. However, mutating objects stored on the context\n directly can have unintended side-effects. To guard against this,\n `self.safe_copy` can be used to pass information to orchestration rules\n without risking mutation.\n\n Note:\n `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n Returns:\n A mutation-safe copy of `TaskOrchestrationContext`\n \"\"\"\n\n return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule
","text":" Bases: AbstractAsyncContextManager
An abstract base class used to implement a discrete piece of orchestration logic.
An OrchestrationRule
is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext
that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.
A state transition occurs whenever a flow or task run changes state, prompting the Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext
will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state
, and rules will call the self.after_transition
hook upon exiting the managed context.
Examples:
Create a rule:\n\n>>> class BasicRule(BaseOrchestrationRule):\n>>> # allowed initial state types\n>>> FROM_STATES = [StateType.RUNNING]\n>>> # allowed proposed state types\n>>> TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>> async def before_transition(initial_state, proposed_state, ctx):\n>>> # side effects and proposed state mutation can happen here\n>>> ...\n>>>\n>>> async def after_transition(initial_state, validated_state, ctx):\n>>> # operations on states that have been validated can happen here\n>>> ...\n>>>\n>>> async def cleanup(initial_state, validated_state, ctx):\n>>> # reverts side effects generated by `before_transition` if necessary\n>>> ...\n\nUse a rule:\n\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>> # context.proposed_state has been governed by BasicRule\n>>> ...\n\nUse multiple rules:\n\n>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>> for rule in rules:\n>>> await stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>> # context.proposed_state has been governed by all rules\n>>> ...\n
Attributes:
FROM_STATES (Iterable): list of valid initial state types this rule governs
TO_STATES (Iterable): list of valid proposed state types this rule governs
context: the orchestration context
from_state_type: the state type a run is currently in
to_state_type: the intended proposed state type prior to any orchestration
Parameters:
context (OrchestrationContext): A FlowOrchestrationContext or TaskOrchestrationContext that is passed between rules (required)
from_state_type (Optional[StateType]): The state type of the initial state of a run; if this state type is not contained in FROM_STATES, no hooks will fire (required)
to_state_type (Optional[StateType]): The state type of the proposed state before orchestration; if this state type is not contained in TO_STATES, no hooks will fire (required)
Source code in prefect/server/orchestration/rules.py
class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n \"\"\"\n An abstract base class used to implement a discrete piece of orchestration logic.\n\n An `OrchestrationRule` is a stateful context manager that directly governs a state\n transition. Complex orchestration is achieved by nesting multiple rules.\n Each rule runs against an `OrchestrationContext` that contains the transition\n details; this context is then passed to subsequent rules. The context can be\n modified by hooks that fire before and after a new state is validated and committed\n to the database. These hooks will fire as long as the state transition is\n considered \"valid\" and govern a transition by either modifying the proposed state\n before it is validated or by producing a side-effect.\n\n A state transition occurs whenever a flow- or task- run changes state, prompting\n Prefect REST API to decide whether or not this transition can proceed. The current state of\n the run is referred to as the \"initial state\", and the state a run is\n attempting to transition into is the \"proposed state\". Together, the initial state\n transitioning into the proposed state is the intended transition that is governed\n by these orchestration rules. After using rules to enter a runtime context, the\n `OrchestrationContext` will contain a proposed state that has been governed by\n each rule, and at that point can validate the proposed state and commit it to\n the database. The validated state will be set on the context as\n `context.validated_state`, and rules will call the `self.after_transition` hook\n upon exiting the managed context.\n\n Examples:\n\n Create a rule:\n\n >>> class BasicRule(BaseOrchestrationRule):\n >>> # allowed initial state types\n >>> FROM_STATES = [StateType.RUNNING]\n >>> # allowed proposed state types\n >>> TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n >>>\n >>> async def before_transition(initial_state, proposed_state, ctx):\n >>> # side effects and proposed state mutation can happen here\n >>> ...\n >>>\n >>> async def after_transition(initial_state, validated_state, ctx):\n >>> # operations on states that have been validated can happen here\n >>> ...\n >>>\n >>> async def cleanup(intitial_state, validated_state, ctx):\n >>> # reverts side effects generated by `before_transition` if necessary\n >>> ...\n\n Use a rule:\n\n >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n >>> async with BasicRule(context, *intended_transition):\n >>> # context.proposed_state has been governed by BasicRule\n >>> ...\n\n Use multiple rules:\n\n >>> rules = [BasicRule, BasicRule]\n >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n >>> async with contextlib.AsyncExitStack() as stack:\n >>> for rule in rules:\n >>> stack.enter_async_context(rule(context, *intended_transition))\n >>>\n >>> # context.proposed_state has been governed by all rules\n >>> ...\n\n Attributes:\n FROM_STATES: list of valid initial state types this rule governs\n TO_STATES: list of valid proposed state types this rule governs\n context: the orchestration context\n from_state_type: the state type a run is currently in\n to_state_type: the intended proposed state type prior to any orchestration\n\n Args:\n context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n passed between rules\n from_state_type: The state type of the initial state of a run, if this\n state type is not contained in `FROM_STATES`, no hooks will fire\n to_state_type: The state type of the proposed state 
before orchestration, if\n this state type is not contained in `TO_STATES`, no hooks will fire\n \"\"\"\n\n FROM_STATES: Iterable = []\n TO_STATES: Iterable = []\n\n def __init__(\n self,\n context: OrchestrationContext,\n from_state_type: Optional[states.StateType],\n to_state_type: Optional[states.StateType],\n ):\n self.context = context\n self.from_state_type = from_state_type\n self.to_state_type = to_state_type\n self._invalid_on_entry = None\n\n async def __aenter__(self) -> OrchestrationContext:\n \"\"\"\n Enter an async runtime context governed by this rule.\n\n The `with` statement will bind a governed `OrchestrationContext` to the target\n specified by the `as` clause. If the transition proposed by the\n `OrchestrationContext` is considered invalid on entry, entering this context\n will do nothing. Otherwise, `self.before_transition` will fire.\n \"\"\"\n\n if await self.invalid():\n pass\n else:\n try:\n entry_context = self.context.entry_context()\n await self.before_transition(*entry_context)\n self.context.rule_signature.append(str(self.__class__))\n except Exception as before_transition_error:\n reason = (\n f\"Aborting orchestration due to error in {self.__class__!r}:\"\n f\" !{before_transition_error!r}\"\n )\n logger.exception(\n f\"Error running before-transition hook in rule {self.__class__!r}:\"\n f\" !{before_transition_error!r}\"\n )\n\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n self.context.orchestration_error = before_transition_error\n\n return self.context\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"\n Exit the async runtime context governed by this rule.\n\n One of three outcomes can happen upon exiting this rule's context depending on\n the state of the rule. If the rule was found to be invalid on entry, nothing\n happens. If the rule was valid on entry and continues to be valid on exit,\n `self.after_transition` will fire. If the rule was valid on entry but invalid\n on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n any side-effects produced by `self.before_transition`.\n \"\"\"\n\n exit_context = self.context.exit_context()\n if await self.invalid():\n pass\n elif await self.fizzled():\n await self.cleanup(*exit_context)\n else:\n await self.after_transition(*exit_context)\n self.context.finalization_signature.append(str(self.__class__))\n\n async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire before a state is committed to the database.\n\n This hook may produce side-effects or mutate the proposed state of a\n transition using one of four methods: `self.reject_transition`,\n `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n Note:\n As currently implemented, the `before_transition` hook is not\n perfectly isolated from mutating the transition. It is a standard instance\n method that has access to `self`, and therefore `self.context`. This should\n never be modified directly. 
Furthermore, `context.run` is an ORM model, and\n mutating the run can also cause unintended writes to the database.\n\n Args:\n initial_state: The initial state of a transition\n proposed_state: The proposed state of a transition\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n ) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n The intended use of this method is to revert side-effects produced by\n `self.before_transition` when the transition is found to be invalid on exit.\n This allows multiple rules to be gracefully run in sequence, without logic that\n keeps track of all other rules that might govern a transition.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n\n async def invalid(self) -> bool:\n \"\"\"\n Determines if a rule is invalid.\n\n Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n context. 
Rules are invalid if the transition states types are not contained in\n `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n a transition that differs from the transition the rule was instantiated with.\n\n Returns:\n True if the rules in invalid, False otherwise.\n \"\"\"\n # invalid and fizzled states are mutually exclusive,\n # `_invalid_on_entry` holds this statefulness\n if self.from_state_type not in self.FROM_STATES:\n self._invalid_on_entry = True\n if self.to_state_type not in self.TO_STATES:\n self._invalid_on_entry = True\n\n if self._invalid_on_entry is None:\n self._invalid_on_entry = await self.invalid_transition()\n return self._invalid_on_entry\n\n async def fizzled(self) -> bool:\n \"\"\"\n Determines if a rule is fizzled and side-effects need to be reverted.\n\n Rules are fizzled if the transitions were valid on entry (thus firing\n `self.before_transition`) but are invalid upon exiting the governed context,\n most likely caused by another rule mutating the transition.\n\n Returns:\n True if the rule is fizzled, False otherwise.\n \"\"\"\n\n if self._invalid_on_entry:\n return False\n return await self.invalid_transition()\n\n async def invalid_transition(self) -> bool:\n \"\"\"\n Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n If the `OrchestrationContext` is attempting to manage a transition with this\n rule that differs from the transition the rule was instantiated with, the\n transition is considered to be invalid. Depending on the context, a rule with an\n invalid transition is either \"invalid\" or \"fizzled\".\n\n Returns:\n True if the transition is invalid, False otherwise.\n \"\"\"\n\n initial_state_type = self.context.initial_state_type\n proposed_state_type = self.context.proposed_state_type\n return (self.from_state_type != initial_state_type) or (\n self.to_state_type != proposed_state_type\n )\n\n async def reject_transition(self, state: Optional[states.State], reason: str):\n \"\"\"\n Rejects a proposed transition before the transition is validated.\n\n This method will reject a proposed transition, mutating the proposed state to\n the provided `state`. A reason for rejecting the transition is also passed on\n to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n despite the proposed state type changing.\n\n Args:\n state: The new proposed state. 
If `None`, the current run state will be\n returned in the result instead.\n reason: The reason for rejecting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # the current state will be used if a new one is not provided\n if state is None:\n if self.from_state_type is None:\n raise OrchestrationError(\n \"The current run has no state; this transition cannot be \"\n \"rejected without providing a new state.\"\n )\n self.to_state_type = None\n self.context.proposed_state = None\n else:\n # a rule that mutates state should not fizzle itself\n self.to_state_type = state.type\n self.context.proposed_state = state\n\n self.context.response_status = SetStateStatus.REJECT\n self.context.response_details = StateRejectDetails(reason=reason)\n\n async def delay_transition(\n self,\n delay_seconds: int,\n reason: str,\n ):\n \"\"\"\n Delays a proposed transition before the transition is validated.\n\n This method will delay a proposed transition, setting the proposed state to\n `None`, signaling to the `OrchestrationContext` that no state should be\n written to the database. The number of seconds a transition should be delayed is\n passed to the `OrchestrationContext`. A reason for delaying the transition is\n also provided. Rules that delay the transition will not fizzle, despite the\n proposed state type changing.\n\n Args:\n delay_seconds: The number of seconds the transition should be delayed\n reason: The reason for delaying the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.WAIT\n self.context.response_details = StateWaitDetails(\n delay_seconds=delay_seconds, reason=reason\n )\n\n async def abort_transition(self, reason: str):\n \"\"\"\n Aborts a proposed transition before the transition is validated.\n\n This method will abort a proposed transition, expecting no further action to\n occur for this run. The proposed state is set to `None`, signaling to the\n `OrchestrationContext` that no state should be written to the database. A\n reason for aborting the transition is also provided. Rules that abort the\n transition will not fizzle, despite the proposed state type changing.\n\n Args:\n reason: The reason for aborting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n\n async def rename_state(self, state_name):\n \"\"\"\n Sets the \"name\" attribute on a proposed state.\n\n The name of a state is an annotation intended to provide rich, human-readable\n context for how a run is progressing. 
This method only updates the name and not\n the canonical state TYPE, and will not fizzle or invalidate any other rules\n that might govern this state transition.\n \"\"\"\n if self.context.proposed_state is not None:\n self.context.proposed_state.name = state_name\n\n async def update_context_parameters(self, key, value):\n \"\"\"\n Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n This mechanism streamlines the process of passing messages and information\n between orchestration rules if necessary and is simpler and more ephemeral than\n message-passing via the database or some other side-effect. This mechanism can\n be used to break up large rules for ease of testing or comprehension, but note\n that any rules coupled this way (or any other way) are no longer independent and\n the order in which they appear in the orchestration policy priority will matter.\n \"\"\"\n\n self.context.parameters.update({key: value})\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition
async
","text":"Implements a hook that can fire before a state is committed to the database.
This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition
, self.delay_transition
, self.abort_transition
, and self.rename_state
.
As currently implemented, the before_transition
hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self
, and therefore self.context
. This should never be modified directly. Furthermore, context.run
is an ORM model, and mutating the run can also cause unintended writes to the database.
Parameters:
initial_state (Optional[State], required): The initial state of a transition
proposed_state (Optional[State], required): The proposed state of a transition
context (OrchestrationContext, required): A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment
Returns:
None
Source code in prefect/server/orchestration/rules.py
async def before_transition(\n self,\n initial_state: Optional[states.State],\n proposed_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire before a state is committed to the database.\n\n This hook may produce side-effects or mutate the proposed state of a\n transition using one of four methods: `self.reject_transition`,\n `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n Note:\n As currently implemented, the `before_transition` hook is not\n perfectly isolated from mutating the transition. It is a standard instance\n method that has access to `self`, and therefore `self.context`. This should\n never be modified directly. Furthermore, `context.run` is an ORM model, and\n mutating the run can also cause unintended writes to the database.\n\n Args:\n initial_state: The initial state of a transition\n proposed_state: The proposed state of a transition\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
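For illustration, a minimal custom rule built on this hook might look like the following sketch. It assumes imports that mirror the source shown above; the rule itself is hypothetical and does not ship with Prefect:

from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states


class PreventResumingCancelledRuns(BaseOrchestrationRule):
    # the rule only governs CANCELLED -> RUNNING proposals
    FROM_STATES = {states.StateType.CANCELLED}
    TO_STATES = {states.StateType.RUNNING}

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # state=None re-reads the current (cancelled) state as the result
        await self.reject_transition(
            state=None, reason="Cancelled runs cannot be set to running"
        )

Because the rule mutates the transition through reject_transition rather than by touching the context directly, it will not fizzle itself and other rules can still run cleanly after it.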
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition
async
","text":"Implements a hook that can fire after a state is committed to the database.
Parameters:
initial_state (Optional[State], required): The initial state of a transition
validated_state (Optional[State], required): The governed state that has been committed to the database
context (OrchestrationContext, required): A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment
Returns:
None
Source code in prefect/server/orchestration/rules.py
async def after_transition(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
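As a sketch (hypothetical rule and metrics helper), an after_transition hook is the right place for bookkeeping that must only happen once a state is known to be committed:

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)
from prefect.server.schemas import states


class ObserveCompletions(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = {states.StateType.COMPLETED}

    async def after_transition(self, initial_state, validated_state, context):
        # validated_state has already been written to the database
        if validated_state is not None:
            record_completion(context.run.id)  # hypothetical metrics helper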
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup
async
","text":"Implements a hook that can fire after a state is committed to the database.
The intended use of this method is to revert side-effects produced by self.before_transition
when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.
Parameters:
initial_state (Optional[State], required): The initial state of a transition
validated_state (Optional[State], required): The governed state that has been committed to the database
context (OrchestrationContext, required): A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment
Returns:
None
Source code in prefect/server/orchestration/rules.py
async def cleanup(\n self,\n initial_state: Optional[states.State],\n validated_state: Optional[states.State],\n context: OrchestrationContext,\n) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n The intended use of this method is to revert side-effects produced by\n `self.before_transition` when the transition is found to be invalid on exit.\n This allows multiple rules to be gracefully run in sequence, without logic that\n keeps track of all other rules that might govern a transition.\n\n Args:\n initial_state: The initial state of a transition\n validated_state: The governed state that has been committed to the database\n context: A safe copy of the `OrchestrationContext`, with the exception of\n `context.run`, mutating this context will have no effect on the broader\n orchestration environment.\n\n Returns:\n None\n \"\"\"\n
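The pairing is easiest to see in a sketch: a rule that takes a side effect in before_transition and undoes it in cleanup when the transition it acted on is later invalidated by another rule (the slot helpers are hypothetical):

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)
from prefect.server.schemas import states


class ReserveConcurrencySlot(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = {states.StateType.RUNNING}

    async def before_transition(self, initial_state, proposed_state, context):
        await acquire_slot(context.run.id)  # hypothetical side effect

    async def cleanup(self, initial_state, validated_state, context):
        # only called when this rule fizzled; release the reservation
        await release_slot(context.run.id)  # hypothetical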
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid
async
","text":"Determines if a rule is invalid.
Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition state types are not contained in self.FROM_STATES
and self.TO_STATES
, or if the context is proposing a transition that differs from the transition the rule was instantiated with.
Returns:
bool: True if the rule is invalid, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def invalid(self) -> bool:\n \"\"\"\n Determines if a rule is invalid.\n\n Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n context. Rules are invalid if the transition state types are not contained in\n `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n a transition that differs from the transition the rule was instantiated with.\n\n Returns:\n True if the rule is invalid, False otherwise.\n \"\"\"\n # invalid and fizzled states are mutually exclusive,\n # `_invalid_on_entry` holds this statefulness\n if self.from_state_type not in self.FROM_STATES:\n self._invalid_on_entry = True\n if self.to_state_type not in self.TO_STATES:\n self._invalid_on_entry = True\n\n if self._invalid_on_entry is None:\n self._invalid_on_entry = await self.invalid_transition()\n return self._invalid_on_entry\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled
async
","text":"Determines if a rule is fizzled and side-effects need to be reverted.
Rules are fizzled if the transitions were valid on entry (thus firing self.before_transition
) but are invalid upon exiting the governed context, most likely caused by another rule mutating the transition.
Returns:
bool: True if the rule is fizzled, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def fizzled(self) -> bool:\n \"\"\"\n Determines if a rule is fizzled and side-effects need to be reverted.\n\n Rules are fizzled if the transitions were valid on entry (thus firing\n `self.before_transition`) but are invalid upon exiting the governed context,\n most likely caused by another rule mutating the transition.\n\n Returns:\n True if the rule is fizzled, False otherwise.\n \"\"\"\n\n if self._invalid_on_entry:\n return False\n return await self.invalid_transition()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition
async
","text":"Determines if the transition proposed by the OrchestrationContext
is invalid.
If the OrchestrationContext
is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".
Returns:
bool: True if the transition is invalid, False otherwise.
Source code in prefect/server/orchestration/rules.py
async def invalid_transition(self) -> bool:\n \"\"\"\n Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n If the `OrchestrationContext` is attempting to manage a transition with this\n rule that differs from the transition the rule was instantiated with, the\n transition is considered to be invalid. Depending on the context, a rule with an\n invalid transition is either \"invalid\" or \"fizzled\".\n\n Returns:\n True if the transition is invalid, False otherwise.\n \"\"\"\n\n initial_state_type = self.context.initial_state_type\n proposed_state_type = self.context.proposed_state_type\n return (self.from_state_type != initial_state_type) or (\n self.to_state_type != proposed_state_type\n )\n
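Taken together, invalid, fizzled, and invalid_transition drive the rule's context-manager lifecycle. Roughly, and simplifying the actual orchestration loop:

rule = SomeRule(context, from_state_type, to_state_type)  # hypothetical rule

if not await rule.invalid():       # wrong state types: skip entirely, no hooks
    async with rule as ctx:
        pass                       # later rules may mutate ctx.proposed_state here
    # on exit, the rule checks fizzled(); if another rule changed the
    # transition after before_transition fired, cleanup() reverts side effects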
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition
async
","text":"Rejects a proposed transition before the transition is validated.
This method will reject a proposed transition, mutating the proposed state to the provided state
. A reason for rejecting the transition is also passed on to the OrchestrationContext
. Rules that reject the transition will not fizzle, despite the proposed state type changing.
Parameters:
state (Optional[State]): The new proposed state. If None, the current run state will be returned in the result instead.
reason (str, required): The reason for rejecting the transition
Source code in prefect/server/orchestration/rules.py
async def reject_transition(self, state: Optional[states.State], reason: str):\n \"\"\"\n Rejects a proposed transition before the transition is validated.\n\n This method will reject a proposed transition, mutating the proposed state to\n the provided `state`. A reason for rejecting the transition is also passed on\n to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n despite the proposed state type changing.\n\n Args:\n state: The new proposed state. If `None`, the current run state will be\n returned in the result instead.\n reason: The reason for rejecting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # the current state will be used if a new one is not provided\n if state is None:\n if self.from_state_type is None:\n raise OrchestrationError(\n \"The current run has no state; this transition cannot be \"\n \"rejected without providing a new state.\"\n )\n self.to_state_type = None\n self.context.proposed_state = None\n else:\n # a rule that mutates state should not fizzle itself\n self.to_state_type = state.type\n self.context.proposed_state = state\n\n self.context.response_status = SetStateStatus.REJECT\n self.context.response_details = StateRejectDetails(reason=reason)\n
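For instance, a retry-style rule can reject a proposed FAILED state and substitute a scheduled retry. This is a sketch modeled on this API, not the built-in retry rule's exact implementation; the retry budget and the AwaitingRetry constructor are assumptions:

from prefect.server.orchestration.rules import BaseOrchestrationRule
from prefect.server.schemas import states


class RetryOnce(BaseOrchestrationRule):
    FROM_STATES = {states.StateType.RUNNING}
    TO_STATES = {states.StateType.FAILED}

    async def before_transition(self, initial_state, proposed_state, context):
        if context.run.run_count <= 1:  # hypothetical one-retry budget
            await self.reject_transition(
                state=states.AwaitingRetry(),  # assumed state constructor
                reason="Retrying the run once before failing",
            )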
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition
async
","text":"Delays a proposed transition before the transition is validated.
This method will delay a proposed transition, setting the proposed state to None
, signaling to the OrchestrationContext
that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext
. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.
Parameters:
delay_seconds (int, required): The number of seconds the transition should be delayed
reason (str, required): The reason for delaying the transition
Source code in prefect/server/orchestration/rules.py
async def delay_transition(\n self,\n delay_seconds: int,\n reason: str,\n):\n \"\"\"\n Delays a proposed transition before the transition is validated.\n\n This method will delay a proposed transition, setting the proposed state to\n `None`, signaling to the `OrchestrationContext` that no state should be\n written to the database. The number of seconds a transition should be delayed is\n passed to the `OrchestrationContext`. A reason for delaying the transition is\n also provided. Rules that delay the transition will not fizzle, despite the\n proposed state type changing.\n\n Args:\n delay_seconds: The number of seconds the transition should be delayed\n reason: The reason for delaying the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.WAIT\n self.context.response_details = StateWaitDetails(\n delay_seconds=delay_seconds, reason=reason\n )\n
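A sketch of a throttling rule built on this method; the capacity check is hypothetical:

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)
from prefect.server.schemas import states


class ThrottleStarts(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = {states.StateType.RUNNING}

    async def before_transition(self, initial_state, proposed_state, context):
        if not slots_available():  # hypothetical capacity check
            # the client receives a WAIT response and should re-propose later
            await self.delay_transition(
                delay_seconds=30, reason="No concurrency slots available"
            )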
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition
async
","text":"Aborts a proposed transition before the transition is validated.
This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None
, signaling to the OrchestrationContext
that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.
Parameters:
reason (str, required): The reason for aborting the transition
Source code in prefect/server/orchestration/rules.py
async def abort_transition(self, reason: str):\n \"\"\"\n Aborts a proposed transition before the transition is validated.\n\n This method will abort a proposed transition, expecting no further action to\n occur for this run. The proposed state is set to `None`, signaling to the\n `OrchestrationContext` that no state should be written to the database. A\n reason for aborting the transition is also provided. Rules that abort the\n transition will not fizzle, despite the proposed state type changing.\n\n Args:\n reason: The reason for aborting the transition\n \"\"\"\n\n # don't run if the transition is already validated\n if self.context.validated_state:\n raise RuntimeError(\"The transition is already validated\")\n\n # a rule that mutates state should not fizzle itself\n self.to_state_type = None\n self.context.proposed_state = None\n self.context.response_status = SetStateStatus.ABORT\n self.context.response_details = StateAbortDetails(reason=reason)\n
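For example, a rule might abort a transition that can never become valid; the orphan check below is hypothetical:

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)


class AbortOrphanedRuns(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = ALL_ORCHESTRATION_STATES

    async def before_transition(self, initial_state, proposed_state, context):
        if await parent_deployment_deleted(context.run):  # hypothetical check
            await self.abort_transition(
                reason="The parent deployment no longer exists"
            )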
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state
async
","text":"Sets the \"name\" attribute on a proposed state.
The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.
Source code in prefect/server/orchestration/rules.py
async def rename_state(self, state_name):\n \"\"\"\n Sets the \"name\" attribute on a proposed state.\n\n The name of a state is an annotation intended to provide rich, human-readable\n context for how a run is progressing. This method only updates the name and not\n the canonical state TYPE, and will not fizzle or invalidate any other rules\n that might govern this state transition.\n \"\"\"\n if self.context.proposed_state is not None:\n self.context.proposed_state.name = state_name\n
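A sketch: keep the COMPLETED state type but relabel it for display when a cached result was used (the cache check is hypothetical):

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)
from prefect.server.schemas import states


class LabelCachedResults(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = {states.StateType.COMPLETED}

    async def before_transition(self, initial_state, proposed_state, context):
        if was_cache_hit(context.run):  # hypothetical
            # the state type stays COMPLETED; only the display name changes
            await self.rename_state("Cached")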
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters
async
","text":"Updates the \"parameters\" dictionary attribute with the specified key-value pair.
This mechanism streamlines the process of passing messages and information between orchestration rules if necessary and is simpler and more ephemeral than message-passing via the database or some other side-effect. This mechanism can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent and the order in which they appear in the orchestration policy priority will matter.
Source code in prefect/server/orchestration/rules.py
async def update_context_parameters(self, key, value):\n \"\"\"\n Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n This mechanism streamlines the process of passing messages and information\n between orchestration rules if necessary and is simpler and more ephemeral than\n message-passing via the database or some other side-effect. This mechanism can\n be used to break up large rules for ease of testing or comprehension, but note\n that any rules coupled this way (or any other way) are no longer independent and\n the order in which they appear in the orchestration policy priority will matter.\n \"\"\"\n\n self.context.parameters.update({key: value})\n
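A sketch of two deliberately coupled rules; the first must be ordered before the second in the policy, and the audit helper is hypothetical:

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
)


class MarkForAudit(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = ALL_ORCHESTRATION_STATES

    async def before_transition(self, initial_state, proposed_state, context):
        # leave a flag for a later rule in the same policy
        await self.update_context_parameters("needs-audit", True)


class AuditFlaggedTransitions(BaseOrchestrationRule):
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = ALL_ORCHESTRATION_STATES

    async def before_transition(self, initial_state, proposed_state, context):
        # only meaningful if MarkForAudit ran earlier in the policy
        if context.parameters.get("needs-audit"):
            await audit_transition(context.run)  # hypothetical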
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform
","text":" Bases: AbstractAsyncContextManager
An abstract base class used to implement privileged bookkeeping logic.
Warning: In almost all cases, use the BaseOrchestrationRule
base class instead.
Beyond the orchestration rules implemented with the BaseOrchestrationRule
ABC, universal transforms are not stateful, and fire their before- and after-transition hooks on every state transition unless the proposed state is None
, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should only be used with care.
Attributes:
FROM_STATES (Iterable): for compatibility with BaseOrchestrationPolicy
TO_STATES (Iterable): for compatibility with BaseOrchestrationPolicy
context: the orchestration context
from_state_type: the state type a run is currently in
to_state_type: the intended proposed state type prior to any orchestration
Parameters:
context (OrchestrationContext): A FlowOrchestrationContext or TaskOrchestrationContext that is passed between transforms
Source code in prefect/server/orchestration/rules.py
class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n \"\"\"\n An abstract base class used to implement privileged bookkeeping logic.\n\n Warning:\n In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n universal transforms are not stateful, and fire their before- and after-transition\n hooks on every state transition unless the proposed state is `None`, indicating that\n no state should be written to the database. Because there are no guardrails in place\n to prevent directly mutating state or other parts of the orchestration context,\n universal transforms should only be used with care.\n\n Attributes:\n FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n context: the orchestration context\n from_state_type: the state type a run is currently in\n to_state_type: the intended proposed state type prior to any orchestration\n\n Args:\n context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n passed between transforms\n \"\"\"\n\n # `BaseUniversalTransform` will always fire on non-null transitions\n FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n def __init__(\n self,\n context: OrchestrationContext,\n from_state_type: Optional[states.StateType],\n to_state_type: Optional[states.StateType],\n ):\n self.context = context\n self.from_state_type = from_state_type\n self.to_state_type = to_state_type\n\n async def __aenter__(self):\n \"\"\"\n Enter an async runtime context governed by this transform.\n\n The `with` statement will bind a governed `OrchestrationContext` to the target\n specified by the `as` clause. If the transition proposed by the\n `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n is `None`, entering this context will do nothing. Otherwise\n `self.before_transition` will fire.\n \"\"\"\n\n await self.before_transition(self.context)\n self.context.rule_signature.append(str(self.__class__))\n return self.context\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"\n Exit the async runtime context governed by this transform.\n\n If the transition has been nullified or errored upon exiting this transform's context,\n nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n proposed state.\n \"\"\"\n\n if not self.exception_in_transition():\n await self.after_transition(self.context)\n self.context.finalization_signature.append(str(self.__class__))\n\n async def before_transition(self, context) -> None:\n \"\"\"\n Implements a hook that fires before a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n\n async def after_transition(self, context) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n\n def nullified_transition(self) -> bool:\n \"\"\"\n Determines if the transition has been nullified.\n\n Transitions are nullified if the proposed state is `None`, indicating that\n nothing should be written to the database.\n\n Returns:\n True if the transition is nullified, False otherwise.\n \"\"\"\n\n return self.context.proposed_state is None\n\n def exception_in_transition(self) -> bool:\n \"\"\"\n Determines if the transition has encountered an exception.\n\n Returns:\n True if the transition encountered an exception, False otherwise.\n \"\"\"\n\n return self.context.orchestration_error is not None\n
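As a sketch (a hypothetical transform, not one that ships with Prefect), a universal transform that counts every non-null transition through the shared context looks like:

from prefect.server.orchestration.rules import BaseUniversalTransform


class CountTransitions(BaseUniversalTransform):
    async def before_transition(self, context) -> None:
        if self.nullified_transition():
            return
        # note: no guardrails here; this mutates the shared context directly
        count = context.parameters.get("transition-count", 0)
        context.parameters["transition-count"] = count + 1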
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition
async
","text":"Implements a hook that fires before a state is committed to the database.
Parameters:
context: the OrchestrationContext that contains transition details
Returns:
None
Source code in prefect/server/orchestration/rules.py
async def before_transition(self, context) -> None:\n \"\"\"\n Implements a hook that fires before a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition
async
","text":"Implements a hook that can fire after a state is committed to the database.
Parameters:
context: the OrchestrationContext that contains transition details
Returns:
None
Source code in prefect/server/orchestration/rules.py
async def after_transition(self, context) -> None:\n \"\"\"\n Implements a hook that can fire after a state is committed to the database.\n\n Args:\n context: the `OrchestrationContext` that contains transition details\n\n Returns:\n None\n \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition
","text":"Determines if the transition has been nullified.
Transitions are nullified if the proposed state is None
, indicating that nothing should be written to the database.
Returns:
bool: True if the transition is nullified, False otherwise.
Source code in prefect/server/orchestration/rules.py
def nullified_transition(self) -> bool:\n \"\"\"\n Determines if the transition has been nullified.\n\n Transitions are nullified if the proposed state is `None`, indicating that\n nothing should be written to the database.\n\n Returns:\n True if the transition is nullified, False otherwise.\n \"\"\"\n\n return self.context.proposed_state is None\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition
","text":"Determines if the transition has encountered an exception.
Returns:
bool: True if the transition encountered an exception, False otherwise.
Source code in prefect/server/orchestration/rules.py
def exception_in_transition(self) -> bool:\n \"\"\"\n Determines if the transition has encountered an exception.\n\n Returns:\n True if the transition encountered an exception, False otherwise.\n \"\"\"\n\n return self.context.orchestration_error is not None\n
"},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions
","text":"Reduced schemas for accepting API actions.
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create an artifact.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n key: Optional[str] = FieldFrom(schemas.core.Artifact)\n type: Optional[str] = FieldFrom(schemas.core.Artifact)\n description: Optional[str] = FieldFrom(schemas.core.Artifact)\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n task_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n\n _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n validate_artifact_key\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
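A usage sketch; the artifact values are illustrative, and the flag only changes output for models that carry SecretStr or SecretBytes fields:

art = ArtifactCreate(key="report", type="markdown", data="# nightly summary")

art.json()                      # any secret fields are obfuscated
art.json(include_secrets=True)  # secret values revealed; avoid logging this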
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update an artifact.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n description: Optional[str] = FieldFrom(schemas.core.Artifact)\n metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block document.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n name: Optional[str] = FieldFrom(schemas.core.BlockDocument)\n data: dict = FieldFrom(schemas.core.BlockDocument)\n block_schema_id: UUID = FieldFrom(schemas.core.BlockDocument)\n block_type_id: UUID = FieldFrom(schemas.core.BlockDocument)\n is_anonymous: bool = FieldFrom(schemas.core.BlockDocument)\n\n _validate_name_format = validator(\"name\", allow_reuse=True)(\n validate_block_document_name\n )\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # TODO: We should find an elegant way to reuse this logic from the origin model\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate
","text":" Bases: ActionBaseModel
Data used to create a block document reference.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n \"\"\"Data used to create a block document reference.\"\"\"\n\n id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n parent_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n reference_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n name: str = FieldFrom(schemas.core.BlockDocumentReference)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block document.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n block_schema_id: Optional[UUID] = Field(\n default=None, description=\"A block schema ID\"\n )\n data: dict = FieldFrom(schemas.core.BlockDocument)\n merge_existing_data: bool = True\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block schema.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n fields: dict = FieldFrom(schemas.core.BlockSchema)\n block_type_id: Optional[UUID] = FieldFrom(schemas.core.BlockSchema)\n capabilities: List[str] = FieldFrom(schemas.core.BlockSchema)\n version: str = FieldFrom(schemas.core.BlockSchema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a block type.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n name: str = FieldFrom(schemas.core.BlockType)\n slug: str = FieldFrom(schemas.core.BlockType)\n logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n schemas.core.BlockType\n )\n description: Optional[str] = FieldFrom(schemas.core.BlockType)\n code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n # validators\n _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n validate_block_type_slug\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a block type.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n schemas.core.BlockType\n )\n description: Optional[str] = FieldFrom(schemas.core.BlockType)\n code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n @classmethod\n def updatable_fields(cls) -> set:\n return get_class_fields_only(cls)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a concurrency limit.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n tag: str = FieldFrom(schemas.core.ConcurrencyLimit)\n concurrency_limit: int = FieldFrom(schemas.core.ConcurrencyLimit)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create","title":"ConcurrencyLimitV2Create
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a v2 concurrency limit.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitV2Create(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a v2 concurrency limit.\"\"\"\n\n active: bool = FieldFrom(schemas.core.ConcurrencyLimitV2)\n name: str = FieldFrom(schemas.core.ConcurrencyLimitV2)\n limit: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n active_slots: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n denied_slots: int = FieldFrom(schemas.core.ConcurrencyLimitV2)\n slot_decay_per_second: float = FieldFrom(schemas.core.ConcurrencyLimitV2)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update","title":"ConcurrencyLimitV2Update
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a v2 concurrency limit.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitV2Update(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a v2 concurrency limit.\"\"\"\n\n active: Optional[bool] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n name: Optional[str] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n limit: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n active_slots: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n denied_slots: Optional[int] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n slot_decay_per_second: Optional[float] = FieldFrom(schemas.core.ConcurrencyLimitV2)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a deployment.
Source code inprefect/server/schemas/actions.py
@experimental_field(\n \"work_pool_name\",\n group=\"work_pools\",\n when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n @root_validator\n def populate_schedules(cls, values):\n if not values.get(\"schedules\") and values.get(\"schedule\"):\n values[\"schedules\"] = [\n DeploymentScheduleCreate(\n schedule=values[\"schedule\"],\n active=values[\"is_schedule_active\"],\n )\n ]\n\n return values\n\n @root_validator(pre=True)\n def remove_old_fields(cls, values):\n # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n # and work_queue_name. This validator removes old fields provided\n # by older clients to avoid 422 errors.\n values_copy = copy(values)\n worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n if worker_pool_queue_id:\n warnings.warn(\n (\n \"`worker_pool_queue_id` is no longer supported for creating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n warnings.warn(\n (\n \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n \"`work_pool_name` are\"\n \"no longer supported for creating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n return values_copy\n\n name: str = FieldFrom(schemas.core.Deployment)\n flow_id: UUID = FieldFrom(schemas.core.Deployment)\n is_schedule_active: Optional[bool] = FieldFrom(schemas.core.Deployment)\n paused: bool = FieldFrom(schemas.core.Deployment)\n schedules: List[DeploymentScheduleCreate] = Field(\n default_factory=list,\n description=\"A list of schedules for the deployment.\",\n )\n enforce_parameter_schema: bool = FieldFrom(schemas.core.Deployment)\n parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(\n schemas.core.Deployment\n )\n parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n tags: List[str] = FieldFrom(schemas.core.Deployment)\n pull_steps: Optional[List[dict]] = FieldFrom(schemas.core.Deployment)\n\n manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the deployment's work pool.\",\n example=\"my-work-pool\",\n )\n storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n schemas.core.Deployment\n )\n description: Optional[str] = FieldFrom(schemas.core.Deployment)\n path: Optional[str] = FieldFrom(schemas.core.Deployment)\n version: Optional[str] = FieldFrom(schemas.core.Deployment)\n entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n\n def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = 
deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n\n @validator(\"parameters\")\n def _validate_parameters_conform_to_schema(cls, value, values):\n \"\"\"Validate that the parameters conform to the parameter schema.\"\"\"\n if values.get(\"enforce_parameter_schema\"):\n validate_values_conform_to_schema(\n value, values.get(\"parameter_openapi_schema\"), ignore_required=True\n )\n return value\n\n @validator(\"parameter_openapi_schema\")\n def _validate_parameter_openapi_schema(cls, value, values):\n \"\"\"Validate that the parameter_openapi_schema is a valid json schema.\"\"\"\n if values.get(\"enforce_parameter_schema\"):\n validate_schema(value)\n return value\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run from a deployment.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a deployment.
Source code inprefect/server/schemas/actions.py
@experimental_field(\n \"work_pool_name\",\n group=\"work_pools\",\n when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n @root_validator(pre=True)\n def remove_old_fields(cls, values):\n # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n # and work_queue_name. This validator removes old fields provided\n # by older clients to avoid 422 errors.\n values_copy = copy(values)\n worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n if worker_pool_queue_id:\n warnings.warn(\n (\n \"`worker_pool_queue_id` is no longer supported for updating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n warnings.warn(\n (\n \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n \"`work_pool_queue_name` are \"\n \"no longer supported for updating \"\n \"deployments. Please use `work_pool_name` and \"\n \"`work_queue_name` instead.\"\n ),\n UserWarning,\n )\n return values_copy\n\n version: Optional[str] = FieldFrom(schemas.core.Deployment)\n schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n schemas.core.Deployment\n )\n description: Optional[str] = FieldFrom(schemas.core.Deployment)\n is_schedule_active: bool = FieldFrom(schemas.core.Deployment)\n paused: bool = FieldFrom(schemas.core.Deployment)\n schedules: List[DeploymentScheduleCreate] = Field(\n default_factory=list,\n description=\"A list of schedules for the deployment.\",\n )\n parameters: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n tags: List[str] = FieldFrom(schemas.core.Deployment)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the deployment's work pool.\",\n example=\"my-work-pool\",\n )\n path: Optional[str] = FieldFrom(schemas.core.Deployment)\n infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n enforce_parameter_schema: Optional[bool] = Field(\n default=None,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
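A condensed sketch of the remove_old_fields pattern above (the model here is hypothetical and assumes pydantic v1 semantics): because these action models forbid extra fields, an unknown key sent by an old client would normally fail validation, so a pre=True root validator pops legacy keys, with a warning, before field validation runs.

import warnings
from copy import copy
from typing import Optional

from pydantic import BaseModel, root_validator

class ExampleUpdate(BaseModel):
    class Config:
        extra = "forbid"  # unknown keys normally raise a validation error

    work_pool_name: Optional[str] = None

    @root_validator(pre=True)
    def remove_old_fields(cls, values):
        # Runs on the raw input, so the legacy key is gone before
        # extra="forbid" ever sees it.
        values_copy = copy(values)
        if values_copy.pop("worker_pool_queue_id", None):
            warnings.warn(
                "`worker_pool_queue_id` is no longer supported; use"
                " `work_pool_name` and `work_queue_name` instead.",
                UserWarning,
            )
        return values_copy

# The legacy key is stripped instead of causing a 422-style validation error.
ExampleUpdate(worker_pool_queue_id="legacy-id", work_pool_name="my-pool")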
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration
","text":"Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.
Source code inprefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n \"\"\"Check that the combination of base_job_template defaults\n and infra_overrides conforms to the specified schema.\n \"\"\"\n variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n if variables_schema is not None:\n # jsonschema considers required fields, even if that field has a default,\n # to still be required. To get around this we remove the fields from\n # required if there is a default present.\n required = variables_schema.get(\"required\")\n properties = variables_schema.get(\"properties\")\n if required is not None and properties is not None:\n for k, v in properties.items():\n if \"default\" in v and k in required:\n required.remove(k)\n\n if variables_schema is not None:\n jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n name: str = FieldFrom(schemas.core.Flow)\n tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n # FlowRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the flow run to create\"\n )\n\n name: str = FieldFrom(schemas.core.FlowRun)\n flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n\n # DEPRECATED\n\n deployment_id: Optional[UUID] = Field(\n None,\n description=(\n \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n \" available.\"\n ),\n deprecated=True,\n )\n\n class Config(ActionBaseModel.Config):\n json_dumps = orjson_dumps_extra_compatible\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a flow run notification policy.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n is_active: bool = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n state_names: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n tags: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n block_document_id: UUID = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run notification policy.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n is_active: Optional[bool] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n state_names: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n tags: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n block_document_id: Optional[UUID] = FieldFrom(\n schemas.core.FlowRunNotificationPolicy\n )\n message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow run.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a flow.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a log.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n name: str = FieldFrom(schemas.core.Log)\n level: int = FieldFrom(schemas.core.Log)\n message: str = FieldFrom(schemas.core.Log)\n timestamp: DateTimeTZ = FieldFrom(schemas.core.Log)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n task_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a saved search.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n name: str = FieldFrom(schemas.core.SavedSearch)\n filters: List[schemas.core.SavedSearchFilter] = FieldFrom(schemas.core.SavedSearch)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a new state.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass StateCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n type: schemas.states.StateType = FieldFrom(schemas.states.State)\n name: Optional[str] = FieldFrom(schemas.states.State)\n message: Optional[str] = FieldFrom(schemas.states.State)\n data: Optional[Any] = FieldFrom(schemas.states.State)\n state_details: schemas.states.StateDetails = FieldFrom(schemas.states.State)\n\n # DEPRECATED\n\n timestamp: Optional[DateTimeTZ] = Field(\n default=None,\n repr=False,\n ignored=True,\n )\n id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n
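As the comments in the create models note, initial states must be supplied as StateCreate objects rather than full State objects. A brief hedged sketch (the run name and message are illustrative):

from uuid import uuid4

from prefect.server.schemas.actions import FlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

# Build an initial state and attach it to a flow run creation request.
initial_state = StateCreate(type=StateType.SCHEDULED, message="Created by an example script")
flow_run = FlowRunCreate(flow_id=uuid4(), name="demo-run", state=initial_state)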
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a task run.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n # TaskRunCreate states must be provided as StateCreate objects\n state: Optional[StateCreate] = Field(\n default=None, description=\"The state of the task run to create\"\n )\n\n name: str = FieldFrom(schemas.core.TaskRun)\n flow_run_id: Optional[UUID] = FieldFrom(schemas.core.TaskRun)\n task_key: str = FieldFrom(schemas.core.TaskRun)\n dynamic_key: str = FieldFrom(schemas.core.TaskRun)\n cache_key: Optional[str] = FieldFrom(schemas.core.TaskRun)\n cache_expiration: Optional[DateTimeTZ] = FieldFrom(schemas.core.TaskRun)\n task_version: Optional[str] = FieldFrom(schemas.core.TaskRun)\n empirical_policy: schemas.core.TaskRunPolicy = FieldFrom(schemas.core.TaskRun)\n tags: List[str] = FieldFrom(schemas.core.TaskRun)\n task_inputs: Dict[\n str,\n List[\n Union[\n schemas.core.TaskRunResult,\n schemas.core.Parameter,\n schemas.core.Constant,\n ]\n ],\n ] = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a task run.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n name: str = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a Variable.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n name: str = FieldFrom(schemas.core.Variable)\n value: str = FieldFrom(schemas.core.Variable)\n tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a Variable.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=\"The name of the variable\",\n example=\"my_variable\",\n max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n )\n value: Optional[str] = Field(\n default=None,\n description=\"The value of the variable\",\n example=\"my-value\",\n max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n )\n tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n # validators\n _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work pool.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkPool)\n description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n base_job_template: Dict[str, Any] = FieldFrom(schemas.core.WorkPool)\n is_paused: bool = FieldFrom(schemas.core.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n\n _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n validate_base_job_template\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work pool.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n is_paused: Optional[bool] = FieldFrom(schemas.core.WorkPool)\n base_job_template: Optional[Dict[str, Any]] = FieldFrom(schemas.core.WorkPool)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n\n _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n validate_base_job_template\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to create a work queue.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkQueue)\n description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n priority: Optional[int] = Field(\n default=None,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n\n # DEPRECATED\n\n filter: Optional[schemas.core.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
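A brief sketch of the priority semantics described above (queue names are illustrative): lower numbers rank higher, so the queue created with priority=1 is favored when work is pulled.

from prefect.server.schemas.actions import WorkQueueCreate

urgent = WorkQueueCreate(name="urgent", priority=1)        # highest priority
background = WorkQueueCreate(name="background", priority=10)  # lower priority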
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate
","text":" Bases: ActionBaseModel
Data used by the Prefect REST API to update a work queue.
Source code inprefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n name: str = FieldFrom(schemas.core.WorkQueue)\n description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n priority: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n last_polled: Optional[DateTimeTZ] = FieldFrom(schemas.core.WorkQueue)\n\n # DEPRECATED\n\n filter: Optional[schemas.core.QueueFilter] = Field(\n None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core
","text":"Full schemas of Prefect REST API objects.
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent
","text":" Bases: ORMBaseModel
An ORM representation of an agent.
Source code inprefect/server/schemas/core.py
class Agent(ORMBaseModel):\n \"\"\"An ORM representation of an agent\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the agent. If a name is not provided, it will be\"\n \" auto-generated.\"\n ),\n )\n work_queue_id: UUID = Field(\n default=..., description=\"The work queue with which the agent is associated.\"\n )\n last_activity_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time this agent polled for work.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument
","text":" Bases: ORMBaseModel
An ORM representation of a block document.
Source code inprefect/server/schemas/core.py
class BlockDocument(ORMBaseModel):\n \"\"\"An ORM representation of a block document.\"\"\"\n\n name: Optional[str] = Field(\n default=None,\n description=(\n \"The block document's name. Not required for anonymous block documents.\"\n ),\n )\n data: dict = Field(default_factory=dict, description=\"The block document's data\")\n block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The associated block schema\"\n )\n block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n block_type_name: Optional[str] = Field(\n default=None, description=\"The associated block type's name\"\n )\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n block_document_references: Dict[str, Dict[str, Any]] = Field(\n default_factory=dict, description=\"Record of the block document's references\"\n )\n is_anonymous: bool = Field(\n default=False,\n description=(\n \"Whether the block is anonymous (anonymous blocks are usually created by\"\n \" Prefect automatically)\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n # the BlockDocumentCreate subclass allows name=None\n # and will inherit this validator\n if v is not None:\n raise_on_name_with_banned_characters(v)\n return v\n\n @root_validator\n def validate_name_is_present_if_not_anonymous(cls, values):\n # anonymous blocks may have no name prior to actually being\n # stored in the database\n if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n raise ValueError(\"Names must be provided for block documents.\")\n return values\n\n @classmethod\n async def from_orm_model(\n cls,\n session,\n orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n include_secrets: bool = False,\n ):\n data = await orm_block_document.decrypt_data(session=session)\n # if secrets are not included, obfuscate them based on the schema's\n # `secret_fields`. Note this walks any nested blocks as well. If the\n # nested blocks were recovered from named blocks, they will already\n # be obfuscated, but if nested fields were hardcoded into the parent\n # blocks data, this is the only opportunity to obfuscate them.\n if not include_secrets:\n flat_data = dict_to_flatdict(data)\n # iterate over the (possibly nested) secret fields\n # and obfuscate their data\n for secret_field in orm_block_document.block_schema.fields.get(\n \"secret_fields\", []\n ):\n secret_key = tuple(secret_field.split(\".\"))\n if flat_data.get(secret_key) is not None:\n flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n # If a wildcard (*) is in the current secret key path, we take the portion\n # of the path before the wildcard and compare it to the same level of each\n # key. A match means that the field is nested under the secret key and should\n # be obfuscated.\n elif \"*\" in secret_key:\n wildcard_index = secret_key.index(\"*\")\n for data_key in flat_data.keys():\n if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n flat_data[data_key] = obfuscate(flat_data[data_key])\n data = flatdict_to_dict(flat_data)\n\n return cls(\n id=orm_block_document.id,\n created=orm_block_document.created,\n updated=orm_block_document.updated,\n name=orm_block_document.name,\n data=data,\n block_schema_id=orm_block_document.block_schema_id,\n block_schema=orm_block_document.block_schema,\n block_type_id=orm_block_document.block_type_id,\n block_type_name=orm_block_document.block_type_name,\n block_type=orm_block_document.block_type,\n is_anonymous=orm_block_document.is_anonymous,\n )\n
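A self-contained sketch of the obfuscation walk in from_orm_model above; the flatten helper and the "********" literal are simplified stand-ins for Prefect's own dict_to_flatdict and obfuscate utilities. Secret fields are addressed as dotted paths, and a "*" segment obfuscates every key nested under the prefix before it.

def flatten(d, prefix=()):
    """Flatten nested dicts into a dict keyed by tuple paths."""
    out = {}
    for k, v in d.items():
        if isinstance(v, dict) and v:
            out.update(flatten(v, prefix + (k,)))
        else:
            out[prefix + (k,)] = v
    return out

data = {"credentials": {"token": "abc123", "extras": {"password": "hunter2"}}}
secret_fields = ["credentials.token", "credentials.extras.*"]

flat_data = flatten(data)
for secret_field in secret_fields:
    secret_key = tuple(secret_field.split("."))
    if flat_data.get(secret_key) is not None:
        flat_data[secret_key] = "********"  # direct hit on a secret path
    elif "*" in secret_key:
        # Everything nested under the prefix before the wildcard is a secret.
        wildcard_index = secret_key.index("*")
        for data_key in flat_data:
            if secret_key[:wildcard_index] == data_key[:wildcard_index]:
                flat_data[data_key] = "********"

print(flat_data)
# {('credentials', 'token'): '********',
#  ('credentials', 'extras', 'password'): '********'}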
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference
","text":" Bases: ORMBaseModel
An ORM representation of a block document reference.
Source code inprefect/server/schemas/core.py
class BlockDocumentReference(ORMBaseModel):\n \"\"\"An ORM representation of a block document reference.\"\"\"\n\n parent_block_document_id: UUID = Field(\n default=..., description=\"ID of block document the reference is nested within\"\n )\n parent_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The block document the reference is nested within\"\n )\n reference_block_document_id: UUID = Field(\n default=..., description=\"ID of the nested block document\"\n )\n reference_block_document: Optional[BlockDocument] = Field(\n default=None, description=\"The nested block document\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n\n @root_validator\n def validate_parent_and_ref_are_different(cls, values):\n parent_id = values.get(\"parent_block_document_id\")\n ref_id = values.get(\"reference_block_document_id\")\n if parent_id and ref_id and parent_id == ref_id:\n raise ValueError(\n \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n \" the same\"\n )\n return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema
","text":" Bases: ORMBaseModel
An ORM representation of a block schema.
Source code inprefect/server/schemas/core.py
class BlockSchema(ORMBaseModel):\n \"\"\"An ORM representation of a block schema.\"\"\"\n\n checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n fields: dict = Field(\n default_factory=dict, description=\"The block schema's field schema\"\n )\n block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n block_type: Optional[BlockType] = Field(\n default=None, description=\"The associated block type\"\n )\n capabilities: List[str] = Field(\n default_factory=list,\n description=\"A list of Block capabilities\",\n )\n version: str = Field(\n default=DEFAULT_BLOCK_SCHEMA_VERSION,\n description=\"Human readable identifier for the block schema\",\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference
","text":" Bases: ORMBaseModel
An ORM representation of a block schema reference.
Source code inprefect/server/schemas/core.py
class BlockSchemaReference(ORMBaseModel):\n \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n parent_block_schema_id: UUID = Field(\n default=..., description=\"ID of block schema the reference is nested within\"\n )\n parent_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The block schema the reference is nested within\"\n )\n reference_block_schema_id: UUID = Field(\n default=..., description=\"ID of the nested block schema\"\n )\n reference_block_schema: Optional[BlockSchema] = Field(\n default=None, description=\"The nested block schema\"\n )\n name: str = Field(\n default=..., description=\"The name that the reference is nested under\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType
","text":" Bases: ORMBaseModel
An ORM representation of a block type.
Source code inprefect/server/schemas/core.py
class BlockType(ORMBaseModel):\n \"\"\"An ORM representation of a block type\"\"\"\n\n name: str = Field(default=..., description=\"A block type's name\")\n slug: str = Field(default=..., description=\"A block type's slug\")\n logo_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's logo\"\n )\n documentation_url: Optional[HttpUrl] = Field(\n default=None, description=\"Web URL for the block type's documentation\"\n )\n description: Optional[str] = Field(\n default=None,\n description=\"A short blurb about the corresponding block's intended use\",\n )\n code_example: Optional[str] = Field(\n default=None,\n description=\"A code snippet demonstrating use of the corresponding block\",\n )\n is_protected: bool = Field(\n default=False, description=\"Protected block types cannot be modified via API.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit
","text":" Bases: ORMBaseModel
An ORM representation of a concurrency limit.
Source code inprefect/server/schemas/core.py
class ConcurrencyLimit(ORMBaseModel):\n \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n tag: str = Field(\n default=..., description=\"A tag the concurrency limit is applied to.\"\n )\n concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: List[UUID] = Field(\n default_factory=list,\n description=\"A list of active run ids using a concurrency slot\",\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimitV2","title":"ConcurrencyLimitV2
","text":" Bases: ORMBaseModel
An ORM representation of a v2 concurrency limit.
Source code inprefect/server/schemas/core.py
class ConcurrencyLimitV2(ORMBaseModel):\n \"\"\"An ORM representation of a v2 concurrency limit.\"\"\"\n\n active: bool = Field(\n default=True, description=\"Whether the concurrency limit is active.\"\n )\n name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n limit: int = Field(default=..., description=\"The concurrency limit.\")\n active_slots: int = Field(default=0, description=\"The number of active slots.\")\n denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n slot_decay_per_second: float = Field(\n default=0,\n description=\"The decay rate for active slots when used as a rate limit.\",\n )\n avg_slot_occupancy_seconds: float = Field(\n default=2.0, description=\"The average amount of time a slot is occupied.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration
","text":" Bases: ORMBaseModel
An ORM representation of account info.
Source code inprefect/server/schemas/core.py
class Configuration(ORMBaseModel):\n \"\"\"An ORM representation of account info.\"\"\"\n\n key: str = Field(default=..., description=\"Account info key\")\n value: dict = Field(default=..., description=\"Account info\")\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment
","text":" Bases: ORMBaseModel
An ORM representation of deployment data.
Source code inprefect/server/schemas/core.py
class Deployment(ORMBaseModel):\n \"\"\"An ORM representation of deployment data.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the deployment.\")\n version: Optional[str] = Field(\n default=None, description=\"An optional version for the deployment.\"\n )\n description: Optional[str] = Field(\n default=None, description=\"A description for the deployment.\"\n )\n flow_id: UUID = Field(\n default=..., description=\"The flow id associated with the deployment.\"\n )\n schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n default=None, description=\"A schedule for the deployment.\"\n )\n is_schedule_active: bool = Field(\n default=True, description=\"Whether or not the deployment schedule is active.\"\n )\n paused: bool = Field(\n default=False, description=\"Whether or not the deployment is paused.\"\n )\n schedules: List[DeploymentSchedule] = Field(\n default_factory=list, description=\"A list of schedules for the deployment.\"\n )\n infra_overrides: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Overrides to apply to the base infrastructure block at runtime.\",\n )\n parameters: Dict[str, Any] = Field(\n default_factory=dict,\n description=\"Parameters for flow runs scheduled by the deployment.\",\n )\n pull_steps: Optional[List[dict]] = Field(\n default=None,\n description=\"Pull steps for cloning and running this deployment.\",\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the deployment\",\n example=[\"tag-1\", \"tag-2\"],\n )\n work_queue_name: Optional[str] = Field(\n default=None,\n description=(\n \"The work queue for the deployment. If no work queue is set, work will not\"\n \" be scheduled.\"\n ),\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The last time the deployment was polled for status updates.\",\n )\n parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"The parameter schema of the flow, including defaults.\",\n )\n path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the working directory for the workflow, relative to remote\"\n \" storage or an absolute path.\"\n ),\n )\n entrypoint: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the entrypoint for the workflow, relative to the `path`.\"\n ),\n )\n manifest_path: Optional[str] = Field(\n default=None,\n description=(\n \"The path to the flow's manifest file, relative to the chosen storage.\"\n ),\n )\n storage_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining storage used for this flow.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use for flow runs.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this deployment.\",\n )\n updated_by: Optional[UpdatedBy] = Field(\n default=None,\n description=\"Optional information about the updater of this deployment.\",\n )\n work_queue_id: UUID = Field(\n default=None,\n description=(\n \"The id of the work pool queue to which this deployment is assigned.\"\n ),\n )\n enforce_parameter_schema: bool = Field(\n default=False,\n description=(\n \"Whether or not the deployment should enforce the parameter schema.\"\n ),\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow
","text":" Bases: ORMBaseModel
An ORM representation of flow data.
Source code inprefect/server/schemas/core.py
class Flow(ORMBaseModel):\n \"\"\"An ORM representation of flow data.\"\"\"\n\n name: str = Field(\n default=..., description=\"The name of the flow\", example=\"my-flow\"\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of flow tags\",\n example=[\"tag-1\", \"tag-2\"],\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun
","text":" Bases: ORMBaseModel
An ORM representation of flow run data.
Source code inprefect/server/schemas/core.py
class FlowRun(ORMBaseModel):\n \"\"\"An ORM representation of flow run data.\"\"\"\n\n name: str = Field(\n default_factory=lambda: generate_slug(2),\n description=(\n \"The name of the flow run. Defaults to a random slug if not specified.\"\n ),\n example=\"my-flow-run\",\n )\n flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the flow run's current state.\"\n )\n deployment_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"The id of the deployment associated with this flow run, if available.\"\n ),\n )\n work_queue_name: Optional[str] = Field(\n default=None, description=\"The work queue that handled this flow run.\"\n )\n flow_version: Optional[str] = Field(\n default=None,\n description=\"The version of the flow executed in this flow run.\",\n example=\"1.0\",\n )\n parameters: dict = Field(\n default_factory=dict, description=\"Parameters for the flow run.\"\n )\n idempotency_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n \" run is not created multiple times.\"\n ),\n )\n context: dict = Field(\n default_factory=dict,\n description=\"Additional context for the flow run.\",\n example={\"my_var\": \"my_val\"},\n )\n empirical_policy: FlowRunPolicy = Field(\n default_factory=FlowRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags on the flow run\",\n example=[\"tag-1\", \"tag-2\"],\n )\n parent_task_run_id: Optional[UUID] = Field(\n default=None,\n description=(\n \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n \" flow used to track subflow state.\"\n ),\n )\n\n state_type: Optional[states.StateType] = Field(\n default=None, description=\"The type of the current flow run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current flow run state.\"\n )\n run_count: int = Field(\n default=0, description=\"The number of times the flow run was executed.\"\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The flow run's expected start time.\",\n )\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the flow run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. If the flow run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of the total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n auto_scheduled: bool = Field(\n default=False,\n description=\"Whether or not the flow run was automatically scheduled.\",\n )\n infrastructure_document_id: Optional[UUID] = Field(\n default=None,\n description=\"The block document defining infrastructure to use for this flow run.\",\n )\n infrastructure_pid: Optional[str] = Field(\n default=None,\n description=\"The id of the flow run as returned by an infrastructure block.\",\n )\n created_by: Optional[CreatedBy] = Field(\n default=None,\n description=\"Optional information about the creator of this flow run.\",\n )\n work_queue_id: Optional[UUID] = Field(\n default=None, description=\"The id of the run's work pool queue.\"\n )\n\n # relationships\n # flow: Flow = None\n # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n state: Optional[states.State] = Field(\n default=None, description=\"The current state of the flow run.\"\n )\n # parent_task_run: \"TaskRun\" = None\n\n job_variables: Optional[Dict[str, Any]] = Field(\n default=None,\n description=\"Variables used as overrides in the base job template\",\n )\n\n @validator(\"name\", pre=True)\n def set_name(cls, name):\n return name or generate_slug(2)\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimated times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRun):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
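A hedged sketch of the equality semantics in __eq__ above (this assumes the model can be constructed directly as shown): the rolling estimate fields are excluded from the comparison, so two snapshots of the same run taken at different moments still compare equal.

from datetime import timedelta
from uuid import uuid4

from prefect.server.schemas.core import FlowRun

a = FlowRun(flow_id=uuid4(), name="demo")
b = a.copy(update={"estimated_run_time": timedelta(seconds=5)})

assert a == b                 # rolling estimates are ignored by __eq__
assert a.dict() != b.dict()   # even though the raw payloads differ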
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy
","text":" Bases: ORMBaseModel
An ORM representation of a flow run notification policy.
Source code inprefect/server/schemas/core.py
class FlowRunNotificationPolicy(ORMBaseModel):\n \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n is_active: bool = Field(\n default=True, description=\"Whether the policy is currently active\"\n )\n state_names: List[str] = Field(\n default=..., description=\"The flow run states that trigger notifications\"\n )\n tags: List[str] = Field(\n default=...,\n description=\"The flow run tags that trigger notifications (set [] to disable)\",\n )\n block_document_id: UUID = Field(\n default=..., description=\"The block document ID used for sending notifications\"\n )\n message_template: Optional[str] = Field(\n default=None,\n description=(\n \"A templatable notification message. Use {braces} to add variables.\"\n \" Valid variables include:\"\n f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n ),\n example=(\n \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n \" {flow_run_state_name}.\"\n ),\n )\n\n @validator(\"message_template\")\n def validate_message_template_variables(cls, v):\n if v is not None:\n try:\n v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n except KeyError as exc:\n raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n return v\n
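A minimal sketch (assuming prefect is installed) of the template validation above; the variable names are taken from the field's own example:

from uuid import uuid4

from prefect.server.schemas.core import FlowRunNotificationPolicy

policy = FlowRunNotificationPolicy(
    state_names=["Failed"],
    tags=[],  # per the field description, [] disables tag-based filtering
    block_document_id=uuid4(),
    message_template="Flow run {flow_run_name} entered state {flow_run_state_name}.",
)
# A template using an unknown variable such as {not_a_variable} would raise
# a ValueError from validate_message_template_variables instead.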
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy
","text":" Bases: PrefectBaseModel
Defines how a flow run should retry.
Source code inprefect/server/schemas/core.py
class FlowRunPolicy(PrefectBaseModel):\n \"\"\"Defines of how a flow run should retry.\"\"\"\n\n # TODO: Determine how to separate between infrastructure and within-process level\n # retries\n max_retries: int = Field(\n default=0,\n description=(\n \"The maximum number of retries. Field is not used. Please use `retries`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retry_delay_seconds: float = Field(\n default=0,\n description=(\n \"The delay between retries. Field is not used. Please use `retry_delay`\"\n \" instead.\"\n ),\n deprecated=True,\n )\n retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n retry_delay: Optional[int] = Field(\n default=None, description=\"The delay time between retries, in seconds.\"\n )\n pause_keys: Optional[set] = Field(\n default_factory=set, description=\"Tracks pauses this run has observed.\"\n )\n resuming: Optional[bool] = Field(\n default=False, description=\"Indicates if this run is resuming from a pause.\"\n )\n\n @root_validator\n def populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields
","text":"If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.
Source code inprefect/server/schemas/core.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n \"\"\"\n If deprecated fields are provided, populate the corresponding new fields\n to preserve orchestration behavior.\n \"\"\"\n if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n values[\"retries\"] = values[\"max_retries\"]\n if (\n not values.get(\"retry_delay\", None)\n and values.get(\"retry_delay_seconds\", 0) != 0\n ):\n values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n return values\n
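A minimal sketch (assuming prefect is installed) of the migration this validator performs; deprecated values are copied only when the new fields are unset:

from prefect.server.schemas.core import FlowRunPolicy

policy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)
print(policy.retries, policy.retry_delay)  # 3 10

# Explicit new-style values take precedence over the deprecated fields.
policy = FlowRunPolicy(max_retries=3, retries=5)
print(policy.retries)  # 5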
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunnerSettings","title":"FlowRunnerSettings
","text":" Bases: PrefectBaseModel
An API schema for passing details about the flow runner.
This schema is agnostic to the types and configuration provided by clients.
Source code inprefect/server/schemas/core.py
class FlowRunnerSettings(PrefectBaseModel):\n \"\"\"\n An API schema for passing details about the flow runner.\n\n This schema is agnostic to the types and configuration provided by clients\n \"\"\"\n\n type: Optional[str] = Field(\n default=None,\n description=(\n \"The type of the flow runner which can be used by the client for\"\n \" dispatching.\"\n ),\n )\n config: Optional[dict] = Field(\n default=None, description=\"The configuration for the given flow runner type.\"\n )\n\n # The following is required for composite compatibility in the ORM\n\n def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n # Pydantic does not support positional arguments so they must be converted to\n # keyword arguments\n super().__init__(type=type, config=config, **kwargs)\n\n def __composite_values__(self):\n return self.type, self.config\n
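Because of the custom __init__, this schema can be constructed positionally; a minimal sketch (the type and config values are illustrative only):

from prefect.server.schemas.core import FlowRunnerSettings

settings = FlowRunnerSettings("kubernetes", {"namespace": "prefect"})
print(settings.__composite_values__())  # ('kubernetes', {'namespace': 'prefect'})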
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log
","text":" Bases: ORMBaseModel
An ORM representation of log data.
Source code inprefect/server/schemas/core.py
class Log(ORMBaseModel):\n \"\"\"An ORM representation of log data.\"\"\"\n\n name: str = Field(default=..., description=\"The logger name.\")\n level: int = Field(default=..., description=\"The log level.\")\n message: str = Field(default=..., description=\"The log message.\")\n timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run ID associated with the log.\"\n )\n task_run_id: Optional[UUID] = Field(\n default=None, description=\"The task run ID associated with the log.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter
","text":" Bases: PrefectBaseModel
Filter criteria definition for a work queue.
Source code inprefect/server/schemas/core.py
class QueueFilter(PrefectBaseModel):\n \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n tags: Optional[List[str]] = Field(\n default=None,\n description=\"Only include flow runs with these tags in the work queue.\",\n )\n deployment_ids: Optional[List[UUID]] = Field(\n default=None,\n description=\"Only include flow runs from these deployments in the work queue.\",\n )\n
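A minimal sketch (assuming prefect is installed; the tag and deployment id are illustrative):

from uuid import uuid4

from prefect.server.schemas.core import QueueFilter

queue_filter = QueueFilter(tags=["prod"], deployment_ids=[uuid4()])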
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch
","text":" Bases: ORMBaseModel
An ORM representation of saved search data. Represents a set of filter criteria.
Source code inprefect/server/schemas/core.py
class SavedSearch(ORMBaseModel):\n \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n name: str = Field(default=..., description=\"The name of the saved search.\")\n filters: List[SavedSearchFilter] = Field(\n default_factory=list, description=\"The filter set for the saved search.\"\n )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter
","text":" Bases: PrefectBaseModel
A filter for a saved search model. Intended for use by the Prefect UI.
Source code inprefect/server/schemas/core.py
class SavedSearchFilter(PrefectBaseModel):\n \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n object: str = Field(default=..., description=\"The object over which to filter.\")\n property: str = Field(\n default=..., description=\"The property of the object on which to filter.\"\n )\n type: str = Field(default=..., description=\"The type of the property.\")\n operation: str = Field(\n default=...,\n description=\"The operator to apply to the object. For example, `equals`.\",\n )\n value: Any = Field(\n default=..., description=\"A JSON-compatible value for the filter.\"\n )\n
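A minimal sketch combining the two schemas above; the object and property names are illustrative, and only equals comes from the field's own example:

from prefect.server.schemas.core import SavedSearch, SavedSearchFilter

search = SavedSearch(
    name="failed-runs",
    filters=[
        SavedSearchFilter(
            object="flow_run",      # illustrative object name
            property="state_name",  # illustrative property name
            type="string",
            operation="equals",     # from the field's example
            value="Failed",
        )
    ],
)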
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun
","text":" Bases: ORMBaseModel
An ORM representation of task run data.
Source code inprefect/server/schemas/core.py
class TaskRun(ORMBaseModel):\n \"\"\"An ORM representation of task run data.\"\"\"\n\n name: str = Field(default_factory=lambda: generate_slug(2), example=\"my-task-run\")\n flow_run_id: Optional[UUID] = Field(\n default=None, description=\"The flow run id of the task run.\"\n )\n task_key: str = Field(\n default=..., description=\"A unique identifier for the task being run.\"\n )\n dynamic_key: str = Field(\n default=...,\n description=(\n \"A dynamic key used to differentiate between multiple runs of the same task\"\n \" within the same flow run.\"\n ),\n )\n cache_key: Optional[str] = Field(\n default=None,\n description=(\n \"An optional cache key. If a COMPLETED state associated with this cache key\"\n \" is found, the cached COMPLETED state will be used instead of executing\"\n \" the task run.\"\n ),\n )\n cache_expiration: Optional[DateTimeTZ] = Field(\n default=None, description=\"Specifies when the cached state should expire.\"\n )\n task_version: Optional[str] = Field(\n default=None, description=\"The version of the task being run.\"\n )\n empirical_policy: TaskRunPolicy = Field(\n default_factory=TaskRunPolicy,\n )\n tags: List[str] = Field(\n default_factory=list,\n description=\"A list of tags for the task run.\",\n example=[\"tag-1\", \"tag-2\"],\n )\n state_id: Optional[UUID] = Field(\n default=None, description=\"The id of the current task run state.\"\n )\n task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n default_factory=dict,\n description=(\n \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n ),\n )\n state_type: Optional[states.StateType] = Field(\n default=None, description=\"The type of the current task run state.\"\n )\n state_name: Optional[str] = Field(\n default=None, description=\"The name of the current task run state.\"\n )\n run_count: int = Field(\n default=0, description=\"The number of times the task run has been executed.\"\n )\n flow_run_run_count: int = Field(\n default=0,\n description=(\n \"If the parent flow has retried, this indicates the flow retry this run is\"\n \" associated with.\"\n ),\n )\n expected_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The task run's expected start time.\",\n )\n\n # the next scheduled start time will be populated\n # whenever the run is in a scheduled state\n next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"The next time the task run is scheduled to start.\",\n )\n start_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual start time.\"\n )\n end_time: Optional[DateTimeTZ] = Field(\n default=None, description=\"The actual end time.\"\n )\n total_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=(\n \"Total run time. 
If the task run was executed multiple times, the time of\"\n \" each run will be summed.\"\n ),\n )\n estimated_run_time: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"A real-time estimate of total run time.\",\n )\n estimated_start_time_delta: datetime.timedelta = Field(\n default=datetime.timedelta(0),\n description=\"The difference between actual and expected start time.\",\n )\n\n # relationships\n # flow_run: FlowRun = None\n # subflow_runs: List[FlowRun] = Field(default_factory=list)\n state: Optional[states.State] = Field(\n default=None, description=\"The current task run state.\"\n )\n\n @validator(\"name\", pre=True)\n def set_name(cls, name):\n return name or generate_slug(2)\n\n @validator(\"cache_key\")\n def validate_cache_key_length(cls, cache_key):\n if cache_key and len(cache_key) > PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value():\n raise ValueError(\n \"Cache key exceeded maximum allowed length of\"\n f\" {PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value()} characters.\"\n )\n return cache_key\n
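A minimal sketch of the cache-key length guard enforced by validate_cache_key_length (the import path for the setting is an assumption here):

from prefect.server.schemas.core import TaskRun
from prefect.settings import PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH  # assumed import path

run = TaskRun(task_key="my-task", dynamic_key="0", cache_key="inputs-v1")

too_long = "x" * (PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value() + 1)
# TaskRun(task_key="my-task", dynamic_key="0", cache_key=too_long) raises ValueError.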
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool
","text":" Bases: ORMBaseModel
An ORM representation of a work pool.
Source code inprefect/server/schemas/core.py
class WorkPool(ORMBaseModel):\n \"\"\"An ORM representation of a work pool\"\"\"\n\n name: str = Field(\n description=\"The name of the work pool.\",\n )\n description: Optional[str] = Field(\n default=None, description=\"A description of the work pool.\"\n )\n type: str = Field(description=\"The work pool type.\")\n base_job_template: Dict[str, Any] = Field(\n default_factory=dict, description=\"The work pool's base job template.\"\n )\n is_paused: bool = Field(\n default=False,\n description=\"Pausing the work pool stops the delivery of all work.\",\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"A concurrency limit for the work pool.\"\n )\n # this required field has a default of None so that the custom validator\n # below will be called and produce a more helpful error message\n default_queue_id: UUID = Field(\n None, description=\"The id of the pool's default queue.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n\n @validator(\"default_queue_id\", always=True)\n def helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id
","text":"Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate
model in that case.
prefect/server/schemas/core.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n \"\"\"\n Default queue ID is required because all pools must have a default queue\n ID, but it represents a circular foreign key relationship to a\n WorkQueue (which can't be created until the work pool exists).\n Therefore, while this field can *technically* be null, it shouldn't be.\n This should only be an issue when creating new pools, as reading\n existing ones will always have this field populated. This custom error\n message will help users understand that they should use the\n `actions.WorkPoolCreate` model in that case.\n \"\"\"\n if v is None:\n raise ValueError(\n \"`default_queue_id` is a required field. If you are \"\n \"creating a new WorkPool and don't have a queue \"\n \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n )\n return v\n
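A minimal sketch of the failure mode this validator documents (assuming actions.WorkPoolCreate accepts the same name and type fields):

from prefect.server.schemas.core import WorkPool
from prefect.server.schemas.actions import WorkPoolCreate

try:
    WorkPool(name="my-pool", type="process")  # no default_queue_id yet
except ValueError as exc:
    print(exc)  # the message above, pointing at actions.WorkPoolCreate

WorkPoolCreate(name="my-pool", type="process")  # the model intended for creation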
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue
","text":" Bases: ORMBaseModel
An ORM representation of a work queue.
Source code inprefect/server/schemas/core.py
class WorkQueue(ORMBaseModel):\n \"\"\"An ORM representation of a work queue\"\"\"\n\n name: str = Field(default=..., description=\"The name of the work queue.\")\n description: Optional[str] = Field(\n default=\"\", description=\"An optional description for the work queue.\"\n )\n is_paused: bool = Field(\n default=False, description=\"Whether or not the work queue is paused.\"\n )\n concurrency_limit: Optional[conint(ge=0)] = Field(\n default=None, description=\"An optional concurrency limit for the work queue.\"\n )\n priority: conint(ge=1) = Field(\n default=1,\n description=(\n \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n ),\n )\n # Will be required after a future migration\n work_pool_id: Optional[UUID] = Field(\n default=None, description=\"The work pool with which the queue is associated.\"\n )\n filter: Optional[QueueFilter] = Field(\n default=None,\n description=\"DEPRECATED: Filter criteria for the work queue.\",\n deprecated=True,\n )\n last_polled: Optional[DateTimeTZ] = Field(\n default=None, description=\"The last time an agent polled this queue for work.\"\n )\n\n @validator(\"name\", check_fields=False)\n def validate_name_characters(cls, v):\n raise_on_name_with_banned_characters(v)\n return v\n
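A minimal sketch of the priority semantics described above (queue names are illustrative):

from prefect.server.schemas.core import WorkQueue

urgent = WorkQueue(name="urgent", priority=1)     # 1 is the highest priority
backlog = WorkQueue(name="backlog", priority=10)  # served after lower values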
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy
","text":" Bases: PrefectBaseModel
prefect/server/schemas/core.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n maximum_late_runs: Optional[int] = Field(\n default=0,\n description=(\n \"The maximum number of late runs in the work queue before it is deemed\"\n \" unhealthy. Defaults to `0`.\"\n ),\n )\n maximum_seconds_since_last_polled: Optional[int] = Field(\n default=60,\n description=(\n \"The maximum number of time in seconds elapsed since work queue has been\"\n \" polled before it is deemed unhealthy. Defaults to `60`.\"\n ),\n )\n\n def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n ) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status
","text":"Given empirical information about the state of the work queue, evaluate its health status.
Parameters:

Name | Type | Description | Default
late_runs_count | int | the count of late runs for the work queue. | required
last_polled | Optional[DateTimeTZ] | the last time the work queue was polled, if available. | None

Returns:

Type | Description
bool | whether or not the work queue is healthy.
Source code inprefect/server/schemas/core.py
def evaluate_health_status(\n self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n \"\"\"\n Given empirical information about the state of the work queue, evaluate its health status.\n\n Args:\n late_runs: the count of late runs for the work queue.\n last_polled: the last time the work queue was polled, if available.\n\n Returns:\n bool: whether or not the work queue is healthy.\n \"\"\"\n healthy = True\n if (\n self.maximum_late_runs is not None\n and late_runs_count > self.maximum_late_runs\n ):\n healthy = False\n\n if self.maximum_seconds_since_last_polled is not None:\n if (\n last_polled is None\n or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n > self.maximum_seconds_since_last_polled\n ):\n healthy = False\n\n return healthy\n
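A minimal usage sketch (assuming pendulum is available, as in the implementation):

import pendulum

from prefect.server.schemas.core import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy(
    maximum_late_runs=0, maximum_seconds_since_last_polled=60
)

print(policy.evaluate_health_status(0, pendulum.now("UTC")))  # True
print(policy.evaluate_health_status(2, pendulum.now("UTC")))  # False: late runs
print(policy.evaluate_health_status(0, None))                 # False: never polled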
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker
","text":" Bases: ORMBaseModel
An ORM representation of a worker.
Source code inprefect/server/schemas/core.py
class Worker(ORMBaseModel):\n \"\"\"An ORM representation of a worker\"\"\"\n\n name: str = Field(description=\"The name of the worker.\")\n work_pool_id: UUID = Field(\n description=\"The work pool with which the queue is associated.\"\n )\n last_heartbeat_time: datetime.datetime = Field(\n None, description=\"The last time the worker process sent a heartbeat.\"\n )\n heartbeat_interval_seconds: Optional[int] = Field(\n default=None,\n description=(\n \"The number of seconds to expect between heartbeats sent by the worker.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters
","text":"Schemas that define Prefect REST API filtering operations.
Each filter schema includes logic for transforming itself into a SQL where
clause.
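For example, a minimal sketch (here db stands for a PrefectDBInterface instance supplied by the server's database layer; it is not constructed in this snippet):

from prefect.server.schemas.filters import ArtifactFilterType

type_filter = ArtifactFilterType(any_=["table"], not_any_=["markdown"])
where_clause = type_filter.as_sql_filter(db)  # a SQLAlchemy boolean clause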
ArtifactCollectionFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter artifact collections. Only artifact collections matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactCollectionFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactCollectionFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.latest_id is not None:\n filters.append(self.latest_id.as_sql_filter(db))\n if self.key is not None:\n filters.append(self.key.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.flow_run_id
.
prefect/server/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.key
.
prefect/server/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key. Should return all rows in \"\n \"the ArtifactCollection table if specified.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.key.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n if self.exists_ is not None:\n filters.append(\n db.ArtifactCollection.key.isnot(None)\n if self.exists_\n else db.ArtifactCollection.key.is_(None)\n )\n return filters\n
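Note that _get_filter_list wraps like_ in %...% itself, so a plain substring also matches; a minimal sketch:

from prefect.server.schemas.filters import ArtifactCollectionFilterKey

explicit = ArtifactCollectionFilterKey(like_="my-artifact-%")  # explicit wildcard
substring = ArtifactCollectionFilterKey(like_="artifact")      # matched as %artifact%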
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.latest_id
.
prefect/server/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.task_run_id
.
prefect/server/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by ArtifactCollection.type
.
prefect/server/schemas/filters.py
class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.ArtifactCollection.type.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter artifacts. Only artifacts matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class ArtifactFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n id: Optional[ArtifactFilterId] = Field(\n default=None, description=\"Filter criteria for `Artifact.id`\"\n )\n key: Optional[ArtifactFilterKey] = Field(\n default=None, description=\"Filter criteria for `Artifact.key`\"\n )\n flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n )\n task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n )\n type: Optional[ArtifactFilterType] = Field(\n default=None, description=\"Filter criteria for `Artifact.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.key is not None:\n filters.append(self.key.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.flow_run_id
.
prefect/server/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.id
.
prefect/server/schemas/filters.py
class ArtifactFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of artifact ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.key
.
prefect/server/schemas/filters.py
class ArtifactFilterKey(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact keys to include\"\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match artifact keys against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-artifact-%\",\n )\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If `true`, only include artifacts with a non-null key. If `false`, \"\n \"only include artifacts with a null key.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.key.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n if self.exists_ is not None:\n filters.append(\n db.Artifact.key.isnot(None)\n if self.exists_\n else db.Artifact.key.is_(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.task_run_id
.
prefect/server/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by Artifact.type
.
prefect/server/schemas/filters.py
class ArtifactFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `Artifact.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of artifact types to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Artifact.type.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.Artifact.type.notin_(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n id: Optional[BlockDocumentFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.id`\"\n )\n is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n # default is to exclude anonymous blocks\n BlockDocumentFilterIsAnonymous(eq_=False),\n description=(\n \"Filter criteria for `BlockDocument.is_anonymous`. \"\n \"Defaults to excluding anonymous blocks.\"\n ),\n )\n block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n )\n name: Optional[BlockDocumentFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockDocument.name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.is_anonymous is not None:\n filters.append(self.is_anonymous.as_sql_filter(db))\n if self.block_type_id is not None:\n filters.append(self.block_type_id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n return filters\n
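A minimal sketch of the anonymous-block default called out above:

from prefect.server.schemas.filters import (
    BlockDocumentFilter,
    BlockDocumentFilterIsAnonymous,
)

default_filter = BlockDocumentFilter()  # anonymous blocks are excluded by default

include_all = BlockDocumentFilter(
    is_anonymous=BlockDocumentFilterIsAnonymous(eq_=None)  # eq_=None adds no clause
)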
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.block_type_id
.
prefect/server/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.id
.
prefect/server/schemas/filters.py
class BlockDocumentFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.is_anonymous
.
prefect/server/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter block documents for only those that are or are not anonymous.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by BlockDocument.name
.
prefect/server/schemas/filters.py
class BlockDocumentFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of block names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match block names against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-block%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockDocument.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.BlockDocument.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter BlockSchemas.
Source code inprefect/server/schemas/filters.py
class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter BlockSchemas\"\"\"\n\n block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n )\n block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n )\n id: Optional[BlockSchemaFilterId] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.id`\"\n )\n version: Optional[BlockSchemaFilterVersion] = Field(\n default=None, description=\"Filter criteria for `BlockSchema.version`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.block_type_id is not None:\n filters.append(self.block_type_id.as_sql_filter(db))\n if self.block_capabilities is not None:\n filters.append(self.block_capabilities.as_sql_filter(db))\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.version is not None:\n filters.append(self.version.as_sql_filter(db))\n\n return filters\n
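A minimal sketch of capability filtering; per the capabilities filter documented below, a schema matches only if it has a superset of the listed capabilities:

from prefect.server.schemas.filters import (
    BlockSchemaFilter,
    BlockSchemaFilterCapabilities,
)

schema_filter = BlockSchemaFilter(
    block_capabilities=BlockSchemaFilterCapabilities(
        all_=["read-storage", "write-storage"]
    )
)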
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.block_type_id
.
prefect/server/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of block type ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.capabilities
prefect/server/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"write-storage\", \"read-storage\"],\n description=(\n \"A list of block capabilities. Block entities will be returned only if an\"\n \" associated block schema has a superset of the defined capabilities.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n return filters\n
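A short sketch of the superset semantics described above (capability names are illustrative):
from prefect.server.schemas.filters import BlockSchemaFilterCapabilities\n\n# Matches block schemas whose capabilities include BOTH keys:\n# [\"read-storage\", \"write-storage\", \"delete-storage\"] matches,\n# [\"read-storage\"] alone does not.\ncap_filter = BlockSchemaFilterCapabilities(all_=[\"write-storage\", \"read-storage\"])\n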
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.id
Source code inprefect/server/schemas/filters.py
class BlockSchemaFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockSchema.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion
","text":" Bases: PrefectFilterBaseModel
Filter by BlockSchema.version
prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockSchema.version`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n example=[\"2.0.0\", \"2.1.0\"],\n description=\"A list of block schema versions.\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockSchema.version.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter
","text":" Bases: PrefectFilterBaseModel
Filter BlockTypes
Source code inprefect/server/schemas/filters.py
class BlockTypeFilter(PrefectFilterBaseModel):\n \"\"\"Filter BlockTypes\"\"\"\n\n name: Optional[BlockTypeFilterName] = Field(\n default=None, description=\"Filter criteria for `BlockType.name`\"\n )\n\n slug: Optional[BlockTypeFilterSlug] = Field(\n default=None, description=\"Filter criteria for `BlockType.slug`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.slug is not None:\n filters.append(self.slug.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by BlockType.name
prefect/server/schemas/filters.py
class BlockTypeFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockType.name`\"\"\"\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.like_ is not None:\n filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug
","text":" Bases: PrefectFilterBaseModel
Filter by BlockType.slug
prefect/server/schemas/filters.py
class BlockTypeFilterSlug(PrefectFilterBaseModel):\n \"\"\"Filter by `BlockType.slug`\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of slugs to match\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.BlockType.slug.in_(self.any_))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for deployments. Only deployments matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class DeploymentFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n id: Optional[DeploymentFilterId] = Field(\n default=None, description=\"Filter criteria for `Deployment.id`\"\n )\n name: Optional[DeploymentFilterName] = Field(\n default=None, description=\"Filter criteria for `Deployment.name`\"\n )\n paused: Optional[DeploymentFilterPaused] = Field(\n default=None, description=\"Filter criteria for `Deployment.paused`\"\n )\n is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n )\n tags: Optional[DeploymentFilterTags] = Field(\n default=None, description=\"Filter criteria for `Deployment.tags`\"\n )\n work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.paused is not None:\n filters.append(self.paused.as_sql_filter(db))\n if self.is_schedule_active is not None:\n filters.append(self.is_schedule_active.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.work_queue_name is not None:\n filters.append(self.work_queue_name.as_sql_filter(db))\n\n return filters\n
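For example, a minimal sketch (field values are hypothetical) combining several criteria; a deployment must satisfy all of them to be returned:
from prefect.server.schemas.filters import (\n    DeploymentFilter,\n    DeploymentFilterName,\n    DeploymentFilterPaused,\n    DeploymentFilterTags,\n)\n\ndeployment_filter = DeploymentFilter(\n    name=DeploymentFilterName(like_=\"etl\"),    # case-insensitive partial match\n    paused=DeploymentFilterPaused(eq_=False),  # only unpaused deployments\n    tags=DeploymentFilterTags(all_=[\"prod\"]),  # tags must be a superset\n)\n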
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.id
.
prefect/server/schemas/filters.py
class DeploymentFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of deployment ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive
","text":" Bases: PrefectFilterBaseModel
Legacy filter to filter by Deployment.is_schedule_active
which is always the opposite of Deployment.paused
.
prefect/server/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n \"\"\"Legacy filter to filter by `Deployment.is_schedule_active` which\n is always the opposite of `Deployment.paused`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.Deployment.paused.is_not(self.eq_))\n return filters\n
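Because only Deployment.paused is stored, the legacy filter inverts the requested value, as this sketch shows:
from prefect.server.schemas.filters import DeploymentFilterIsScheduleActive\n\n# eq_=True means \"schedule is active\", which compiles against the stored\n# column as `Deployment.paused IS NOT true`.\nactive_only = DeploymentFilterIsScheduleActive(eq_=True)\n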
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.name
.
prefect/server/schemas/filters.py
class DeploymentFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of deployment names to include\",\n example=[\"my-deployment-1\", \"my-deployment-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n return filters\n
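A brief sketch of the two matching modes (names are illustrative); like_ compiles to a case-insensitive ILIKE '%...%' comparison:
from prefect.server.schemas.filters import DeploymentFilterName\n\nexact = DeploymentFilterName(any_=[\"my-deployment-1\", \"my-deployment-2\"])\n# Matches 'marvin', 'sad-Marvin', and 'marvin-robot'\npartial = DeploymentFilterName(like_=\"marvin\")\n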
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused","title":"DeploymentFilterPaused
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.paused
.
prefect/server/schemas/filters.py
class DeploymentFilterPaused(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.paused`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment is/is not paused\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.Deployment.paused.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Deployment.tags
.
prefect/server/schemas/filters.py
class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Deployments will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include deployments without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n )\n return filters\n
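A short sketch contrasting the two fields (tag names are illustrative):
from prefect.server.schemas.filters import DeploymentFilterTags\n\ntagged = DeploymentFilterTags(all_=[\"tag-1\", \"tag-2\"])  # tags must include both\nuntagged = DeploymentFilterTags(is_null_=True)           # only deployments with no tags\n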
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName
","text":" Bases: PrefectFilterBaseModel
Filter by Deployment.work_queue_name
.
prefect/server/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Deployment.work_queue_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter","title":"DeploymentScheduleFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class DeploymentScheduleFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.\"\"\"\n\n active: Optional[DeploymentScheduleFilterActive] = Field(\n default=None, description=\"Filter criteria for `DeploymentSchedule.active`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.active is not None:\n filters.append(self.active.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive","title":"DeploymentScheduleFilterActive
","text":" Bases: PrefectFilterBaseModel
Filter by DeploymentSchedule.active
.
prefect/server/schemas/filters.py
class DeploymentScheduleFilterActive(PrefectFilterBaseModel):\n \"\"\"Filter by `DeploymentSchedule.active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=\"Only returns where deployment schedule is/is not active\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.DeploymentSchedule.active.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet
","text":" Bases: PrefectBaseModel
A collection of filters for common objects
Source code inprefect/server/schemas/filters.py
class FilterSet(PrefectBaseModel):\n \"\"\"A collection of filters for common objects\"\"\"\n\n flows: FlowFilter = Field(\n default_factory=FlowFilter, description=\"Filters that apply to flows\"\n )\n flow_runs: FlowRunFilter = Field(\n default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n )\n task_runs: TaskRunFilter = Field(\n default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n )\n deployments: DeploymentFilter = Field(\n default_factory=DeploymentFilter,\n description=\"Filters that apply to deployments\",\n )\n
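Since every field has a default_factory, a sketch like the following only needs to supply the pieces it cares about (the name pattern is hypothetical); unspecified members default to empty filters that match everything:
from prefect.server.schemas.filters import (\n    FilterSet,\n    FlowFilter,\n    FlowFilterName,\n)\n\nfilter_set = FilterSet(\n    flows=FlowFilter(name=FlowFilterName(like_=\"etl\")),\n)\n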
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for flows. Only flows matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class FlowFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n id: Optional[FlowFilterId] = Field(\n default=None, description=\"Filter criteria for `Flow.id`\"\n )\n deployment: Optional[FlowFilterDeployment] = Field(\n default=None, description=\"Filter criteria for Flow deployments\"\n )\n name: Optional[FlowFilterName] = Field(\n default=None, description=\"Filter criteria for `Flow.name`\"\n )\n tags: Optional[FlowFilterTags] = Field(\n default=None, description=\"Filter criteria for `Flow.tags`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.deployment is not None:\n filters.append(self.deployment.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment","title":"FlowFilterDeployment
","text":" Bases: PrefectOperatorFilterBaseModel
Filter flows by deployment
Source code inprefect/server/schemas/filters.py
class FlowFilterDeployment(PrefectOperatorFilterBaseModel):\n \"\"\"Filter flows by deployment\"\"\"\n\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flows without deployments\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.is_null_ is not None:\n deployments_subquery = (\n sa.select(db.Deployment.flow_id).distinct().subquery()\n )\n\n if self.is_null_:\n filters.append(\n db.Flow.id.not_in(sa.select(deployments_subquery.c.flow_id))\n )\n else:\n filters.append(\n db.Flow.id.in_(sa.select(deployments_subquery.c.flow_id))\n )\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Flow.id
.
prefect/server/schemas/filters.py
class FlowFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Flow.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Flow.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Flow.name
.
prefect/server/schemas/filters.py
class FlowFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Flow.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow names to include\",\n example=[\"my-flow-1\", \"my-flow-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Flow.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Flow.tags
.
prefect/server/schemas/filters.py
class FlowFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Flow.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flows will be returned only if their tags are a superset\"\n \" of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flows without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter flow runs. Only flow runs matching all criteria will be returned.
Source code inprefect/server/schemas/filters.py
class FlowRunFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned.\"\"\"\n\n id: Optional[FlowRunFilterId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.id`\"\n )\n name: Optional[FlowRunFilterName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.name`\"\n )\n tags: Optional[FlowRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `FlowRun.tags`\"\n )\n deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n )\n work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.work_queue_name`\"\n )\n state: Optional[FlowRunFilterState] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state`\"\n )\n flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n )\n start_time: Optional[FlowRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n )\n expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n )\n next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n default=None,\n description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n )\n parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n default=None, description=\"Filter criteria for subflows of the given flow runs\"\n )\n parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n )\n idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n )\n\n def only_filters_on_id(self):\n return (\n self.id is not None\n and (self.id.any_ and not self.id.not_any_)\n and self.name is None\n and self.tags is None\n and self.deployment_id is None\n and self.work_queue_name is None\n and self.state is None\n and self.flow_version is None\n and self.start_time is None\n and self.expected_start_time is None\n and self.next_scheduled_start_time is None\n and self.parent_flow_run_id is None\n and self.parent_task_run_id is None\n and self.idempotency_key is None\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.deployment_id is not None:\n filters.append(self.deployment_id.as_sql_filter(db))\n if self.work_queue_name is not None:\n filters.append(self.work_queue_name.as_sql_filter(db))\n if self.flow_version is not None:\n filters.append(self.flow_version.as_sql_filter(db))\n if self.state is not None:\n filters.append(self.state.as_sql_filter(db))\n if self.start_time is not None:\n filters.append(self.start_time.as_sql_filter(db))\n if self.expected_start_time is not None:\n filters.append(self.expected_start_time.as_sql_filter(db))\n if self.next_scheduled_start_time is not None:\n filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n if self.parent_flow_run_id is not None:\n filters.append(self.parent_flow_run_id.as_sql_filter(db))\n if self.parent_task_run_id is not None:\n filters.append(self.parent_task_run_id.as_sql_filter(db))\n if self.idempotency_key is not None:\n filters.append(self.idempotency_key.as_sql_filter(db))\n\n return filters\n
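A minimal sketch of only_filters_on_id, which is true only when id.any_ is the sole criterion set and id.not_any_ is unset (the UUIDs are hypothetical):
from uuid import uuid4\nfrom prefect.server.schemas.filters import (\n    FlowRunFilter,\n    FlowRunFilterId,\n    FlowRunFilterName,\n)\n\nonly_ids = FlowRunFilter(id=FlowRunFilterId(any_=[uuid4()]))\nassert only_ids.only_filters_on_id()\n\nmixed = FlowRunFilter(\n    id=FlowRunFilterId(any_=[uuid4()]),\n    name=FlowRunFilterName(like_=\"etl\"),\n)\nassert not mixed.only_filters_on_id()\n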
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.deployment_id
.
prefect/server/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run deployment ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without deployment ids\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.deployment_id.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.deployment_id.is_(None)\n if self.is_null_\n else db.FlowRun.deployment_id.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.expected_start_time
.
prefect/server/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs scheduled to start at or after this time\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.expected_start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.expected_start_time >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.flow_version
.
prefect/server/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run flow_versions to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.flow_version.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.id
.
prefect/server/schemas/filters.py
class FlowRunFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to include\"\n )\n not_any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run ids to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.id.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.FlowRun.id.not_in(self.not_any_))\n return filters\n
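A short sketch of combining inclusion and exclusion (UUIDs are hypothetical); both clauses apply when both fields are set:
from uuid import uuid4\nfrom prefect.server.schemas.filters import FlowRunFilterId\n\n# Compiles to: FlowRun.id IN (...) AND FlowRun.id NOT IN (...)\nid_filter = FlowRunFilterId(\n    any_=[uuid4(), uuid4()],\n    not_any_=[uuid4()],\n)\n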
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.idempotency_key.
Source code inprefect/server/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.idempotency_key`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to include\"\n )\n not_any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run idempotency keys to exclude\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n if self.not_any_ is not None:\n filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.name
.
prefect/server/schemas/filters.py
class FlowRunFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of flow run names to include\",\n example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.next_scheduled_start_time
.
prefect/server/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time at or before\"\n \" this time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include flow runs with a next_scheduled_start_time at or after this\"\n \" time\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter for subflows of a given flow run
Source code inprefect/server/schemas/filters.py
class FlowRunFilterParentFlowRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter for subflows of a given flow run\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of parent flow run ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(\n db.FlowRun.id.in_(\n sa.select(db.FlowRun.id)\n .join(\n db.TaskRun,\n sa.and_(\n db.TaskRun.id == db.FlowRun.parent_task_run_id,\n ),\n )\n .where(db.TaskRun.flow_run_id.in_(self.any_))\n )\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.parent_task_run_id.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run parent_task_run_ids to include\"\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without parent_task_run_id\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.parent_task_run_id.is_(None)\n if self.is_null_\n else db.FlowRun.parent_task_run_id.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.start_time.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include flow runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return flow runs without a start time\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.FlowRun.start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.FlowRun.start_time >= self.after_)\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.start_time.is_(None)\n if self.is_null_\n else db.FlowRun.start_time.is_not(None)\n )\n return filters\n
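The is_null_ flag is handy for finding runs that never started; a sketch (the pendulum usage and the particular bounds are assumptions):

import pendulum

from prefect.server.schemas.filters import FlowRunFilterStartTime

never_started = FlowRunFilterStartTime(is_null_=True)                 # no recorded start
started_today = FlowRunFilterStartTime(after_=pendulum.today("UTC"))  # at or after midnight UTC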
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.state_type and FlowRun.state_name.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n type: Optional[FlowRunFilterStateType] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state_type`\"\n )\n name: Optional[FlowRunFilterStateName] = Field(\n default=None, description=\"Filter criteria for `FlowRun.state_name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.type is not None:\n filters.extend(self.type._get_filter_list(db))\n if self.name is not None:\n filters.extend(self.name._get_filter_list(db))\n return filters\n
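Both sub-filters are optional; when both are set, their clauses are combined by the inherited operator (AND by default). A sketch matching failed or crashed runs:

from prefect.server.schemas.filters import FlowRunFilterState, FlowRunFilterStateType
from prefect.server.schemas.states import StateType

failed_or_crashed = FlowRunFilterState(
    type=FlowRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED])
)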
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.state_name.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterStateName(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of flow run state names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.state_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRun.state_type.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterStateType(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n any_: Optional[List[schemas.states.StateType]] = Field(\n default=None, description=\"A list of flow run state types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.state_type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.tags.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Flow runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include flow runs without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n )\n return filters\n
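Note the superset semantics of all_: a run must carry every listed tag to match, not just one. A minimal sketch with illustrative tag names:

from prefect.server.schemas.filters import FlowRunFilterTags

tagged = FlowRunFilterTags(all_=["prod", "etl"])  # must have BOTH tags
untagged = FlowRunFilterTags(is_null_=True)       # runs with no tags at all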
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by FlowRun.work_queue_name.
Source code in prefect/server/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"work_queue_1\", \"work_queue_2\"],\n )\n is_null_: Optional[bool] = Field(\n default=None,\n description=\"If true, only include flow runs without work queue names\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n if self.is_null_ is not None:\n filters.append(\n db.FlowRun.work_queue_name.is_(None)\n if self.is_null_\n else db.FlowRun.work_queue_name.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter
","text":" Bases: PrefectFilterBaseModel
Filter FlowRunNotificationPolicies.
Source code in prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.is_active is not None:\n filters.append(self.is_active.as_sql_filter(db))\n\n return filters\n
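Unlike most filters on this page, is_active defaults to a real criterion (eq_=False) rather than None, so an empty FlowRunNotificationPolicyFilter() selects only inactive policies. A sketch of overriding the default:

from prefect.server.schemas.filters import (
    FlowRunNotificationPolicyFilter,
    FlowRunNotificationPolicyFilterIsActive,
)

inactive_only = FlowRunNotificationPolicyFilter()  # default eq_=False
active_only = FlowRunNotificationPolicyFilter(
    is_active=FlowRunNotificationPolicyFilterIsActive(eq_=True)
)
all_policies = FlowRunNotificationPolicyFilter(is_active=None)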
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive
","text":" Bases: PrefectFilterBaseModel
Filter by FlowRunNotificationPolicy.is_active.
Source code in prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n eq_: Optional[bool] = Field(\n default=None,\n description=(\n \"Filter notification policies for only those that are or are not active.\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.eq_ is not None:\n filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter logs. Only logs matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class LogFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n level: Optional[LogFilterLevel] = Field(\n default=None, description=\"Filter criteria for `Log.level`\"\n )\n timestamp: Optional[LogFilterTimestamp] = Field(\n default=None, description=\"Filter criteria for `Log.timestamp`\"\n )\n flow_run_id: Optional[LogFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n )\n task_run_id: Optional[LogFilterTaskRunId] = Field(\n default=None, description=\"Filter criteria for `Log.task_run_id`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.level is not None:\n filters.append(self.level.as_sql_filter(db))\n if self.timestamp is not None:\n filters.append(self.timestamp.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n if self.task_run_id is not None:\n filters.append(self.task_run_id.as_sql_filter(db))\n\n return filters\n
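A sketch composing several criteria; logging.WARNING (30) lines up with the integer levels stored on Log.level, and the one-hour window is an assumption:

import logging

import pendulum

from prefect.server.schemas.filters import LogFilter, LogFilterLevel, LogFilterTimestamp

recent_warnings = LogFilter(
    level=LogFilterLevel(ge_=logging.WARNING),  # 30 and above
    timestamp=LogFilterTimestamp(after_=pendulum.now("UTC").subtract(hours=1)),
)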
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Log.flow_run_id.
Source code in prefect/server/schemas/filters.py
class LogFilterFlowRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of flow run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel
","text":" Bases: PrefectFilterBaseModel
Filter by Log.level.
Source code in prefect/server/schemas/filters.py
class LogFilterLevel(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.level`.\"\"\"\n\n ge_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level greater than or equal to this level\",\n example=20,\n )\n\n le_: Optional[int] = Field(\n default=None,\n description=\"Include logs with a level less than or equal to this level\",\n example=50,\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.ge_ is not None:\n filters.append(db.Log.level >= self.ge_)\n if self.le_ is not None:\n filters.append(db.Log.level <= self.le_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Log.name.
Source code in prefect/server/schemas/filters.py
class LogFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of log names to include\",\n example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId
","text":" Bases: PrefectFilterBaseModel
Filter by Log.task_run_id.
Source code in prefect/server/schemas/filters.py
class LogFilterTaskRunId(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run IDs to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Log.task_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp
","text":" Bases: PrefectFilterBaseModel
Filter by Log.timestamp.
Source code in prefect/server/schemas/filters.py
class LogFilterTimestamp(PrefectFilterBaseModel):\n \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include logs with a timestamp at or after this time\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.Log.timestamp <= self.before_)\n if self.after_ is not None:\n filters.append(db.Log.timestamp >= self.after_)\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator
","text":" Bases: AutoEnum
Operators for combining filter criteria.
Source code in prefect/server/schemas/filters.py
class Operator(AutoEnum):\n \"\"\"Operators for combining filter criteria.\"\"\"\n\n and_ = AutoEnum.auto()\n or_ = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel
","text":" Bases: PrefectBaseModel
Base model for Prefect filters
Source code in prefect/server/schemas/filters.py
class PrefectFilterBaseModel(PrefectBaseModel):\n \"\"\"Base model for Prefect filters\"\"\"\n\n class Config:\n extra = \"forbid\"\n\n def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n \"\"\"Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter.\"\"\"\n filters = self._get_filter_list(db)\n if not filters:\n return True\n return sa.and_(*filters)\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n \"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n raise NotImplementedError(\"_get_filter_list must be implemented\")\n
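Two behaviors worth noting, sketched below: extra = "forbid" rejects unknown fields at construction time, and a filter with no criteria produces a TRUE clause (match everything) from as_sql_filter, since _get_filter_list returns an empty list. The db argument is the server's PrefectDBInterface, supplied internally:

from prefect.server.schemas.filters import FlowRunFilterStateName

empty = FlowRunFilterStateName()  # empty.as_sql_filter(db) would return True

try:
    FlowRunFilterStateName(bogus=1)
except Exception as exc:  # pydantic ValidationError: extra fields not permitted
    print(exc)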
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel
","text":" Bases: PrefectFilterBaseModel
Base model for Prefect filters that combines criteria with a user-provided operator
Source code in prefect/server/schemas/filters.py
class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n operator: Operator = Field(\n default=Operator.and_,\n description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n )\n\n def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n filters = self._get_filter_list(db)\n if not filters:\n return True\n return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
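Setting operator turns the implicit AND across sub-criteria into an OR. A sketch using the flow run state filter documented above:

from prefect.server.schemas.filters import (
    FlowRunFilterState,
    FlowRunFilterStateName,
    FlowRunFilterStateType,
    Operator,
)
from prefect.server.schemas.states import StateType

# State type is FAILED *or* the state name is "TimedOut".
either = FlowRunFilterState(
    operator=Operator.or_,
    type=FlowRunFilterStateType(any_=[StateType.FAILED]),
    name=FlowRunFilterStateName(any_=["TimedOut"]),
)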
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter task runs. Only task runs matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class TaskRunFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n id: Optional[TaskRunFilterId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.id`\"\n )\n name: Optional[TaskRunFilterName] = Field(\n default=None, description=\"Filter criteria for `TaskRun.name`\"\n )\n tags: Optional[TaskRunFilterTags] = Field(\n default=None, description=\"Filter criteria for `TaskRun.tags`\"\n )\n state: Optional[TaskRunFilterState] = Field(\n default=None, description=\"Filter criteria for `TaskRun.state`\"\n )\n start_time: Optional[TaskRunFilterStartTime] = Field(\n default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n )\n subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n )\n flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n if self.state is not None:\n filters.append(self.state.as_sql_filter(db))\n if self.start_time is not None:\n filters.append(self.start_time.as_sql_filter(db))\n if self.subflow_runs is not None:\n filters.append(self.subflow_runs.as_sql_filter(db))\n if self.flow_run_id is not None:\n filters.append(self.flow_run_id.as_sql_filter(db))\n\n return filters\n
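A sketch of a composite task run filter; the name fragment and the time bound are illustrative assumptions:

import pendulum

from prefect.server.schemas.filters import (
    TaskRunFilter,
    TaskRunFilterName,
    TaskRunFilterStartTime,
)

tr_filter = TaskRunFilter(
    name=TaskRunFilterName(like_="extract"),  # case-insensitive substring
    start_time=TaskRunFilterStartTime(after_=pendulum.yesterday("UTC")),
)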
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.flow_run_id.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterFlowRunId(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run flow run ids to include\"\n )\n\n is_null_: bool = Field(\n default=False, description=\"Filter for task runs with None as their flow run id\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.is_null_ is True:\n filters.append(db.TaskRun.flow_run_id.is_(None))\n else:\n if self.any_ is not None:\n filters.append(db.TaskRun.flow_run_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.id.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of task run ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.name.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by \`TaskRun.name\`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of task run names to include\",\n example=[\"my-task-run-1\", \"my-task-run-2\"],\n )\n\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A case-insensitive partial match. For example,\"\n \" passing 'marvin' will match \"\n \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n ),\n example=\"marvin\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n return filters\n
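Since the value is wrapped as %value% and compared with ilike, like_ is a case-insensitive substring match rather than a prefix match; a one-line sketch:

from prefect.server.schemas.filters import TaskRunFilterName

by_name = TaskRunFilterName(like_="marvin")  # matches "sad-Marvin", "marvin-robot", ...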
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.start_time.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterStartTime(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or before this time\",\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=\"Only include task runs starting at or after this time\",\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only return task runs without a start time\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.TaskRun.start_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.TaskRun.start_time >= self.after_)\n if self.is_null_ is not None:\n filters.append(\n db.TaskRun.start_time.is_(None)\n if self.is_null_\n else db.TaskRun.start_time.is_not(None)\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.state_type and TaskRun.state_name.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by \`TaskRun.state_type\` and \`TaskRun.state_name\`.\"\"\"\n\n type: Optional[TaskRunFilterStateType]\n name: Optional[TaskRunFilterStateName]\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.type is not None:\n filters.extend(self.type._get_filter_list(db))\n if self.name is not None:\n filters.extend(self.name._get_filter_list(db))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.state_name.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterStateName(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of task run state names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.state_name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.state_type.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterStateType(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n any_: Optional[List[schemas.states.StateType]] = Field(\n default=None, description=\"A list of task run state types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.TaskRun.state_type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns
","text":" Bases: PrefectFilterBaseModel
Filter by TaskRun.subflow_run.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n exists_: Optional[bool] = Field(\n default=None,\n description=(\n \"If true, only include task runs that are subflow run parents; if false,\"\n \" exclude parent task runs\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.exists_ is True:\n filters.append(db.TaskRun.subflow_run.has())\n elif self.exists_ is False:\n filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n return filters\n
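The tri-state exists_ flag distinguishes three cases, sketched below:

from prefect.server.schemas.filters import TaskRunFilterSubFlowRuns

parents_only = TaskRunFilterSubFlowRuns(exists_=True)   # only subflow-parent task runs
no_parents = TaskRunFilterSubFlowRuns(exists_=False)    # exclude subflow parents
unfiltered = TaskRunFilterSubFlowRuns()                 # exists_=None: no constraint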
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by TaskRun.tags.
Source code in prefect/server/schemas/filters.py
class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Task runs will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include task runs without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter variables. Only variables matching all criteria will be returned.
Source code in prefect/server/schemas/filters.py
class VariableFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n id: Optional[VariableFilterId] = Field(\n default=None, description=\"Filter criteria for `Variable.id`\"\n )\n name: Optional[VariableFilterName] = Field(\n default=None, description=\"Filter criteria for `Variable.name`\"\n )\n value: Optional[VariableFilterValue] = Field(\n default=None, description=\"Filter criteria for `Variable.value`\"\n )\n tags: Optional[VariableFilterTags] = Field(\n default=None, description=\"Filter criteria for `Variable.tags`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.value is not None:\n filters.append(self.value.as_sql_filter(db))\n if self.tags is not None:\n filters.append(self.tags.as_sql_filter(db))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.id.
Source code in prefect/server/schemas/filters.py
class VariableFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `Variable.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of variable ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.name.
Source code in prefect/server/schemas/filters.py
class VariableFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by \`Variable.name\`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variable names to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable names against. This can include \"\n \"SQL wildcard characters like \`%\` and \`_\`.\"\n ),\n example=\"my_variable_%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.name.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n return filters\n
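Because the implementation wraps the value as %value% before calling ilike, any SQL wildcards you supply combine with an implicit substring match. A sketch with an illustrative pattern:

from prefect.server.schemas.filters import VariableFilterName

# "_" is a single-character wildcard, so this matches any name containing
# "db" plus one further character, e.g. "my_dbx_url".
by_pattern = VariableFilterName(like_="db_")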
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Variable.tags.
Source code in prefect/server/schemas/filters.py
class VariableFilterTags(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Variable.tags`.\"\"\"\n\n all_: Optional[List[str]] = Field(\n default=None,\n example=[\"tag-1\", \"tag-2\"],\n description=(\n \"A list of tags. Variables will be returned only if their tags are a\"\n \" superset of the list\"\n ),\n )\n is_null_: Optional[bool] = Field(\n default=None, description=\"If true, only include Variables without tags\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n from prefect.server.utilities.database import json_has_all_keys\n\n filters = []\n if self.all_ is not None:\n filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n if self.is_null_ is not None:\n filters.append(\n db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n )\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue
","text":" Bases: PrefectFilterBaseModel
Filter by Variable.value
.
prefect/server/schemas/filters.py
class VariableFilterValue(PrefectFilterBaseModel):\n \"\"\"Filter by `Variable.value`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of variable values to include\"\n )\n like_: Optional[str] = Field(\n default=None,\n description=(\n \"A string to match variable values against. This can include \"\n \"SQL wildcard characters like `%` and `_`.\"\n ),\n example=\"my-value-%\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Variable.value.in_(self.any_))\n if self.like_ is not None:\n filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter work pools. Only work pools matching all criteria will be returned
Source code inprefect/server/schemas/filters.py
class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n id: Optional[WorkPoolFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkPool.id`\"\n )\n name: Optional[WorkPoolFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkPool.name`\"\n )\n type: Optional[WorkPoolFilterType] = Field(\n default=None, description=\"Filter criteria for `WorkPool.type`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n if self.type is not None:\n filters.append(self.type.as_sql_filter(db))\n\n return filters\n
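A construction sketch (pool name and type are illustrative): the sub-filter criteria are combined with AND, so this matches only Kubernetes-typed work pools named prod-pool.
from prefect.server.schemas.filters import (\n    WorkPoolFilter,\n    WorkPoolFilterName,\n    WorkPoolFilterType,\n)\n\npool_filter = WorkPoolFilter(\n    name=WorkPoolFilterName(any_=[\"prod-pool\"]),\n    type=WorkPoolFilterType(any_=[\"kubernetes\"]),\n)\n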
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.id
.
prefect/server/schemas/filters.py
class WorkPoolFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.name
.
prefect/server/schemas/filters.py
class WorkPoolFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool names to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.name.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType
","text":" Bases: PrefectFilterBaseModel
Filter by WorkPool.type
.
prefect/server/schemas/filters.py
class WorkPoolFilterType(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None, description=\"A list of work pool types to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkPool.type.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter work queues. Only work queues matching all criteria will be returned
Source code inprefect/server/schemas/filters.py
class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter work queues. Only work queues matching all criteria will be\n returned\"\"\"\n\n id: Optional[WorkQueueFilterId] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.id`\"\n )\n\n name: Optional[WorkQueueFilterName] = Field(\n default=None, description=\"Filter criteria for `WorkQueue.name`\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.id is not None:\n filters.append(self.id.as_sql_filter(db))\n if self.name is not None:\n filters.append(self.name.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId
","text":" Bases: PrefectFilterBaseModel
Filter by WorkQueue.id
.
prefect/server/schemas/filters.py
class WorkQueueFilterId(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None,\n description=\"A list of work queue ids to include\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkQueue.id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName
","text":" Bases: PrefectFilterBaseModel
Filter by WorkQueue.name
.
prefect/server/schemas/filters.py
class WorkQueueFilterName(PrefectFilterBaseModel):\n \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n any_: Optional[List[str]] = Field(\n default=None,\n description=\"A list of work queue names to include\",\n example=[\"wq-1\", \"wq-2\"],\n )\n\n startswith_: Optional[List[str]] = Field(\n default=None,\n description=(\n \"A list of case-insensitive starts-with matches. For example, \"\n \" passing 'marvin' will match \"\n \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n ),\n example=[\"marvin\", \"Marvin-robot\"],\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.WorkQueue.name.in_(self.any_))\n if self.startswith_ is not None:\n filters.append(\n sa.or_(\n *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n )\n )\n return filters\n
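A short sketch of the two matching modes (queue names are illustrative): any_ is exact membership, while startswith_ performs the case-insensitive prefix match described above.
from prefect.server.schemas.filters import WorkQueueFilterName\n\nexact = WorkQueueFilterName(any_=[\"wq-1\", \"wq-2\"])\n# Matches \"marvin\" and \"Marvin-robot\", but not \"sad-marvin\"\nprefix = WorkQueueFilterName(startswith_=[\"marvin\"])\n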
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter
","text":" Bases: PrefectOperatorFilterBaseModel
Filter by Worker.last_heartbeat_time
.
prefect/server/schemas/filters.py
class WorkerFilter(PrefectOperatorFilterBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n # default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n # )\n\n last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n default=None,\n description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n\n if self.last_heartbeat_time is not None:\n filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime
","text":" Bases: PrefectFilterBaseModel
Filter by Worker.last_heartbeat_time
.
prefect/server/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n before_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or before this time\"\n ),\n )\n after_: Optional[DateTimeTZ] = Field(\n default=None,\n description=(\n \"Only include processes whose last heartbeat was at or after this time\"\n ),\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.before_ is not None:\n filters.append(db.Worker.last_heartbeat_time <= self.before_)\n if self.after_ is not None:\n filters.append(db.Worker.last_heartbeat_time >= self.after_)\n return filters\n
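For example, a staleness check (the five-minute cutoff is illustrative): select workers whose last heartbeat is at or before a moment in the past.
import pendulum\nfrom prefect.server.schemas.filters import (\n    WorkerFilter,\n    WorkerFilterLastHeartbeatTime,\n)\n\n# Workers that have not sent a heartbeat in the last five minutes\nstale = WorkerFilter(\n    last_heartbeat_time=WorkerFilterLastHeartbeatTime(\n        before_=pendulum.now(\"UTC\").subtract(minutes=5)\n    )\n)\n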
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId
","text":" Bases: PrefectFilterBaseModel
Filter by Worker.worker_config_id
.
prefect/server/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n any_: Optional[List[UUID]] = Field(\n default=None, description=\"A list of work pool ids to include\"\n )\n\n def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n filters = []\n if self.any_ is not None:\n filters.append(db.Worker.worker_config_id.in_(self.any_))\n return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses
","text":"Schemas for special responses from the Prefect REST API.
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse
","text":" Bases: ORMBaseModel
prefect/server/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ORMBaseModel):\n name: str = FieldFrom(schemas.core.FlowRun)\n flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n state_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n deployment_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n parameters: dict = FieldFrom(schemas.core.FlowRun)\n idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n context: dict = FieldFrom(schemas.core.FlowRun)\n empirical_policy: FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n tags: List[str] = FieldFrom(schemas.core.FlowRun)\n parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n state_type: Optional[schemas.states.StateType] = FieldFrom(schemas.core.FlowRun)\n state_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n run_count: int = FieldFrom(schemas.core.FlowRun)\n expected_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n end_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n total_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n estimated_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n estimated_start_time_delta: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n auto_scheduled: bool = FieldFrom(schemas.core.FlowRun)\n infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n created_by: Optional[CreatedBy] = FieldFrom(schemas.core.FlowRun)\n work_pool_id: Optional[UUID] = Field(\n default=None,\n description=\"The id of the flow run's work pool.\",\n )\n work_pool_name: Optional[str] = Field(\n default=None,\n description=\"The name of the flow run's work pool.\",\n example=\"my-work-pool\",\n )\n state: Optional[schemas.states.State] = FieldFrom(schemas.core.FlowRun)\n job_variables: Optional[Dict[str, Any]] = FieldFrom(schemas.core.FlowRun)\n\n @classmethod\n def from_orm(cls, orm_flow_run: \"prefect.server.database.orm_models.ORMFlowRun\"):\n response = super().from_orm(orm_flow_run)\n if orm_flow_run.work_queue:\n response.work_queue_id = orm_flow_run.work_queue.id\n response.work_queue_name = orm_flow_run.work_queue.name\n if orm_flow_run.work_queue.work_pool:\n response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n return response\n\n def __eq__(self, other: Any) -> bool:\n \"\"\"\n Check for \"equality\" to another flow run schema\n\n Estimates times are rolling and will always change with repeated queries for\n a flow run so we ignore them during equality checks.\n \"\"\"\n if isinstance(other, FlowRunResponse):\n exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n return self.dict(exclude=exclude_fields) == other.dict(\n exclude=exclude_fields\n )\n return super().__eq__(other)\n
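A small sketch of the custom equality described in __eq__, assuming the remaining fields take their defaults: two responses that differ only in a rolling estimate still compare equal.
import datetime\nfrom uuid import uuid4\nfrom prefect.server.schemas.responses import FlowRunResponse\n\na = FlowRunResponse(name=\"my-run\", flow_id=uuid4())\nb = a.copy(update={\"estimated_run_time\": datetime.timedelta(seconds=5)})\nassert a == b  # estimated fields are excluded from the comparison\n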
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse
","text":" Bases: PrefectBaseModel
Represents a history of aggregation states over an interval
Source code inprefect/server/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n interval_start: DateTimeTZ = Field(\n default=..., description=\"The start date of the interval.\"\n )\n interval_end: DateTimeTZ = Field(\n default=..., description=\"The end date of the interval.\"\n )\n states: List[HistoryResponseState] = Field(\n default=..., description=\"A list of state histories during the interval.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState
","text":" Bases: PrefectBaseModel
Represents a single state's history over an interval.
Source code inprefect/server/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n \"\"\"Represents a single state's history over an interval.\"\"\"\n\n state_type: schemas.states.StateType = Field(\n default=..., description=\"The state type.\"\n )\n state_name: str = Field(default=..., description=\"The state name.\")\n count_runs: int = Field(\n default=...,\n description=\"The number of runs in the specified state during the interval.\",\n )\n sum_estimated_run_time: datetime.timedelta = Field(\n default=...,\n description=\"The total estimated run time of all runs during the interval.\",\n )\n sum_estimated_lateness: datetime.timedelta = Field(\n default=...,\n description=(\n \"The sum of differences between actual and expected start time during the\"\n \" interval.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult
","text":" Bases: PrefectBaseModel
A container for the output of state orchestration.
Source code inprefect/server/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n \"\"\"\n A container for the output of state orchestration.\n \"\"\"\n\n state: Optional[schemas.states.State]\n status: SetStateStatus\n details: StateResponseDetails\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus
","text":" Bases: AutoEnum
Enumerates return statuses for setting run states.
Source code inprefect/server/schemas/responses.py
class SetStateStatus(AutoEnum):\n \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n ACCEPT = AutoEnum.auto()\n REJECT = AutoEnum.auto()\n ABORT = AutoEnum.auto()\n WAIT = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails
","text":" Bases: PrefectBaseModel
Details associated with an ABORT state transition.
Source code inprefect/server/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n type: Literal[\"abort_details\"] = Field(\n default=\"abort_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was aborted.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails
","text":" Bases: PrefectBaseModel
Details associated with an ACCEPT state transition.
Source code inprefect/server/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n type: Literal[\"accept_details\"] = Field(\n default=\"accept_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails
","text":" Bases: PrefectBaseModel
Details associated with a REJECT state transition.
Source code inprefect/server/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n type: Literal[\"reject_details\"] = Field(\n default=\"reject_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition was rejected.\"\n )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails
","text":" Bases: PrefectBaseModel
Details associated with a WAIT state transition.
Source code inprefect/server/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n type: Literal[\"wait_details\"] = Field(\n default=\"wait_details\",\n description=(\n \"The type of state transition detail. Used to ensure pydantic does not\"\n \" coerce into a different type.\"\n ),\n )\n delay_seconds: int = Field(\n default=...,\n description=(\n \"The length of time in seconds the client should wait before transitioning\"\n \" states.\"\n ),\n )\n reason: Optional[str] = Field(\n default=None, description=\"The reason why the state transition should wait.\"\n )\n
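Putting these schemas together, a hypothetical WAIT response (field values are illustrative): the type discriminator keeps pydantic from coercing the details into another variant, and delay_seconds tells the client how long to pause before retrying the transition.
from prefect.server.schemas.responses import (\n    OrchestrationResult,\n    SetStateStatus,\n    StateWaitDetails,\n)\n\nresult = OrchestrationResult(\n    state=None,\n    status=SetStateStatus.WAIT,\n    details=StateWaitDetails(delay_seconds=30, reason=\"Concurrency limit reached\"),\n)\nassert result.details.type == \"wait_details\"\n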
"},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules
","text":"Schedule schemas
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule
","text":" Bases: PrefectBaseModel
Cron schedule
NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.
Parameters:
Name Type Description Defaultcron
str
a valid cron string
requiredtimezone
str
a valid timezone string in IANA tzdata format (for example, America/New_York).
requiredday_or
bool
Control how croniter handles day
and day_of_week
entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday.
prefect/server/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n \"\"\"\n Cron schedule\n\n NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n itself appropriately. Cron's rules for DST are based on schedule times, not\n intervals. This means that an hourly cron schedule will fire on every new\n schedule hour, not every elapsed hour; for example, when clocks are set back\n this will result in a two-hour pause as the schedule will fire *the first\n time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n Longer schedules, such as one that fires at 9am every morning, will\n automatically adjust for DST.\n\n Args:\n cron (str): a valid cron string\n timezone (str): a valid timezone string in IANA tzdata format (for example,\n America/New_York).\n day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n entries. Defaults to True, matching cron which connects those values using\n OR. If the switch is set to False, the values are connected using AND. This\n behaves like fcron and enables you to e.g. define a job that executes each\n 2nd friday of a month by setting the days of month and the weekday.\n\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n cron: str = Field(default=..., example=\"0 0 * * *\")\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n day_or: bool = Field(\n default=True,\n description=(\n \"Control croniter behavior for handling day and day_of_week entries.\"\n ),\n )\n\n @validator(\"timezone\")\n def valid_timezone(cls, v):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n if v and v not in timezones:\n raise ValueError(\n f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n \" America/New_York)\"\n )\n return v\n\n @validator(\"cron\")\n def valid_cron_string(cls, v):\n # croniter allows \"random\" and \"hashed\" expressions\n # which we do not support https://github.com/kiorky/croniter\n if not croniter.is_valid(v):\n raise ValueError(f'Invalid cron string: \"{v}\"')\n elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n raise ValueError(\n f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n )\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to the current date. If a timezone-naive\n datetime is provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): No returned date will exceed this date.\n If a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if start is None:\n start = pendulum.now(\"UTC\")\n\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up to\n # MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n elif self.timezone:\n start = start.in_tz(self.timezone)\n\n # subtract one second from the start date, so that croniter returns it\n # as an event (if it meets the cron criteria)\n start = start.subtract(seconds=1)\n\n # Respect microseconds by rounding up\n if start.microsecond > 0:\n start += datetime.timedelta(seconds=1)\n\n # croniter's DST logic interferes with all other datetime libraries except pytz\n start_localized = pytz.timezone(start.tz.name).localize(\n datetime.datetime(\n year=start.year,\n month=start.month,\n day=start.day,\n hour=start.hour,\n minute=start.minute,\n second=start.second,\n microsecond=start.microsecond,\n )\n )\n start_naive_tz = start.naive()\n\n cron = croniter(self.cron, start_naive_tz, day_or=self.day_or)  # type: ignore\n dates = set()\n counter = 0\n\n while True:\n # croniter does not handle DST properly when the start time is\n # in and around when the actual shift occurs. To work around this,\n # we use the naive start time to get the next cron date delta, then\n # add that time to the original scheduling anchor.\n next_time = cron.get_next(datetime.datetime)\n delta = next_time - start_naive_tz\n next_date = pendulum.instance(start_localized + delta)\n\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
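A usage sketch (the cron string and start are illustrative): the next three 9am runs in New York local time, which per the DST note above remain at 9am across clock changes.
import asyncio\nimport pendulum\nfrom prefect.server.schemas.schedules import CronSchedule\n\nasync def main():\n    schedule = CronSchedule(cron=\"0 9 * * *\", timezone=\"America/New_York\")\n    # Next three 9am runs on or after \"now\"\n    return await schedule.get_dates(n=3, start=pendulum.now(\"America/New_York\"))\n\nprint(asyncio.run(main()))\n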
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule
","text":" Bases: PrefectBaseModel
A schedule formed by adding interval
increments to an anchor_date
. If no anchor_date
is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date
, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone
can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.
NOTE: If the IntervalSchedule
anchor_date
or timezone
is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
Parameters:
Name Type Description Defaultinterval
timedelta
an interval to schedule on.
requiredanchor_date
DateTimeTZ
an anchor date to schedule increments against; if not provided, the current timestamp will be used.
requiredtimezone
str
a valid timezone string.
required Source code inprefect/server/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n \"\"\"\n A schedule formed by adding `interval` increments to an `anchor_date`. If no\n `anchor_date` is supplied, the current UTC time is used. If a\n timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n anchor dates are always stored as UTC offsets, so a `timezone` can be\n provided to determine localization behaviors like DST boundary handling. If\n none is provided it will be inferred from the anchor date.\n\n NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n DST-observing timezone, then the schedule will adjust itself appropriately.\n Intervals greater than 24 hours will follow DST conventions, while intervals\n of less than 24 hours will follow UTC intervals. For example, an hourly\n schedule will fire every UTC hour, even across DST boundaries. When clocks\n are set back, this will result in two runs that *appear* to both be\n scheduled for 1am local time, even though they are an hour apart in UTC\n time. For longer intervals, like a daily schedule, the interval schedule\n will adjust for DST boundaries so that the clock-hour remains constant. This\n means that a daily schedule that always fires at 9am will observe DST and\n continue to fire at 9am in the local time zone.\n\n Args:\n interval (datetime.timedelta): an interval to schedule on.\n anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n if not provided, the current timestamp will be used.\n timezone (str, optional): a valid timezone string.\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n exclude_none = True\n\n interval: datetime.timedelta\n anchor_date: DateTimeTZ = None\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"interval\")\n def interval_must_be_positive(cls, v):\n if v.total_seconds() <= 0:\n raise ValueError(\"The interval must be positive\")\n return v\n\n @validator(\"anchor_date\", always=True)\n def default_anchor_date(cls, v):\n if v is None:\n return pendulum.now(\"UTC\")\n return pendulum.instance(v)\n\n @validator(\"timezone\", always=True)\n def default_timezone(cls, v, *, values, **kwargs):\n # pendulum.tz.timezones is a callable in 3.0 and above\n # https://github.com/PrefectHQ/prefect/issues/11619\n if callable(pendulum.tz.timezones):\n timezones = pendulum.tz.timezones()\n else:\n timezones = pendulum.tz.timezones\n\n # if was provided, make sure its a valid IANA string\n if v and v not in timezones:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n\n # otherwise infer the timezone from the anchor date\n elif v is None and values.get(\"anchor_date\"):\n tz = values[\"anchor_date\"].tz.name\n if tz in timezones:\n return tz\n # sometimes anchor dates have \"timezones\" that are UTC offsets\n # like \"-04:00\". This happens when parsing ISO8601 strings.\n # In this case, the correct inferred localization is \"UTC\".\n else:\n return \"UTC\"\n\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up to\n # MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n if start is None:\n start = pendulum.now(\"UTC\")\n\n anchor_tz = self.anchor_date.in_tz(self.timezone)\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n # compute the offset between the anchor date and the start date to jump to the\n # next date\n offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n # break the interval into `days` and `seconds` because pendulum\n # will handle DST boundaries properly if days are provided, but not\n # if we add `total seconds`. Therefore, `next_date + self.interval`\n # fails while `next_date.add(days=days, seconds=seconds)` works.\n interval_days = self.interval.days\n interval_seconds = self.interval.total_seconds() - (\n interval_days * 24 * 60 * 60\n )\n\n # daylight saving time boundaries can create a situation where the next date is\n # before the start date, so we advance it if necessary\n while next_date < start:\n next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n counter = 0\n dates = set()\n\n while True:\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n\n next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
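A usage sketch (the anchor date is illustrative): a six-hour interval anchored at midnight UTC, so runs land at 00:00, 06:00, 12:00, and 18:00 UTC.
import asyncio\nimport datetime\nimport pendulum\nfrom prefect.server.schemas.schedules import IntervalSchedule\n\nasync def main():\n    schedule = IntervalSchedule(\n        interval=datetime.timedelta(hours=6),\n        anchor_date=pendulum.datetime(2024, 1, 1, tz=\"UTC\"),\n    )\n    return await schedule.get_dates(n=4)\n\nprint(asyncio.run(main()))\n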
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule
","text":" Bases: PrefectBaseModel
RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule
.
RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.
Note that as a calendar-oriented standard, RRuleSchedules
are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.
Parameters:
Name Type Description Defaultrrule
str
a valid RRule string
requiredtimezone
str
a valid timezone string
required Source code inprefect/server/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n \"\"\"\n RRule schedule, based on the iCalendar standard\n ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n implemented in `dateutil.rrule`.\n\n RRules are appropriate for any kind of calendar-date manipulation, including\n irregular intervals, repetition, exclusions, week day or day-of-month\n adjustments, and more.\n\n Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n the initial timezone provided. A 9am daily schedule with a daylight saving\n time-aware start date will maintain a local 9am time through DST boundaries;\n a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n Args:\n rrule (str): a valid RRule string\n timezone (str, optional): a valid timezone string\n \"\"\"\n\n class Config:\n extra = \"forbid\"\n\n rrule: str\n timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n @validator(\"rrule\")\n def validate_rrule_str(cls, v):\n # attempt to parse the rrule string as an rrule object\n # this will error if the string is invalid\n try:\n dateutil.rrule.rrulestr(v, cache=True)\n except ValueError as exc:\n # rrules errors are a mix of cryptic and informative\n # so reraise to be clear that the string was invalid\n raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n if len(v) > MAX_RRULE_LENGTH:\n raise ValueError(\n f'Invalid RRule string \"{v[:40]}...\"\\n'\n f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n )\n return v\n\n @classmethod\n def from_rrule(cls, rrule: dateutil.rrule.rrule):\n if isinstance(rrule, dateutil.rrule.rrule):\n if rrule._dtstart.tzinfo is not None:\n timezone = rrule._dtstart.tzinfo.name\n else:\n timezone = \"UTC\"\n return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n if len(unique_timezones) > 1:\n raise ValueError(\n f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n )\n\n if len(unique_dstarts) > 1:\n raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n if unique_dstarts and unique_timezones:\n timezone = dtstarts[0].tzinfo.name\n else:\n timezone = \"UTC\"\n\n rruleset_string = \"\"\n if rrule._rrule:\n rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n if rrule._exrule:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n \"RRULE\", \"EXRULE\"\n )\n if rrule._rdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"RDATE:\" + \",\".join(\n rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n )\n if rrule._exdate:\n rruleset_string += \"\\n\" if rruleset_string else \"\"\n rruleset_string += \"EXDATE:\" + \",\".join(\n exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n )\n return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n else:\n raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n\n @validator(\"timezone\", always=True)\n def valid_timezone(cls, v):\n if v and v not in pytz.all_timezones_set:\n raise ValueError(f'Invalid timezone: \"{v}\"')\n elif v is None:\n return \"UTC\"\n return v\n\n async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n def _get_dates_generator(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n ) -> Generator[pendulum.DateTime, None, None]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to the current date. If a timezone-naive\n datetime is provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): No returned date will exceed this date.\n If a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: a list of dates\n \"\"\"\n if start is None:\n start = pendulum.now(\"UTC\")\n\n start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n if n is None:\n # if an end was supplied, we do our best to supply all matching dates (up\n # to MAX_ITERATIONS)\n if end is not None:\n n = MAX_ITERATIONS\n else:\n n = 1\n\n dates = set()\n counter = 0\n\n # pass count = None to account for discrepancies with duplicates around DST\n # boundaries\n for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n # if the end date was exceeded, exit\n if end and next_date > end:\n break\n\n # ensure no duplicates; weird things can happen with DST\n if next_date not in dates:\n dates.add(next_date)\n yield next_date\n\n # if enough dates have been collected or enough attempts were made, exit\n if len(dates) >= n or counter > MAX_ITERATIONS:\n break\n\n counter += 1\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates
async
","text":"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.
Parameters:
Name Type Description Defaultn
int
The number of dates to generate
None
start
datetime
The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
end
datetime
The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
None
Returns:
Type DescriptionList[DateTime]
List[pendulum.DateTime]: A list of dates
Source code inprefect/server/schemas/schedules.py
async def get_dates(\n self,\n n: int = None,\n start: datetime.datetime = None,\n end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n following the start date.\n\n Args:\n n (int): The number of dates to generate\n start (datetime.datetime, optional): The first returned date will be on or\n after this date. Defaults to None. If a timezone-naive datetime is\n provided, it is assumed to be in the schedule's timezone.\n end (datetime.datetime, optional): The maximum scheduled date to return. If\n a timezone-naive datetime is provided, it is assumed to be in the\n schedule's timezone.\n\n Returns:\n List[pendulum.DateTime]: A list of dates\n \"\"\"\n return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
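A usage sketch (the rule is illustrative): weekday runs at 9am New York time, expressed as an RFC 5545 recurrence rule.
import asyncio\nfrom prefect.server.schemas.schedules import RRuleSchedule\n\nasync def main():\n    schedule = RRuleSchedule(\n        rrule=\"FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9;BYMINUTE=0\",\n        timezone=\"America/New_York\",\n    )\n    return await schedule.get_dates(n=5)\n\nprint(asyncio.run(main()))\n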
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule
","text":"Since rrule doesn't properly serialize/deserialize timezones, we localize dates here
Source code inprefect/server/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n \"\"\"\n Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n here\n \"\"\"\n rrule = dateutil.rrule.rrulestr(\n self.rrule,\n dtstart=DEFAULT_ANCHOR_DATE,\n cache=True,\n )\n timezone = dateutil.tz.gettz(self.timezone)\n if isinstance(rrule, dateutil.rrule.rrule):\n kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n if rrule._until:\n kwargs.update(\n until=rrule._until.replace(tzinfo=timezone),\n )\n return rrule.replace(**kwargs)\n elif isinstance(rrule, dateutil.rrule.rruleset):\n # update rrules\n localized_rrules = []\n for rr in rrule._rrule:\n kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n if rr._until:\n kwargs.update(\n until=rr._until.replace(tzinfo=timezone),\n )\n localized_rrules.append(rr.replace(**kwargs))\n rrule._rrule = localized_rrules\n\n # update exrules\n localized_exrules = []\n for exr in rrule._exrule:\n kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n if exr._until:\n kwargs.update(\n until=exr._until.replace(tzinfo=timezone),\n )\n localized_exrules.append(exr.replace(**kwargs))\n rrule._exrule = localized_exrules\n\n # update rdates\n localized_rdates = []\n for rd in rrule._rdate:\n localized_rdates.append(rd.replace(tzinfo=timezone))\n rrule._rdate = localized_rdates\n\n # update exdates\n localized_exdates = []\n for exd in rrule._exdate:\n localized_exdates.append(exd.replace(tzinfo=timezone))\n rrule._exdate = localized_exdates\n\n return rrule\n
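A short sketch of what the localization buys, reusing the schedule object from the previous example; note that _dtstart is a private dateutil attribute, peeked at here only for illustration:

rule = schedule.to_rrule()
# the timezone from the schedule's `timezone` field has been re-attached
print(rule._dtstart.tzinfo)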
"},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting
","text":"Schemas for sorting Prefect REST API objects.
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort
","text":" Bases: AutoEnum
Defines artifact collection sorting options.
Source code inprefect/server/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n \"\"\"Defines artifact collection sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifact collections\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n \"ID_DESC\": db.ArtifactCollection.id.desc(),\n \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort artifact collections
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifact collections\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n \"ID_DESC\": db.ArtifactCollection.id.desc(),\n \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n }\n return sort_mapping[self.value]\n
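For illustration, a rough sketch of how these enums plug into a query; the build_query helper and its db argument (a PrefectDBInterface, normally injected by the server) are assumptions, not part of the source above:

import sqlalchemy as sa

from prefect.server.schemas.sorting import ArtifactCollectionSort

def build_query(db):
    # as_sql_sort returns a ColumnElement that order_by accepts directly
    sort = ArtifactCollectionSort.KEY_ASC
    return sa.select(db.ArtifactCollection).order_by(sort.as_sql_sort(db))

The same pattern applies to every other Sort enum documented below.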
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort
","text":" Bases: AutoEnum
Defines artifact sorting options.
Source code inprefect/server/schemas/sorting.py
class ArtifactSort(AutoEnum):\n \"\"\"Defines artifact sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n ID_DESC = AutoEnum.auto()\n KEY_DESC = AutoEnum.auto()\n KEY_ASC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifacts\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Artifact.created.desc(),\n \"UPDATED_DESC\": db.Artifact.updated.desc(),\n \"ID_DESC\": db.Artifact.id.desc(),\n \"KEY_DESC\": db.Artifact.key.desc(),\n \"KEY_ASC\": db.Artifact.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort artifacts
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort artifacts\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Artifact.created.desc(),\n \"UPDATED_DESC\": db.Artifact.updated.desc(),\n \"ID_DESC\": db.Artifact.id.desc(),\n \"KEY_DESC\": db.Artifact.key.desc(),\n \"KEY_ASC\": db.Artifact.key.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort
","text":" Bases: AutoEnum
Defines block document sorting options.
Source code inprefect/server/schemas/sorting.py
class BlockDocumentSort(AutoEnum):\n \"\"\"Defines block document sorting options.\"\"\"\n\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n BLOCK_TYPE_AND_NAME_ASC = \"BLOCK_TYPE_AND_NAME_ASC\"\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort block documents\"\"\"\n sort_mapping = {\n \"NAME_DESC\": db.BlockDocument.name.desc(),\n \"NAME_ASC\": db.BlockDocument.name.asc(),\n \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort block documents
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort block documents\"\"\"\n sort_mapping = {\n \"NAME_DESC\": db.BlockDocument.name.desc(),\n \"NAME_ASC\": db.BlockDocument.name.asc(),\n \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort
","text":" Bases: AutoEnum
Defines deployment sorting options.
Source code inprefect/server/schemas/sorting.py
class DeploymentSort(AutoEnum):\n \"\"\"Defines deployment sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort deployments\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Deployment.created.desc(),\n \"UPDATED_DESC\": db.Deployment.updated.desc(),\n \"NAME_ASC\": db.Deployment.name.asc(),\n \"NAME_DESC\": db.Deployment.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort deployments
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort deployments\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Deployment.created.desc(),\n \"UPDATED_DESC\": db.Deployment.updated.desc(),\n \"NAME_ASC\": db.Deployment.name.asc(),\n \"NAME_DESC\": db.Deployment.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort
","text":" Bases: AutoEnum
Defines flow run sorting options.
Source code inprefect/server/schemas/sorting.py
class FlowRunSort(AutoEnum):\n \"\"\"Defines flow run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n START_TIME_ASC = AutoEnum.auto()\n START_TIME_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n from sqlalchemy.sql.functions import coalesce\n\n \"\"\"Return an expression used to sort flow runs\"\"\"\n sort_mapping = {\n \"ID_DESC\": db.FlowRun.id.desc(),\n \"START_TIME_ASC\": coalesce(\n db.FlowRun.start_time, db.FlowRun.expected_start_time\n ).asc(),\n \"START_TIME_DESC\": coalesce(\n db.FlowRun.start_time, db.FlowRun.expected_start_time\n ).desc(),\n \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n \"NAME_ASC\": db.FlowRun.name.asc(),\n \"NAME_DESC\": db.FlowRun.name.desc(),\n \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort
","text":" Bases: AutoEnum
Defines flow sorting options.
Source code inprefect/server/schemas/sorting.py
class FlowSort(AutoEnum):\n \"\"\"Defines flow sorting options.\"\"\"\n\n CREATED_DESC = AutoEnum.auto()\n UPDATED_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort flows\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Flow.created.desc(),\n \"UPDATED_DESC\": db.Flow.updated.desc(),\n \"NAME_ASC\": db.Flow.name.asc(),\n \"NAME_DESC\": db.Flow.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort flows
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort flows\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Flow.created.desc(),\n \"UPDATED_DESC\": db.Flow.updated.desc(),\n \"NAME_ASC\": db.Flow.name.asc(),\n \"NAME_DESC\": db.Flow.name.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort
","text":" Bases: AutoEnum
Defines log sorting options.
Source code inprefect/server/schemas/sorting.py
class LogSort(AutoEnum):\n \"\"\"Defines log sorting options.\"\"\"\n\n TIMESTAMP_ASC = AutoEnum.auto()\n TIMESTAMP_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort task runs
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort
","text":" Bases: AutoEnum
Defines task run sorting options.
Source code inprefect/server/schemas/sorting.py
class TaskRunSort(AutoEnum):\n \"\"\"Defines task run sorting options.\"\"\"\n\n ID_DESC = AutoEnum.auto()\n EXPECTED_START_TIME_ASC = AutoEnum.auto()\n EXPECTED_START_TIME_DESC = AutoEnum.auto()\n NAME_ASC = AutoEnum.auto()\n NAME_DESC = AutoEnum.auto()\n NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n END_TIME_DESC = AutoEnum.auto()\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"ID_DESC\": db.TaskRun.id.desc(),\n \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n \"NAME_ASC\": db.TaskRun.name.asc(),\n \"NAME_DESC\": db.TaskRun.name.desc(),\n \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort task runs
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort task runs\"\"\"\n sort_mapping = {\n \"ID_DESC\": db.TaskRun.id.desc(),\n \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n \"NAME_ASC\": db.TaskRun.name.asc(),\n \"NAME_DESC\": db.TaskRun.name.desc(),\n \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort
","text":" Bases: AutoEnum
Defines variable sorting options.
Source code inprefect/server/schemas/sorting.py
class VariableSort(AutoEnum):\n \"\"\"Defines variables sorting options.\"\"\"\n\n CREATED_DESC = \"CREATED_DESC\"\n UPDATED_DESC = \"UPDATED_DESC\"\n NAME_DESC = \"NAME_DESC\"\n NAME_ASC = \"NAME_ASC\"\n\n def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort variables\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Variable.created.desc(),\n \"UPDATED_DESC\": db.Variable.updated.desc(),\n \"NAME_DESC\": db.Variable.name.desc(),\n \"NAME_ASC\": db.Variable.name.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort
","text":"Return an expression used to sort variables
Source code inprefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n \"\"\"Return an expression used to sort variables\"\"\"\n sort_mapping = {\n \"CREATED_DESC\": db.Variable.created.desc(),\n \"UPDATED_DESC\": db.Variable.updated.desc(),\n \"NAME_DESC\": db.Variable.name.desc(),\n \"NAME_ASC\": db.Variable.name.asc(),\n }\n return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states
","text":"State schemas.
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State
","text":" Bases: StateBaseModel
, Generic[R]
Represents the state of a run.
Source code inprefect/server/schemas/states.py
class State(StateBaseModel, Generic[R]):\n \"\"\"Represents the state of a run.\"\"\"\n\n class Config:\n orm_mode = True\n\n type: StateType\n name: Optional[str] = Field(default=None)\n timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n message: Optional[str] = Field(default=None, example=\"Run started\")\n data: Optional[Any] = Field(\n default=None,\n description=(\n \"Data associated with the state, e.g. a result. \"\n \"Content must be storable as JSON.\"\n ),\n )\n state_details: StateDetails = Field(default_factory=StateDetails)\n\n @classmethod\n def from_orm_without_result(\n cls,\n orm_state: Union[\n \"prefect.server.database.orm_models.ORMFlowRunState\",\n \"prefect.server.database.orm_models.ORMTaskRunState\",\n ],\n with_data: Optional[Any] = None,\n ):\n \"\"\"\n During orchestration, ORM states can be instantiated prior to inserting results\n into the artifact table and the `data` field will not be eagerly loaded. In\n these cases, sqlalchemy will attempt to lazily load the the relationship, which\n will fail when called within a synchronous pydantic method.\n\n This method will construct a `State` object from an ORM model without a loaded\n artifact and attach data passed using the `with_data` argument to the `data`\n field.\n \"\"\"\n\n field_keys = cls.schema()[\"properties\"].keys()\n state_data = {\n field: getattr(orm_state, field, None)\n for field in field_keys\n if field != \"data\"\n }\n state_data[\"data\"] = with_data\n return cls(**state_data)\n\n @validator(\"name\", always=True)\n def default_name_from_type(cls, v, *, values, **kwargs):\n \"\"\"If a name is not provided, use the type\"\"\"\n\n # if `type` is not in `values` it means the `type` didn't pass its own\n # validation check and an error will be raised after this function is called\n if v is None and values.get(\"type\"):\n v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n return v\n\n @root_validator\n def default_scheduled_start_time(cls, values):\n \"\"\"\n TODO: This should throw an error instead of setting a default but is out of\n scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n into work refactoring state initialization\n \"\"\"\n if values.get(\"type\") == StateType.SCHEDULED:\n state_details = values.setdefault(\n \"state_details\", cls.__fields__[\"state_details\"].get_default()\n )\n if not state_details.scheduled_time:\n state_details.scheduled_time = pendulum.now(\"utc\")\n return values\n\n def is_scheduled(self) -> bool:\n return self.type == StateType.SCHEDULED\n\n def is_pending(self) -> bool:\n return self.type == StateType.PENDING\n\n def is_running(self) -> bool:\n return self.type == StateType.RUNNING\n\n def is_completed(self) -> bool:\n return self.type == StateType.COMPLETED\n\n def is_failed(self) -> bool:\n return self.type == StateType.FAILED\n\n def is_crashed(self) -> bool:\n return self.type == StateType.CRASHED\n\n def is_cancelled(self) -> bool:\n return self.type == StateType.CANCELLED\n\n def is_cancelling(self) -> bool:\n return self.type == StateType.CANCELLING\n\n def is_final(self) -> bool:\n return self.type in TERMINAL_STATES\n\n def is_paused(self) -> bool:\n return self.type == StateType.PAUSED\n\n def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n \"\"\"\n Copying API models should return an object that could be inserted into the\n database again. 
The 'timestamp' is reset using the default factory.\n \"\"\"\n update = update or {}\n update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n # Backwards compatible `result` handling on the server-side schema\n from prefect.states import State\n\n warnings.warn(\n (\n \"`result` is no longer supported by\"\n \" `prefect.server.schemas.states.State` and will be removed in a future\"\n \" release. When result retrieval is needed, use `prefect.states.State`.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n state = State.parse_obj(self)\n return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n def to_state_create(self):\n # Backwards compatibility for `to_state_create`\n from prefect.client.schemas import State\n\n warnings.warn(\n (\n \"Use of `prefect.server.schemas.states.State` from the client is\"\n \" deprecated and support will be removed in a future release. Use\"\n \" `prefect.states.State` instead.\"\n ),\n DeprecationWarning,\n stacklevel=2,\n )\n\n state = State.parse_obj(self)\n return state.to_state_create()\n\n def __repr__(self) -> str:\n \"\"\"\n Generates a complete state representation appropriate for introspection\n and debugging, including the result:\n\n `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n \"\"\"\n from prefect.deprecated.data_documents import DataDocument\n\n if isinstance(self.data, DataDocument):\n result = self.data.decode()\n else:\n result = self.data\n\n display = dict(\n message=repr(self.message),\n type=str(self.type.value),\n result=repr(result),\n )\n\n return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n def __str__(self) -> str:\n \"\"\"\n Generates a simple state representation appropriate for logging:\n\n `MyCompletedState(\"my message\", type=COMPLETED)`\n \"\"\"\n\n display = []\n\n if self.message:\n display.append(repr(self.message))\n\n if self.type.value.lower() != self.name.lower():\n display.append(f\"type={self.type.value}\")\n\n return f\"{self.name}({', '.join(display)})\"\n\n def __hash__(self) -> int:\n return hash(\n (\n getattr(self.state_details, \"flow_run_id\", None),\n getattr(self.state_details, \"task_run_id\", None),\n self.timestamp,\n self.type,\n )\n )\n
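A small sketch of constructing a State directly and using its helpers (the message is an example value):

from prefect.server.schemas.states import State, StateType

state = State(type=StateType.COMPLETED, message="Run finished")
assert state.name == "Completed"  # default_name_from_type derives the name
assert state.is_completed() and state.is_final()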
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result
classmethod
","text":"During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the data
field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.
This method will construct a State
object from an ORM model without a loaded artifact and attach data passed using the with_data
argument to the data
field.
prefect/server/schemas/states.py
@classmethod\ndef from_orm_without_result(\n cls,\n orm_state: Union[\n \"prefect.server.database.orm_models.ORMFlowRunState\",\n \"prefect.server.database.orm_models.ORMTaskRunState\",\n ],\n with_data: Optional[Any] = None,\n):\n \"\"\"\n During orchestration, ORM states can be instantiated prior to inserting results\n into the artifact table and the `data` field will not be eagerly loaded. In\n these cases, sqlalchemy will attempt to lazily load the the relationship, which\n will fail when called within a synchronous pydantic method.\n\n This method will construct a `State` object from an ORM model without a loaded\n artifact and attach data passed using the `with_data` argument to the `data`\n field.\n \"\"\"\n\n field_keys = cls.schema()[\"properties\"].keys()\n state_data = {\n field: getattr(orm_state, field, None)\n for field in field_keys\n if field != \"data\"\n }\n state_data[\"data\"] = with_data\n return cls(**state_data)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel
","text":" Bases: IDBaseModel
prefect/server/schemas/states.py
class StateBaseModel(IDBaseModel):\n def orm_dict(\n self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n ) -> dict:\n \"\"\"\n This method is used as a convenience method for constructing fixtues by first\n building a `State` schema object and converting it into an ORM-compatible\n format. Because the `data` field is not writable on ORM states, this method\n omits the `data` field entirely for the purposes of constructing an ORM model.\n If state data is required, an artifact must be created separately.\n \"\"\"\n\n schema_dict = self.dict(\n *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n )\n # remove the data field in order to construct a state ORM model\n schema_dict.pop(\"data\", None)\n return schema_dict\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json
","text":"Returns a representation of the model as JSON.
If include_secrets=True
, then SecretStr
and SecretBytes
objects are fully revealed. Otherwise they are obfuscated.
prefect/server/utilities/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n \"\"\"\n Returns a representation of the model as JSON.\n\n If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n fully revealed. Otherwise they are obfuscated.\n\n \"\"\"\n if include_secrets:\n if \"encoder\" in kwargs:\n raise ValueError(\n \"Alternative encoder provided; can not set encoder for\"\n \" SecretFields.\"\n )\n kwargs[\"encoder\"] = partial(\n custom_pydantic_encoder,\n {SecretField: lambda v: v.get_secret_value() if v else None},\n )\n return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType
","text":" Bases: AutoEnum
Enumeration of state types.
Source code inprefect/server/schemas/states.py
class StateType(AutoEnum):\n \"\"\"Enumeration of state types.\"\"\"\n\n SCHEDULED = AutoEnum.auto()\n PENDING = AutoEnum.auto()\n RUNNING = AutoEnum.auto()\n COMPLETED = AutoEnum.auto()\n FAILED = AutoEnum.auto()\n CANCELLED = AutoEnum.auto()\n CRASHED = AutoEnum.auto()\n PAUSED = AutoEnum.auto()\n CANCELLING = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto
staticmethod
","text":"Exposes enum.auto()
to avoid requiring a second import to use AutoEnum
prefect/utilities/collections.py
@staticmethod\ndef auto():\n \"\"\"\n Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n \"\"\"\n return auto()\n
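A minimal sketch of the convenience this provides; the Color enum is a made-up example:

from prefect.utilities.collections import AutoEnum

class Color(AutoEnum):
    RED = AutoEnum.auto()  # no separate `from enum import auto` required
    BLUE = AutoEnum.auto()

assert Color.RED.value == "RED"  # AutoEnum values mirror the member names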
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry
","text":"Convenience function for creating AwaitingRetry
states.
Returns:
Name Type Description
State State an AwaitingRetry state
Source code inprefect/server/schemas/states.py
def AwaitingRetry(\n scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n Returns:\n State: a AwaitingRetry state\n \"\"\"\n return Scheduled(\n cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n )\n
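For illustration, a quick sketch of what the helper produces:

from prefect.server.schemas.states import AwaitingRetry, StateType

state = AwaitingRetry()
assert state.type == StateType.SCHEDULED  # built on the Scheduled() helper
assert state.name == "AwaitingRetry"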
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled
","text":"Convenience function for creating Cancelled
states.
Returns:
Name Type Description
State State a Cancelled state
Source code inprefect/server/schemas/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelled` states.\n\n Returns:\n State: a Cancelled state\n \"\"\"\n return cls(type=StateType.CANCELLED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling
","text":"Convenience function for creating Cancelling
states.
Returns:
Name Type Description
State State a Cancelling state
Source code inprefect/server/schemas/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Cancelling` states.\n\n Returns:\n State: a Cancelling state\n \"\"\"\n return cls(type=StateType.CANCELLING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed
","text":"Convenience function for creating Completed
states.
Returns:
Name Type Description
State State a Completed state
Source code inprefect/server/schemas/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Completed` states.\n\n Returns:\n State: a Completed state\n \"\"\"\n return cls(type=StateType.COMPLETED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed
","text":"Convenience function for creating Crashed
states.
Returns:
Name Type Description
State State a Crashed state
Source code inprefect/server/schemas/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Crashed` states.\n\n Returns:\n State: a Crashed state\n \"\"\"\n return cls(type=StateType.CRASHED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed
","text":"Convenience function for creating Failed
states.
Returns:
Name Type Description
State State a Failed state
Source code inprefect/server/schemas/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Failed` states.\n\n Returns:\n State: a Failed state\n \"\"\"\n return cls(type=StateType.FAILED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late
","text":"Convenience function for creating Late
states.
Returns:
Name Type Description
State State a Late state
Source code inprefect/server/schemas/states.py
def Late(\n scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Late` states.\n\n Returns:\n State: a Late state\n \"\"\"\n return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused
","text":"Convenience function for creating Paused
states.
Returns:
Name Type Description
State State a Paused state
Source code inprefect/server/schemas/states.py
def Paused(\n cls: Type[State] = State,\n timeout_seconds: int = None,\n pause_expiration_time: datetime.datetime = None,\n reschedule: bool = False,\n pause_key: str = None,\n **kwargs,\n) -> State:\n \"\"\"Convenience function for creating `Paused` states.\n\n Returns:\n State: a Paused state\n \"\"\"\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n if state_details.pause_timeout:\n raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n if pause_expiration_time is not None and timeout_seconds is not None:\n raise ValueError(\n \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n )\n\n if pause_expiration_time is None and timeout_seconds is None:\n pass\n else:\n state_details.pause_timeout = pause_expiration_time or (\n pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n )\n\n state_details.pause_reschedule = reschedule\n state_details.pause_key = pause_key\n\n return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
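A short sketch of the argument rules enforced above (the 300-second timeout is an example value):

from prefect.server.schemas.states import Paused

state = Paused(timeout_seconds=300)
print(state.state_details.pause_timeout)  # roughly five minutes from now

# supplying both expiration styles at once is rejected:
# Paused(timeout_seconds=300, pause_expiration_time=...)  -> ValueError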
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending
","text":"Convenience function for creating Pending
states.
Returns:
Name Type Description
State State a Pending state
Source code inprefect/server/schemas/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Pending` states.\n\n Returns:\n State: a Pending state\n \"\"\"\n return cls(type=StateType.PENDING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying
","text":"Convenience function for creating Retrying
states.
Returns:
Name Type Description
State State a Retrying state
Source code inprefect/server/schemas/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Retrying` states.\n\n Returns:\n State: a Retrying state\n \"\"\"\n return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running
","text":"Convenience function for creating Running
states.
Returns:
Name Type Description
State State a Running state
Source code inprefect/server/schemas/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n \"\"\"Convenience function for creating `Running` states.\n\n Returns:\n State: a Running state\n \"\"\"\n return cls(type=StateType.RUNNING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled
","text":"Convenience function for creating Scheduled
states.
Returns:
Name Type Description
State State a Scheduled state
Source code inprefect/server/schemas/states.py
def Scheduled(\n scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n \"\"\"Convenience function for creating `Scheduled` states.\n\n Returns:\n State: a Scheduled state\n \"\"\"\n # NOTE: `scheduled_time` must come first for backwards compatibility\n\n state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n if scheduled_time is None:\n scheduled_time = pendulum.now(\"UTC\")\n elif state_details.scheduled_time:\n raise ValueError(\"An extra scheduled_time was provided in state_details\")\n state_details.scheduled_time = scheduled_time\n\n return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
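A sketch of the scheduled_time handling (example values):

import pendulum

from prefect.server.schemas.states import Scheduled

state = Scheduled(scheduled_time=pendulum.now("UTC").add(hours=1))
print(state.state_details.scheduled_time)
# passing a scheduled_time both directly and inside state_details raises ValueError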
"},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs
","text":"The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS
.
MarkLateRuns
","text":" Bases: LoopService
A simple loop service responsible for identifying flow runs that are \"late\".
A flow run is defined as \"late\" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.
Source code inprefect/server/services/late_runs.py
class MarkLateRuns(LoopService):\n \"\"\"\n A simple loop service responsible for identifying flow runs that are \"late\".\n\n A flow run is defined as \"late\" if has not scheduled within a certain amount\n of time after its scheduled start time. The exact amount is configurable in\n Prefect REST API Settings.\n \"\"\"\n\n def __init__(self, loop_seconds: float = None, **kwargs):\n super().__init__(\n loop_seconds=loop_seconds\n or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n **kwargs,\n )\n\n # mark runs late if they are this far past their expected start time\n self.mark_late_after: datetime.timedelta = (\n PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n )\n\n # query for this many runs to mark as late at once\n self.batch_size = 400\n\n @inject_db\n async def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Mark flow runs as late by:\n\n - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n \"\"\"\n scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n seconds=self.mark_late_after.total_seconds()\n )\n\n while True:\n async with db.session_context(begin_transaction=True) as session:\n query = self._get_select_late_flow_runs_query(\n scheduled_to_start_before=scheduled_to_start_before, db=db\n )\n\n result = await session.execute(query)\n runs = result.all()\n\n # mark each run as late\n for run in runs:\n await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n # if no runs were found, exit the loop\n if len(runs) < self.batch_size:\n break\n\n self.logger.info(\"Finished monitoring for late runs.\")\n\n @inject_db\n def _get_select_late_flow_runs_query(\n self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n ):\n \"\"\"\n Returns a sqlalchemy query for late flow runs.\n\n Args:\n scheduled_to_start_before: the maximum next scheduled start time of\n scheduled flow runs to consider in the returned query\n \"\"\"\n query = (\n sa.select(\n db.FlowRun.id,\n db.FlowRun.next_scheduled_start_time,\n )\n .where(\n # The next scheduled start time is in the past, including the mark late\n # after buffer\n (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n db.FlowRun.state_type == states.StateType.SCHEDULED,\n db.FlowRun.state_name == \"Scheduled\",\n )\n .limit(self.batch_size)\n )\n return query\n\n async def _mark_flow_run_as_late(\n self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n ) -> None:\n \"\"\"\n Mark a flow run as late.\n\n Pass-through method for overrides.\n \"\"\"\n await models.flow_runs.set_flow_run_state(\n session=session,\n flow_run_id=flow_run.id,\n state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n flow_policy=MarkLateRunsPolicy, # type: ignore\n )\n
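As a sketch of tuning the threshold through the standard settings helpers; the 60-second value is an arbitrary example:

import datetime

from prefect.settings import (
    PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS,
    temporary_settings,
)

# inside this context, runs are marked late 60 seconds after their
# scheduled start time instead of the default
with temporary_settings(
    {PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS: datetime.timedelta(seconds=60)}
):
    ...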
"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once
async
","text":"Mark flow runs as late by:
Late
stateprefect/server/services/late_runs.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Mark flow runs as late by:\n\n - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n \"\"\"\n scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n seconds=self.mark_late_after.total_seconds()\n )\n\n while True:\n async with db.session_context(begin_transaction=True) as session:\n query = self._get_select_late_flow_runs_query(\n scheduled_to_start_before=scheduled_to_start_before, db=db\n )\n\n result = await session.execute(query)\n runs = result.all()\n\n # mark each run as late\n for run in runs:\n await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n # if no runs were found, exit the loop\n if len(runs) < self.batch_size:\n break\n\n self.logger.info(\"Finished monitoring for late runs.\")\n
"},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service
","text":"The base class for all Prefect REST API loop services.
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService
","text":"Loop services are relatively lightweight maintenance routines that need to run periodically.
This class makes it straightforward to design and integrate them. Users only need to define the run_once
coroutine to describe the behavior of the service on each loop.
prefect/server/services/loop_service.py
class LoopService:\n \"\"\"\n Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n This class makes it straightforward to design and integrate them. Users only need to\n define the `run_once` coroutine to describe the behavior of the service on each loop.\n \"\"\"\n\n loop_seconds = 60\n\n def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n \"\"\"\n Args:\n loop_seconds (float): if provided, overrides the loop interval\n otherwise specified as a class variable\n handle_signals (bool): if True (default), SIGINT and SIGTERM are\n gracefully intercepted and shut down the running service.\n \"\"\"\n if loop_seconds:\n self.loop_seconds = loop_seconds # seconds between runs\n self._should_stop = False # flag for whether the service should stop running\n self._is_running = False # flag for whether the service is running\n self.name = type(self).__name__\n self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n if handle_signals:\n _register_signal(signal.SIGINT, self._stop)\n _register_signal(signal.SIGTERM, self._stop)\n\n @inject_db\n async def _on_start(self, db: PrefectDBInterface) -> None:\n \"\"\"\n Called prior to running the service\n \"\"\"\n # reset the _should_stop flag\n self._should_stop = False\n # set the _is_running flag\n self._is_running = True\n\n async def _on_stop(self) -> None:\n \"\"\"\n Called after running the service\n \"\"\"\n # reset the _is_running flag\n self._is_running = False\n\n async def start(self, loops=None) -> None:\n \"\"\"\n Run the service `loops` time. Pass loops=None to run forever.\n\n Args:\n loops (int, optional): the number of loops to run before exiting.\n \"\"\"\n\n await self._on_start()\n\n i = 0\n while not self._should_stop:\n start_time = pendulum.now(\"UTC\")\n\n try:\n self.logger.debug(f\"About to run {self.name}...\")\n await self.run_once()\n\n except NotImplementedError as exc:\n raise exc from None\n\n # if an error is raised, log and continue\n except Exception as exc:\n self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n end_time = pendulum.now(\"UTC\")\n\n # if service took longer than its loop interval, log a warning\n # that the interval might be too short\n if (end_time - start_time).total_seconds() > self.loop_seconds:\n self.logger.warning(\n f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n \" to run, which is longer than its loop interval of\"\n f\" {self.loop_seconds} seconds.\"\n )\n\n # check if early stopping was requested\n i += 1\n if loops is not None and i == loops:\n self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n await self.stop(block=False)\n\n # next run is every \"loop seconds\" after each previous run *started*.\n # note that if the loop took unexpectedly long, the \"next_run\" time\n # might be in the past, which will result in an instant start\n next_run = max(\n start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n )\n self.logger.debug(f\"Finished running {self.name}. 
Next run at {next_run}\")\n\n # check the `_should_stop` flag every 1 seconds until the next run time is reached\n while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n await asyncio.sleep(\n min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n )\n\n await self._on_stop()\n\n async def stop(self, block=True) -> None:\n \"\"\"\n Gracefully stops a running LoopService and optionally blocks until the\n service stops.\n\n Args:\n block (bool): if True, blocks until the service is\n finished running. Otherwise it requests a stop and returns but\n the service may still be running a final loop.\n\n \"\"\"\n self._stop()\n\n if block:\n # if block=True, sleep until the service stops running,\n # but no more than `loop_seconds` to avoid a deadlock\n with anyio.move_on_after(self.loop_seconds):\n while self._is_running:\n await asyncio.sleep(0.1)\n\n # if the service is still running after `loop_seconds`, something's wrong\n if self._is_running:\n self.logger.warning(\n f\"`stop(block=True)` was called on {self.name} but more than one\"\n f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n \" usually means something is wrong. If `stop()` was called from\"\n \" inside the loop service, use `stop(block=False)` instead.\"\n )\n\n def _stop(self, *_) -> None:\n \"\"\"\n Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n arguments so it can be used as a signal handler.\n \"\"\"\n self._should_stop = True\n\n async def run_once(self) -> None:\n \"\"\"\n Represents one loop of the service.\n\n Users should override this method.\n\n To actually run the service once, call `LoopService().start(loops=1)`\n instead of `LoopService().run_once()`, because this method will not invoke setup\n and teardown methods properly.\n \"\"\"\n raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
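A minimal sketch of a custom service; HeartbeatService is a made-up name, and handle_signals=False keeps the example from registering global signal handlers:

import asyncio

from prefect.server.services.loop_service import LoopService

class HeartbeatService(LoopService):
    loop_seconds = 10  # run every ten seconds

    async def run_once(self) -> None:
        self.logger.info("heartbeat")

# run exactly one loop, including the setup and teardown hooks
asyncio.run(HeartbeatService(handle_signals=False).start(loops=1))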
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once
async
","text":"Represents one loop of the service.
Users should override this method.
To actually run the service once, call LoopService().start(loops=1)
instead of LoopService().run_once()
, because this method will not invoke setup and teardown methods properly.
prefect/server/services/loop_service.py
async def run_once(self) -> None:\n \"\"\"\n Represents one loop of the service.\n\n Users should override this method.\n\n To actually run the service once, call `LoopService().start(loops=1)`\n instead of `LoopService().run_once()`, because this method will not invoke setup\n and teardown methods properly.\n \"\"\"\n raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start
async
","text":"Run the service loops
times. Pass loops=None to run forever.
Parameters:
Name Type Description Default
loops int the number of loops to run before exiting. None
Source code in prefect/server/services/loop_service.py
async def start(self, loops=None) -> None:\n \"\"\"\n Run the service `loops` time. Pass loops=None to run forever.\n\n Args:\n loops (int, optional): the number of loops to run before exiting.\n \"\"\"\n\n await self._on_start()\n\n i = 0\n while not self._should_stop:\n start_time = pendulum.now(\"UTC\")\n\n try:\n self.logger.debug(f\"About to run {self.name}...\")\n await self.run_once()\n\n except NotImplementedError as exc:\n raise exc from None\n\n # if an error is raised, log and continue\n except Exception as exc:\n self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n end_time = pendulum.now(\"UTC\")\n\n # if service took longer than its loop interval, log a warning\n # that the interval might be too short\n if (end_time - start_time).total_seconds() > self.loop_seconds:\n self.logger.warning(\n f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n \" to run, which is longer than its loop interval of\"\n f\" {self.loop_seconds} seconds.\"\n )\n\n # check if early stopping was requested\n i += 1\n if loops is not None and i == loops:\n self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n await self.stop(block=False)\n\n # next run is every \"loop seconds\" after each previous run *started*.\n # note that if the loop took unexpectedly long, the \"next_run\" time\n # might be in the past, which will result in an instant start\n next_run = max(\n start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n )\n self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n # check the `_should_stop` flag every 1 seconds until the next run time is reached\n while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n await asyncio.sleep(\n min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n )\n\n await self._on_stop()\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop
async
","text":"Gracefully stops a running LoopService and optionally blocks until the service stops.
Parameters:
Name Type Description Default
block bool if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. True
Source code in prefect/server/services/loop_service.py
async def stop(self, block=True) -> None:\n \"\"\"\n Gracefully stops a running LoopService and optionally blocks until the\n service stops.\n\n Args:\n block (bool): if True, blocks until the service is\n finished running. Otherwise it requests a stop and returns but\n the service may still be running a final loop.\n\n \"\"\"\n self._stop()\n\n if block:\n # if block=True, sleep until the service stops running,\n # but no more than `loop_seconds` to avoid a deadlock\n with anyio.move_on_after(self.loop_seconds):\n while self._is_running:\n await asyncio.sleep(0.1)\n\n # if the service is still running after `loop_seconds`, something's wrong\n if self._is_running:\n self.logger.warning(\n f\"`stop(block=True)` was called on {self.name} but more than one\"\n f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n \" usually means something is wrong. If `stop()` was called from\"\n \" inside the loop service, use `stop(block=False)` instead.\"\n )\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services
async
","text":"Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.
Source code inprefect/server/services/loop_service.py
async def run_multiple_services(loop_services: List[LoopService]):\n \"\"\"\n Only one signal handler can be active at a time, so this function takes a list\n of loop services and runs all of them with a global signal handler.\n \"\"\"\n\n def stop_all_services(self, *_):\n for service in loop_services:\n service._stop()\n\n signal.signal(signal.SIGINT, stop_all_services)\n signal.signal(signal.SIGTERM, stop_all_services)\n await asyncio.gather(*[service.start() for service in loop_services])\n
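A sketch of running two of the services documented on these pages under the shared handler; creating each with handle_signals=False (an assumption about the intended wiring) leaves only the global handler installed here active:

import asyncio

from prefect.server.services.late_runs import MarkLateRuns
from prefect.server.services.loop_service import run_multiple_services
from prefect.server.services.scheduler import Scheduler

# runs until SIGINT/SIGTERM stops all services together
asyncio.run(
    run_multiple_services(
        [Scheduler(handle_signals=False), MarkLateRuns(handle_signals=False)]
    )
)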
"},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler
","text":"The Scheduler service.
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler
","text":" Bases: Scheduler
A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.
Note that scheduling is idempotent, so it's OK for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.
Source code inprefect/server/services/scheduler.py
class RecentDeploymentsScheduler(Scheduler):\n \"\"\"\n A scheduler that only schedules deployments that were updated very recently.\n This scheduler can run on a tight loop and ensure that runs from\n newly-created or updated deployments are rapidly scheduled without having to\n wait for the \"main\" scheduler to complete its loop.\n\n Note that scheduling is idempotent, so its ok for this scheduler to attempt\n to schedule the same deployments as the main scheduler. It's purpose is to\n accelerate scheduling for any deployments that users are interacting with.\n \"\"\"\n\n # this scheduler runs on a tight loop\n loop_seconds = 5\n\n @inject_db\n def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n \"\"\"\n Returns a sqlalchemy query for selecting deployments to schedule\n \"\"\"\n query = (\n sa.select(db.Deployment.id)\n .where(\n sa.and_(\n db.Deployment.paused.is_not(True),\n # use a slightly larger window than the loop interval to pick up\n # any deployments that were created *while* the scheduler was\n # last running (assuming the scheduler takes less than one\n # second to run). Scheduling is idempotent so picking up schedules\n # multiple times is not a concern.\n db.Deployment.updated\n >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n (\n # Only include deployments that have at least one\n # active schedule.\n sa.select(db.DeploymentSchedule.deployment_id)\n .where(\n sa.and_(\n db.DeploymentSchedule.deployment_id == db.Deployment.id,\n db.DeploymentSchedule.active.is_(True),\n )\n )\n .exists()\n ),\n )\n )\n .order_by(db.Deployment.id)\n .limit(self.deployment_batch_size)\n )\n return query\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler
","text":" Bases: LoopService
A loop service that schedules flow runs from deployments.
Source code inprefect/server/services/scheduler.py
class Scheduler(LoopService):\n \"\"\"\n A loop service that schedules flow runs from deployments.\n \"\"\"\n\n # the main scheduler takes its loop interval from\n # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n loop_seconds = None\n\n def __init__(self, loop_seconds: float = None, **kwargs):\n super().__init__(\n loop_seconds=(\n loop_seconds\n or self.loop_seconds\n or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n ),\n **kwargs,\n )\n self.deployment_batch_size: int = (\n PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n )\n self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n self.max_scheduled_time: datetime.timedelta = (\n PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n )\n self.min_scheduled_time: datetime.timedelta = (\n PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n )\n self.insert_batch_size = (\n PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n )\n\n @inject_db\n async def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Schedule flow runs by:\n\n - Querying for deployments with active schedules\n - Generating the next set of flow runs based on each deployments schedule\n - Inserting all scheduled flow runs into the database\n\n All inserted flow runs are committed to the database at the termination of the\n loop.\n \"\"\"\n total_inserted_runs = 0\n\n last_id = None\n while True:\n async with db.session_context(begin_transaction=False) as session:\n query = self._get_select_deployments_to_schedule_query()\n\n # use cursor based pagination\n if last_id:\n query = query.where(db.Deployment.id > last_id)\n\n result = await session.execute(query)\n deployment_ids = result.scalars().unique().all()\n\n # collect runs across all deployments\n try:\n runs_to_insert = await self._collect_flow_runs(\n session=session, deployment_ids=deployment_ids\n )\n except TryAgain:\n continue\n\n # bulk insert the runs based on batch size setting\n for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n async with db.session_context(begin_transaction=True) as session:\n inserted_runs = await self._insert_scheduled_flow_runs(\n session=session, runs=batch\n )\n total_inserted_runs += len(inserted_runs)\n\n # if this is the last page of deployments, exit the loop\n if len(deployment_ids) < self.deployment_batch_size:\n break\n else:\n # record the last deployment ID\n last_id = deployment_ids[-1]\n\n self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n @inject_db\n def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n \"\"\"\n Returns a sqlalchemy query for selecting deployments to schedule.\n\n The query gets the IDs of any deployments with:\n\n - an active schedule\n - EITHER:\n - fewer than `min_runs` auto-scheduled runs\n - OR the max scheduled time is less than `max_scheduled_time` in the future\n \"\"\"\n now = pendulum.now(\"UTC\")\n query = (\n sa.select(db.Deployment.id)\n .select_from(db.Deployment)\n # TODO: on Postgres, this could be replaced with a lateral join that\n # sorts by `next_scheduled_start_time desc` and limits by\n # `self.min_runs` for a ~ 50% speedup. 
At the time of writing,\n # performance of this universal query appears to be fast enough that\n # this optimization is not worth maintaining db-specific queries\n .join(\n db.FlowRun,\n # join on matching deployments, only picking up future scheduled runs\n sa.and_(\n db.Deployment.id == db.FlowRun.deployment_id,\n db.FlowRun.state_type == StateType.SCHEDULED,\n db.FlowRun.next_scheduled_start_time >= now,\n db.FlowRun.auto_scheduled.is_(True),\n ),\n isouter=True,\n )\n .where(\n sa.and_(\n db.Deployment.paused.is_not(True),\n (\n # Only include deployments that have at least one\n # active schedule.\n sa.select(db.DeploymentSchedule.deployment_id)\n .where(\n sa.and_(\n db.DeploymentSchedule.deployment_id == db.Deployment.id,\n db.DeploymentSchedule.active.is_(True),\n )\n )\n .exists()\n ),\n )\n )\n .group_by(db.Deployment.id)\n # having EITHER fewer than three runs OR runs not scheduled far enough out\n .having(\n sa.or_(\n sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n sa.func.max(db.FlowRun.next_scheduled_start_time)\n < now + self.min_scheduled_time,\n )\n )\n .order_by(db.Deployment.id)\n .limit(self.deployment_batch_size)\n )\n return query\n\n async def _collect_flow_runs(\n self,\n session: sa.orm.Session,\n deployment_ids: List[UUID],\n ) -> List[Dict]:\n runs_to_insert = []\n for deployment_id in deployment_ids:\n now = pendulum.now(\"UTC\")\n # guard against erroneously configured schedules\n try:\n runs_to_insert.extend(\n await self._generate_scheduled_flow_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=now,\n end_time=now + self.max_scheduled_time,\n min_time=self.min_scheduled_time,\n min_runs=self.min_runs,\n max_runs=self.max_runs,\n )\n )\n except Exception:\n self.logger.exception(\n f\"Error scheduling deployment {deployment_id!r}.\",\n )\n finally:\n connection = await session.connection()\n if connection.invalidated:\n # If the error we handled above was the kind of database error that\n # causes underlying transaction to rollback and the connection to\n # become invalidated, rollback this session. Errors that may cause\n # this are connection drops, database restarts, and things of the\n # sort.\n #\n # This rollback _does not rollback a transaction_, since that has\n # actually already happened due to the error above. 
It brings the\n # Python session in sync with underlying connection so that when we\n # exec the outer with block, the context manager will not attempt to\n # commit the session.\n #\n # Then, raise TryAgain to break out of these nested loops, back to\n # the outer loop, where we'll begin a new transaction with\n # session.begin() in the next loop iteration.\n await session.rollback()\n raise TryAgain()\n return runs_to_insert\n\n @inject_db\n async def _generate_scheduled_flow_runs(\n self,\n session: sa.orm.Session,\n deployment_id: UUID,\n start_time: datetime.datetime,\n end_time: datetime.datetime,\n min_time: datetime.timedelta,\n min_runs: int,\n max_runs: int,\n db: PrefectDBInterface,\n ) -> List[Dict]:\n \"\"\"\n Given a `deployment_id` and schedule params, generates a list of flow run\n objects and associated scheduled states that represent scheduled flow runs.\n\n Pass-through method for overrides.\n\n\n Args:\n session: a database session\n deployment_id: the id of the deployment to schedule\n start_time: the time from which to start scheduling runs\n end_time: runs will be scheduled until at most this time\n min_time: runs will be scheduled until at least this far in the future\n min_runs: a minimum amount of runs to schedule\n max_runs: a maximum amount of runs to schedule\n\n This function will generate the minimum number of runs that satisfy the min\n and max times, and the min and max counts. Specifically, the following order\n will be respected:\n\n - Runs will be generated starting on or after the `start_time`\n - No more than `max_runs` runs will be generated\n - No runs will be generated after `end_time` is reached\n - At least `min_runs` runs will be generated\n - Runs will be generated until at least `start_time + min_time` is reached\n\n \"\"\"\n return await models.deployments._generate_scheduled_flow_runs(\n session=session,\n deployment_id=deployment_id,\n start_time=start_time,\n end_time=end_time,\n min_time=min_time,\n min_runs=min_runs,\n max_runs=max_runs,\n )\n\n @inject_db\n async def _insert_scheduled_flow_runs(\n self,\n session: sa.orm.Session,\n runs: List[Dict],\n db: PrefectDBInterface,\n ) -> List[UUID]:\n \"\"\"\n Given a list of flow runs to schedule, as generated by\n `_generate_scheduled_flow_runs`, inserts them into the database. Note this is a\n separate method to facilitate batch operations on many scheduled runs.\n\n Pass-through method for overrides.\n \"\"\"\n return await models.deployments._insert_scheduled_flow_runs(\n session=session, runs=runs\n )\n
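For local experimentation, the service can also be driven directly from Python. A minimal sketch, assuming the LoopService base class exposes an async start(loops=...) entrypoint as Prefect's other loop services do; the loop_seconds override shown is an arbitrary example value:
import asyncio\n\nfrom prefect.server.services.scheduler import Scheduler\n\nasync def main():\n    # run exactly one scheduling pass, then stop (assumes LoopService.start(loops=...))\n    service = Scheduler(loop_seconds=15)\n    await service.start(loops=1)\n\nasyncio.run(main())\n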
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once
async
","text":"Schedule flow runs by:
All inserted flow runs are committed to the database at the termination of the loop.
Source code inprefect/server/services/scheduler.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n \"\"\"\n Schedule flow runs by:\n\n - Querying for deployments with active schedules\n - Generating the next set of flow runs based on each deployments schedule\n - Inserting all scheduled flow runs into the database\n\n All inserted flow runs are committed to the database at the termination of the\n loop.\n \"\"\"\n total_inserted_runs = 0\n\n last_id = None\n while True:\n async with db.session_context(begin_transaction=False) as session:\n query = self._get_select_deployments_to_schedule_query()\n\n # use cursor based pagination\n if last_id:\n query = query.where(db.Deployment.id > last_id)\n\n result = await session.execute(query)\n deployment_ids = result.scalars().unique().all()\n\n # collect runs across all deployments\n try:\n runs_to_insert = await self._collect_flow_runs(\n session=session, deployment_ids=deployment_ids\n )\n except TryAgain:\n continue\n\n # bulk insert the runs based on batch size setting\n for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n async with db.session_context(begin_transaction=True) as session:\n inserted_runs = await self._insert_scheduled_flow_runs(\n session=session, runs=batch\n )\n total_inserted_runs += len(inserted_runs)\n\n # if this is the last page of deployments, exit the loop\n if len(deployment_ids) < self.deployment_batch_size:\n break\n else:\n # record the last deployment ID\n last_id = deployment_ids[-1]\n\n self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain
","text":" Bases: Exception
Internal control-flow exception used to retry the Scheduler's main loop
Source code inprefect/server/services/scheduler.py
class TryAgain(Exception):\n \"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
"},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database
","text":"Utilities for interacting with Prefect REST API database and ORM layer.
Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID
","text":" Bases: FunctionElement
Platform-independent UUID default generator. Note the actual functionality for this class is specified in the compiles
-decorated functions below
prefect/server/utilities/database.py
class GenerateUUID(FunctionElement):\n \"\"\"\n Platform-independent UUID default generator.\n Note the actual functionality for this class is specified in the\n `compiles`-decorated functions below\n \"\"\"\n\n name = \"uuid_default\"\n
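A usage sketch (illustrative, not taken from the Prefect codebase) pairing GenerateUUID with the UUID type documented below as a server-side primary key default:
import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import UUID, GenerateUUID\n\nexample = sa.Table(\n    \"example\",\n    sa.MetaData(),\n    # the compiled default is dialect-specific; see the compiles-decorated\n    # functions in prefect/server/utilities/database.py\n    sa.Column(\"id\", UUID(), primary_key=True, server_default=GenerateUUID()),\n)\n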
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON
","text":" Bases: TypeDecorator
JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.
The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation
Source code inprefect/server/utilities/database.py
class JSON(TypeDecorator):\n \"\"\"\n JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n possible. Uses generic JSON otherwise.\n\n The \"base\" type is postgresql.JSONB to expose useful methods prior\n to SQL compilation\n \"\"\"\n\n impl = postgresql.JSONB\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n elif dialect.name == \"sqlite\":\n return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n else:\n return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n def process_bind_param(self, value, dialect):\n \"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n if not value:\n return value\n\n # PostgreSQL does not support the floating point extrema values `NaN`,\n # `-Infinity`, or `Infinity`\n # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n #\n # SQLite supports storing and retrieving full JSON values that include\n # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n # the value (like `json_extract`) will fail.\n #\n # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n # returned value. See more about `parse_constant` at\n # https://docs.python.org/3/library/json.html#json.load.\n return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
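The parse_constant round-trip used in process_bind_param can be reproduced with the standard json module alone; a small sketch of the same technique:
import json\n\n# NaN and Infinity survive json.dumps by default, but Postgres rejects them,\n# so they are replaced with None while re-parsing the serialized value\nvalue = {\"score\": float(\"nan\"), \"bound\": float(\"inf\")}\ncleaned = json.loads(json.dumps(value), parse_constant=lambda c: None)\nprint(cleaned)  # {'score': None, 'bound': None}\n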
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic
","text":" Bases: TypeDecorator
A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.
Source code inprefect/server/utilities/database.py
class Pydantic(TypeDecorator):\n \"\"\"\n A pydantic type that converts inserted parameters to\n json and converts read values to the pydantic type.\n \"\"\"\n\n impl = JSON\n cache_ok = True\n\n def __init__(self, pydantic_type, sa_column_type=None):\n super().__init__()\n self._pydantic_type = pydantic_type\n if sa_column_type is not None:\n self.impl = sa_column_type\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n # parse the value to ensure it complies with the schema\n # (this will raise validation errors if not)\n value = pydantic.parse_obj_as(self._pydantic_type, value)\n # sqlalchemy requires the bind parameter's value to be a python-native\n # collection of JSON-compatible objects. we achieve that by dumping the\n # value to a json string using the pydantic JSON encoder and re-parsing\n # it into a python-native form.\n return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n def process_result_value(self, value, dialect):\n if value is not None:\n # load the json object into a fully hydrated typed object\n return pydantic.parse_obj_as(self._pydantic_type, value)\n
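A usage sketch with a hypothetical pydantic model; the model and column names are illustrative:
import pydantic\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import Pydantic\n\nclass RetryPolicy(pydantic.BaseModel):\n    max_attempts: int = 3\n    delay_seconds: float = 10.0\n\n# values bound to this column are validated against RetryPolicy on write\n# and rehydrated into RetryPolicy instances on read\nretry_policy = sa.Column(\"retry_policy\", Pydantic(RetryPolicy))\n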
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp
","text":" Bases: TypeDecorator
TypeDecorator that ensures that timestamps have a timezone.
For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.
Source code inprefect/server/utilities/database.py
class Timestamp(TypeDecorator):\n \"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n For SQLite, all timestamps are converted to UTC (since they are stored\n as naive timestamps without timezones) and recovered as UTC.\n \"\"\"\n\n impl = sa.TIMESTAMP(timezone=True)\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n elif dialect.name == \"sqlite\":\n return dialect.type_descriptor(\n sqlite.DATETIME(\n # SQLite is very particular about datetimes, and performs all comparisons\n # as alphanumeric comparisons without regard for actual timestamp\n # semantics or timezones. Therefore, it's important to have uniform\n # and sortable datetime representations. The default is an ISO8601-compatible\n # string with NO time zone and a space (\" \") delimiter between the date\n # and the time. The below settings can be used to add a \"T\" delimiter but\n # will require all other sqlite datetimes to be set similarly, including\n # the custom default value for datetime columns and any handwritten SQL\n # formed with `strftime()`.\n #\n # store with \"T\" separator for time\n # storage_format=(\n # \"%(year)04d-%(month)02d-%(day)02d\"\n # \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n # ),\n # handle ISO 8601 with \"T\" or \" \" as the time separator\n # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n )\n )\n else:\n return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n else:\n if value.tzinfo is None:\n raise ValueError(\"Timestamps must have a timezone.\")\n elif dialect.name == \"sqlite\":\n return pendulum.instance(value).in_timezone(\"UTC\")\n else:\n return value\n\n def process_result_value(self, value, dialect):\n # retrieve timestamps in their native timezone (or UTC)\n if value is not None:\n return pendulum.instance(value).in_timezone(\"utc\")\n
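A small sketch of the timezone contract; the column name is illustrative:
import datetime\n\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import Timestamp\n\ncreated = sa.Column(\"created\", Timestamp(), nullable=False)\n\n# timezone-aware datetimes are accepted...\naware = datetime.datetime.now(datetime.timezone.utc)\n# ...but binding a naive datetime raises ValueError(\"Timestamps must have a timezone.\")\nnaive = datetime.datetime.now()\n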
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID
","text":" Bases: TypeDecorator
Platform-independent UUID type.
Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.
Source code inprefect/server/utilities/database.py
class UUID(TypeDecorator):\n \"\"\"\n Platform-independent UUID type.\n\n Uses PostgreSQL's UUID type, otherwise uses\n CHAR(36), storing as stringified hex values with\n hyphens.\n \"\"\"\n\n impl = TypeEngine\n cache_ok = True\n\n def load_dialect_impl(self, dialect):\n if dialect.name == \"postgresql\":\n return dialect.type_descriptor(postgresql.UUID())\n else:\n return dialect.type_descriptor(CHAR(36))\n\n def process_bind_param(self, value, dialect):\n if value is None:\n return None\n elif dialect.name == \"postgresql\":\n return str(value)\n elif isinstance(value, uuid.UUID):\n return str(value)\n else:\n return str(uuid.UUID(value))\n\n def process_result_value(self, value, dialect):\n if value is None:\n return value\n else:\n if not isinstance(value, uuid.UUID):\n value = uuid.UUID(value)\n return value\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add
","text":" Bases: FunctionElement
Platform-independent way to add a date and an interval.
Source code inprefect/server/utilities/database.py
class date_add(FunctionElement):\n \"\"\"\n Platform-independent way to add a date and an interval.\n \"\"\"\n\n type = Timestamp()\n name = \"date_add\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, dt, interval):\n self.dt = dt\n self.interval = interval\n super().__init__()\n
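A usage sketch in a query; the table and column names are illustrative:
import datetime\n\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import date_add\n\nruns = sa.table(\"runs\", sa.column(\"id\"), sa.column(\"start_time\"))\n\n# compiles to dialect-appropriate date arithmetic on both Postgres and SQLite\nquery = sa.select(runs.c.id).where(\n    date_add(runs.c.start_time, datetime.timedelta(hours=1)) < sa.func.now()\n)\n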
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff
","text":" Bases: FunctionElement
Platform-independent difference of dates. Computes d1 - d2.
Source code inprefect/server/utilities/database.py
class date_diff(FunctionElement):\n \"\"\"\n Platform-independent difference of dates. Computes d1 - d2.\n \"\"\"\n\n type = sa.Interval()\n name = \"date_diff\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, d1, d2):\n self.d1 = d1\n self.d2 = d2\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add
","text":" Bases: FunctionElement
Platform-independent way to add two intervals.
Source code inprefect/server/utilities/database.py
class interval_add(FunctionElement):\n \"\"\"\n Platform-independent way to add two intervals.\n \"\"\"\n\n type = sa.Interval()\n name = \"interval_add\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, i1, i2):\n self.i1 = i1\n self.i2 = i2\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains
","text":" Bases: FunctionElement
Platform independent json_contains operator, tests if the left
expression contains the right
expression.
On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_contains(FunctionElement):\n \"\"\"\n Platform independent json_contains operator, tests if the\n `left` expression contains the `right` expression.\n\n On postgres this is equivalent to the @> containment operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_contains\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, left, right):\n self.left = left\n self.right = right\n super().__init__()\n
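A usage sketch filtering on a JSON column; the table and column names are illustrative:
import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import json_contains\n\nruns = sa.table(\"runs\", sa.column(\"id\"), sa.column(\"tags\"))\n\n# matches rows whose JSON tags value contains every element of the right side\nquery = sa.select(runs.c.id).where(json_contains(runs.c.tags, [\"prod\"]))\n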
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys
","text":" Bases: FunctionElement
Platform independent json_has_all_keys operator.
On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_has_all_keys(FunctionElement):\n \"\"\"Platform independent json_has_all_keys operator.\n\n On postgres this is equivalent to the ?& existence operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_has_all_keys\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, json_expr, values: List):\n self.json_expr = json_expr\n if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n raise ValueError(\n \"json_has_all_key values must be strings if provided as a literal list\"\n )\n self.values = values\n super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key
","text":" Bases: FunctionElement
Platform independent json_has_any_key operator.
On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html
Source code inprefect/server/utilities/database.py
class json_has_any_key(FunctionElement):\n \"\"\"\n Platform independent json_has_any_key operator.\n\n On postgres this is equivalent to the ?| existence operator.\n https://www.postgresql.org/docs/current/functions-json.html\n \"\"\"\n\n type = BOOLEAN\n name = \"json_has_any_key\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = False\n\n def __init__(self, json_expr, values: List):\n self.json_expr = json_expr\n if not all(isinstance(v, str) for v in values):\n raise ValueError(\"json_has_any_key values must be strings\")\n self.values = values\n super().__init__()\n
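A corresponding sketch for key existence; note the string-only check in the constructor above, and the table and column names are again illustrative:
import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import json_has_any_key\n\nruns = sa.table(\"runs\", sa.column(\"id\"), sa.column(\"labels\"))\n\n# matches rows whose JSON labels value has at least one of these keys\nquery = sa.select(runs.c.id).where(json_has_any_key(runs.c.labels, [\"team\", \"env\"]))\n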
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now
","text":" Bases: FunctionElement
Platform-independent \"now\" generator.
Source code inprefect/server/utilities/database.py
class now(FunctionElement):\n \"\"\"\n Platform-independent \"now\" generator.\n \"\"\"\n\n type = Timestamp()\n name = \"now\"\n # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n inherit_cache = True\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect
","text":"Get the dialect of a session, engine, or connection url.
Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.
Example
from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect.name == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
Source code in prefect/server/utilities/database.py
def get_dialect(\n obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n \"\"\"\n Get the dialect of a session, engine, or connection url.\n\n Primary use case is figuring out whether the Prefect REST API is communicating with\n SQLite or Postgres.\n\n Example:\n ```python\n import prefect.settings\n from prefect.server.utilities.database import get_dialect\n\n dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n if dialect == \"sqlite\":\n print(\"Using SQLite!\")\n else:\n print(\"Using Postgres!\")\n ```\n \"\"\"\n if isinstance(obj, sa.orm.Session):\n url = obj.bind.url\n elif isinstance(obj, sa.engine.Engine):\n url = obj.url\n else:\n url = sa.engine.url.make_url(obj)\n\n return url.get_dialect()\n
"},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas
","text":""},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server
","text":"Utilities for the Prefect REST API server.
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute
","text":" Bases: APIRoute
A FastAPI APIRoute subclass which attaches an async stack to requests that exits before a response is returned.
Requests already have request.scope['fastapi_astack']
which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack
.
prefect/server/utilities/server.py
class PrefectAPIRoute(APIRoute):\n \"\"\"\n A FastAPIRoute class which attaches an async stack to requests that exits before\n a response is returned.\n\n Requests already have `request.scope['fastapi_astack']` which is an async stack for\n the full scope of the request. This stack is used for managing contexts of FastAPI\n dependencies. If we want to close a dependency before the request is complete\n (i.e. before returning a response to the user), we need a stack with a different\n scope. This extension adds this stack at `request.state.response_scoped_stack`.\n \"\"\"\n\n def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n default_handler = super().get_route_handler()\n\n async def handle_response_scoped_depends(request: Request) -> Response:\n # Create a new stack scoped to exit before the response is returned\n async with AsyncExitStack() as stack:\n request.state.response_scoped_stack = stack\n response = await default_handler(request)\n\n return response\n\n return handle_response_scoped_depends\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter
","text":" Bases: APIRouter
A base class for Prefect REST API routers.
Source code inprefect/server/utilities/server.py
class PrefectRouter(APIRouter):\n \"\"\"\n A base class for Prefect REST API routers.\n \"\"\"\n\n def __init__(self, **kwargs: Any) -> None:\n kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n super().__init__(**kwargs)\n\n def add_api_route(\n self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n ) -> None:\n \"\"\"\n Add an API route.\n\n For routes that return content and have not specified a `response_model`,\n use return type annotation to infer the response model.\n\n For routes that return No-Content status codes, explicitly set\n a `response_class` to ensure nothing is returned in the response body.\n \"\"\"\n if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n # any routes that return No-Content status codes must\n # explicitly set a response_class that will handle status codes\n # and not return anything in the body\n kwargs[\"response_class\"] = Response\n if kwargs.get(\"response_model\") is None:\n kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n return super().add_api_route(path, endpoint, **kwargs)\n
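A usage sketch; the route and handler are hypothetical:
from prefect.server.utilities.server import PrefectRouter\n\nrouter = PrefectRouter(prefix=\"/examples\", tags=[\"Examples\"])\n\n@router.get(\"/hello\")\nasync def hello() -> dict:\n    # no response_model is given, so the dict return annotation is used\n    return {\"message\": \"hi\"}\n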
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route
","text":"Add an API route.
For routes that return content and have not specified a response_model
, use return type annotation to infer the response model.
For routes that return No-Content status codes, explicitly set a response_class
to ensure nothing is returned in the response body.
prefect/server/utilities/server.py
def add_api_route(\n self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n \"\"\"\n Add an API route.\n\n For routes that return content and have not specified a `response_model`,\n use return type annotation to infer the response model.\n\n For routes that return No-Content status codes, explicitly set\n a `response_class` to ensure nothing is returned in the response body.\n \"\"\"\n if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n # any routes that return No-Content status codes must\n # explicitly set a response_class that will handle status codes\n # and not return anything in the body\n kwargs[\"response_class\"] = Response\n if kwargs.get(\"response_model\") is None:\n kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes
","text":"Generate a set of strings describing the given routes in the format:
For example, \"GET /logs/\"
Source code inprefect/server/utilities/server.py
def method_paths_from_routes(routes: Sequence[BaseRoute]) -> Set[str]:\n \"\"\"\n Generate a set of strings describing the given routes in the format: <method> <path>\n\n For example, \"GET /logs/\"\n \"\"\"\n method_paths = set()\n for route in routes:\n if isinstance(route, (APIRoute, StarletteRoute)):\n for method in route.methods:\n method_paths.add(f\"{method} {route.path}\")\n\n return method_paths\n
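A usage sketch against a FastAPI application's route table; the endpoint is hypothetical:
from fastapi import FastAPI\n\nfrom prefect.server.utilities.server import method_paths_from_routes\n\napp = FastAPI()\n\n@app.get(\"/logs/\")\nasync def read_logs() -> list:\n    return []\n\n# the result includes \"GET /logs/\" along with FastAPI's built-in routes\nprint(method_paths_from_routes(app.router.routes))\n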
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency
","text":"Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.
Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.
Do not use a response-scoped dependency within a FastAPI background task. Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.
Parameters:
Name Type Description Default
dependency
Callable
An async callable. FastAPI dependencies may still be used.
required
Returns:
Type Description
A wrapped dependency which will push the dependency context manager onto a stack when called.
Source code inprefect/server/utilities/server.py
def response_scoped_dependency(dependency: Callable):\n \"\"\"\n Ensure that this dependency closes before the response is returned to the client. By\n default, FastAPI closes dependencies after sending the response.\n\n Uses an async stack that is exited before the response is returned. This is\n particularly useful for database sessions which must be committed before the client\n can do more work.\n\n NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n Background tasks run after FastAPI sends the response, so a response-scoped\n dependency will already be closed. Use a normal FastAPI dependency instead.\n\n Args:\n dependency: An async callable. FastAPI dependencies may still be used.\n\n Returns:\n A wrapped `dependency` which will push the `dependency` context manager onto\n a stack when called.\n \"\"\"\n signature = inspect.signature(dependency)\n\n async def wrapper(*args, request: Request, **kwargs):\n # Replicate FastAPI behavior of auto-creating a context manager\n if inspect.isasyncgenfunction(dependency):\n context_manager = asynccontextmanager(dependency)\n else:\n context_manager = dependency\n\n # Ensure request is provided if requested\n if \"request\" in signature.parameters:\n kwargs[\"request\"] = request\n\n # Enter the route handler provided stack that is closed before responding,\n # return the value yielded by the wrapped dependency\n return await request.state.response_scoped_stack.enter_async_context(\n context_manager(*args, **kwargs)\n )\n\n # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n # inject the request as a dependency; maintain the old signature so those depends\n # work\n request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n functools.update_wrapper(wrapper, dependency)\n\n if \"request\" not in signature.parameters:\n new_parameters = signature.parameters.copy()\n new_parameters[\"request\"] = request_parameter\n wrapper.__signature__ = signature.replace(\n parameters=tuple(new_parameters.values())\n )\n\n return wrapper\n
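A usage sketch with a hypothetical session dependency; the PrefectRouter registers the PrefectAPIRoute class that provides the response-scoped stack:
from fastapi import Depends\n\nfrom prefect.server.utilities.server import PrefectRouter, response_scoped_dependency\n\n@response_scoped_dependency\nasync def example_session():\n    # stand-in for a database session: everything after the yield runs\n    # before the response is sent to the client\n    session = object()\n    yield session\n\nrouter = PrefectRouter()\n\n@router.get(\"/demo\")\nasync def demo(session=Depends(example_session)) -> dict:\n    return {\"ok\": True}\n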
"},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"Prefect Cloud is a hosted workflow application framework that provides all the capabilities of Prefect server plus additional features, such as:
Getting Started with Prefect Cloud
Ready to jump right in and start running with Prefect Cloud? See the Quickstart and follow the instructions on the Cloud tabs to write and deploy your first Prefect Cloud-monitored flow run.
Prefect Cloud includes all the features in the open-source Prefect server plus the following:
Prefect Cloud features
Failed
and Crashed
flow runs into actionable information.When you sign up for Prefect Cloud, an account and a user profile are automatically provisioned for you.
Your profile is the place where you'll manage settings related to yourself as a user, including:
As an account Admin, you will also have access to account settings from the Account Settings page, such as:
As an account Admin you can create a workspace and invite other individuals to your workspace.
Upgrading from a Prefect Cloud Free tier plan to a Pro or Enterprise tier plan enables additional functionality for adding workspaces, managing teams, and running higher volume workloads.
Workspace Admins have the ability to use single sign-on (SSO), set role-based access controls (RBAC), view Audit Logs, and configure service accounts.
Enterprise add custom roles, object-level access control lists, teams, and Directory Sync/SCIM provisioning for SSO.
Prefect Cloud plans for teams of every size
See the Prefect Cloud plans for details on Pro and Enterprise account tiers.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.
Each workspace keeps track of its own:
Prefect Cloud allows you to see your events. Events provide information about the state of your workflows, and can be used as automation triggers.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"Prefect Cloud automations provide additional notification capabilities beyond those in a self-hosted open-source Prefect server. Automations also enable you to create event-driven workflows, toggle resources such as schedules and work pools, and declare incidents.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#incidents","title":"Incidents","text":"Prefect Cloud's incidents help teams identify, rectify, and document issues in mission-critical workflows. Incidents are formal declarations of disruptions to a workspace. With automations), activity in that workspace can be paused when an incident is created and resumed when it is resolved.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#error-summaries","title":"Error summaries","text":"Prefect Cloud error summaries, enabled by Marvin AI, distill the error logs of Failed
and Crashed
flow runs into actionable information. To enable this feature and others powered by Marvin AI, visit the Settings page for your account.
Service accounts enable you to create Prefect Cloud API keys that are not associated with a user account. Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. See the service accounts documentation for more information about creating and managing service accounts.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"Role-based access controls (RBAC) enable you to assign users a role with permissions to perform certain activities within an account or a workspace. See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud account.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"Prefect Cloud's Pro and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can bet set up with identity providers that support OIDC and SAML. Directory Sync and SCIM provisioning is also available with Enterprise plans.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit log","text":"Prefect Cloud's Pro and Enterprise plans offer Audit Logs for compliance and security. Audit logs provide a chronological record of activities performed by users in an account.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server for orchestration and monitoring. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.
Prefect Cloud REST API interactive documentation
Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.
Then follow the steps in the UI to deploy your first Prefect Cloud-monitored flow run. For more details, see the Prefect Quickstart and follow the instructions on the Cloud tabs.
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/cloud-quickstart/","title":"Getting Started with Prefect Cloud","text":"Get started with Prefect Cloud in just a few steps:
To sign in with an existing account or register an account, go to https://app.prefect.cloud/.
You can create an account with any of the following:
A workspace is an isolated environment within Prefect Cloud for your flows and deployments. You can use workspaces to organize or compartmentalize your workflows.
When you register a new account, you'll be prompted to provide a name and description for your workspace.
Note that the Owner setting applies only to users who are members of Prefect Cloud accounts and have permission to create workspaces within account.
Select Create to create the workspace. If you change your mind, select Edit from the options menu to modify the workspace details or to delete it.
The Workspace Settings page for your new workspace displays the commands that enable you to install Prefect and log into Prefect Cloud in a local execution environment.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#install-prefect","title":"Install Prefect","text":"Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.
Open a new terminal session.
Install Prefect in the environment in which you want to execute flow runs.
pip install -U prefect\n
Installation requirements
Prefect requires Python 3.8 or later. If you have any questions about Prefect installation requirements or dependencies in your preferred development environment, check out the Installation documentation.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"Use the prefect cloud login
Prefect CLI command to log into Prefect Cloud from your environment.
prefect cloud login\n
The prefect cloud login
command, used on its own, provides an interactive login experience. Using this command, you may log in with either an API key or through a browser.
? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser \n Paste an API key \nOpening browser...\nWaiting for response...\nAuthenticated with Prefect Cloud! Using workspace 'jeffdc/prod'.\n
If you choose to log in via the browser, Prefect opens a new tab in your default browser and enables you to log in and authenticate the session.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#run-a-flow-with-prefect-cloud","title":"Run a flow with Prefect Cloud","text":"You're all set to run a flow locally, orchestrated with Prefect Cloud.
In your local environment, where you configured the previous steps, create a file named quickstart_flow.py
with the following contents:
from prefect import flow\n\n@flow(log_prints=True)\ndef quickstart_flow():\n print(\"Local quickstart flow is running!\")\n\nif __name__ == \"__main__\":\n quickstart_flow()\n
Now run quickstart_flow.py
. You'll see log messages like this in your terminal, indicating that the flow is running correctly:
17:18:09.863 | INFO | prefect.engine - Created flow run 'fragrant-quetzal' for flow 'quickstart-flow'\n17:18:09.864 | INFO | Flow run 'fragrant-quetzal' - View at https://app.prefect.cloud/account/my_workspace_id/workspace/my_flow_id/flow-runs/flow-run/my_flow_run_id\n17:18:10.010 | INFO | Flow run 'fragrant-quetzal' - Local quickstart flow is running!\n17:18:10.144 | INFO | Flow run 'fragrant-quetzal' - Finished in state Completed()\n
Go to the Flow Runs pages in your workspace in Prefect Cloud. You'll see the flow run results right there in Prefect Cloud!
Prefect Cloud automatically tracks any flow runs in a local execution environment logged into Prefect Cloud.
Select the name of the flow run to see details about this run.
Congratulations! You successfully ran a local flow and, because you're logged into Prefect Cloud, the local flow run results were captured by Prefect Cloud.
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#next-steps","title":"Next steps","text":"If you're new to Prefect, learn more about writing and running flows in the Prefect Flows First Steps tutorial. If you're already familiar with flows, try creating a deployment and triggering flow runs with Prefect Cloud by following the Deployments tutorial.
Want to learn more about the features available in Prefect Cloud? Start with the Prefect Cloud Overview.
If you ran into any issues getting your first flow run with Prefect Cloud working, please join our community to ask questions or provide feedback:
Prefect's Slack Community is helpful, friendly, and fast growing - come say hi!
","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to
Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.
$ pip install -U prefect\n
prefect cloud login
Prefect CLI command to log into Prefect Cloud from your environment.$ prefect cloud login\n
The prefect cloud login
command, used on its own, provides an interactive login experience. Using this command, you can log in with either an API key or through a browser.
$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n
You can also log in by providing a Prefect Cloud API key that you create.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"If you need to change which workspace you're syncing with, use the prefect cloud workspace set
Prefect CLI command while logged in, passing the account handle and workspace name.
$ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n
If no workspace is provided, you will be prompted to select one.
Workspace Settings also shows you the prefect cloud workspace set
Prefect CLI command you can use to sync a local execution environment with a given workspace.
You may also use the prefect cloud login
command with the --workspace
or -w
option to set the current workspace.
$ prefect cloud login --workspace \"prefect/my-workspace\"\n
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"You can also manually configure the PREFECT_API_URL
setting to specify the Prefect Cloud API.
For Prefect Cloud, you can configure the PREFECT_API_URL
and PREFECT_API_KEY
settings to authenticate with Prefect Cloud by using an account ID, workspace ID, and API key.
$ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n
When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL
value directly from the page URL.
In this example, we configured PREFECT_API_URL
and PREFECT_API_KEY
in the default profile. You can use prefect profile
CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start
. See Settings for details.
Environment variables
You can also set PREFECT_API_URL
and PREFECT_API_KEY
as you would any other environment variable. See Overriding defaults with environment variables for more information.
See the Flow orchestration with Prefect tutorial for examples.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"This section provides tips that may be helpful if you run into problems using Prefect Cloud.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"Proxies intermediate network requests between a server and a client.
To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx
Python library. httpx
respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.
To enable communication via proxies, simply set the HTTPS_PROXY
and SSL_CERT_FILE
environment variables as appropriate in your execution environment and things should \u201cjust work.\u201d
See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.
URLs that should be whitelisted for outbound-communication in a secure environment include the UI, the API, Authentication, and the current OCSP server:
If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.
Use the prefect config view
CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.
$ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/...' (from profile)\n
Make sure PREFECT_API_URL
is configured to use https://api.prefect.cloud/api/...
.
Make sure PREFECT_API_KEY
is configured to use a valid API key.
You can use the prefect cloud workspace ls
CLI command to view or set the active workspace.
$ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 g-gadflow/g-workspace \u2502\n\u2502 * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n * active workspace\n
You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL
match those shown in the URL bar for your Prefect Cloud workspace.
If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue, or will provide more information when sharing your case to the support channel.
Other tips to help with login difficulties:
None of this worked?
Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.
","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.
Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.
Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.
Events enable observability into your data stack via the event feed, and the configuration of Prefect's reactivity via automations.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event specification","text":"Events adhere to a structured specification.
occurred (String, required): When the event happened
event (String, required): The name of the event that happened
resource (Object, required): The primary Resource this event concerns
related (Array, optional): A list of additional Resources involved in this event
payload (Object, optional): An open-ended set of data describing what happened
id (String, required): The client-provided identifier of this event
follows (String, optional): The ID of an event that is known to have occurred prior to this one.
prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event sources","text":"Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect
or prefect-cloud
resource prefix. Events can also be sent to the Prefect events API via authenticated http request.
The Prefect Python SDK provides an emit_event
function that emits an Prefect event when called. The function can be called inside or outside of a task or flow. Running the following code will emit an event to Prefect Cloud, which will validate and ingest the event data.
from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n print(f\"hi {name}!\")\n emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n
Note that the emit_event
arguments shown above are required: event
represents the name of the event and resource={\"prefect.resource.id\": \"my_string\"}
is the resource id. To get data into an event for use in an automation action, you can specify a dictionary of values for the payload
parameter.
Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.
Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of it in the future.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:
prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n
Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:
\"resource\": {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n \"prefect-cloud.action.type\": \"call-webhook\"\n }\n
Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:
\"resource\": {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n \"prefect-cloud.action.type\": \"call-webhook\"\n },\n\"related\": [\n {\n \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n \"prefect.resource.role\": \"automation\",\n \"prefect-cloud.name\": \"webhook_body_demo\",\n \"prefect-cloud.posture\": \"Reactive\"\n }\n]\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"Prefect Cloud provides an interactive dashboard to analyze and take action on events that occurred in your workspace on the event feed page.
The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.
You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"From an event page, you can configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:
The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/incidents/","title":"Incidents","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#overview","title":"Overview","text":"Incidents are a Prefect Cloud feature to help your team manage workflow disruptions. Incidents help you identify, resolve, and document issues with mission-critical workflows. This system enhances operational efficiency by automating the incident management process and providing a centralized platform for collaboration and compliance.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#what-are-incidents","title":"What are incidents?","text":"Incidents are formal declarations of disruptions to a workspace. With automations, activity in a workspace can be paused when an incident is created and resumed when it is resolved.
Incidents vary in nature and severity, ranging from minor glitches to critical system failures. Prefect Cloud enables users to effectively and automatically track and manage these incidents, ensuring minimal impact on operational continuity.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#why-use-incident-management","title":"Why use incident management?","text":"Automated detection and reporting: Incidents can be automatically identified based on specific triggers or manually reported by team members, facilitating prompt response.
Collaborative problem-solving: The platform fosters collaboration, allowing team members to share insights, discuss resolutions, and track contributions.
Comprehensive impact assessment: Users gain insights into the incident's influence on workflows, helping in prioritizing response efforts.
Compliance with incident management processes: Detailed documentation and reporting features support compliance with incident management systems.
Enhanced operational transparency: The system provides a transparent view of both ongoing and resolved incidents, promoting accountability and continuous improvement.
There are several ways to create an incident:
From the Incidents page:
From a flow run, work pool, or block:
Via an automation:
Automations can be used for triggering an incident and for selecting actions to take when an incident is triggered. For example, a work pool status change could trigger the declaration of an incident, or a critical level incident could trigger a notification action.
To automatically take action when an incident is declared, set up a custom trigger that listens for declaration events.
{\n \"match\": {\n \"prefect.resource.id\": \"prefect-cloud.incident.*\"\n },\n \"expect\": [\n \"prefect-cloud.incident.declared\"\n ],\n \"posture\": \"Reactive\",\n \"threshold\": 1,\n \"within\": 0\n}\n
Building custom triggers
To get started with incident automations, you only need to specify two fields in your trigger:
match: The resource emitting your event of interest. You can match on specific resource IDs, use wildcards to match on all resources of a given type, and even match on other resource attributes, like prefect.resource.name
.
expect: The event type to listen for. For example, you could listen for any (or all) of the following event types:
prefect-cloud.incident.declared
prefect-cloud.incident.resolved
prefect-cloud.incident.updated.severity
See Event Triggers for more information on custom triggers, and check out your Event Feed to see the event types emitted by your incidents and other resources (i.e. events that you can react to).
When an incident is declared, any actions you configure, such as pausing work pools or sending notifications, will execute immediately.
","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#managing-an-incident","title":"Managing an incident","text":"API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.
Prefect Cloud rate limits are subject to change
The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.
Prefect Cloud enforces the following rate limits:
Prefect Cloud limits the flow_runs
, task_runs
, and flows
endpoints and their subroutes at the following levels:
The Prefect Cloud API will return a 429
response with an appropriate Retry-After
header if these limits are triggered.
Prefect Cloud limits the number of logs accepted:
The Prefect Cloud API will return a 429
response if these limits are triggered.
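If your client does hit these limits, one defensive pattern is to back off and retry, honoring the Retry-After header when the server provides it. A minimal sketch in Python — the URL, headers, and backoff values are illustrative and not part of any Prefect client:
import time\nimport requests\n\ndef get_with_retry(url: str, headers: dict, max_attempts: int = 5) -> requests.Response:\n    # Retry on 429 responses, honoring Retry-After when the server provides it.\n    for attempt in range(max_attempts):\n        response = requests.get(url, headers=headers)\n        if response.status_code != 429:\n            return response\n        # Fall back to exponential backoff if Retry-After is missing.\n        wait_seconds = int(response.headers.get(\"Retry-After\", 2 ** attempt))\n        time.sleep(wait_seconds)\n    return response\n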
Prefect Cloud feature
The Flow Run Retention Policy setting is only applicable in Prefect Cloud.
Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy set by your account tier. The policy setting applies to all workspaces owned by the account.
The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.
Flow Run Retention Policy keys on terminal state
Note that because the Flow Run Retention Policy keys on terminal state, two flow runs that start at the same time but reach a terminal state at different times will be removed at different times, according to when each reached its terminal state.
This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.
If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.
","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/workspaces/","title":"Workspaces","text":"A workspace is a discrete environment within Prefect Cloud for your workflows and blocks. Workspaces are available to Prefect Cloud accounts only.
Workspaces can be used to organize and compartmentalize your workflows. For example, you can use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.
When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.
Select a workspace name in the navigation menu to see all workspaces you can access.
Your list of available workspaces may include:
Workspace-specific features
Each workspace keeps track of its own:
Your user permissions within workspaces may vary. Account admins can assign roles and permissions at the workspace level.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"On the Account Workspaces dropdown or the Workspaces page select the + icon to create a new workspace.
You'll be prompted to configure:
Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"Within a workspace, select Settings -> General to view or edit workspace details.
On this page you can edit workspace details or delete the workspace.
Deleting a workspace
Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-access","title":"Workspace access","text":"Within a Prefect Cloud Pro or Enterprise tier account, Workspace Owners can invite other people to be members and provision service accounts to a workspace. In addition to giving the user access to the workspace, a Workspace Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.
As a Workspace Owner, select Workspaces -> Sharing to manage members and service accounts for the workspace.
If you've previously invited individuals to your account or provisioned service accounts, you'll see them listed here.
To invite someone to an account, select the Members + icon. You can select from a list of existing account members.
Select a Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.
Select Send to initiate the invitation.
To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of configured service accounts. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.
To remove a workspace member or service account, select Remove from the menu on the right side of the user or service account information on this page.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"Workspace transfer enables you to move an existing workspace from one account to another.
Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.
Workspace transfer permissions
Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.
To initiate a workspace transfer between personal accounts, contact support@prefect.io.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"To transfer a workspace, select Settings -> General within the workspace. Then, from the three dot menu in the upper right of the page, select Transfer.
The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account for the workspace on the right.
Workspace transfer impact on accounts
Workspace transfer may impact resource usage and costs for source and target accounts.
When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. The audit log will no longer track activity on the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.
You may also incur new charges in the target account to accommodate the transferred workspace.
The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"User accounts","text":"Sign up for a Prefect Cloud account at app.prefect.cloud.
An individual user can be invited to become a member of other accounts.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User settings","text":"Users can access their personal settings in the profile menu, including:
Users who are part of an account can hold the role of Admin or Member. Admins can invite other users to join the account and manage the account's workspaces and teams.
Admins on Pro and Enterprise tier Prefect Cloud accounts can grant members of the account roles in a workspace, such as Runner or Viewer. Custom roles are available on Enterprise tier accounts.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#api-keys","title":"API keys","text":"API keys enable you to authenticate an environment to work with Prefect Cloud.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#service-accounts","title":"Service accounts","text":"Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#single-sign-on-sso","title":"Single sign-on (SSO)","text":"Pro and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. Enterprise tier accounts provide additional options with directory sync and SCIM provisioning.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#audit-log","title":"Audit log","text":"Audit logs provide a chronological record of activities performed by Prefect Cloud users who are members of an account.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#object-level-access-control-lists-acls","title":"Object-level access control lists (ACLs)","text":"Prefect Cloud's Enterprise plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#teams","title":"Teams","text":"Users of Enterprise tier Prefect Cloud accounts can be added to Teams to simplify access control governance.
","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"API keys enable you to authenticate a local environment to work with Prefect Cloud.
If you run prefect cloud login
from your CLI, you'll have the choice to authenticate through your browser or by pasting an API key.
If you choose to authenticate through your browser, you'll be directed to an authorization page. After you grant approval to connect, you'll be redirected to the CLI and the API key will be saved to your local Prefect profile.
If you choose to authenticate by pasting an API key, you'll need to create an API key in the Prefect Cloud UI first.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"To create an API key, select the account icon at the bottom-left corner of the UI.
Select API Keys. The page displays a list of previously generated keys and lets you create new API keys or delete keys.
Select the + button to create a new API key. Provide a name for the key and an expiration date.
Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"prefect cloud login -k '<my-api-key>'\n
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"Service accounts are a feature of Prefect Cloud Pro and Enterprise tier plans that enable you to create a Prefect Cloud API key that is not associated with a user account.
Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.
See the service accounts documentation for more information about creating and managing service accounts in Prefect Cloud.
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"Prefect Cloud's Pro and Enterprise plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by members in your account, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.
Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud account. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.
Audit logs can be used to identify changes in:
See the Prefect Cloud plan information to learn more about options for supporting audit logs.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"From your Pro or Enterprise account settings page, select the Audit Log page to view audit logs.
Pro and Enterprise account tier admins can view audit logs for:
Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.
Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/object-access-control-lists/","title":"Object Access Control Lists","text":"Prefect Cloud's Enterprise plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace. ACLs are supported for blocks and deployments.
Organization Admins and Workspace Owners can configure access control lists by navigating to an object and clicking manage access. When an ACL is added, all users and service accounts with access to an object via their workspace role will lose access if not explicitly added to the ACL.
ACLs and visibility
Objects not governed by access control lists, such as flow runs, flows, and artifacts, will be visible to a user within a workspace even if an associated block or deployment has been restricted for that user.
See the Prefect Cloud plans to learn more about options for supporting object-level access control.
","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"Prefect Cloud's Pro and Enterprise tiers allow you to set team member access to the appropriate level within specific workspaces.
Role-based access controls (RBAC) enable you to assign users granular permissions to perform certain activities.
To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, Enterprise account Admins can create custom roles for users.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"Roles give users abilities at either the account level or at the individual workspace level.
The following sections outline the abilities of the built-in, Prefect-defined ac and workspace roles.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#account-level-roles","title":"Account-level roles","text":"The following built-in roles have permissions across an account in Prefect Cloud.
Role Abilities Owner \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Bypass SSO. Admin \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Cannot bypass SSO. Member \u2022 View account profile settings. \u2022 View workspaces I have access to in the account. \u2022 View account members and their roles. \u2022 View service accounts in the account.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"The following built-in roles have permissions within a given workspace in Prefect Cloud.
Role Abilities Viewer \u2022 View flow runs within a workspace. \u2022 View deployments within a workspace. \u2022 View all work pools within a workspace. \u2022 View all blocks within a workspace. \u2022 View all automations within a workspace. \u2022 View workspace handle and description. Runner All Viewer abilities, plus: \u2022 Run deployments within a workspace. Developer All Runner abilities, plus: \u2022 Run flows within a workspace. \u2022 Delete flow runs within a workspace. \u2022 Create, edit, and delete deployments within a workspace. \u2022 Create, edit, and delete work pools within a workspace. \u2022 Create, edit, and delete all blocks and their secrets within a workspace. \u2022 Create, edit, and delete automations within a workspace. \u2022 View all workspace settings. Owner All Developer abilities, plus: \u2022 Add and remove account members, and set their role within a workspace. \u2022 Set the workspace\u2019s default workspace role for all users in the account. \u2022 Set, view, edit workspace settings. Worker The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.
Custom roles can inherit permissions from a built-in role. This enables tweaks to the role to meet your team\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.
Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.
See Role permissions for details of permissions you may set for custom roles.
After you create a new role, it becomes available in the account Members page and the Workspace Sharing page for you to apply to users.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.
Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.
To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"The following permissions are available for custom roles.
","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running workers or executing deployment flow runs on remote infrastructure.
Service accounts are non-user accounts that have the following features:
Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting other user accounts.
Select Service Accounts to view, create, or edit service accounts.
Service accounts are created at the account level, but individual workspaces may be shared with the service account. See workspace sharing for more information.
Service account credentials
When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.
Service account roles
Service accounts are created at the account level, and can then be added to workspaces within the account.
A service account may only be a Member of an account. It can never be an account Admin. You may apply any valid workspace-level role to a service account.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"Within your account, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:
Service account roles
A service account may only be a Member of an account. You may apply any valid workspace-level role to a service account when it is added to a workspace.
Select Create to create the new service account.
Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.
You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page.
To delete a service account, select Remove from the menu on the left side of the service account's information.
","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"Prefect Cloud's Pro and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can bet set up with any identity provider that supports:
When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud account will instead log in and authenticate using your identity provider.
Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing account resources.
See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"Within your account, select the SSO page to enable SSO for users.
If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud and save it.
Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.
Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML
or Open ID Connect
choices instead.
Once you complete SSO configuration your users will be required to authenticate via your identity provider when accessing account resources, giving you full control over application access.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"Directory sync automatically provisions and de-provisions users for your account.
Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.
When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the app.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#scim-provisioning","title":"SCIM Provisioning","text":"Enterprise accounts have access to SCIM for user provisioning. The SSO tab provides access to enable SCIM provisioning.
","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/teams/","title":"Teams","text":"Prefect Cloud's Enterprise plan offers team management to simplify access control governance.
Account Admins can configure teams and team membership from the account settings menu by clicking Teams. Teams are composed of users and service accounts. Teams can be added to workspaces or object access control lists just like users and service accounts.
If SCIM is enabled on your account, the set of teams and the users within them is governed by your IDP. Prefect Cloud service accounts, which are not governed by your IDP, can still be added to your existing set of teams.
See the Prefect Cloud plans to learn more about options for supporting teams.
","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"community/","title":"Community","text":"There are many ways to get involved with the Prefect community
Many features specific to Prefect Cloud are in their own navigation subheading.
","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/agents/","title":"Agents","text":"Workers are recommended
Agents are part of the block-based deployment model. Work Pools and Workers simplify the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-overview","title":"Agent overview","text":"Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.
Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL
setting.
It is possible for multiple agent processes to be started for a single work pool. Each agent process sends a unique ID to the server to help disambiguate themselves and let users know how many agents are active.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-options","title":"Agent options","text":"Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.
Configuration parameters you can specify when starting an agent include:
Option Description--api
The API URL for the Prefect server. Default is the value of PREFECT_API_URL
. --hide-welcome
Do not display the startup ASCII art for the agent process. --limit
Maximum number of flow runs to start simultaneously. [default: None] --match
, -m
Dynamically matches work queue names with the specified prefix for the agent to pull from,for example dev-
will match all work queues with a name that starts with dev-
. [default: None] --pool
, -p
A work pool name for the agent to pull from. [default: None] --prefetch-seconds
The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS
. --run-once
Only run agent polling once. By default, the agent runs forever. [default: no-run-once] --work-queue
, -q
One or more work queue names for the agent to pull from. [default: None] You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.
Prefect must be installed in execution environments
Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.
PREFECT_API_URL
and PREFECT_API_KEY
settings for agents
PREFECT_API_URL
must be set for the environment in which your agent is running or specified when starting the agent with the --api
flag. You must also have a user or service account with the Worker
role, which can be configured by setting the PREFECT_API_KEY
.
If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
Use the prefect agent start
CLI command to start an agent. You must pass at least one work pool name or match string that the agent will poll for work. If the work pool does not exist, it will be created.
prefect agent start -p [work pool name]\n
For example:
Starting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n\u00a0|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n\u00a0|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n
By default, the agent polls the API specified by the PREFECT_API_URL
environment variable. To configure the agent to poll from a different server location, use the --api
flag, specifying the URL of the server.
In addition, agents can match multiple queues in a work pool by providing a --match
string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.
For example:
prefect agent start --match \"foo-\"\n
This example will poll every work queue that starts with \"foo-\".
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#configuring-prefetch","title":"Configuring prefetch","text":"By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds
option or the PREFECT_AGENT_PREFETCH_SECONDS
setting.
Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#troubleshooting","title":"Troubleshooting","text":"","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-crash-or-keyboard-interrupt","title":"Agent crash or keyboard interrupt","text":"If the agent process is ended abruptly, you can sometimes have left over flows that were destined for the agent whose process was ended. In the UI, these will show up as pending. You will need to delete these flows in order for the restarted agent to begin processing the work queue again. Take note of the flows you deleted, you might need to set them to run manually.
","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/artifacts/","title":"Artifacts","text":"Artifacts are persisted outputs such as tables, Markdown, or links. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.
Published artifacts may be associated with a particular task run or flow run. Artifacts can also be created outside of any flow run context.
Whether you're publishing links, Markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflows.
With artifacts, you can easily manage and share information with your team, providing valuable insights and context.
Common use cases for artifacts include:
Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, Markdown, and tables.
Artifacts render individually
Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact()
or create_markdown_artifact()
generates a distinct artifact.
Unlike the print()
command, where you can concatenate multiple calls to include additional items in a report, within a task, these commands must be used multiple times if necessary.
To create artifacts like reports or summaries using create_markdown_artifact()
, compile your message string separately and then pass it to create_markdown_artifact()
to create the complete artifact.
To create a link artifact, use the create_link_artifact()
function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_link_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.
from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n create_link_artifact(\n key=\"irregular-data\",\n link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n description=\"## Highly variable data\",\n )\n\n@task\ndef my_second_task():\n create_link_artifact(\n key=\"irregular-data\",\n link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n description=\"# Low prediction accuracy\",\n )\n\n@flow\ndef my_flow():\n my_first_task()\n my_second_task()\n\nif __name__ == \"__main__\":\n my_flow()\n
Tip
You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.
After running the above flows, you can find your new artifacts in the Artifacts page of the UI. Click into the \"irregular-data\" artifact and see all versions of it, along with custom descriptions and links to the relevant data.
Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a Markdown-rendered link), an optional Markdown description, and when the artifact was created or updated.
To make the links more readable for you and your collaborators, you can pass in a link_text
argument for your link artifacts:
from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n create_link_artifact(\n key=\"my-important-link\",\n link=\"https://www.prefect.io/\",\n link_text=\"Prefect\",\n )\n\nif __name__ == \"__main__\":\n my_flow()\n
In the above example, the create_link_artifact
method is used within a flow to create a link artifact with a key of my-important-link
. The link
parameter is used to specify the external resource to be linked to, and link_text
is used to specify the text to be displayed for the link. An optional description
could also be added for context.
To create a Markdown artifact, you can use the create_markdown_artifact()
function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_markdown_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.
Don't indent Markdown
Markdown in mult-line strings must be unindented to be interpreted correctly.
from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n na_revenue = 500000\n markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe | $250,000 |\n| Asia | $150,000 |\n| South America | $75,000 |\n| Africa | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n create_markdown_artifact(\n key=\"gtm-report\",\n markdown=markdown_report,\n description=\"Quarterly Sales Report\",\n )\n\n@flow()\ndef my_flow():\n markdown_task()\n\n\nif __name__ == \"__main__\":\n my_flow()\n
After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI.
As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered Markdown data, and your optional Markdown description.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create table artifacts","text":"You can create a table artifact by calling create_table_artifact()
. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key
argument to the create_table_artifact()
function to track an artifact's history over time. Without a key
, the artifact will only be visible in the artifacts tab of the associated flow run or task run.
Note
The create_table_artifact()
function accepts a table
argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.
from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n highest_churn_possibility = [\n {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n ]\n\n create_table_artifact(\n key=\"personalized-reachout\",\n table=highest_churn_possibility,\n description= \"# Marvin, please reach out to these customers today!\"\n )\n\nif __name__ == \"__main__\":\n my_fn()\n
As you can see, you don't need to create an artifact in a flow run context. You can create one anywhere in a Python script and see it in the Prefect UI.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading artifacts","text":"In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key by running:
prefect artifact inspect <my_key>\n
or view all artifacts by running:
prefect artifact ls\n
You can also use the Prefect REST API to programmatically filter your results.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting artifacts","text":"You can delete an artifact directly using the CLI to delete specific artifacts with a given key or id:
prefect artifact delete <my_key>\n
prefect artifact delete --id <my_id>\n
Alternatively, you can delete artifacts using the Prefect REST API.
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.
For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following:
import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n \"sort\": \"CREATED_DESC\",\n \"limit\": 5,\n \"artifacts\": {\n \"key\": {\n \"exists_\": True\n }\n }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n print(artifact)\n
If you don't specify a key or that a key must exist, you will also return results (which are a type of key-less artifact).
See the rest of the Prefect REST API documentation on artifacts for more information!
","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/automations/","title":"Automations","text":"Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions.
Potential triggers include the occurrence of events from changes in a flow run's state - or the absence of such events. You can event define your own custom trigger to fire based on an event created from a webhook or a custom event defined in Python code.
Potential actions include kicking off flow runs, pausing schedules, and sending custom notifications.
Automations are only available in Prefect Cloud
Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.
Automations provide a flexible and powerful framework for automatically taking action in response to events.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automations-overview","title":"Automations overview","text":"The Automations page provides an overview of all configured automations for your workspace.
Selecting the toggle next to an automation pauses execution of the automation.
The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.
Select the name of an automation to view Details about it and relevant Events.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation","title":"Create an automation","text":"On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:
Triggers specify the conditions under which your action should be performed. Triggers can be of several types, including triggers based on:
OR
criteriaAutomations API
The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.
Importantly, triggers can be configured not only in reaction to events, but also proactively: to fire in the absence of an expected event.
For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as canceling or restarting it. Or the action could take the form of a notification so someone can take manual remediation steps. Or you could set both actions to to take place when the trigger occurs.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#actions","title":"Actions","text":"Actions specify what your automation does when its trigger criteria are met. Current action types include:
Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.
Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.
Inferred targets are deduced from the trigger itself.
For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.
Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event.
Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.
Specify a name and, optionally, a description for the automation.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#custom-triggers","title":"Custom triggers","text":"Custom triggers allow advanced configuration of the conditions on which an automation executes its actions. Several custom trigger fields accept values that end with trailing wildcards, like \"prefect.flow-run.*\"
.
The schema that defines a trigger is as follows:
Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. after array of strings Event(s), one of which must have first been seen to start this automation. expect array of strings The event(s) this automation is expecting to see. If empty, this automation will evaluate any matched event. for_each array of strings Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifyingrelated:<role>:<label>
. This will use the value of that label for the first related resource in that role. posture string enum N/A The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. threshold integer N/A The number of events required for this Automation to trigger (for Reactive automations), or the number of events expected (for Proactive automations) within number N/A The time period over which the events must occur. For Reactive triggers, this may be as low as 0 seconds, but must be at least 10 seconds for Proactive triggers","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#resource-matching","title":"Resource matching","text":"match
and match_related
control which events a trigger considers for evaluation by filtering on the contents of their resource
and related
fields, respectively. Each label added to a match
filter is AND
ed with the other labels, and can accept a single value or a list of multiple values that are OR
ed together.
Consider the resource
and related
fields on the following prefect.flow-run.Completed
event, truncated for the sake of example. Its primary resource is a flow run, and since that flow run was started via a deployment, it is related to both its flow and its deployment:
\"resource\": {\n \"prefect.resource.id\": \"prefect.flow-run.925eacce-7fe5-4753-8f02-77f1511543db\",\n \"prefect.resource.name\": \"cute-kittiwake\"\n}\n\"related\": [\n {\n \"prefect.resource.id\": \"prefect.flow.cb6126db-d528-402f-b439-96637187a8ca\",\n \"prefect.resource.role\": \"flow\",\n \"prefect.resource.name\": \"hello\"\n },\n {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\",\n \"prefect.resource.role\": \"deployment\",\n \"prefect.resource.name\": \"example\"\n }\n]\n
There are a number of valid ways to select the above event for evaluation, and the approach depends on the purpose of the automation.
The following configuration will filter for any events whose primary resource is a flow run, and that flow run has a name starting with cute-
or radical-
.
\"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\",\n \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {},\n...\n
This configuration, on the other hand, will filter for any events for which this specific deployment is a related resource.
\"match\": {},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n
Both of the above approaches will select the example prefect.flow-run.Completed
event, but will permit additional, possibly undesired events through the filter as well. match
and match_related
can be combined for more restrictive filtering:
\"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\",\n \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n
Now this trigger will filter only for events whose primary resource is a flow run started by a specific deployment, and that flow run has a name starting with cute-
or radical-
.
Once an event has passed through the match
filters, it must be decided if this event should be counted toward the trigger's threshold
. Whether that is the case is determined by the event names present in expect
.
This configuration informs the trigger to evaluate only prefect.flow-run.Completed
events that have passed the match
filters.
\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n...\n
threshold
decides the quantity of expect
ed events needed to satisfy the trigger. Increasing the threshold
above 1 will also require use of within
to define a range of time in which multiple events are seen. The following configuration will expect two occurrences of prefect.flow-run.Completed
within 60 seconds.
\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n\"threshold\": 2,\n\"within\": 60,\n...\n
after
can be used to handle scenarios that require more complex event reactivity.
Take, for example, this flow which emits an event indicating the table it operates on is missing or empty:
from prefect import flow\nfrom prefect.events import emit_event\nfrom db import Table\n\n\n@flow\ndef transform(table_name: str):\n table = Table(table_name)\n\n if not table.exists():\n emit_event(\n event=\"table-missing\",\n resource={\"prefect.resource.id\": \"etl-events.transform\"}\n )\n elif table.is_empty():\n emit_event(\n event=\"table-empty\",\n resource={\"prefect.resource.id\": \"etl-events.transform\"}\n )\n else:\n # transform data\n
The following configuration uses after
to prevent this automation from firing unless either a table-missing
or a table-empty
event has occurred before a flow run of this deployment completes.
Tip
Note how match
and match_related
are used to ensure the trigger only evaluates events that are relevant to its purpose.
\"match\": {\n \"prefect.resource.id\": [\n \"prefect.flow-run.*\",\n \"etl-events.transform\"\n ]\n},\n\"match_related\": {\n \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n}\n\"after\": [\n \"table-missing\",\n \"table-empty\"\n]\n\"expect\": [\n \"prefect.flow-run.Completed\"\n],\n...\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#evaluation-strategy","title":"Evaluation strategy","text":"All of the previous examples were designed around a reactive posture
- that is, they count events toward the threshold
until it is met and then execute actions. To respond to the absence of events, use a proactive posture
. A proactive trigger will fire when its threshold
has not been met by the end of the window of time defined by within
. Proactive triggers must have a within
of at least 10 seconds.
The following trigger will fire if a prefect.flow-run.Completed
event is not seen within 60 seconds after a prefect.flow-run.Running
event is seen.
{\n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n },\n \"match_related\": {},\n \"after\": [\n \"prefect.flow-run.Running\"\n ],\n \"expect\": [\n \"prefect.flow-run.Completed\"\n ],\n \"for_each\": [],\n \"posture\": \"Proactive\",\n \"threshold\": 1,\n \"within\": 60\n}\n
However, without for_each
, a prefect.flow-run.Completed
event from a different flow run than the one that started this trigger with its prefect.flow-run.Running
event could satisfy the condition. Adding a for_each
of prefect.resource.id
will cause this trigger to be evaluated separately for each flow run id associated with these events. {\n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n },\n \"match_related\": {},\n \"after\": [\n \"prefect.flow-run.Running\"\n ],\n \"expect\": [\n \"prefect.flow-run.Completed\"\n ],\n \"for_each\": [\n \"prefect.resource.id\"\n ],\n \"posture\": \"Proactive\",\n \"threshold\": 1,\n \"within\": 60\n}\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"To enable the simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.
# prefect.yaml\ndeployments:\n - name: my-deployment\n entrypoint: path/to/flow.py:decorated_fn\n work_pool:\n name: my-process-pool\n triggers:\n - enabled: true\n match:\n prefect.resource.id: my.external.resource\n expect:\n - external.resource.pinged\n parameters:\n param_1: \"{{ event }}\"\n
At deployment time, this will create a linked automation that is triggered by events matching your chosen grammar, which will pass the templatable event
as a parameter to the deployment's flow run.
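On the flow side, the rendered {{ event }} template arrives as an ordinary flow parameter. Here is a minimal sketch matching the prefect.yaml example above (the body is illustrative only):
from prefect import flow\n\n@flow\ndef decorated_fn(param_1):\n    # param_1 holds whatever the \"{{ event }}\" template rendered to\n    print(param_1)\n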
prefect deploy
","text":"You can pass one or more --trigger
arguments to prefect deploy
, which can be either a JSON string or a path to a .yaml
or .json
file.
# Pass a trigger as a JSON string\nprefect deploy -n test-deployment \\\n --trigger '{\n \"enabled\": true, \n \"match\": {\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n }, \n \"expect\": [\"prefect.flow-run.Completed\"]\n }'\n\n# Pass a trigger using a JSON/YAML file\nprefect deploy -n test-deployment --trigger triggers.yaml\nprefect deploy -n test-deployment --trigger my_stuff/triggers.json\n
For example, a triggers.yaml
file could have many triggers defined:
triggers:\n - enabled: true\n match:\n prefect.resource.id: my.external.resource\n expect:\n - external.resource.pinged\n parameters:\n param_1: \"{{ event }}\"\n - enabled: true\n match:\n prefect.resource.id: my.other.external.resource\n expect:\n - some.other.event\n parameters:\n param_1: \"{{ event }}\"\n
Both of the above triggers would be attached to test-deployment
after running prefect deploy
. Triggers passed to prefect deploy
will override any triggers defined in prefect.yaml
While you can define triggers in prefect.yaml
for a given deployment, triggers passed to prefect deploy
will take precedence over those defined in prefect.yaml
.
Note that deployment triggers contribute to the total number of automations in your workspace.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automation-notifications","title":"Automation notifications","text":"Notifications enable you to set up automation actions that send a message.
Automation notifications support sending notifications via any predefined block that is capable of and configured to send a message. That includes, for example, a Slack message to a channel, a Microsoft Teams message to a channel, or an email to an email address.
Automation actions can access templated variables through Jinja syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name.
Jinja templated variable syntax wraps the variable name in double curly brackets, like this: {{ variable }}
.
You can access properties of the underlying flow run objects, including the flow_run, flow, deployment, work_queue, and work_pool objects.
In addition to its native properties, each object includes an id
along with created
and updated
timestamps.
The flow_run|ui_url
token returns the URL for viewing the flow run in Prefect Cloud.
Here\u2019s an example for something that would be relevant to a flow run state-based notification:
Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. \n\n Timestamp: {{ flow_run.state.timestamp }}\n Flow ID: {{ flow_run.flow_id }}\n Flow Run ID: {{ flow_run.id }}\n State message: {{ flow_run.state.message }}\n
The resulting Slack webhook notification would look something like this:
You could include flow
and deployment
properties.
Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n
An automation that reports on work pool status might include notifications using work_pool
properties.
Work pool status alert!\n\nName: {{ work_pool.name }}\nLast polled: {{ work_pool.last_polled }}\n
In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.
Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in related %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n
Note that this example also illustrates the ability to use Jinja features such as iteration and for-loop control structures when templating notifications.
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.
With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.
Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.
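For instance, integration blocks pair stored configuration with such methods. A minimal sketch, assuming the prefect-aws integration is installed and an S3Bucket block named \"my-bucket\" has already been saved:
from prefect_aws.s3 import S3Bucket\n\n# Load the saved block and use one of its pre-built actions\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.upload_from_path(\"local-report.csv\", \"reports/report.csv\")\n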
You may configure blocks through code or via the Prefect Cloud and the Prefect server UI.
You can access blocks for both configuring flow deployments and directly from within your flow code.
Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks you can pip install
the package, then register the blocks you want to use with Prefect Cloud or a Prefect server.
Prefect Cloud and the Prefect server UI display a library of block types from which you can configure blocks for use in your flows.
Blocks and parameters
Blocks are useful for configuration that needs to be shared across flow runs and between flows.
For configuration that will change between flow runs, we recommend using parameters.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.
Block Slug Description Azureazure
Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time
A block that represents a datetime. Docker Container docker-container
Runs a command in a container. Docker Registry docker-registry
Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs
Store data as a file on Google Cloud Storage. GitHub github
Interact with files stored on public GitHub repositories. JSON json
A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config
Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job
Runs a command as a Kubernetes Job. Local File System local-file-system
Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook
Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook
Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook
Enables sending notifications via a provided PagerDuty webhook. Process process
Run a command in a new process. Remote File System remote-file-system
Store data as a file on a remote file system. Supports any remote file system supported by fsspec
. S3 s3
Store data as a file on AWS S3. Secret secret
A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook
Enables sending notifications via a provided Slack webhook. SMB smb
Store data as a file on a SMB share. String string
A block that represents a string. Twilio SMS twilio-sms
Enables sending notifications via Twilio SMS. Webhook webhook
Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.
Integration Block Slug prefect-airbyte Airbyte Connectionairbyte-connection
prefect-airbyte Airbyte Server airbyte-server
prefect-aws AWS Credentials aws-credentials
prefect-aws ECS Task ecs-task
prefect-aws MinIO Credentials minio-credentials
prefect-aws S3 Bucket s3-bucket
prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials
prefect-azure Azure Container Instance Credentials azure-container-instance-credentials
prefect-azure Azure Container Instance Job azure-container-instance-job
prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials
prefect-azure AzureML Credentials azureml-credentials
prefect-bitbucket BitBucket Credentials bitbucket-credentials
prefect-bitbucket BitBucket Repository bitbucket-repository
prefect-census Census Credentials census-credentials
prefect-census Census Sync census-sync
prefect-databricks Databricks Credentials databricks-credentials
prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs
prefect-dbt dbt CLI Profile dbt-cli-profile
prefect-dbt dbt Cloud Credentials dbt-cloud-credentials
prefect-dbt dbt CLI Global Configs dbt-cli-global-configs
prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs
prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs
prefect-dbt dbt CLI Target Configs dbt-cli-target-configs
prefect-docker Docker Host docker-host
prefect-docker Docker Registry Credentials docker-registry-credentials
prefect-email Email Server Credentials email-server-credentials
prefect-firebolt Firebolt Credentials firebolt-credentials
prefect-firebolt Firebolt Database firebolt-database
prefect-gcp BigQuery Warehouse bigquery-warehouse
prefect-gcp GCP Cloud Run Job cloud-run-job
prefect-gcp GCP Credentials gcp-credentials
prefect-gcp GcpSecret gcpsecret
prefect-gcp GCS Bucket gcs-bucket
prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job
prefect-github GitHub Credentials github-credentials
prefect-github GitHub Repository github-repository
prefect-gitlab GitLab Credentials gitlab-credentials
prefect-gitlab GitLab Repository gitlab-repository
prefect-hex Hex Credentials hex-credentials
prefect-hightouch Hightouch Credentials hightouch-credentials
prefect-kubernetes Kubernetes Credentials kubernetes-credentials
prefect-monday Monday Credentials monday-credentials
prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials
prefect-openai OpenAI Completion Model openai-completion-model
prefect-openai OpenAI Image Model openai-image-model
prefect-openai OpenAI Credentials openai-credentials
prefect-slack Slack Credentials slack-credentials
prefect-slack Slack Incoming Webhook slack-incoming-webhook
prefect-snowflake Snowflake Connector snowflake-connector
prefect-snowflake Snowflake Credentials snowflake-credentials
prefect-sqlalchemy Database Credentials database-credentials
prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector
prefect-twitter Twitter Credentials twitter-credentials
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"Blocks are classes that subclass the Block
base class. They can be instantiated and used like normal classes.
For example, to instantiate a block that stores a JSON value, use the JSON
block:
from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"If this JSON value needs to be retrieved later to be used within a flow or task, we can use the .save()
method on the block to store the value in a block document on the Prefect database for retrieval later:
json_block.save(name=\"life-the-universe-everything\")\n
If you'd like to update the block value stored for a given name
, you can overwrite the existing block document by setting overwrite=True
:
json_block.save(overwrite=True)\n
Tip
In the above example, the name \"life-the-universe-everything\"
is inferred from the existing block document
... or save the same block value as a new block document by setting the name
parameter to a new value:
json_block.save(name=\"actually-life-the-universe-everything\")\n
Utilizing the UI
Block documents can also be created and updated via the Prefect UI.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:
from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n json_block = JSON.load(\"life-the-universe-everything\")\n print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n
Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.
To load our JSON block document from before, we can run the following:
from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"]) # 42\n
Sharing Blocks
Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"You can delete a block by using the .delete()
method on the block:
from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n
You can also use the CLI to delete specific blocks with a given slug or id:
prefect block delete json/life-the-universe-everything\n
prefect block delete --id <my-id>\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"To create a custom block type, define a class that subclasses Block
. The Block
base class builds off of Pydantic's BaseModel
, so custom blocks can be declared in the same manner as a Pydantic model.
Here's a block that represents a cube and holds information about the length of each edge in inches:
from prefect.blocks.core import Block\n\nclass Cube(Block):\n edge_length_inches: float\n
You can also include methods on a block to provide useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:
from prefect.blocks.core import Block\n\nclass Cube(Block):\n edge_length_inches: float\n\n def get_volume(self):\n return self.edge_length_inches**3\n\n def get_surface_area(self):\n return 6 * self.edge_length_inches**2\n
Now the Cube
block can be used to store different cube configurations that can later be used in a flow:
from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n cube = Cube.load(cube_name)\n print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr
field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.
Here's an example of an AWSCredentials
block that uses SecretStr
:
from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n aws_access_key_id: Optional[str] = None\n aws_secret_access_key: Optional[SecretStr] = None\n aws_session_token: Optional[str] = None\n profile_name: Optional[str] = None\n region_name: Optional[str] = None\n
Because aws_secret_access_key
has the SecretStr
type hint assigned to it, the value of that field will not be exposed if the object is logged:
aws_credentials_block = AWSCredentials(\n aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n
You can also use the SecretDict
field type provided by Prefect. This type will allow you to add a dictionary field to your block that will have values at all levels automatically obfuscated in the UI or in logs. This is useful for blocks where typing or structure of secret fields is not known until configuration time.
Here's an example of a block that uses SecretDict
:
from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n system_secrets: SecretDict\n system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n system_secrets={\n \"password\": \"p@ssw0rd\",\n \"api_token\": \"token_123456789\",\n \"private_key\": \"<private key here>\",\n },\n system_variables={\n \"self_destruct_countdown_seconds\": 60,\n \"self_destruct_countdown_stop_time\": 7,\n },\n)\n
system_secrets
will be obfuscated when system_configuration_block
is displayed, but system_variables
will be shown in plain-text: print(system_configuration_block)\n# SystemConfiguration(\n# system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n# system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
Available metadata fields include:
Property | Description
--- | ---
_block_type_name | Display name of the block in the UI. Defaults to the class name.
_block_type_slug | Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name.
_logo_url | URL pointing to an image that should be displayed for the block type in the UI. Defaults to None.
_description | Short description of the block type. Defaults to the docstring, if provided.
_code_example | Short code snippet shown in the UI for how to load/use the block type. Defaults to the first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"Blocks are composable. This means that you can create a block that uses functionality from another block by declaring it as an attribute on the block that you're creating. It also means that configuration can be changed for each block independently, which makes it easy to manage configuration that changes on different time frames and to share configuration across multiple use cases.
To illustrate, here's an expanded AWSCredentials
block that includes the ability to get an authenticated session via the boto3
library:
from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        # Unwrap the SecretStr so boto3 receives the plain value\n        secret_access_key = (\n            self.aws_secret_access_key.get_secret_value()\n            if self.aws_secret_access_key\n            else None\n        )\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            aws_secret_access_key=secret_access_key,\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n
The AWSCredentials
block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:
import io\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n
You can use this S3Bucket
block with previously saved AWSCredentials
block values in order to interact with the configured S3 bucket:
my_s3_bucket = S3Bucket(\n bucket_name=\"my_s3_bucket\",\n credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n
Saving block values like this links the values of the two blocks so that any changes to the values stored for the AWSCredentials
block with the name my_aws_credentials
will be seen the next time that block values for the S3Bucket
block named my_s3_bucket
are loaded.
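Once linked like this, the composed block can be loaded and used directly; the nested credentials come along automatically. A short sketch using the custom classes defined above (the key name is illustrative):
my_s3_bucket = S3Bucket.load(\"my_s3_bucket\")\ndata = my_s3_bucket.read(\"some-key\")  # authenticates via the nested AWSCredentials block\n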
Values for nested blocks can also be hard-coded by not saving the child blocks first:
my_s3_bucket = S3Bucket(\n bucket_name=\"my_s3_bucket\",\n credentials=AWSCredentials(\n aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n aws_secret_access_key=\"secret_access_key\"\n )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n
In the above example, the values for AWSCredentials
are saved with my_s3_bucket
and will not be usable with any other blocks.
Block
types","text":"Let's say that you now want to add a bucket_folder
field to your custom S3Bucket
block that represents the default path to read and write objects from (this field exists on our implementation).
We can add the new field to the class definition:
class S3Bucket(Block):\n bucket_name: str\n credentials: AWSCredentials\n bucket_folder: str = None\n ...\n
Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.
If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:
# Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_folder = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:
$ prefect block register --module prefect_aws.credentials\n
This command is useful for registering all blocks found in the credentials module within Prefect Integrations.
Or, if a block has been created in a .py
file, the block can also be registered with the CLI command:
$ prefect block register --file my_block.py\n
The registered block will then be available in the Prefect UI for configuration.
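As a quick check, you can also list the block types known to your Prefect Cloud account or server from the CLI (output will vary by installation):
$ prefect block type ls\n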
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-block-based/","title":"Block Based Deployments","text":"Workers are recommended
This page is about the block-based deployment model. The Work Pools and Workers based deployment model simplifies the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
We encourage you to check out the new deployment experience with guided command line prompts and convenient CI/CD with prefect.yaml
files.
With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.
To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using --infra
flag during the build process).
Work queue affinity improved starting from Prefect 2.0.5
Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.
Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.
See Agents & Work Pools for details.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployments-and-flows","title":"Deployments and flows","text":"Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.
Deployments are uniquely identified by the combination of: flow_name/deployment_name
.
graph LR\n F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n F -.-> B(\"Deployment 'weekly'\"):::gold --> X(\"my_flow/weekly\"):::green\n F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n classDef yellow fill:gold,stroke:gold,stroke-width:4px\n classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n classDef green fill:green,stroke:green,stroke-width:4px,color:white\n classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white
This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-definition","title":"Deployment definition","text":"A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create the deployment definition by:
prefect deployment build
CLI command with deployment options to create a deployment.yaml
deployment definition file, then run prefect deployment apply
to create a deployment on the API using the settings in deployment.yaml
.Deployment
Python object, specifying the deployment options as properties of the object, then building and applying the object using methods of Deployment
.The minimum required information to create a deployment includes:
You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"To create a deployment on the CLI, there are two steps:
deployment.yaml
. This step includes uploading your flow to its configured remote storage location, if one is specified.To build the deployment definition file deployment.yaml
, run the prefect deployment build
Prefect CLI command from the folder containing your flow script and any dependencies of the script.
$ prefect deployment build [OPTIONS] PATH\n
Path to the flow is specified in the format path-to-script:flow-function-name
\u2014 The path and filename of the flow script file, a colon, then the name of the entrypoint flow function.
For example:
$ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n
When you run this command, Prefect:
marvin_flow-deployment.yaml
file for your deployment based on your flow code and options.test
. The work queue test
will be created if it doesn't exist.Uploading files may require storage filesystem libraries
Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs
library.
Ignore files or directories from a deployment
By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.
If you want to omit certain files or directories from your deployments, add a .prefectignore
file to the root directory. .prefectignore
enables users to omit certain files or directories from their deployments.
Similar to other .ignore
files, the syntax supports pattern matching, so an entry of *.pyc
will ensure all .pyc
files are ignored by the deployment call when uploading to remote storage.
You may specify additional options to further customize your deployment.
Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply
, -a
When provided, automatically registers the resulting deployment with the API. --cron TEXT
A cron string that will be used to set a CronSchedule
on the deployment. For example, --cron \"*/1 * * * *\"
to create flow runs from that deployment every minute. --help
Display help for available commands and options. --infra-block TEXT
, -ib
The infrastructure block to use, in block-type/block-name
format. --infra
, -i
The infrastructure type to use. (Default is Process
) --interval INTEGER
An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule
on the deployment. For example, --interval 60
to create flow runs from that deployment every minute. --name TEXT
, -n
The name of the deployment. --output TEXT
, -o
Optional location for the YAML manifest generated as a result of the build
step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT
One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value
. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]'
(note the string format). --param
An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42
. --params
An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'
. --path
An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT
, -p
The work pool that will handle this deployment's runs. \u2502 --rrule TEXT
An RRule
that will be used to set an RRuleSchedule
on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17'
to create flow runs from that deployment every hour but only during business hours. --skip-upload
When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT
, -sb
The storage block to use, in block-type/block-name
or block-type/block-name/path
format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT
, -t
One or more optional tags to apply to the deployment. --version TEXT
, -v
An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT
, -q
The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None
. Note that if a work queue is not set, work will not be scheduled.","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#block-identifiers","title":"Block identifiers","text":"
When specifying a storage block with the -sb
or --storage-block
flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name
.
For example, s3/example-block
is the slug for an S3 block named example-block
.
In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.
block-type/block-name
indicates just the block, including any path included in the block configuration.block-type/block-name/path
indicates a storage path in addition to any path included in the block configuration.When specifying an infrastructure block with the -ib
or --infra-block
flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name
.
Azure
azure
Docker Container DockerContainer
docker-container
GitHub GitHub
github
GCS GCS
gcs
Kubernetes Job KubernetesJob
kubernetes-job
Process Process
process
Remote File System RemoteFileSystem
remote-file-system
S3 S3
s3
SMB SMB
smb
GitLab Repository GitLabRepository
gitlab-repository
Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs
library. See Storage for more information.
A deployment's YAML file configures additional settings needed to create a deployment on the server.
A single flow may have multiple deployments created for it, with different schedules, tags, and so on. A single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.
The default {flow-name}-deployment.yaml
filename may be edited as needed with the --output
flag to prefect deployment build
.
###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n type: process\n env: {}\n labels: {}\n name: null\n command:\n - python\n - -m\n - prefect.engine\n stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n title: Parameters\n type: object\n properties:\n url:\n title: url\n required:\n - url\n definitions: null\n
Editing deployment.yaml
Note the big DO NOT EDIT comment in your deployment's YAML: In practice, anything above this block can be freely edited before running prefect deployment apply
to create the deployment on the API.
We recommend editing most of these fields from the CLI or Prefect UI for convenience.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#parameters-in-deployments","title":"Parameters in deployments","text":"You may provide default parameter values in the deployment.yaml
configuration, and these parameter values will be used for flow runs based on the deployment.
To configure default parameter values, add them to the parameters: {}
line of deployment.yaml
as JSON key-value pairs. The parameter list configured in deployment.yaml
must match the parameters expected by the entrypoint flow function.
parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n
Passing **kwargs as flow parameters
You may pass **kwargs
as a deployment parameter as a \"kwargs\":{}
JSON object containing the key-value pairs of any passed keyword arguments.
parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n
You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.
To edit parameters in the Prefect UI, go the the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.
To create an ad-hoc flow run with different parameter values, go the the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.
If you want the Prefect API to verify the parameter values passed to a flow run against the schema defined by parameter_openapi_schema
, set enforce_parameter_schema
to true
.
When you've configured deployment.yaml
for a deployment, you can create the deployment on the API by running the prefect deployment apply
Prefect CLI command.
$ prefect deployment apply catfacts_flow-deployment.yaml\n
For example:
$ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n
prefect deployment apply
accepts an optional --upload
flag that, when provided, uploads this deployment's files to remote storage.
Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.
$ prefect deployment ls\n Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
When you run a deployed flow with Prefect, the following happens:
Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs you need to configure at least one agent.
Scheduled flow runs
Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start
.
Scheduled flow runs will not run unless an appropriate agent and work pool are configured.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"You can also create deployments from Python scripts by using the prefect.deployments.Deployment
class.
Create a new deployment using configuration defaults for an imported flow:
from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n flow=my_flow,\n name=\"example-deployment\", \n version=1, \n work_queue_name=\"demo\",\n work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n
Create a new deployment with a pre-defined storage block and an infrastructure override:
from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n flow=my_flow,\n name=\"s3-example\",\n version=2,\n work_queue_name=\"aws\",\n work_pool_name=\"default-agent-pool\",\n storage=storage,\n infra_overrides={\n \"env\": {\n \"ENV_VAR\": \"value\"\n }\n },\n)\n\ndeployment.apply()\n
If you have settings that you want to share from an existing deployment you can load those settings:
deployment = Deployment(\n name=\"a-name-you-used\", \n flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n
Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
View all of the parameters for the Deployment
object in the Python API documentation.
When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.
Deployment properties include:
Property Descriptionid
An auto-generated UUID ID value identifying the deployment. created
A datetime
timestamp indicating when the deployment was created. updated
A datetime
timestamp indicating when the deployment was last changed. name
The name of the deployment. version
The version of the deployment description
A description of the deployment. flow_id
The id of the flow associated with the deployment. schedule
An optional schedule for the deployment. is_schedule_active
Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides
One or more optional infrastructure overrides parameters
An optional dictionary of parameters for flow runs scheduled by the deployment. tags
An optional list of tags for the deployment. work_queue_name
The optional work queue that will handle the deployment's run parameter_openapi_schema
JSON schema for flow parameters. enforce_parameter_schema
Whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema
path
The path to the deployment.yaml file entrypoint
The path to a flow entry point storage_document_id
Storage block configured for the deployment. infrastructure_document_id
Infrastructure block configured for the deployment. You can inspect a deployment using the CLI with the prefect deployment inspect
command, referencing the deployment with <flow_name>/<deployment_name>
.
$ prefect deployment inspect 'Cat Facts/catfact'\n{\n 'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n 'created': '2022-07-26T03:48:14.723328+00:00',\n 'updated': '2022-07-26T03:50:02.043238+00:00',\n 'name': 'catfact',\n 'version': '899b136ebc356d58562f48d8ddce7c19',\n 'description': None,\n 'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n 'schedule': None,\n 'is_schedule_active': True,\n 'infra_overrides': {},\n 'parameters': {},\n 'tags': [],\n 'work_queue_name': 'test',\n 'parameter_openapi_schema': {\n 'title': 'Parameters',\n 'type': 'object',\n 'properties': {'url': {'title': 'url'}},\n 'required': ['url']\n },\n 'path': '/Users/terry/test/testflows/catfact',\n 'entrypoint': 'catfact.py:catfacts_flow',\n 'manifest_path': None,\n 'storage_document_id': None,\n 'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n 'infrastructure': {\n 'type': 'process',\n 'env': {},\n 'labels': {},\n 'name': None,\n 'command': ['python', '-m', 'prefect.engine'],\n 'stream_output': True\n }\n}\n
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates schedules flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"deployment triggers are only available in Prefect Cloud
Deployments can optionally take a trigger specification, which will configure an automation to run the deployment based on the presence or absence of events, and optionally pass event data into the deployment run as parameters via jinja templating.
triggers:\n - enabled: true\n match:\n prefect.resource.id: prefect.flow-run.*\n expect:\n - prefect.flow-run.Completed\n match_related:\n prefect.resource.name: prefect.flow.etl-flow\n prefect.resource.role: flow\n parameters:\n param_1: \"{{ event }}\"\n
When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related
key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.
In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.
The prefect deployment
CLI command provides commands for managing and running deployments locally.
apply
Create or update a deployment from a YAML file. build
Generate a deployment YAML from /path/to/file.py:flow_function. delete
Delete a deployment. inspect
View details about a deployment. ls
View all deployments or deployments for specific flows. pause-schedule
Pause schedule of a given deployment. resume-schedule
Resume schedule of a given deployment. run
Create a flow run for the given flow and deployment. schedule
Commands for interacting with your deployment's schedules. set-schedule
Set schedule for a given deployment. Deprecated Schedule Commands
The pause-schedule, resume-schedule, and set-schedule commands are deprecated due to the introduction of multi-schedule support for deployments. Use the new prefect deployment schedule
command for enhanced flexibility and control over your deployment schedules.
You can create a flow run from a deployment in a Python script with the run_deployment
function.
from prefect.deployments import run_deployment\n\n\ndef main():\n response = run_deployment(name=\"flow-name/deployment-name\")\n print(response)\n\n\nif __name__ == \"__main__\":\n main()\n
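run_deployment also accepts run-specific options, such as overriding the default parameters for that run. A sketch (the parameter name and value are illustrative):
from prefect.deployments import run_deployment\n\nresponse = run_deployment(\n    name=\"flow-name/deployment-name\",\n    parameters={\"answer\": 42},\n)\n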
PREFECT_API_URL
setting for agents
You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL
must be set for the environment in which your agent is running.
If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
Deployments are server-side representations of flows. They store the crucial metadata needed for remote orchestration including when, where, and how a workflow should run. Deployments elevate workflows from functions that you must call manually to API-managed entities that can be triggered remotely.
Here we will focus largely on the metadata that defines a deployment and how it is used. Different ways of creating a deployment populate these fields differently.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#overview","title":"Overview","text":"Every Prefect deployment references one and only one \"entrypoint\" flow (though that flow may itself call any number of subflows). Different deployments may reference the same underlying flow, a useful pattern when developing or promoting workflow changes through staged environments.
The complete schema that defines a deployment is as follows:
class Deployment:\n \"\"\"\n Structure of the schema defining a deployment\n \"\"\"\n\n # required defining data\n name: str \n flow_id: UUID\n entrypoint: str\n path: str = None\n\n # workflow scheduling and parametrization\n parameters: dict = None\n parameter_openapi_schema: dict = None\n schedules: list[Schedule] = None\n paused: bool = False\n trigger: Trigger = None\n\n # metadata for bookkeeping\n version: str = None\n description: str = None\n tags: list = None\n\n # worker-specific fields\n work_pool_name: str = None\n work_queue_name: str = None\n infra_overrides: dict = None\n pull_steps: dict = None\n
All methods for creating Prefect deployments are interfaces for populating this schema. Let's look at each section in turn.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#required-data","title":"Required data","text":"Deployments universally require both a name
and a reference to an underlying Flow
. In almost all instances of deployment creation, users do not need to concern themselves with the flow_id
as most interfaces will only need the flow's name. Note that the deployment name is not required to be unique across all deployments but is required to be unique for a given flow ID. As a consequence, you will often see references to the deployment's unique identifying name {FLOW_NAME}/{DEPLOYMENT_NAME}
. For example, triggering a run of a deployment from the Prefect CLI can be done via:
prefect deployment run my-first-flow/my-first-deployment\n
The other two fields are less obvious:
path
: the path can generally be interpreted as the runtime working directory for the flow. For example, if a deployment references a workflow defined within a Docker image, the path
will be the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems.entrypoint
: the entrypoint of a deployment is a relative reference to a function decorated as a flow that exists on some filesystem. It is always specified relative to the path
. Entrypoints use Python's standard path-to-object syntax (e.g., path/to/file.py:function_name
or simply path:object
).The entrypoint must reference the same flow as the flow ID.
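For example (a purely hypothetical layout), a flow saved at /opt/project/flows/etl.py with the contents below would have a path of /opt/project and an entrypoint of flows/etl.py:daily_etl:
# /opt/project/flows/etl.py (hypothetical layout)\nfrom prefect import flow\n\n@flow\ndef daily_etl():\n    print(\"running the daily ETL\")\n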
Note that Prefect requires that deployments reference flows defined within Python files. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them.
Deployments do not contain code definitions
Deployment metadata references code that exists in potentially diverse locations within your environment. This separation of concerns means that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.
This is the heart of the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including Prefect Cloud).
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#scheduling-and-parametrization","title":"Scheduling and parametrization","text":"One of the primary motivations for creating deployments of flows is to remotely schedule and trigger them. Just as flows can be called as functions with different input values, so can deployments be triggered or scheduled with different values through the use of parameters.
The following fields capture the necessary metadata to perform such actions:
schedules
: a list of schedule objects. Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when updating a deployment schedule in the UI, basic information such as a cron string or interval is all that's required.trigger
(Cloud-only): triggers allow you to define event-based rules for running a deployment. For more information see Automations.parameter_openapi_schema
: an OpenAPI compatible schema that defines the types and defaults for the flow's parameters. This is used by both the UI and the backend to expose options for creating manual runs as well as type validation.parameters
: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.enforce_parameter_schema
: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema
.Scheduling is asynchronous and decoupled
Because deployments are nothing more than metadata, runs can be created at any time. Note that pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.
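As a minimal sketch of how these fields are populated in practice (the flow and deployment names here are illustrative), serving a flow with an interval and default parameters fills in the schedules and parameters fields for you:
from prefect import flow\n\n@flow\ndef send_greeting(user: str = \"world\"):\n    print(f\"Hello {user}!\")\n\nif __name__ == \"__main__\":\n    # Creates a deployment that runs hourly with a default parameter value\n    send_greeting.serve(\n        name=\"hourly-greeting\",\n        interval=3600,\n        parameters={\"user\": \"Marvin\"},\n    )\n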
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#running-a-deployed-flow-from-within-python-flow-code","title":"Running a deployed flow from within Python flow code","text":"Prefect provides a run_deployment
function that can be used to schedule the run of an existing deployment when your Python code executes.
from prefect.deployments import run_deployment\n\ndef main():\n    run_deployment(name=\"my_flow_name/my_deployment_name\")\n\nif __name__ == \"__main__\":\n    main()\n
Run a deployment without blocking
By default, run_deployment
blocks until the scheduled flow run finishes executing. Pass timeout=0
to return immediately and not block.
If you call run_deployment
from within a flow or task, the scheduled flow run will be linked to the calling flow run (or the calling task's flow run) as a subflow run by default.
Subflow runs have different behavior than regular flow runs. For example, a subflow run can't be suspended independently of its parent flow. If you'd rather not link the scheduled flow run to the calling flow or task run, you can disable this behavior by passing as_subflow=False
:
from prefect import flow\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef my_flow():\n # The scheduled flow run will not be linked to this flow as a subflow.\n run_deployment(name=\"my_other_flow/my_deployment_name\", as_subflow=False)\n
The return value of run_deployment
is a FlowRun object containing metadata about the scheduled run. You can use this object to retrieve information about the run after calling run_deployment
:
import asyncio\n\nfrom prefect import get_client\nfrom prefect.deployments import run_deployment\n\nasync def main():\n    flow_run = await run_deployment(name=\"my_flow_name/my_deployment_name\")\n    flow_run_id = flow_run.id\n\n    # If you save the flow run's ID, you can use it later to retrieve\n    # flow run metadata again, e.g. to check if it's completed.\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        print(f\"Current state of the flow run: {flow_run.state}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
Using the Prefect client
For more information on using the Prefect client to interact with Prefect's REST API, see our guide.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#versioning-and-bookkeeping","title":"Versioning and bookkeeping","text":"Versions, descriptions and tags are omnipresent fields throughout Prefect that can be easy to overlook. However, putting some extra thought into how you use these fields can pay dividends down the road.
version
: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle. For example, if you leverage git
to manage code changes, use either a tag or commit hash in this field. If you don't set a value for the version, Prefect will compute a hash.description
: the description field of a deployment is a place to provide rich reference material for downstream stakeholders such as intended use and parameter documentation. Markdown formatting will be rendered in the Prefect UI, allowing for section headers, links, tables, and other formatting. If not provided explicitly, Prefect will use the docstring of your flow function as a default value.tags
: tags are a mechanism for grouping related work together across a diverse set of objects. Tags set on a deployment will be inherited by that deployment's flow runs. These tags can then be used to filter what runs are displayed on the primary UI dashboard, allowing you to customize different views into your work. In addition, in Prefect Cloud you can easily find objects through searching by tag.All of these bits of metadata can be leveraged to great effect by injecting them into the processes that Prefect is orchestrating. For example, you can use both run ID and versions to organize files that you produce from your workflows, or associate your flow run's tags with the metadata of a job it orchestrates. This metadata is available during execution through Prefect runtime.
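A minimal sketch of this pattern (the output path is purely illustrative): the prefect.runtime module exposes run metadata during execution, so a flow can, for example, key its outputs by flow run ID:
from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef write_report():\n    # flow_run.id and flow_run.tags are populated by Prefect at runtime;\n    # the output path below is illustrative\n    output_path = f\"reports/{flow_run.id}.json\"\n    print(f\"writing {output_path} for run tagged {flow_run.tags}\")\n\nif __name__ == \"__main__\":\n    write_report()\n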
Everything has a version
Deployments aren't the only entity in Prefect with a version attached; both flows and tasks also have versions that can be set through their respective decorators. These versions will be sent to the API anytime the flow or task is run and thereby allow you to audit your changes across all levels.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#workers-and-work-pools","title":"Workers and Work Pools","text":"Workers and work pools are an advanced deployment pattern that allow you to dynamically provision infrastructure for each flow run. In addition, the work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure. To do this, a deployment using workers needs to evaluate the following fields:
work_pool_name
: the name of the work pool this deployment will be associated with. Work pool types mirror infrastructure types and therefore the decision here affects the options available for the other fields.work_queue_name
: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.infra_overrides
: often called job_variables
within various interfaces, this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for things such as Docker image names, Kubernetes annotations and limits, and environment variables.pull_steps
: a JSON description of steps that should be performed to retrieve flow code or configuration and prepare the runtime environment for workflow execution.Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of their deployment.
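As a minimal sketch of how these fields can be populated from Python (the work pool, image, and variable values here are illustrative), the .deploy method exposes infra_overrides as its job_variables argument:
from prefect import flow\n\n@flow\ndef my_flow():\n    print(\"Hello from a worker-submitted run!\")\n\nif __name__ == \"__main__\":\n    # job_variables populates the deployment's infra_overrides field\n    my_flow.deploy(\n        name=\"worker-deployment\",\n        work_pool_name=\"my-docker-pool\",\n        image=\"my-registry/my-image:latest\",\n        job_variables={\"env\": {\"EXTRA_PIP_PACKAGES\": \"s3fs\"}},\n    )\n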
For more information see the guide to deploying with a worker.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#two-approaches-to-deployments","title":"Two approaches to deployments","text":"There are two primary ways to deploy flows with Prefect, differentiated by how much control Prefect has over the infrastructure in which the flows run.
In one setup, deploying Prefect flows is analogous to deploying a webserver - users author their workflows and then start a long-running process (often within a Docker container) that is responsible for managing all of the runs for the associated deployment(s).
In the other setup, you do a little extra up-front work to set up a work pool and a base job template that defines how individual flow runs will be submitted to infrastructure.
Prefect provides several types of work pools corresponding to different types of infrastructure. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.
Some work pool types require a client-side worker to submit job definitions to the appropriate infrastructure with each run.
Each of these setups can support production workloads. The choice ultimately boils down to your use case and preferences. Read further to decide which setup is best for your situation.
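A minimal sketch of the two approaches side by side (all names are illustrative); both register a deployment for the same flow:
from prefect import flow\n\n@flow\ndef my_flow():\n    print(\"hello\")\n\nif __name__ == \"__main__\":\n    # Long-lived approach: this process stays up and executes runs itself\n    my_flow.serve(name=\"served-deployment\")\n\n    # Work pool approach (alternative): register the deployment and let a\n    # worker provision infrastructure for each run, e.g.:\n    # my_flow.deploy(name=\"pooled-deployment\",\n    #                work_pool_name=\"my-pool\", image=\"my-image:tag\")\n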
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#serving-flows-on-long-lived-infrastructure","title":"Serving flows on long-lived infrastructure","text":"When you have several flows running regularly, the serve
method of the Flow
object or the serve
utility is a great option for managing multiple flows simultaneously.
Once you have authored your flow and decided on its deployment settings as described above, all that's left is to run this long-running process in a location of your choosing. The process will stay in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Note that because runs are submitted to subprocesses, any external infrastructure configuration will need to be set up beforehand and kept associated with this process.
This approach has many benefits.
However, there are a few reasons you might consider running flows on dynamically provisioned infrastructure with work pools instead:
Work pools allow Prefect to exercise greater control of the infrastructure on which flows run. Options for serverless work pools allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to provision cloud infrastructure via a single CLI command, if you use a Prefect Cloud push work pool option.
With work pools, you can configure and monitor infrastructure from the Prefect UI, and infrastructure is dynamically provisioned for each flow run.
You don't have to commit to one approach
You are not required to use only one of these approaches for your deployments. You can mix and match approaches based on the needs of each flow. Further, you can change the deployment approach for a particular flow as its needs evolve. For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines.
","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.
LocalFileSystem
RemoteFileSystem
Azure
GitHub
GitLab
GCS
S3
SMB
Additional file system types are available in Prefect Collections.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"The LocalFileSystem
block enables interaction with the files in your current development environment.
LocalFileSystem
properties include a basepath: a string path to the location of files on the local filesystem. Access to files outside of the basepath is not allowed. For example:
from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n
Limited access to local file system
Be aware that LocalFileSystem
access is limited to the exact path provided. This file system may not be ideal for some use cases. The execution environment for your workflows may not have the same file system as the environment you are writing and deploying your code on.
Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"The RemoteFileSystem
block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem
uses fsspec
and supports any file system that fsspec
supports.
RemoteFileSystem
properties include a basepath (a string path to the location of files on the remote filesystem) and settings (a dictionary of extra parameters passed to the underlying fsspec filesystem).
The file system is specified using a protocol:
s3://my-bucket/my-folder/
will use S3gcs://my-bucket/my-folder/
will use GCSaz://my-bucket/my-folder/
will use AzureFor example, to use it with Amazon S3:
from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n
You may need to install additional libraries to use some remote storage types.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"How can we use RemoteFileSystem
to store our flow code? The following is a use case where we use MinIO as a storage backend:
from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n basepath=\"s3://my-bucket\",\n settings={\n \"key\": \"MINIO_ROOT_USER\",\n \"secret\": \"MINIO_ROOT_PASSWORD\",\n \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n },\n)\nminio_block.save(\"minio\")\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#azure","title":"Azure","text":"The Azure
file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure
block uses adlfs
.
Azure
properties include the bucket_path and optional Azure credential settings; anonymous access and authentication via DefaultAzureCredential are supported. To create a block:
from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n
You need to install adlfs
to use it.
The GitHub
filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.
GitHub
properties include the repository URL, an optional reference such as a branch or tag, and an access_token: a GitHub Personal Access Token (PAT) with repo scope, required only for private repositories. To create a block:
from prefect.filesystems import GitHub\n\nblock = GitHub(\n repository=\"https://github.com/my-repo/\",\n access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"The GitLabRepository
block is read-only and works with private GitLab repositories.
GitLabRepository
properties include the repository URL, an optional reference such as a branch or tag, and credentials: a GitLabCredentials block holding a Personal Access Token (PAT) with read_repository scope. To create a block:
from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n reference=\"main\",\n credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n
To use it in a deployment (and apply):
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n
Note that to use this block, you need to install the prefect-gitlab
collection.
The GCS
file system block enables interaction with Google Cloud Storage. Under the hood, GCS
uses gcsfs
.
GCS
properties include the bucket_path, an optional project, and optional service_account_info credentials.
To create a block:
from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n
You need to install gcsfs
to use it.
The S3
file system block enables interaction with Amazon S3. Under the hood, S3
uses s3fs
.
S3
properties include the bucket_path and optional AWS credentials (aws_access_key_id and aws_secret_access_key).
To create a block:
from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n
You need to install s3fs
to use this block.
The SMB
file system block enables interaction with SMB shared network storage. Under the hood, SMB
uses smbprotocol
. It is used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block can copy files but cannot create directories.
SMB
properties include the basepath, smb_host, smb_port, smb_username, and smb_password.
To create a block:
from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n
To use it in a deployment:
prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n
You need to install smbprotocol
to use it.
If you leverage S3
, GCS
, or Azure
storage blocks, and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure to set them, either explicitly on the block or via environment variables, configuration files, or IAM roles, in both the build and runtime environments for your deployments.
A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs
, gcsfs
or adlfs
. This includes Prefect base Docker images.
You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.
In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES
environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.
In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.
Here is an example from a deployment YAML file showing how to specify the installation of s3fs
into your image:
infrastructure:\n type: docker-container\n env:\n EXTRA_PIP_PACKAGES: s3fs # could be gcsfs, adlfs, etc.\n
You may specify multiple dependencies by providing a comma-delimited list.
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"Configuration for a file system can be saved to the Prefect API. For example:
fs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n
This file system can be retrieved for later use with load
.
fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\") # b'hello'\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"Prefect provides two abstract file system types, ReadableFileSystem
and WriteableFileSystem
. A readable file system must implement a read_path method, which takes a file path to read content from and returns bytes. A writeable file system must implement a write_path method, which takes a file path and content and writes the content to the file as bytes. A file system may implement both of these types.
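A minimal sketch of both interfaces using LocalFileSystem (the basepath is illustrative):
from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-demo\")  # illustrative path\nfs.write_path(\"greeting.txt\", b\"hello\")  # WriteableFileSystem interface\nprint(fs.read_path(\"greeting.txt\"))  # ReadableFileSystem interface -> b'hello'\n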
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/flows/","title":"Flows","text":"Flows are the most central Prefect object. A flow is a container for workflow logic as-code and allows users to configure how their workflows behave. Flows are defined as Python functions, and any Python function is eligible to be a flow.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"Flows can be thought of as special types of functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow
decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:
Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time and final state.
Flows can include calls to tasks as well as to other flows, which Prefect calls \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.
Deployments elevate individual workflows from functions that you call manually to API-managed entities.
Tasks must be called from flows
All tasks must be called from within a flow. Tasks may not be called from other tasks.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"A flow run represents a single execution of the flow.
You can create a flow run by calling the flow manually. For example, by running a Python script or importing the flow into an interactive session and calling it.
You can also create a flow run by:
cron
to invoke a flow functionHowever you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.
When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"The @flow
decorator is used to designate a flow:
from prefect import flow\n\n@flow\ndef my_flow():\n return\n
There are no rigid rules for what code you include within a flow definition - all valid Python is acceptable.
Flows are uniquely identified by name. You can provide a name
parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.
@flow(name=\"My Flow\")\ndef my_flow():\n return\n
Flows can call tasks to allow Prefect to orchestrate and track more granular units of work:
from prefect import flow, task\n\n@task\ndef print_hello(name):\n print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n print_hello(name)\n
Flows and tasks
There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!
However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.
In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.
You may call any number of other tasks, subflows, and even regular Python functions within your flow. You can pass parameters to your flow function that will be used elsewhere in the workflow, and Prefect will report on the progress and final state of any invocation.
Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.
Argument Descriptiondescription
An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function. name
An optional name for the flow. If not provided, the name will be inferred from the function. retries
An optional number of times to retry on flow run failure. retry_delay_seconds
An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries
is nonzero. flow_run_name
An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string. task_runner
An optional task runner to use for task execution within the flow when you .submit()
tasks. If not provided and you .submit()
tasks, the ConcurrentTaskRunner
will be used. timeout_seconds
An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. validate_parameters
Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True
. version
An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null. For example, you can provide a name
value for the flow. Here we've also used the optional description
argument and specified a non-default task runner.
from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n description=\"My flow using SequentialTaskRunner\",\n task_runner=SequentialTaskRunner())\ndef my_flow():\n return\n
You can also provide the description as the docstring on the flow function.
@flow(name=\"My Flow\",\n task_runner=SequentialTaskRunner())\ndef my_flow():\n \"\"\"My flow using SequentialTaskRunner\"\"\"\n return\n
You can distinguish runs of this flow by providing a flow_run_name
. This setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:
import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n
Additionally this setting also accepts a function that returns a string for the flow run name:
import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n date = datetime.datetime.now(datetime.timezone.utc)\n\n return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nmy_flow(name=\"marvin\")\n
If you need access to information about the flow, use the prefect.runtime
module. For example:
from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n flow_name = flow_run.flow_name\n\n parameters = flow_run.parameters\n name = parameters[\"name\"]\n limit = parameters[\"limit\"]\n\n return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nmy_flow(name=\"marvin\")\n
Note that validate_parameters
will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int
and \"5\" is passed, it will be resolved to 5
. If set to False
, no validation will be performed on flow parameters.
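For instance (a minimal sketch), an int-annotated parameter coerces a string value exactly as described above:
from prefect import flow\n\n@flow\ndef double(x: int):\n    return x * 2\n\n# \"5\" is coerced to the int 5 because validate_parameters defaults to True\ndouble(\"5\")  # returns 10\n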
The simplest workflow is just a @flow
function that does all the work of the workflow.
from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n print(f\"Hello {name}!\")\n\nhello_world(\"Marvin\")\n
When you run this flow, you'll see output like the following:
$ python hello.py\n15:11:23.594 | INFO | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO | Flow run 'benevolent-donkey' - Finished in state Completed()\n
A better practice is to create @task
functions that do the specific work of your flow, and use your @flow
function as the conductor that orchestrates the flow of your application:
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n\nhello_world(\"Marvin\")\n
When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.
$ python hello.py\n15:15:58.673 | INFO | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#visualizing-flow-structure","title":"Visualizing flow structure","text":"You can get a quick sense of the structure of your flow using the .visualize()
method on your flow. Calling this method will attempt to produce a schematic diagram of your flow and tasks without actually running your flow code.
Functions and code not inside of flows or tasks will still be run when calling .visualize()
. This may have unintended consequences. Place your code into tasks to avoid unintended execution.
To use the visualize()
method, Graphviz must be installed and on your PATH. Please install Graphviz from http://www.graphviz.org/download/. And note: just installing the graphviz
python package is not sufficient.
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@task(name=\"Print Hello Again\")\ndef print_hello_again(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n message2 = print_hello_again(message)\n\nhello_world.visualize()\n
Prefect cannot automatically produce a schematic for dynamic workflows, such as those with loops or if/else control flow. In this case, you can provide tasks with mock return values for use in the visualize()
call.
from prefect import flow, task\n@task(viz_return_value=[4])\ndef get_list():\n return [1, 2, 3]\n\n@task\ndef append_one(n):\n return n.append(6)\n\n@flow\ndef viz_return_value_tracked():\n l = get_list()\n for num in range(3):\n l.append(5)\n append_one(l)\n\nviz_return_value_tracked.visualize()\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"
Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.
Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run concurrently by using AnyIO task groups or asyncio.gather.
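A minimal sketch of running two async subflows concurrently with asyncio.gather:
import asyncio\nfrom prefect import flow\n\n@flow\nasync def child(n: int):\n    return n * 2\n\n@flow\nasync def parent():\n    # Both subflow runs execute concurrently\n    return await asyncio.gather(child(1), child(2))\n\nif __name__ == \"__main__\":\n    print(asyncio.run(parent()))  # [2, 4]\n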
Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.
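For example (a minimal sketch), a future returned by task.submit can be passed directly to a subflow, which receives the resolved value:
from prefect import flow, task\n\n@task\ndef compute():\n    return 42\n\n@flow\ndef child(value: int):\n    print(value)  # receives 42, not a future\n\n@flow\ndef parent():\n    future = compute.submit()\n    child(future)  # the future is resolved into data before the subflow starts\n\nif __name__ == \"__main__\":\n    parent()\n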
The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.
A task that represents a subflow will be annotated as such in its state_details
via the presence of a child_flow_run_id
field. A subflow can be identified via the presence of a parent_task_run_id
on state_details
.
You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.
Cancelling subflow runs
Inline subflow runs, specifically those created without run_deployment
, cannot be cancelled without cancelling their parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.
from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n my_subflow(message)\n\nhello_world(\"Marvin\")\n
You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:
from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n print(f\"Subflow says: {msg}\")\n
Here's a parent flow that imports and uses my_subflow()
as a subflow:
from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n msg = f\"Hello {name}!\"\n print(msg)\n return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n message = print_hello(name)\n my_subflow(message)\n\nhello_world(\"Marvin\")\n
Running the hello_world()
flow (in this example from the file hello.py
) creates a flow run like this:
$ python hello.py\n15:19:21.651 | INFO | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n
Subflows or tasks?
In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question is:
\"When should I use a subflow instead of a task?\"
We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.
Subflows enable you to group related tasks within your workflow. Scenarios where you might choose a subflow rather than calling tasks individually include grouping related tasks for better observability, running conditional branches of logic, supplying different parameter values to a group of tasks, and running groups of tasks on different task runners.
Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.
Prefect API requires keyword arguments
When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.
Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:
from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n a: int\n b: float\n c: str\n\n@flow\ndef model_validator(model: Model):\n print(model)\n
Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.
For example, to automatically convert something to a datetime:
from prefect import flow\nfrom datetime import datetime, timezone\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.now(timezone.utc)\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nwhat_day_is_it(\"2021-01-01T02:00:19.180906\")\n# It was Friday on 2021-01-01T02:00:19.180906\n
Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed
state. If a flow run for a deployment receives invalid parameters, it will move from a Pending
state to a Failed
without entering a Running
state.
Flow run parameters cannot exceed 512kb
in size
Prerequisite
Read the documentation about states before proceeding with this section.
The final state of the flow is determined by its return value. The following rules apply:
If an exception is raised directly in the flow function, the flow run is marked as failed. If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it: if any task run or subflow run failed, the final flow run state is marked FAILED, and if any task run was cancelled, the final flow run state is marked CANCELLED. If the flow returns a manually created state, it is used as the state of the final flow run, allowing for manual determination of final state. If the flow run returns any other object, it is marked as completed. The following examples illustrate each of these cases:
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"If an exception is raised within the flow function, the flow is immediately marked as failed.
from prefect import flow\n\n@flow\ndef always_fails_flow():\n raise ValueError(\"This flow immediately fails\")\n\nalways_fails_flow()\n
Running this flow produces the following result:
22:22:36.864 | INFO | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return none
","text":"A flow with no return statement is determined by the state of all of its task runs.
from prefect import flow, task\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_fails_flow():\n always_fails_task.submit().result(raise_on_failure=False)\n always_succeeds_task()\n\nif __name__ == \"__main__\":\n always_fails_flow()\n
Running this flow produces the following result:
18:32:05.345 | INFO | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n18:32:05.638 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"If a flow returns one or more futures, the final state is determined based on the underlying states.
from prefect import flow, task\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_succeeds_flow():\n x = always_fails_task.submit().result(raise_on_failure=False)\n y = always_succeeds_task.submit(wait_for=[x])\n return y\n\nif __name__ == \"__main__\":\n always_succeeds_flow()\n
Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:
18:35:24.965 | INFO | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\n18:35:25.265 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED
.
from prefect import task, flow\n\n@task\ndef always_fails_task():\n raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n return \"bar\"\n\n@flow\ndef always_fails_flow():\n x = always_fails_task()\n y = always_succeeds_task()\n z = always_succeeds_flow()\n return x, y, z\n
Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed
, but the states of each of the returned futures is included in the flow state:
20:57:51.547 | INFO | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n
Returning multiple states
When returning multiple states, they must be contained in a set
, list
, or tuple
. If other collection types are used, the result of the contained states will not be checked.
If a flow returns a manually created state, the final state is determined based on the return value.
from prefect import task, flow\nfrom prefect.states import Completed, Failed\n\n@task\ndef always_fails_task():\n raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n print(\"I'm fail safe!\")\n return \"success\"\n\n@flow\ndef always_succeeds_flow():\n x = always_fails_task.submit()\n y = always_succeeds_task.submit()\n if y.result() == \"success\":\n return Completed(message=\"I am happy with this result\")\n else:\n return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n always_succeeds_flow()\n
Running this flow produces the following result.
18:37:42.844 | INFO | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"If the flow run returns any other object, then it is marked as completed.
from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n
Running this flow produces the following result.
21:02:45.715 | INFO | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n21:02:45.879 | ERROR | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-a-flow","title":"Serving a flow","text":"The simplest way to create a deployment for your flow is by calling its serve
method. This method creates a deployment for the flow and starts a long-running process that monitors for work from the Prefect server. When work is found, it is executed within its own isolated subprocess.
from prefect import flow\n\n\n@flow(log_prints=True)\ndef hello_world(name: str = \"world\", goodbye: bool = False):\n print(f\"Hello {name} from Prefect! \ud83e\udd17\")\n\n if goodbye:\n print(f\"Goodbye {name}!\")\n\n\nif __name__ == \"__main__\":\n # creates a deployment and stays running to monitor for work instructions generated on the server\n\n hello_world.serve(name=\"my-first-deployment\",\n tags=[\"onboarding\"],\n parameters={\"goodbye\": True},\n interval=60)\n
This interface provides all of the configuration needed for a deployment, such as its name, schedule, tags, and default parameter values, with no strong infrastructure requirements.
Schedules are auto-paused on shutdown
By default, stopping the process running flow.serve
will pause the schedule for the deployment (if it has one). When running this in environments where restarts are expected, use the pause_on_shutdown=False
flag to prevent this behavior:
if __name__ == \"__main__\":\n hello_world.serve(name=\"my-first-deployment\",\n tags=[\"onboarding\"],\n parameters={\"goodbye\": True},\n pause_on_shutdown=False,\n interval=60)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-multiple-flows-at-once","title":"Serving multiple flows at once","text":"You can take this further and serve multiple flows with the same process using the serve
utility along with the to_deployment
method of flows:
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n serve(slow_deploy, fast_deploy)\n
The behavior and interfaces are identical to the single flow case.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#retrieve-a-flow-from-remote-storage","title":"Retrieve a flow from remote storage","text":"Flows can be retrieved from remote storage using the flow.from_source
method.
flow.from_source
accepts a git repository URL and an entrypoint pointing to the flow to load from the repository:
from prefect import flow\n\nmy_flow = flow.from_source(\n source=\"https://github.com/PrefectHQ/prefect.git\",\n entrypoint=\"flows/hello_world.py:hello\"\n)\n\nif __name__ == \"__main__\":\n my_flow()\n
16:40:33.818 | INFO | prefect.engine - Created flow run 'muscular-perch' for flow 'hello'\n16:40:34.048 | INFO | Flow run 'muscular-perch' - Hello world!\n16:40:34.706 | INFO | Flow run 'muscular-perch' - Finished in state Completed()\n
A flow entrypoint is the path to the file the flow is located in and the name of the flow function separated by a colon.
If you need additional configuration, such as specifying a private repository, you can provide a GitRepository
instead of a URL:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/private-repo.git\",\n branch=\"dev\",\n credentials={\n \"access_token\": Secret.load(\"github-access-token\").get()\n }\n ),\n entrypoint=\"flows.py:my_flow\"\n)\n\nif __name__ == \"__main__\":\n my_flow()\n
You can serve loaded flows
Flows loaded from remote storage can be served using the same serve
method as local flows:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\"\n ).serve(name=\"my-deployment\")\n
When you serve a flow loaded from remote storage, the serving process will periodically poll your remote storage for updates to the flow's code. This pattern allows you to update your flow code without restarting the serving process.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-or-suspending-a-flow-run","title":"Pausing or suspending a flow run","text":"Prefect provides you with the ability to halt a flow run with two functions that are similar, but slightly different. When a flow run is paused, code execution is stopped and the process continues to run. When a flow run is suspended, code execution is stopped and so is the process.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-a-flow-run","title":"Pausing a flow run","text":"Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run
and resume_flow_run
functions.
Timeouts
Paused flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it paused and never resumed. You can specify a different timeout period in seconds using the timeout
parameter.
Most simply, pause_flow_run
can be called inside a flow:
from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n return \"a raft of ducks walk into a bar...\"\n\n\n@task\nasync def marvin_punchline():\n return \"it's a wonder none of them ducked!\"\n\n\n@flow\nasync def inspiring_joke():\n await marvin_setup()\n await pause_flow_run(timeout=600) # pauses for 10 minutes\n await marvin_punchline()\n
You can also implement conditional pauses:
from time import sleep\n\nfrom prefect import task, flow, pause_flow_run\nfrom prefect.states import StateType\n\n@task\ndef task_one():\n for i in range(3):\n sleep(1)\n print(i)\n\n@flow\ndef my_flow():\n terminal_state = task_one.submit(return_state=True)\n if terminal_state.type == StateType.COMPLETED:\n print(\"Task one succeeded! Pausing flow run..\")\n pause_flow_run(timeout=2)\n else:\n print(\"Task one failed. Skipping pause flow run..\")\n
Calling the inspiring_joke flow above will block code execution after the first task and wait for resumption to deliver the punchline.
await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n
Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run
utility via client code.
resume_flow_run(FLOW_RUN_ID)\n
The paused flow run will then finish!
> \"it's a wonder none of them ducked!\"\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#suspending-a-flow-run","title":"Suspending a flow run","text":"Similar to pausing a flow run, Prefect enables suspending an in-progress flow run.
The difference between pausing and suspending a flow run
There is an important difference between pausing and suspending a flow run. When you pause a flow run, the flow code is still running but is blocked until someone resumes the flow. This is not the case with suspending a flow run! When you suspend a flow run, the flow exits completely and the infrastructure running it (e.g., a Kubernetes Job) tears down.
This means that you can suspend flow runs to save costs instead of paying for long-running infrastructure. However, when the flow run resumes, the flow code will execute again from the beginning of the flow, so you should use tasks and task caching to avoid recomputing expensive operations.
Prefect exposes this functionality via the suspend_flow_run
and resume_flow_run
functions, as well as the Prefect UI.
When called inside a flow, suspend_flow_run
will immediately suspend execution of the flow run. The flow run will be marked as Suspended
and will not be resumed until resume_flow_run
is called.
Timeouts
Suspended flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it suspended and never resumed. You can specify a different timeout period in seconds using the timeout
parameter or pass timeout=None
for no timeout.
Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.
from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n x = foo.submit()\n pause_flow_run(timeout=30, reschedule=True)\n y = foo.submit()\n z = foo(wait_for=[x])\n alpha = foo(wait_for=[y])\n omega = foo(wait_for=[x, y])\n
Flow runs can be suspended out-of-process by calling suspend_flow_run(flow_run_id=<ID>)
or selecting the Suspend button in the Prefect UI or Prefect Cloud.
Suspended flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run
utility via client code.
resume_flow_run(FLOW_RUN_ID)\n
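For reference, here is a minimal sketch of a flow that suspends itself between steps; the task and flow names are illustrative, and suspend_flow_run is assumed to be importable from prefect like the other flow run utilities in these docs:
from prefect import flow, suspend_flow_run, task\n\n@task(persist_result=True)\ndef expensive_setup():\n    # Pretend this is a costly computation worth persisting across the suspension\n    return 42\n\n@flow(persist_result=True)\ndef suspendable_flow():\n    value = expensive_setup()\n    # Exits the process and tears down infrastructure; when resumed, the flow\n    # run is rescheduled from the beginning and the persisted task result is\n    # retrieved instead of being recomputed\n    suspend_flow_run(timeout=3600)\n    print(f\"Resumed with {value}\")\n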
Subflows can't be suspended independently of their parent run
You can't suspend a subflow run independently of its parent flow run.
If you use a flow to schedule a flow run with run_deployment
, the scheduled flow run will be linked to the calling flow as a subflow run by default. This means you won't be able to suspend the scheduled flow run independently of the calling flow. Call run_deployment
with as_subflow=False
to disable this linking if you need to be able to suspend the scheduled flow run independently of the calling flow.
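A sketch of that pattern, with a hypothetical deployment name:
from prefect import flow\nfrom prefect.deployments import run_deployment\n\n@flow\ndef orchestrator():\n    # as_subflow=False breaks the parent/child link, so the scheduled run\n    # can be suspended independently of this calling flow\n    run_deployment(name=\"my-flow/my-deployment\", as_subflow=False)\n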
Experimental
The wait_for_input
parameter used in the pause_flow_run
or suspend_flow_run
functions is an experimental feature. The interface or behavior of this feature may change without warning in future releases.
If you encounter any issues, please let us know in Slack or with a Github issue.
When pausing or suspending a flow run you may want to wait for input from a user. Prefect provides a way to do this by leveraging the pause_flow_run
and suspend_flow_run
functions. These functions accept a wait_for_input
argument, the value of which should be a subclass of prefect.input.RunInput
, a pydantic model. When resuming the flow run, users are required to provide data for this model. Upon successful validation, the flow run resumes, and the return value of the pause_flow_run
or suspend_flow_run
is an instance of the model containing the provided data.
Here is an example of a flow that pauses and waits for input from a user:
from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass UserNameInput(RunInput):\n name: str\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user_input = await pause_flow_run(\n wait_for_input=UserNameInput\n )\n\n logger.info(f\"Hello, {user_input.name}!\")\n
Running this flow will create a flow run. The flow run will advance until code execution reaches pause_flow_run
, at which point it will move into a Paused
state. Execution will block and wait for resumption.
When resuming the flow run, users will be prompted to provide a value for the name
field of the UserNameInput
model. Upon successful validation, the flow run will resume, and the return value of the pause_flow_run
will be an instance of the UserNameInput
model containing the provided data.
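Input can also be supplied when resuming from client code; a sketch, assuming resume_flow_run accepts a run_input dictionary matching the model's fields (the flow run ID is a placeholder):
from prefect import resume_flow_run\n\nresume_flow_run(\n    FLOW_RUN_ID,  # placeholder for the paused run's ID\n    run_input={\"name\": \"Marvin\"},\n)\n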
For more in-depth information on receiving input from users when pausing and suspending flow runs, see the Creating interactive workflows guide.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#canceling-a-flow-run","title":"Canceling a flow run","text":"You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.
When cancellation is requested, the flow run is moved to a \"Cancelling\" state. If the deployment is a work pool-based deployment with a worker, the worker monitors the state of flow runs and detects that cancellation has been requested. The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.
A deployment is required
Flow run cancellation requires the flow run to be associated with a deployment. A monitoring process must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment
, cannot be cancelled without cancelling the parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.
Cancellation is robust to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid
or infrastructure identifier. Generally, this is composed of two parts: a scope describing where the infrastructure is running, and a unique identifier for the infrastructure within that scope.
The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope.
The identifiers for infrastructure types combine that scope with a type-specific ID; for example, a flow run in a local process is identified by the machine's hostname and the process ID.
While the cancellation process is robust, there are a few issues that can occur: for example, if the infrastructure_pid is missing from the flow run, it will be marked as cancelled but cancellation cannot be enforced.
Enhanced cancellation
We are working on improving cases where cancellation can fail. You can try the improved cancellation experience by enabling the PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION
setting on your worker or agents:
prefect config set PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION=True\n
If you encounter any issues, please let us know in Slack or with a Github issue.
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel
CLI command, passing the ID of the flow run.
prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel
button in the upper right corner.
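Cancelling from the Python client amounts to moving the run into a Cancelling state. A minimal sketch, assuming set_flow_run_state on the client and the Cancelling state constructor in prefect.states (the flow run ID reuses the CLI example above):
import asyncio\n\nfrom prefect import get_client\nfrom prefect.states import Cancelling\n\nasync def cancel_flow_run(flow_run_id: str):\n    # Request cancellation; a worker monitoring the run enforces the teardown\n    async with get_client() as client:\n        await client.set_flow_run_state(flow_run_id=flow_run_id, state=Cancelling())\n\nasyncio.run(cancel_flow_run(\"a55a4804-9e3c-4042-8b59-b3b6b7618736\"))\n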
Workers are recommended
Infrastructure blocks are part of the agent-based deployment model. Work Pools and Workers simplify the specification of each flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.
Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.
Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"Prefect uses infrastructure to create the environment for a user's flow to execute.
Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs:
Create execution environment infrastructure for the flow run.
Run a Python command to start the prefect.engine in the infrastructure, which retrieves the flow from storage and executes the flow.
Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:
Process
runs flows in a local subprocess.DockerContainer
runs flows in a Docker container.KubernetesJob
runs flows in a Kubernetes Job.ECSTask
runs flows in an Amazon ECS Task.Cloud Run
runs flows in a Google Cloud Run Job.Container Instance
runs flows in an Azure Container Instance.What about tasks?
Flows and tasks can both use configuration objects to manage the environment in which code runs.
Flows use infrastructure.
Tasks use task runners. For more on how task runners work, see Task Runners.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page or create them in code and save them to the API using the blocks .save()
method.
Once created, there are two distinct ways to use infrastructure in a deployment:
-i
or --infra
flag and provide a type when building deployment files.-ib
or --infra-block
and a block slug when building deployment files.For example, when creating your deployment files, the supported Prefect infrastrucure types are:
process
docker-container
kubernetes-job
ecs-task
cloud-run-job
container-instance-job
$ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n
In this example we specify the DockerContainer
infrastructure in addition to a preconfigured AWS S3 bucket storage block.
The default deployment YAML file may be edited as needed to add an infrastructure type or infrastructure settings.
###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedules: []\npaused: true\ninfra_overrides:\n env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n type: docker-container\n env: {}\n labels: {}\n name: null\n command:\n - python\n - -m\n - prefect.engine\n image: prefecthq/prefect:2-latest\n image_pull_policy: null\n networks: []\n network_mode: null\n auto_remove: false\n volumes: []\n stream_output: true\n memswap_limit: null\n mem_limit: null\n privileged: false\n block_type_slug: docker-container\n _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n bucket_path: bucket-full-of-sunshine\n aws_access_key_id: '**********'\n aws_secret_access_key: '**********'\n _is_anonymous: true\n _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n block_type_slug: s3\n _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n title: Parameters\n type: object\n properties: {}\n required: null\n definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n
Editing deployment YAML
Note the big DO NOT EDIT comment in the deployment YAML: In practice, anything above this block can be freely edited before running prefect deployment apply
to create the deployment on the API.
Once the deployment exists, any flow runs that this deployment starts will use DockerContainer
infrastructure.
You can also create custom infrastructure blocks \u2014 either in the Prefect UI or in code via the API \u2014 and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev
.
from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n namespace=\"dev\",\n image=\"prefecthq/prefect:2.0.0-python3.9\",\n image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n
Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev
as the infrastructure type when building a deployment:
prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n
See Deployments for more information about deployment build options.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"Every infrastrcture type has type-specific options.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#process","title":"Process","text":"Process
infrastructure runs a command in a new process.
Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
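A minimal sketch of configuring and saving a Process block in code, assuming the env and stream_output settings described for infrastructure blocks (the variable value and block name are illustrative):
from prefect.infrastructure import Process\n\nprocess = Process(\n    env={\"MY_VARIABLE\": \"value\"},  # overrides any matching current environment variable\n    stream_output=True,\n)\nprocess.save(\"my-process\")\n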
Process
supports the following settings:
DockerContainer
infrastructure executes flow runs in a container.
Requirements for DockerContainer
:
When the Prefect API is hosted locally, localhost and 127.0.0.1 will be replaced with host.docker.internal.
supports the following settings:
DockerRegistry
or ensure otherwise that your execution layer is authenticated to pull the image from the image registry. image_pull_policy Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'. image_registry A DockerRegistry
block containing credentials to use if image
is stored in a private image registry. labels An optional dictionary of labels, mapping name to value. name An optional name for the container. networks An optional list of strings specifying Docker networks to connect the container to. network_mode Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set. stream_output Bool indicating whether to stream output from the subprocess to local standard output. volumes An optional list of volume mount strings in the format of \"local_path:container_path\". Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"KubernetesJob
infrastructure executes flow runs in a Kubernetes Job.
Requirements for KubernetesJob
:
kubectl
must be available.The Prefect CLI command prefect kubernetes manifest server
automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.
KubernetesJob
supports the following settings:
When creating deployments using KubernetesJob
infrastructure, the infra_overrides
parameter expects a dictionary. For a KubernetesJob
, the customizations
parameter expects a list.
Containers expect a list of objects, even if there is only one. For any patches applying to the container, the path value should be a list, for example: /spec/templates/spec/containers/0/resources
A Kubernetes-Job
infrastructure block defined in Python:
from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/resources\",\n \"value\": {\n \"requests\": {\n \"cpu\": \"2000m\",\n \"memory\": \"4Gi\"\n },\n \"limits\": {\n \"cpu\": \"4000m\",\n \"memory\": \"8Gi\",\n \"nvidia.com/gpu\": \"1\"\n }\n },\n }\n]\n\nk8s_job = KubernetesJob(\n namespace=namespace,\n image=image_name,\n image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n finished_job_ttl=300,\n job_watch_timeout_seconds=600,\n pod_watch_timeout_seconds=600,\n service_account_name=\"prefect-server\",\n customizations=customizations,\n)\nk8s_job.save(\"devk8s\")\n
A Deployment
with infra-overrides defined in Python:
infra_overrides={ \n \"customizations\": [\n {\n \"op\": \"add\",\n \"path\": \"/spec/template/spec/containers/0/resources\",\n \"value\": {\n \"requests\": {\n \"cpu\": \"2000m\",\n \"memory\": \"4Gi\"\n },\n \"limits\": {\n \"cpu\": \"4000m\",\n \"memory\": \"8Gi\",\n \"nvidia.com/gpu\": \"1\"\n }\n },\n }\n ]\n}\n\n# Load an already created K8s block\nk8sjob = KubernetesJob.load(\"devk8s\")\n\ndeployment = Deployment.build_from_flow(\n flow=my_flow,\n name=\"s3-example\",\n version=2,\n work_queue_name=\"aws\",\n infrastructure=k8sjob,\n storage=storage,\n infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"ECSTask
infrastructure runs your flow in an ECS Task.
Requirements for ECSTask
:
prefect-aws
collection must be installed within the agent environment: pip install prefect-aws
ECSTask
and AwsCredentials
blocks must be registered within the agent environment: prefect block register -m prefect_aws.ecs
ECSTask
is S3. If you leverage that type of block, make sure that s3fs is installed within your agent and flow run environment. The easiest way to satisfy all the installation-related points mentioned above is to include the following commands in your Dockerfile:
FROM prefecthq/prefect:2-python3.9 # example base image \nRUN pip install s3fs prefect-aws\n
Make sure to allocate enough CPU and memory to your agent, and consider adding retries
When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending
state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.
Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError
, HTTPClientError
, or RequestTimeout
), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS
(can be set to an integer value such as 10) and AWS_RETRY_MODE
(can be set to a string value including standard
or adaptive
modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask
infrastructure block.
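For example, these environment variables could be set on the ECS service that runs the agent (values taken from the recommendations above):
AWS_MAX_ATTEMPTS=10\nAWS_RETRY_MODE=adaptive\n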
Learn about options for Prefect-maintained Docker images in the Docker guide.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/results/","title":"Results","text":"Results represent the data returned by a flow or a task.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"When calling flows or tasks, the result is returned directly:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n task_result = my_task()\n return task_result + 1\n\nresult = my_flow()\nassert result == 2\n
When working with flow and task states, the result can be retrieved with the State.result()
method:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n
When submitting tasks to a runner, the result can be retrieved with the Future.result()
method:
from prefect import flow, task\n\n@task\ndef my_task():\n return 1\n\n@flow\ndef my_flow():\n future = my_task.submit()\n return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.
When calling flows or tasks, the exceptions are raised as in normal Python:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n try:\n my_task()\n except ValueError:\n print(\"Oh no! The task failed.\")\n\n return True\n\nmy_flow()\n
If you would prefer to check for a failed task without using try/except
, you may ask Prefect to return the state:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n if state.is_failed():\n print(\"Oh no! The task failed. Falling back to '1'.\")\n result = 1\n else:\n result = state.result()\n\n return result + 1\n\nresult = my_flow()\nassert result == 2\n
If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n try:\n result = state.result()\n except ValueError:\n print(\"Oh no! The state raised the error!\")\n\n return True\n\nmy_flow()\n
When retrieving the result from a state, you can ask Prefect not to raise exceptions:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n state = my_task(return_state=True)\n\n maybe_result = state.result(raise_on_failure=False)\n if isinstance(maybe_result, ValueError):\n print(\"Oh no! The task failed. Falling back to '1'.\")\n result = 1\n else:\n result = maybe_result\n\n return result + 1\n\nresult = my_flow()\nassert result == 2\n
When submitting tasks to a runner, Future.result()
works the same as State.result()
:
from prefect import flow, task\n\n@task\ndef my_task():\n raise ValueError()\n\n@flow\ndef my_flow():\n future = my_task.submit()\n\n try:\n future.result()\n except ValueError:\n print(\"Ah! Futures will raise the failure as well.\")\n\n # You can ask it not to raise the exception too\n maybe_result = future.result(raise_on_failure=False)\n print(f\"Got {type(maybe_result)}\")\n\n return True\n\nmy_flow()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"When calling flows or tasks, the result is returned directly:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n task_result = await my_task()\n return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
When working with flow and task states, the result can be retrieved with the State.result()
method:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n state = await my_task(return_state=True)\n result = await state.result(fetch=True)\n return result + 1\n\nasync def main():\n state = await my_flow(return_state=True)\n assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n
Resolving results
Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result()
did not require an await
. For backwards compatibility, when used from an asynchronous context, State.result()
returns a raw result type.
You may opt-in to the new behavior by passing fetch=True
as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT
setting. If you do not opt-in to this behavior, you will see a warning.
You may also opt-out by setting fetch=False
. This will silence the warning, but you will need to retrieve your result manually from the result type.
When submitting tasks to a runner, the result can be retrieved with the Future.result()
method:
import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n return 1\n\n@flow\nasync def my_flow():\n future = await my_task.submit()\n result = await future.result()\n return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.
The following Prefect features require results to be persisted: flow run retries and task caching.
If results are not persisted, these features may not be usable.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them until you want to customize behavior. You can configure results on the flow
and task
decorators with the following options:
persist_result
: Whether the result should be persisted to storage.result_storage
: Where to store the result when persisted.result_serializer
: How to convert the result to a storable form.Persistence of the result of a task or flow can be configured with the persist_result
option. The persist_result
option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.
For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:
from prefect import flow, task\n\n@task\ndef my_task():\n return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n # This task does not have persistence toggled off and it is needed for the flow feature,\n # so Prefect will persist its result at runtime\n my_task()\n
Flow retries do not require the flow's result to be persisted, so it will not be.
In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:
from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n # This task uses caching so its result will be persisted by default\n return \"hello world!\"\n\n\n@task\ndef my_other_task():\n ...\n\n@flow\ndef my_flow():\n # This task uses a feature that requires result persistence\n my_task()\n\n # This task does not use a feature that requires result persistence and the\n # flow does not use any features that require task result persistence so its\n # result will not be persisted by default\n my_other_task()\n
Persistence of results can be manually toggled on or off:
from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n # This flow will persist its result even if not necessary for a feature.\n ...\n\n@task(persist_result=False)\ndef my_task():\n # This task will never persist its result.\n # If persistence needed for a feature, an error will be raised.\n ...\n
Toggling persistence manually will always override any behavior that Prefect would infer.
You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT
setting. To persist results by default, even if they are not needed for a feature change the value to a truthy value:
prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n
Task and flows with persist_result=False
will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT
is true
.
The result storage location can be configured with the result_storage
option. The result_storage
option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH
setting (defaults to ~/.prefect/storage
).
from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n my_task() # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n ...\n\nmy_flow() # The flow has no result storage configured and no parent, the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow() # The flow and task within it will use S3 for result storage.\n
You can configure this to use a specific storage using one of the following:
LocalFileSystem(basepath=\".my-results\")
's3/dev-s3-block'
The path of the result file in the result storage can be configured with the result_storage_key
. The result_storage_key
option defaults to a null value, which generates a unique identifier for each result.
from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n ...\n\nmy_flow() # The task's result will be persisted to 's3://my-bucket/my_task.json'\n
Result storage keys are formatted with access to all of the modules in prefect.runtime
and the run's parameters
. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name
parameter.
from prefect import flow, task\n\n@flow()\ndef my_flow():\n hello_world()\n hello_world(name=\"foo\")\n hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n return f\"hello {name}\"\n\nmy_flow()\n
After running the flow, we can see three persisted result files in our storage directory:
$ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n
In the next example, we include metadata about the flow run from the prefect.runtime.flow_run
module:
from prefect import flow, task\n\n@flow\ndef my_flow():\n hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n return f\"hello {name}\"\n\nmy_flow()\n
After running this flow, we can see a result file templated with the name of the flow and the flow run:
\u276f ls ~/.prefect/storage | grep \"my-flow\" \nmy-flow_industrious-trout_hello.json\n
If a result exists at a given storage key in the storage location, it will be overwritten.
Result storage keys can only be configured on tasks at this time.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"The result serializer can be configured with the result_serializer
option. The result_serializer
option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER
will be used. This setting defaults to Prefect's pickle serializer.
You may configure the result serializer using:
\"json\"
or \"pickle\"
\u2014 this corresponds to an instance with default valuesJSONSerializer(jsonlib=\"orjson\")
Prefect provides a CompressedSerializer
which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma
compression by default. We test other compression schemes provided in the Python standard library such as bz2
and zlib
, but you should be able to use any compression library that provides compress
and decompress
methods.
You may configure compression of results using:
compressed/
e.g. \"compressed/json\"
or \"compressed/pickle\"
CompressedSerializer(serializer=\"pickle\", compressionlib=\"lzma\")
Note that the \"compressed/<serializer-type>\"
shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.
The Prefect API does not store your results in most cases for the following reasons:
There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.
The following data types will be stored by the API without persistence to storage:
booleans (True, False)
nulls (None)
is set to False
, these values will never be stored.
The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.
Prefect tracks the following result metadata:
When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task it can be costly to keep it memory for the entire duration of the flow run.
Flows and tasks both include an option to drop the result from memory with cache_result_in_memory
:
@flow(cache_result_in_memory=False)\ndef foo():\n return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n return \"pretend this is biiiig data\"\n
When cache_result_in_memory
is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.
@flow\ndef foo():\n result = bar()\n state = bar(return_state=True)\n\n # The result will be retrieved from storage here\n state.result()\n\n future = bar.submit()\n # The result will be retrieved from storage here\n future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n # This result will be persisted\n return \"pretend this is biiiig data\"\n
If both cache_result_in_memory
and persistence are disabled, your results will not be available downstream.
@task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n # Raises an error\n result = bar()\n\n # This is okay\n state = bar(return_state=True)\n\n # Raises an error\n state.result()\n\n # This is okay\n future = bar.submit()\n\n # Raises an error\n future.result()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer
. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:
from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n
Benefits of the pickle serializer:
Drawbacks of the pickle serializer:
We supply a custom JSON serializer at prefect.serializers.JSONSerializer
. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.
By default, we use the standard Python json
library. Alternative JSON libraries can be specified:
from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n
Benefits of the JSON serializer:
Drawbacks of the JSON serializer:
Prefect uses internal result types to capture information about the result attached to a state. The following types are used:
UnpersistedResult
: Stores result metadata but the value is only available when created.LiteralResult
: Stores simple values inline.PersistedResult
: Stores a reference to a result persisted to storage.All result types include a get()
method that can be called to return the value of the result. This is done behind the scenes when the result()
method is used on states or futures.
Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.
Example:
result = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n
Literal results reduce the overhead required to persist simple results.
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"The persisted result type contains all of the information needed to retrieve the result from storage. This includes:
Persisted result types also contain metadata for inspection without retrieving the result:
The get()
method on result references retrieves the data from storage, deserializes it, and returns the original object. The get()
operation will cache the resolved object to reduce the overhead of subsequent calls.
When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob
type. The document contains:
Scheduling is one of the primary reasons for using an orchestrator such as Prefect. Prefect allows you to use schedules to automatically create new flow runs for deployments.
Prefect Cloud can also schedule flow runs through event-driven automations.
Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.
You can add a schedule to any deployment. The Prefect Scheduler
service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.
Support for multiple schedules
We are currently rolling out support for multiple schedules per deployment. You can now assign multiple schedules to deployments in the Prefect UI, the CLI via prefect deployment schedule
commands, the Deployment
class, and in block-based deployment YAML files.
Support for multiple schedules in flow.serve
, flow.deploy
, serve
, and worker-based deployments with prefect deploy
will arrive soon.
Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:
Cron
is most appropriate for users who are already familiar with cron
from previous use.Interval
is best suited for deployments that need to run at some consistent cadence that isn't related to absolute time.RRule
is best suited for deployments that rely on calendar logic for simple recurring schedules, irregular intervals, exclusions, or day-of-month adjustments.Schedules can be inactive
When you create or edit a schedule, you can set the active
property to False
in Python (or false
in a YAML file) to deactivate the schedule. This is useful if you want to keep the schedule configuration but temporarily stop the schedule from creating new flow runs.
A schedule may be specified with a cron
pattern. Users may also provide a timezone to enforce DST behaviors.
Cron
uses croniter
to specify datetime iteration with a cron
-like format.
Cron
properties include:
cron
string. (Required) day_or Boolean indicating how croniter
handles day
and day_of_week
entries. Default is True
. timezone String name of a time zone. (See the IANA Time Zone Database for valid time zones.)","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#how-the-day_or-property-works","title":"How the day_or
property works","text":"The day_or
property defaults to True
, matching the behavior of cron
. In this mode, if you specify a day
(of the month) entry and a day_of_week
entry, the schedule will run a flow on both the specified day of the month and on the specified day of the week. The \"or\" in day_or
refers to the fact that the two entries are treated like an OR
statement, so the schedule should include both, as in the SQL statement SELECT * FROM employees WHERE first_name = 'Xi\u0101ng' OR last_name = 'Brookins';
.
For example, with day_or
set to True
, the cron schedule * * 3 1 2
runs a flow every minute on the 3rd day of the month (whatever that is) and on Tuesday (the second day of the week) in January (the first month of the year).
With day_or
set to False
, the day
(of the month) and day_of_week
entries are joined with the more restrictive AND
operation, as in the SQL statement SELECT * from employees WHERE first_name = 'Andrew' AND last_name = 'Brookins';
. For example, the same schedule, when day_or
is False
, runs a flow on every minute on the 3rd Tuesday in January. This behavior matches fcron
instead of cron
.
Supported croniter
features
While Prefect supports most features of croniter
for creating cron
-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.
Daylight saving time considerations
If the timezone
is a DST-observing one, then the schedule will adjust itself appropriately.
The cron
rules for DST are based on schedule times, not intervals. This means that an hourly cron
schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.
Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"An Interval
schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed using an optional anchor_date
. For example, here's how you can create a schedule for every 10 minutes in a block-based deployment YAML file:
schedule:\n interval: 600\n timezone: America/Chicago \n
Interval
properties include:
datetime.timedelta
indicating the time between flow runs. (Required) anchor_date datetime.datetime
indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date
is supplied, the current UTC time is used. timezone String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.) Note that the anchor_date
does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval
from it. Note that in this example, we import the Pendulum Python package for easy datetime manipulation. Pendulum isn\u2019t required, but it\u2019s a useful tool for specifying dates.
Daylight saving time considerations
If the schedule's anchor_date
or timezone
are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.
For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.
For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"An RRule
scheduling supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.
RRule
uses the dateutil rrule module to specify iCal recurrence rules.
RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:
RRule
properties include:
rrulestr
examples for syntax. timezone String name of a time zone. See the IANA Time Zone Database for valid time zones. You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.
For example, the following RRule schedule in a block-based deployment YAML file creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.
schedule:\n rrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n
RRule restrictions
Note the max supported character length of an rrulestr
is 6500 characters
Note that COUNT
is not supported. Please use UNTIL
or the /deployments/{id}/runs
endpoint to schedule a fixed number of flow runs.
Daylight saving time considerations
Note that as a calendar-oriented standard, RRules
are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.
There are several ways to create a schedule for a deployment:
cron
, interval
, or rrule
parameters if building your deployment via the serve
method of the Flow
object or the serve
utility for managing multiple flows simultaneouslyprefect deploy
commanddeployments
-> schedule
section of the prefect.yaml
file )Through the schedules
section of the deployment YAML fileschedules
into the Deployment
class or Deployment.build_from_flow
You can add schedules in the Schedules section on a Deployment page in the UI.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#locating-the-schedules-section","title":"Locating the Schedules section","text":"On larger displays, the Schedules section will appear in a sidebar on the right side of the page. On smaller displays, it will appear on the \"Details\" tab of the page.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#adding-a-schedule","title":"Adding a schedule","text":"Under Schedules, select the + Schedule button. A modal dialog will open. Choose Interval or Cron to create a schedule.
What about RRule?
The UI does not support creating RRule schedules. However, the UI will display RRule schedules that you've created via the command line.
The new schedule will appear on the Deployment page where you created it. In addition, the schedule will be viewable in human-friendly text in the list of deployments on the Deployments page.
After you create a schedule, new scheduled flow runs will be visible in the Upcoming tab of the Deployment page where you created it.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#editing-schedules","title":"Editing schedules","text":"You can edit a schedule by selecting Edit from the three-dot menu next to a schedule on a Deployment page.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-a-python-deployment-creation-file","title":"Creating schedules with a Python deployment creation file","text":"When you create a deployment in a Python file with flow.serve()
, serve
, flow.deploy()
, or deploy
you can specify the schedule. Just add the keyword argument cron
, interval
, or rrule
.
interval: An interval on which to execute the deployment. Accepts a number or a \n timedelta object to create a single schedule. If a number is given, it will be \n interpreted as seconds. Also accepts an iterable of numbers or timedelta to create \n multiple schedules.\ncron: A cron schedule string of when to execute runs of this deployment. \n Also accepts an iterable of cron schedule strings to create multiple schedules.\nrrule: An rrule schedule string of when to execute runs of this deployment.\n Also accepts an iterable of rrule schedule strings to create multiple schedules.\nschedules: A list of schedule objects defining when to execute runs of this deployment.\n Used to define multiple schedules or additional scheduling options such as `timezone`.\nschedule: A schedule object defining when to execute runs of this deployment. Used to\n define additional scheduling options like `timezone`.\n
Here's an example of creating a cron schedule with serve
for a deployment flow that will run every minute of every day:
my_flow.serve(name=\"flowing\", cron=\"* * * * *\")\n
Here's an example of creating an interval schedule with serve
for a deployment flow that will run every 10 minutes with an anchor date and a timezone:
from datetime import timedelta, datetime\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\nmy_flow.serve(name=\"flowing\", schedule=IntervalSchedule(interval=timedelta(minutes=10), anchor_date=datetime(2023, 1, 1, 0, 0), timezone=\"America/Chicago\"))\n
Block and agent-based deployments with Python files are not a recommended way to create deployments. However, if you are using that deployment creation method you can create a schedule by passing a schedule
argument to the Deployment.build_from_flow
method.
Here's how you create the equivalent schedule in a Python deployment file, with a timezone specified.
from prefect.server.schemas.schedules import CronSchedule\n\ncron_demo = Deployment.build_from_flow(\n pipeline,\n \"etl\",\n schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n
IntervalSchedule
and RRuleSchedule
are the other two Python class schedule options.
prefect deploy
command","text":"If you are using worker-based deployments, you can create a schedule through the interactive prefect deploy
command. You will be prompted to choose which type of schedule to create.
prefect.yaml
file's deployments
-> schedule
section","text":"If you save the prefect.yaml
file from the prefect deploy
command, you will see it has a schedules
section for your deployment. Alternatively, you can create a prefect.yaml
file from a recipe or from scratch and add a schedules
section to it.
deployments:\n ...\n schedules:\n - cron: \"0 0 * * *\"\n timezone: \"America/Chicago\"\n active: false\n - cron: \"0 12 * * *\"\n timezone: \"America/New_York\"\n active: true\n - cron: \"0 18 * * *\"\n timezone: \"Europe/London\"\n active: true\n
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler
service","text":"The Scheduler
service is started automatically when prefect server start
is run and it is a built-in service of Prefect Cloud.
By default, the Scheduler
service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler
evaluates each deployment's schedules and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next 3 would all start in the next hour.
More specifically, the Scheduler
tries to create the smallest number of runs that satisfy the following constraints, in order:
These behaviors can all be adjusted through the relevant settings that can be viewed with the terminal command prefect config view --show-defaults
:
PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n
See the Settings docs for more information on altering your settings.
These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).
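For example, to keep a deeper backlog of scheduled runs, you might raise the minimums (values here are illustrative):
# illustrative values: keep at least 5 runs and a 3-hour horizon scheduled\nprefect config set PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS=5\nprefect config set PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='3:00:00'\n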
The Scheduler
does not affect execution
The Prefect Scheduler
service only creates new flow runs and places them in Scheduled
states. It is not involved in flow or task execution.
If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.
To remove all scheduled runs for a flow deployment, you can remove the schedule via the UI.
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of the states to use Prefect, you can give your workflows superpowers by taking advantage of it.
At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you that a task:
is scheduled to make a third run attempt in an hour
succeeded and what data it produced
was scheduled to run, but later cancelled
used the cached result of a previous run instead of re-running
failed because it timed out
By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.
Only runs have states
Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name, is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING
. However, if the task retries, that same task run will have the name Retrying and the type RUNNING
. Each time the task run transitions into the RUNNING
state, the same orchestration rules are applied.
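You can inspect both attributes on any run by asking for its state; a minimal sketch:
from prefect import flow, task\n\n@task(retries=1)\ndef sometimes_flaky():\n    return 42\n\n@flow\ndef inspect_state():\n    state = sometimes_flaky(return_state=True)\n    # e.g. name='Completed', type=StateType.COMPLETED\n    print(state.name, state.type)\n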
There are terminal state types from which there are no orchestrated transitions to any other state type.
COMPLETED
CANCELLED
FAILED
CRASHED
The full complement of states and state types includes:
| Name | Type | Terminal? | Description |
| --- | --- | --- | --- |
| Scheduled | SCHEDULED | No | The run will begin at a particular time in the future. |
| Late | SCHEDULED | No | The run's scheduled start time has passed, but it has not transitioned to PENDING within a grace period (5 seconds by default). |
| AwaitingRetry | SCHEDULED | No | The run did not complete successfully because of a code issue and had remaining retry attempts. |
| Pending | PENDING | No | The run has been submitted to run, but is waiting on necessary preconditions to be satisfied. |
| Running | RUNNING | No | The run code is currently executing. |
| Retrying | RUNNING | No | The run code is currently executing after previously not completing successfully. |
| Paused | PAUSED | No | The run code has stopped executing until it receives manual approval to proceed. |
| Cancelling | CANCELLING | No | The infrastructure on which the code was running is being cleaned up. |
| Cancelled | CANCELLED | Yes | The run did not complete because a user determined that it should not. |
| Completed | COMPLETED | Yes | The run completed successfully. |
| Failed | FAILED | Yes | The run did not complete because of a code issue and had no remaining retry attempts. |
| Crashed | CRASHED | Yes | The run did not complete because of an infrastructure issue. |
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"When calling a task or a flow, there are three types of returned values:
- Data: A Python object (such as int, str, dict, list, and so on).
- State: A Prefect object indicating the state of a flow or task run.
- PrefectFuture: A Prefect object that contains both data and State.
.
Returning Prefect State
occurs anytime you call your task or flow with the argument return_state=True
.
Returning PrefectFuture
is achieved by calling your_task.submit()
.
By default, running a task will return data:
from prefect import flow, task \n\n@task \ndef add_one(x):\n return x + 1\n\n@flow \ndef my_flow():\n result = add_one(1) # return int\n
The same rule applies for a subflow:
@flow \ndef subflow():\n return 42 \n\n@flow \ndef my_flow():\n result = subflow() # return data\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"To return a State
instead, add return_state=True
as a parameter of your task call.
@flow \ndef my_flow():\n state = add_one(1, return_state=True) # return State\n
To get data from a State
, call .result()
.
@flow \ndef my_flow():\n state = add_one(1, return_state=True) # return State\n result = state.result() # return int\n
The same rule applies for a subflow:
@flow \ndef subflow():\n return 42 \n\n@flow \ndef my_flow():\n state = subflow(return_state=True) # return State\n result = state.result() # return int\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"To get a PrefectFuture
, add .submit()
to your task call.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n
To get data from a PrefectFuture
, call .result()
.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n result = future.result() # return data\n
To get a State
from a PrefectFuture
, call .wait()
.
@flow \ndef my_flow():\n future = add_one.submit(1) # return PrefectFuture\n state = future.wait() # return State\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"The final state of a flow is determined by its return value. The following rules apply:
FAILED
.None
), its state is determined by the states of all of the tasks and subflows within it.FAILED
.CANCELLED
.See the Final state determination section of the Flows documentation for further details and examples.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n return 42\n\nmy_flow()\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion
\u2713 \u2713 Executes when a flow or task run enters a Completed
state. on_failure
\u2713 \u2713 Executes when a flow or task run enters a Failed
state. on_cancellation
\u2713 - Executes when a flow run enters a Cancelling
state. on_crashed
\u2713 - Executes when a flow run enters a Crashed
state. on_running
\u2713 - Executes when a flow run enters a Running
state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n \"\"\"This is the required signature for a flow run state\n change hook. This hook can only be passed into flows.\n \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"def my_task_hook(task: Task, task_run: TaskRun, state: State):\n \"\"\"This is the required signature for a task run state change\n hook. This hook can only be passed into tasks.\n \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:
def my_success_hook(task, task_run, state):\n print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n on_completion=[my_success_hook, my_succeed_or_fail_hook],\n on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#pass-kwargs-to-your-hooks","title":"Pass kwargs
to your hooks","text":"The Prefect engine will call your hooks for you upon the state change, passing in the flow, flow run, and state objects.
However, you can define your hook to have additional default arguments:
from prefect import flow\n\ndata = {}\n\ndef my_hook(flow, flow_run, state, my_arg=\"custom_value\"):\n data.update(my_arg=my_arg, state=state)\n\n@flow(on_completion=[my_hook])\ndef lazy_flow():\n pass\n\nstate = lazy_flow(return_state=True)\n\nassert data == {\"my_arg\": \"custom_value\", \"state\": state}\n
... or define your hook to accept arbitrary keyword arguments:
from functools import partial\nfrom prefect import flow, task\n\ndata = {}\n\ndef my_hook(task, task_run, state, **kwargs):\n data.update(state=state, **kwargs)\n\n@task\ndef bad_task():\n raise ValueError(\"meh\")\n\n@flow\ndef ok_with_failure_flow(x: str = \"foo\", y: int = 42):\n bad_task_with_a_hook = bad_task.with_options(\n on_failure=[partial(my_hook, **dict(x=x, y=y))]\n )\n # return a tuple of \"bar\" and the task run state\n # to avoid raising the task's exception\n return \"bar\", bad_task_with_a_hook(return_state=True)\n\n_, task_run_state = ok_with_failure_flow()\n\nassert data == {\"x\": \"foo\", \"y\": 42, \"state\": task_run_state}\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"Storage blocks are not recommended
Storage blocks are part of the legacy block-based deployment model. Instead, the recommended options for creating a deployment are the serve or runner-based Python creation methods, or workers and work pools with prefect deploy via the CLI. Flow code storage can be specified in the Python file with serve
or runner
-based Python creation methods; alternatively, with the work pools and workers style of flow deployment, you can specify flow code storage during the interactive prefect deploy
CLI experience and in its resulting prefect.yaml
file.
Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect workers (or legacy agents). Anytime you build a block-based deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.
If no storage is explicitly configured, Prefect will use LocalFileSystem
storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.
Prefect supports creating multiple storage configurations and switching between storage as needed.
Storage uses blocks
Blocks are the Prefect technology underlying storage, and they enable much more.
In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"When building a deployment for a workflow, you have two options for configuring workflow storage:
- Use the default local storage.
- Preconfigure a remote storage block to use.

Anytime you call prefect deployment build
without providing the --storage-block
flag, a default LocalFileSystem
block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml
file that Prefect creates after calling prefect deployment build
.
While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"Current options for deployment storage blocks include:
| Storage | Description | Required Library |
| --- | --- | --- |
| Local File System | Store code in a run's local file system. | |
| Remote File System | Store code in any filesystem supported by fsspec. | |
| AWS S3 Storage | Store code in an AWS S3 bucket. | s3fs |
| Azure Storage | Store code in Azure Datalake and Azure Blob Storage. | adlfs |
| GitHub Storage | Store code in a GitHub repository. | |
| Google Cloud Storage | Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. | gcsfs |
| SMB | Store code in SMB shared network storage. | smbprotocol |
| GitLab Repository | Store code in a GitLab repository. | prefect-gitlab |
| Bitbucket Repository | Store code in a Bitbucket repository. | prefect-bitbucket |
Accessing files may require storage filesystem libraries
Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.
For example, the AWS S3 Storage block requires the s3fs
library.
See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.
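For example, to use the AWS S3 Storage block shown below, you would typically install the library in both your build and runtime environments:
pip install s3fs\n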
","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"You can create these blocks either via the UI or via Python.
You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.
To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.
Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.
You can also create blocks using the Prefect Python API:
from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n aws_access_key_id=\"foo\", \n aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n
This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build
command. The storage block slug is formatted as block-type/block-name
. In this case, s3/example-block
for an AWS S3 Bucket block named example-block
. See block identifiers for details.
prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n
This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml
file. For more information, see Deployments.
Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.
Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.
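As a minimal sketch of the difference (the double task here is illustrative):
from prefect import flow, task\n\n@task\ndef double(x):\n    return x * 2\n\n@flow\ndef my_flow():\n    a = double(2)         # regular call: runs in-process and returns 4\n    b = double.submit(3)  # submitted to the flow's task runner; returns a PrefectFuture\n    return a, b.result()\n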
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.
However, that's not the only way to run tasks!
You can use the .submit()
method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.
Using the .submit()
method to submit a task also causes the task run to return a PrefectFuture
, a Prefect object that contains both any data returned by the task function and a State
, a Prefect object indicating the state of the task run.
Prefect currently provides the following built-in task runners:
SequentialTaskRunner
can run tasks sequentially. ConcurrentTaskRunner
can run tasks concurrently, allowing tasks to switch when blocking on IO. Tasks will be submitted to a thread pool maintained by anyio
.In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.
DaskTaskRunner
can run tasks requiring parallel execution using dask.distributed
. RayTaskRunner
can run tasks requiring parallel execution using Ray.Concurrency versus parallelism
The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.
Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.
Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.
To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.
Remember to call .submit()
when using a task runner
Make sure you use .submit()
to run your task with a task runner. Calling the task directly, without .submit()
, from within a flow will run the task sequentially instead of using a specified task runner.
For example, you can use ConcurrentTaskRunner
to allow tasks to switch when they would block.
from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n print(f\"elevator moving to floor {floor}\")\n time.sleep(floor)\n print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n for floor in range(10, 0, -1):\n stop_at_floor.submit(floor)\n
If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner
and RayTaskRunner
.
Default task runner
If you don't specify a task runner for a flow and you call a task with .submit()
within the flow, Prefect uses the default ConcurrentTaskRunner
.
Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner
will force submitted tasks to run sequentially rather than concurrently.
Synchronous and asynchronous tasks
The SequentialTaskRunner
works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def
rather than def
.
The following example demonstrates using the SequentialTaskRunner
to ensure that tasks run sequentially. In the example, the flow glass_tower
runs the task stop_at_floor
for floors 1 through 38, in that order.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n situation = random.choice([\"on fire\",\"clear\"])\n print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n name=\"towering-infernflow\",\n )\ndef glass_tower():\n for floor in range(1, 39):\n stop_at_floor.submit(floor)\n\nglass_tower()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.
For example, you can have a flow (in the example below called sequential_flow
) that runs its tasks locally using the SequentialTaskRunner
. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow
) to run those tasks using the DaskTaskRunner
.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n print(\"Hello!\")\n\n@task\ndef hello_dask():\n print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n hello_local.submit()\n dask_subflow()\n hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n hello_dask.submit()\n\nif __name__ == \"__main__\":\n sequential_flow()\n
Guarding main
Note that you should guard the main
function by using if __name__ == \"__main__\"
to avoid issues with parallel processing.
This script outputs the following logs demonstrating the use of the Dask task runner:
120:14:29.785 | INFO | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"When you use .submit()
to submit a task to a task runner, the task runner creates a PrefectFuture
for access to the state and result of the task.
A PrefectFuture
is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.
In the following example, we save the return value of calling .submit()
on the task say_hello
to the variable future
, and then we print the type of the variable:
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n future = say_hello.submit(\"Marvin\")\n print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n
When you run this code, you'll see that the variable future
is a PrefectFuture
:
variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n
When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.
This means that the downstream task won't receive the PrefectFuture
you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.
Take a look at how this works in the following example:
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n print(type(result))\n print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n future = say_hello.submit(\"Marvin\")\n print_result.submit(future)\n\nhello_world()\n
<class 'str'>\nHello Marvin!\n
Futures have a few useful methods. For example, you can get the return value of the task run with .result()
:
from prefect import flow, task\n\n@task\ndef my_task():\n return 42\n\n@flow\ndef my_flow():\n future = my_task.submit()\n result = future.result()\n print(result)\n\nmy_flow()\n
The .result()
method will wait for the task to complete before returning the result to the caller. If the task run fails, .result()
will raise the task run's exception. You may disable this behavior with the raise_on_failure
option:
from prefect import flow, task\n\n@task\ndef my_task():\n return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n future = my_task.submit()\n result = future.result(raise_on_failure=False)\n if future.get_state().is_failed():\n # `result` is an exception! handle accordingly\n ...\n else:\n # `result` is the expected return value of our task\n ...\n
You can retrieve the current state of the task run associated with the PrefectFuture
using .get_state()
:
@flow\ndef my_flow():\n future = my_task.submit()\n state = future.get_state()\n
You can also wait for a task to complete by using the .wait()
method:
@flow\ndef my_flow():\n future = my_task.submit()\n final_state = future.wait()\n
You can include a timeout in the wait
call to perform logic if the task has not finished in a given amount of time:
@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait(1)  # Wait one second max\n    if final_state:\n        # Take action if the task is done\n        result = final_state.result()\n    else:\n        ...  # Take action if the task is still running\n
You may also use the wait_for=[]
parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.
@task\ndef task_a():\n pass\n\n@task\ndef task_b():\n pass\n\n@task\ndef task_c():\n pass\n\n@task\ndef task_d():\n pass\n\n@flow\ndef my_flow():\n a = task_a.submit()\n b = task_b.submit()\n # Wait for task_a and task_b to complete\n c = task_c.submit(wait_for=[a, b])\n # task_d will wait for task_c to complete\n # Note: If waiting for one task it must still be in a list.\n d = task_d(wait_for=[c])\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result()
in flows","text":"The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result()
.
Using only tasks:
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n hello = say_hello.submit(\"Marvin\")\n nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n
Using only Python functions:
from prefect import flow, task\n\ndef say_hello(name):\n return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n # because this is just a Python function, calls will not be tracked\n hello = say_hello(\"Marvin\") \n nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n
Mixing tasks and Python functions:
from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n if hello == \"Hello Marvin!\":\n return \"HI MARVIN!\"\n return hello\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n # run a task and get the result\n hello = say_hello.submit(\"Marvin\").result()\n\n # not calling a task or flow\n special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n # pass our modified greeting back into a task\n nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n print(nice_to_meet_you.result())\n\nhello_world()\n
Note that .result()
also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello
is upstream of nice_to_meet_you
.
Calling .result()
is blocking
When calling .result()
, be mindful your flow function will have to wait until the task run is completed before continuing.
from prefect import flow, task\n\n@task\ndef say_hello(name):\n return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n # blocks until `say_hello` has finished\n result = say_hello.submit(\"Marvin\").result() \n do_important_stuff.submit()\n\nhello_world()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"The DaskTaskRunner
is a parallel task runner that submits tasks to the dask.distributed
scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address
kwarg.
To configure your flow to use the DaskTaskRunner:
1. Make sure the prefect-dask collection is installed: pip install prefect-dask.
2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

For example, this flow uses the DaskTaskRunner
configured to access an existing Dask cluster at http://my-dask-cluster
.
from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n ...\n
DaskTaskRunner
accepts the following optional parameters:
\"distributed.LocalCluster\"
), or the class itself. cluster_kwargs Additional kwargs to pass to the cluster_class
when creating a temporary Dask cluster. adapt_kwargs Additional kwargs to pass to cluster.adapt
when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs
are provided. client_kwargs Additional kwargs to use when creating a dask.distributed.Client
. Multiprocessing safety
Note that, because the DaskTaskRunner
uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\":
or you will encounter warnings and errors.
If you don't provide the address
of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers
or threads_per_worker
to cluster_kwargs
.
# Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"The DaskTaskRunner
is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.
To configure, you need to provide a cluster_class
. This can be:
\"dask_cloudprovider.aws.FargateCluster\"
)You can also configure cluster_kwargs
, which takes a dictionary of keyword arguments to pass to cluster_class
when starting the flow run.
For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster
with 4 workers running with an image named my-prefect-image
:
DaskTaskRunner(\n cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):
- All workers in the cluster must have dependencies installed for all flows you intend to run.
- Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but fairness can't be guaranteed.
That said, you may prefer managing a single long-running cluster.
To configure a DaskTaskRunner
to connect to an existing cluster, pass in the address of the scheduler to the address
argument:
# Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"One nice feature of using a DaskTaskRunner
is the ability to scale adaptively to the workload. Instead of specifying n_workers
as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the dask cluster will scale up and down as needed.
To do this, you can pass adapt_kwargs
to DaskTaskRunner
. This takes the following fields:
- maximum (int or None, optional): the maximum number of workers to scale to. Set to None for no maximum.
- minimum (int or None, optional): the minimum number of workers to scale to. Set to None for no minimum.
For example, here we configure a flow to run on a FargateCluster
scaling up to at most 10 workers.
DaskTaskRunner(\n cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n adapt_kwargs={\"maximum\": 10}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"Dask annotations can be used to further control the behavior of tasks.
For example, we can set the priority of tasks in the Dask scheduler:
import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n with dask.annotate(priority=-10):\n future = show.submit(1) # low priority task\n\n with dask.annotate(priority=10):\n future = show.submit(2) # high priority task\n
Another common use case is resource annotations:
import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n task_runner=DaskTaskRunner(\n cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n )\n)\n\ndef my_flow():\n with dask.annotate(resources={'GPU': 1}):\n future = show(0) # this task requires 1 GPU resource on a worker\n\n with dask.annotate(resources={'process': 1}):\n # These tasks each require 1 process on a worker; because we've \n # specified that our cluster has 1 process per worker and 1 worker,\n # these tasks will run sequentially\n future = show(1)\n future = show(2)\n future = show(3)\n\n\nif __name__ == \"__main__\":\n my_flow()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"The RayTaskRunner
\u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address
argument.
Remote storage and Ray tasks
We recommend configuring remote storage for task execution with the RayTaskRunner
. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.
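One way to do this (a sketch, assuming you have already created an S3 storage block named example-block) is to set result storage on the flow:
from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(\n    task_runner=RayTaskRunner(),\n    persist_result=True,\n    result_storage=\"s3/example-block\",  # hypothetical S3 block slug\n)\ndef my_flow():\n    ...\n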
To configure your flow to use the RayTaskRunner
:
1. Make sure the prefect-ray collection is installed: pip install prefect-ray.
2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

For example, this flow uses the RayTaskRunner
configured to access an existing Ray instance at ray://192.0.2.255:8786
.
from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n ... \n
RayTaskRunner
accepts the following optional parameters:
address The address of a currently running Ray instance, if one exists. init_kwargs Additional kwargs to use when calling ray.init
. Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address
of a Ray instance, Prefect creates a temporary instance automatically.
Ray environment limitations
While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:
Ray's support for Python 3.11 is experimental.
Ray support for non-x86/64 architectures such as ARM/M1 processors is not available with installation from pip alone, and the Ray dependency will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda
. See the Ray documentation for instructions.
See the Ray installation documentation for further compatibility information.
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.
Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.
Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.
You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. All tasks must be called from within a flow. Tasks may not be called from other tasks.
Calling a task from a flow
Use the @task
decorator to designate a function as a task. Calling the task from within a flow function creates a new task run:
from prefect import flow, task\n\n@task\ndef my_task():\n print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n my_task()\n
Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.
How big should a task be?
Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.
To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.
Calling a task's function from another task
Prefect does not allow triggering task runs from other tasks. If you want to call your task's function directly, you can use task.fn()
.
from prefect import flow, task\n\n@task\ndef my_first_task(msg):\n print(f\"Hello, {msg}\")\n\n@task\ndef my_second_task(msg):\n my_first_task.fn(msg)\n\n@flow\ndef my_flow():\n my_second_task(\"Trillian\")\n
Note that in the example above you are only calling the task's function without actually generating a task run. Prefect won't track task execution in your Prefect backend if you call the task function this way. You also won't be able to use features such as retries with this function call.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"Tasks allow for customization through optional arguments:
Argument Descriptionname
An optional name for the task. If not provided, the name will be inferred from the function name. description
An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function. tags
An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags
context at task runtime. cache_key_fn
An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again. cache_expiration
An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. retries
An optional number of times to retry on task run failure. retry_delay_seconds
An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries
is nonzero. log_prints
An optional boolean indicating whether to log print statements. persist_result
An optional boolean indicating whether to persist the result of the task run to storage. See all possible parameters in the Python SDK API docs.
For example, you can provide a name
value for the task. Here we've used the optional description
argument as well.
@task(name=\"hello-task\", \n description=\"This task says hello.\")\ndef my_task():\n print(\"Hello, I'm a task\")\n
You can distinguish runs of this task by providing a task_run_name
; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name will be formatted using Python's standard string formatting syntax as can be seen here:
import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n description=\"An example task for a tutorial.\",\n task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n pass\n\n@flow\ndef my_flow():\n # creates a run with a name like \"hello-marvin-on-Thursday\"\n my_task(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n
Additionally this setting also accepts a function that returns a string to be used for the task run name:
import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n date = datetime.datetime.now(datetime.timezone.utc)\n return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n description=\"An example task for a tutorial.\",\n task_run_name=generate_task_name)\ndef my_task(name):\n pass\n\n@flow\ndef my_flow():\n # creates a run with a name like \"Thursday-is-a-lovely-day\"\n my_task(name=\"marvin\")\n
If you need access to information about the task, use the prefect.runtime
module. For example:
from prefect import flow\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n flow_name = flow_run.flow_name\n task_name = task_run.task_name\n\n parameters = task_run.parameters\n name = parameters[\"name\"]\n limit = parameters[\"limit\"]\n\n return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n description=\"An example task for a tutorial.\",\n task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n pass\n\n@flow\ndef my_flow(name: str):\n # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n my_task(name=\"marvin\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for:
Tags may be specified as a keyword argument on the task decorator.
@task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n print(\"Hello, I'm a task\")\n
You can also provide tags as an argument with a tags
context manager, specifying tags when the task is called rather than in its definition.
from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n with tags(\"test\"):\n my_task()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.
To enable retries, pass retries
and retry_delay_seconds
parameters to your task. If the task fails, Prefect will retry it up to retries
times, waiting retry_delay_seconds
seconds between each attempt. If the task fails on the final retry, Prefect marks the task as crashed if the task raised an exception or failed if it returned a string.
Retries don't create new task runs
A new task run is not created when a task is retried. A new state is added to the state history of the original task run.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"Consider the real-world problem of making an API request. In this example, we'll use the httpx
library to make an HTTP request.
import httpx\n\nfrom prefect import flow, task\n\n\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n url: str = \"https://api.brittle-service.com/endpoint\"\n) -> dict:\n response = httpx.get(url)\n\n # If the response status code is anything but a 2xx, httpx will raise\n # an exception. This task doesn't handle the exception, so Prefect will\n # catch the exception and will consider the task run failed.\n response.raise_for_status()\n\n return response.json()\n\n\n@flow\ndef get_data_flow():\n get_data_task()\n
In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"The retry_delay_seconds
option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:
from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n ...\n
The retry_condition_fn
option accepts a callable that returns a boolean. If the callable returns True
, the task will be retried. If the callable returns False
, the task will not be retried. The callable accepts three arguments \u2014 the task, the task run, and the state of the task run. The following task will retry on HTTP status codes other than 401 or 404:
import httpx\nfrom prefect import flow, task\n\ndef retry_handler(task, task_run, state) -> bool:\n \"\"\"This is a custom retry handler to handle when we want to retry a task\"\"\"\n try:\n # Attempt to get the result of the task\n state.result()\n except httpx.HTTPStatusError as exc:\n # Retry on any HTTP status code that is not 401 or 404\n do_not_retry_on_these_codes = [401, 404]\n return exc.response.status_code not in do_not_retry_on_these_codes\n except httpx.ConnectError:\n # Do not retry\n return False\n except:\n # For any other exception, retry\n return True\n\n@task(retries=1, retry_condition_fn=retry_handler)\ndef my_api_call_task(url):\n response = httpx.get(url)\n response.raise_for_status()\n return response.json()\n\n@flow\ndef get_data_flow(url):\n my_api_call_task(url=url)\n\nif __name__ == \"__main__\":\n get_data_flow(url=\"https://httpbin.org/status/503\")\n
Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff
utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following flow will wait for 10, 20, then 40 seconds before each retry.
from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.
The retry_jitter_factor
option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor
of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor
provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.
from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n retries=3,\n retry_delay_seconds=exponential_backoff(backoff_factor=10),\n retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"You can also set retries and retry delays by using the following global settings. These settings will not override the retries
or retry_delay_seconds
that are set in the flow or task decorator.
prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.
To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.
To enable caching, specify a cache_key_fn
\u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration
timedelta indicating when the cache expires. If you do not specify a cache_expiration
, the cache key does not expire.
You can define a task that is cached based on its inputs by using the Prefect task_input_hash
. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than re-running the task, until the cache expires.
Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash
returns a null key indicating that a cache key could not be generated for the given inputs.
In this example, until the cache_expiration
time ends, as long as the input to hello_task()
remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task()
runs using the new input.
from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n # Doing some work\n print(\"Saying hello\")\n return \"hello \" + name_input\n\n@flow\ndef hello_flow(name_input):\n hello_task(name_input)\n
Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn
is a function that accepts two positional arguments:
TaskRunContext
, which stores task run metadata in the attributes task_run_id
, flow_run_id
, and task
. A dictionary of input values to the task. For example, if your task is called with fn(x, y, z)
then the dictionary will have keys \"x\"
, \"y\"
, and \"z\"
with corresponding values that can be used to compute your cache key. Note that the cache_key_fn
is not defined as a @task
.
Task cache keys
By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH
setting.
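If you need longer cache keys on a self-hosted server, the same setting can be raised; a minimal sketch (5000 is an arbitrary value):
prefect config set PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH=5000\n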
from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n # return a constant\n return \"static cache key\"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n print('running an expensive operation')\n return 42\n\n@flow\ndef test_caching():\n cached_task()\n cached_task()\n cached_task()\n
In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task()
only runs once.
>>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n
When each task run requested to enter a Running
state, it provided its cache key computed from the cache_key_fn
. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.
A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.
from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n    return f\"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}\"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n
Task results, retries, and caching
Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH
setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.
Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".
The refresh_cache
option can be used to enable this behavior for a specific task:
import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n    return random.random()\n
When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.
If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE
setting. Setting PREFECT_TASKS_REFRESH_CACHE=true
will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.
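For example, you might enable this in your active profile before re-running a flow:
prefect config set PREFECT_TASKS_REFRESH_CACHE=true\n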
If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache
to False
. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.
@task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n return random.random()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut
. From the perspective of the flow, the timed-out task will be treated like any other failed task.
Timeout durations are specified using the timeout_seconds
keyword argument.
from prefect import task, get_run_logger\nimport time\n\n@task(timeout_seconds=1)\ndef show_timeouts():\n logger = get_run_logger()\n logger.info(\"I will execute\")\n time.sleep(5)\n logger.info(\"I will not execute\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"Depending on how you call tasks, they can return different types of results and optionally engage the use of a task runner.
Any task can return:
int
, str
, dict
, list
, and so on \u2014 \u200athis is the default behavior any time you call your_task()
. PrefectFuture
\u2014 \u200athis is achieved by calling your_task.submit()
. A PrefectFuture
contains both data and a State. A State
\u200a\u2014 anytime you call your task or flow with the argument return_state=True
, it will directly return a state you can use to build custom behavior based on a state change you care about, such as a task or flow failing or retrying. To run your task with a task runner, you must call the task with .submit()
.
See state returned values for examples.
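As a minimal sketch of the return_state=True pattern (the task and fallback logic here are illustrative, not from the docs above):
from prefect import flow, task\n\n@task\ndef might_fail():\n    raise ValueError(\"oops\")\n\n@flow\ndef handle_states():\n    # Returns a State instead of raising the task's exception\n    state = might_fail(return_state=True)\n    if state.is_failed():\n        print(\"Task failed; running fallback logic\")\n\nif __name__ == \"__main__\":\n    handle_states()\n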
Task runners are optional
If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for
keyword argument:
from prefect import flow, task\n\n@task\ndef task_1():\n    pass\n\n@task\ndef task_2():\n    pass\n\n@flow\ndef my_flow():\n    x = task_1()\n\n    # task_2 will wait for task_1 to complete\n    y = task_2(wait_for=[x])\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"Prefect provides a .map()
implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual child tasks.
The simplest Prefect map takes a task and applies it to each element of its inputs.
from prefect import flow, task\n\n@task\ndef print_nums(nums):\n for n in nums:\n print(n)\n\n@task\ndef square_num(num):\n return num**2\n\n@flow\ndef map_flow(nums):\n print_nums(nums)\n squared_nums = square_num.map(nums) \n print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n
Prefect also supports unmapped
arguments, allowing you to pass static values that don't get mapped over.
from prefect import flow, task\n\n@task\ndef add_together(x, y):\n return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n futures = add_together.map(numbers, static_value)\n return futures\n\nsum_it([1, 2, 3], 5)\n
If your static argument is an iterable, you'll need to wrap it with unmapped
to tell Prefect that it should be treated as a static value.
from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n futures = sum_plus.map(numbers, static_iterable)\n return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:
import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n for value in values:\n await asyncio.sleep(1) # yield\n print(value, end=\" \")\n\n@flow\nasync def async_flow():\n await print_values([1, 2]) # runs immediately\n coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n # asynchronously gather the tasks\n await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n
Note that if you are not using
, calling .submit()
is required for asynchronous execution on the ConcurrentTaskRunner
.
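A minimal sketch of that pattern, assuming Prefect 2's sync-compatible API in which both .submit() and .result() are awaited inside async flows (the task itself is illustrative):
import asyncio\n\nfrom prefect import flow, task\n\n@task\nasync def double(x):\n    return x * 2\n\n@flow\nasync def async_submit_flow():\n    # .submit() returns a PrefectFuture; both calls are awaited in async code\n    future = await double.submit(21)\n    return await future.result()\n\nif __name__ == \"__main__\":\n    asyncio.run(async_submit_flow())\n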
There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.
Prefect has built-in functionality for achieving this: task concurrency limits.
Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running
state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.
If a task has multiple tags, it will run only if all tags have available concurrency.
Tags without explicit limits are considered to have unlimited concurrency.
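For example, a task can be tagged so that a concurrency limit created for the \"database\" tag applies to it (the tag name is illustrative):
from prefect import task\n\n@task(tags=[\"database\"])\ndef query_table():\n    # Runs only while a \"database\" concurrency slot is available,\n    # assuming a limit has been created for that tag\n    ...\n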
0 concurrency limit aborts task runs
Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"Task tag limits are checked whenever a task run attempts to enter a Running
state.
If there are no concurrency slots available for any one of your task's tags, the transition to a Running
state will be delayed and the client is instructed to try entering a Running
state again in 30 seconds (or the value specified by the PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS
setting).
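For example, to retry the transition sooner (a sketch; 10 is an arbitrary value):
prefect config set PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS=10\n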
Flow run concurrency limits are set at a work pool and/or work queue level
While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.
You can set concurrency limits on as few or as many tags as you wish. You can set limits through:
PrefectClient
Python client. You can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit
commands.
prefect concurrency-limit [command] [arguments]\n
Command Description create Create a concurrency limit by specifying a tag and limit. delete Delete the concurrency limit set on the specified tag. inspect View details about a concurrency limit set on the specified tag. ls View all defined concurrency limits. For example, to set a concurrency limit of 10 on the 'small_instance' tag:
prefect concurrency-limit create small_instance 10\n
To delete the concurrency limit on the 'small_instance' tag:
prefect concurrency-limit delete small_instance\n
To view details about the concurrency limit on the 'small_instance' tag:
prefect concurrency-limit inspect small_instance\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit
.
create_concurrency_limit
takes two arguments:
tag
specifies the task tag on which you're setting a limit. concurrency_limit
specifies the maximum number of concurrent task runs for that tag. For example, to set a concurrency limit of 10 on the 'small_instance' tag:
import asyncio\n\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        # set a concurrency limit of 10 on the 'small_instance' tag\n        limit_id = await client.create_concurrency_limit(\n            tag=\"small_instance\",\n            concurrency_limit=10\n        )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag
, passing the tag:
async with get_client() as client:\n # remove a concurrency limit on the 'small_instance' tag\n await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n
If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag
, passing the tag:
To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits
.
async with get_client() as client:\n # query the concurrency limit on the 'small_instance' tag\n limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
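A sketch in the same fragment style for listing every limit, assuming read_concurrency_limits accepts pagination arguments:
async with get_client() as client:\n    # list all tag concurrency limits on the server\n    limits = await client.read_concurrency_limits(limit=10, offset=0)\n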
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools & Workers","text":"Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require workers (or less ideally, agents) to poll the work pool for flow runs to execute. Push work pools can submit runs directly to your serverless infrastructure providers such as Google Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker. Managed work pools are administered by Prefect and handle the submission and execution of code on your behalf.
Work pools are like pub/sub topics
It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) workers through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the worker that will execute their runs, making it easy to promote runs through environments or even debug locally.
In addition, users can control aspects of work pool behavior, such as how many runs the pool allows to run concurrently, or pause delivery entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"You can configure work pools by using any of the following:
To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.
You can pause a work pool from this page by using the toggle.
Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.
To create a work pool via the Prefect CLI, use the prefect work-pool create
command:
prefect work-pool create [OPTIONS] NAME\n
NAME
is a required, unique name for the work pool.
Optional configuration parameters you can specify to filter work on the pool include:
Option Description--paused
If provided, the work pool will be created in a paused state. --type
The type of infrastructure that can execute runs from this work pool. --set-as-default
Whether to use the created work pool as the local default for deployment. --base-job-template
The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. For example, to create a work pool called test-pool
, you would run this command:
prefect work-pool create test-pool\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-types","title":"Work pool types","text":"If you don't use the --type
flag to specify an infrastructure type, you are prompted to select from the following options:
On success, the command returns the details of the newly created work pool.
Created work pool with properties:\n name - 'test-pool'\n id - a51adf8c-58bb-4949-abe6-1b87af46eabd\n concurrency limit - None\n\nStart a worker to pick up flows from the work pool:\n prefect worker start -p 'test-pool'\n\nInspect the work pool:\n prefect work-pool inspect 'test-pool'\n
Set a work pool as the default for new deployments by adding the --set-as-default
flag.
This results in output similar to the following:
Set 'test-pool' as default work pool for profile 'default'\n\nTo change your default work pool, run:\n\n prefect config set PREFECT_DEFAULT_WORK_POOL_NAME=<work-pool-name>\n
To update a work pool via the Prefect CLI, use the prefect work-pool update
command:
prefect work-pool update [OPTIONS] NAME\n
NAME
is the name of the work pool to update.
Optional configuration parameters you can specify to update the work pool include:
Option Description--base-job-template
The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. --description
A description of the work pool. --concurrency-limit
The maximum number of flow runs to run simultaneously in the work pool. Managing work pools in CI/CD
You can version control your base job template by committing it as a JSON file to your repository and control updates to your work pools' base job templates by using the prefect work-pool update
command in your CI/CD pipeline. For example, you could use the following command to update a work pool's base job template to the contents of a file named base-job-template.json
:
prefect work-pool update --base-job-template base-job-template.json my-work-pool\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"Each work pool has a base job template that allows the customization of the behavior of the worker executing flow runs from the work pool.
The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.
A base job template comprises a job_configuration
section and a variables
section.
The variables
section defines the fields available to be customized per deployment. The variables
section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).
The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.
The values in the job_configuration
can use placeholders to reference values provided in the variables
section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}
. job_configuration
values can also be hard-coded if the value should not be customizable.
Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines fields that can be edited on a per-deployment basis or for the entire work pool via the Prefect API and UI.
For example, if we create a process
work pool named 'above-ground' via the CLI:
prefect work-pool create --type process above-ground\n
We see these configuration options available in the Prefect UI:
For a process
work pool with the default base job template, we can set environment variables for spawned processes, set the working directory to execute flows, and control whether the flow run output is streamed to workers' standard output. You can also see an example of JSON formatted base job template with the 'Advanced' tab.
You can examine the default base job template for a given worker type by running:
prefect work-pool get-default-base-job-template --type process\n
{\n \"job_configuration\": {\n \"command\": \"{{ command }}\",\n \"env\": \"{{ env }}\",\n \"labels\": \"{{ labels }}\",\n \"name\": \"{{ name }}\",\n \"stream_output\": \"{{ stream_output }}\",\n \"working_dir\": \"{{ working_dir }}\"\n },\n \"variables\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"title\": \"Name\",\n \"description\": \"Name given to infrastructure created by a worker.\",\n \"type\": \"string\"\n },\n \"env\": {\n \"title\": \"Environment Variables\",\n \"description\": \"Environment variables to set when starting a flow run.\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"labels\": {\n \"title\": \"Labels\",\n \"description\": \"Labels applied to infrastructure created by a worker.\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"command\": {\n \"title\": \"Command\",\n \"description\": \"The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.\",\n \"type\": \"string\"\n },\n \"stream_output\": {\n \"title\": \"Stream Output\",\n \"description\": \"If enabled, workers will stream output from flow run processes to local standard output.\",\n \"default\": true,\n \"type\": \"boolean\"\n },\n \"working_dir\": {\n \"title\": \"Working Directory\",\n \"description\": \"If provided, workers will open flow run processes within the specified path as the working directory. Otherwise, a temporary directory will be created.\",\n \"type\": \"string\",\n \"format\": \"path\"\n }\n }\n }\n}\n
You can override each of these attributes on a per-deployment basis. When deploying a flow, you can specify these overrides in the work_pool.job_variables
section of a deployment.yaml
.
If we wanted to turn off streaming output for a specific deployment, we could add the following to our deployment.yaml
:
work_pool:\n name: above-ground \n job_variables:\n stream_output: false\n
Advanced Customization of the Base Job Template
For advanced use cases, you can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI or when updating a work pool via the Prefect CLI.
Advanced customization is useful when the underlying infrastructure supports a high degree of configuration. In these scenarios, a work pool job template lets you expose a minimal, easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects for deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes
worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.
For more information and advanced configuration examples, see the Kubernetes Worker documentation.
For more information on overriding a work pool's job variables see this guide.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"At any time, users can see and edit configured work pools in the Prefect UI.
To view work pools with the Prefect CLI, you can:
ls
) all available poolsinspect
) the details of a single poolpreview
) scheduled work for a single poolprefect work-pool ls
lists all configured work pools for the server.
prefect work-pool ls\n
For example:
Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Type \u2503 ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque \u2502 docker \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None \u2502\n\u2502 k8s-pool \u2502 kubernetes \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None \u2502\n\u2502 test-pool \u2502 prefect-agent \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None \u2502\n\u2502 my-pool \u2502 process \u2502 cd6ff9e8-bfd8-43be-9be3-69375f7a11cd \u2502 None \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n (**) denotes a paused pool\n
prefect work-pool inspect
provides all configuration metadata for a specific work pool by ID.
prefect work-pool inspect 'test-pool'\n
Outputs information similar to the following:
Workpool(\n id='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n created='2 minutes ago',\n updated='2 minutes ago',\n name='test-pool',\n filter=None,\n)\n
prefect work-pool preview
displays scheduled flow runs for a specific work pool by ID for the upcoming hour. The optional --hours
flag lets you specify the number of hours to look ahead.
prefect work-pool preview 'test-pool' --hours 12\n
\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID \u2503 Name \u2503 Deployment ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n (**) denotes a late run\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-status","title":"Work Pool Status","text":"Work pools have three statuses: READY
, NOT_READY
, and PAUSED
. A work pool is considered ready if it has at least one online worker sending heartbeats to the work pool. If a work pool has no online workers, it is considered not ready to execute work. A work pool can be placed in a paused status manually by a user or via an automation. When a paused work pool is unpaused, it will be reassigned the appropriate status based on whether any workers are sending heartbeats.
A work pool can be paused at any time to stop the delivery of work to workers. Workers will not receive any work when polling a paused pool.
To pause a work pool through the Prefect CLI, use the prefect work-pool pause
command:
prefect work-pool pause 'test-pool'\n
To resume a work pool through the Prefect CLI, use the prefect work-pool resume
command with the work pool name.
To delete a work pool through the Prefect CLI, use the prefect work-pool delete
command with the work pool name.
Each work pool can optionally restrict concurrent runs of matching flows.
For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running
or Pending
state. If 3 runs are Running
or Pending
, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.
When using the prefect work-pool
Prefect CLI command to configure a work pool, the following subcommands set concurrency limits:
set-concurrency-limit
sets a concurrency limit on a work pool.clear-concurrency-limit
clears any concurrency limits from a work pool.Advanced topic
Prefect will automatically create a default work queue if needed.
Work queues offer advanced control over how runs are executed. Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool to enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1
will always be the highest priority).
Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.
Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10
and no concurrency limit, a \"high\" queue with priority 5
and a concurrency limit of 3
, and a \"critical\" queue with priority 1
and a concurrency limit of 1
. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.
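A sketch of creating such queues with the CLI (the pool name is illustrative; --limit sets each queue's concurrency limit, and queue priority can be adjusted in the UI):
prefect work-queue create \"low\" --pool \"my-pool\"\nprefect work-queue create \"high\" --pool \"my-pool\" --limit 3\nprefect work-queue create \"critical\" --pool \"my-pool\" --limit 1\n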
Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs are capable of being executed with no limitation due to concurrency or otherwise, priority is still used to determine order of submission, but there is no impact to execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority is used to determine which queues receive precedence to submit runs for execution.
Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.
If new flow runs are received on the \"critical\" queue while flow runs are still scheduled on the \"high\" and \"low\" queues, flow run submission goes back to ensuring all scheduled work is first satisfied from the highest priority queue, until it is empty, in waterfall fashion.
Work queue status
A work queue has a READY
status when it has been polled by a worker in the last 60 seconds. Pausing a work queue will give it a PAUSED
status and mean that it will accept no new work until it is unpaused. A user can control the work queue's paused status in the UI. Unpausing a work queue will give the work queue a NOT_READY
status unless a worker has polled it in the last 60 seconds.
As long as your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to a worker running on your local machine for debugging by running prefect worker start -p my-local-machine
and updating the deployment's work pool to my-local-machine
.
Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.
Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.
Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to poll work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"Below is a list of available worker types. Note that most worker types will require installation of an additional package.
Worker Type Description Required Packageprocess
Executes flow runs in subprocesses kubernetes
Executes flow runs as Kubernetes jobs prefect-kubernetes
docker
Executes flow runs within Docker containers prefect-docker
ecs
Executes flow runs as ECS tasks prefect-aws
cloud-run
Executes flow runs as Google Cloud Run jobs prefect-gcp
vertex-ai
Executes flow runs as Google Cloud Vertex AI jobs prefect-gcp
azure-container-instance
Execute flow runs in ACI containers prefect-azure
If you don\u2019t see a worker type that meets your needs, consider developing a new worker type!
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI is able to infer the worker type from the work pool. Alternatively, you can also specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).
Configuration parameters you can specify when starting a worker include:
Option Description--name
, -n
The name to give to the started worker. If not provided, a unique name will be generated. --pool
, -p
The work pool the started worker should poll. --work-queue
, -q
One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool. --type
, -t
The type of worker to start. If not provided, the worker type will be inferred from the work pool. --prefetch-seconds
The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS
. --run-once
Only run worker polling once. By default, the worker runs forever. --limit
, -l
The maximum number of flow runs to start simultaneously. --with-healthcheck
Start a healthcheck server for the worker. --install-policy
Install policy to use workers from Prefect integration packages. You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes
, the worker will deploy flow runs to a Kubernetes cluster.
Prefect must be installed in execution environments
Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.
PREFECT_API_URL
and PREFECT_API_KEY
settings for workers
PREFECT_API_URL
must be set for the environment in which your worker is running. You must also have a user or service account with the Worker
role, which can be configured by setting the PREFECT_API_KEY
.
Workers have two statuses: ONLINE
and OFFLINE
. A worker is online if it sends regular heartbeat messages to the Prefect API. If a worker has missed three heartbeats, it is considered offline. By default, a worker is considered offline a maximum of 90 seconds after it stopped sending heartbeats, but the threshold can be configured via the PREFECT_WORKER_HEARTBEAT_SECONDS
setting.
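For example, to lower the offline threshold (a sketch; 30 is an arbitrary value):
prefect config set PREFECT_WORKER_HEARTBEAT_SECONDS=30\n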
Use the prefect worker start
CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type
flag is used.
prefect worker start -p [work pool name]\n
For example:
prefect worker start -p \"my-pool\"\n
Results in output like this:
Discovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n
In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type
flag:
prefect worker start -p \"my-pool\" --type \"process\"\n
Worker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n
In addition, workers can limit the number of flow runs they will start simultaneously with the --limit
flag. For example, to limit a worker to five concurrent flow runs:
prefect worker start --pool \"my-pool\" --limit 5\n
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.
In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds
option or the PREFECT_WORKER_PREFETCH_SECONDS
setting.
If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS
setting.
The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy
option. The following are valid install policies
always
Always install the required package. Will update the required package to the most recent version if already installed. if-not-present
Install the required package if it is not already installed. never
Never install the required package. prompt
Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start
is run non-interactively, the prompt
install policy will behave the same as never
.","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"See how to daemonize a Prefect worker in this guide.
For more information on overriding a work pool's job variables see this guide.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"Thanks for considering contributing to Prefect!
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"First, you'll need to download the source code and install an editable version of the Python package:
# Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\n\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\n\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\n\npre-commit install\n
If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:
pip install $(./scripts/precommit-versions.py)\n
You'll need to run black
and ruff
before a contribution can be accepted.
After installation, you can run the test suite with pytest
:
# Run all the tests\npytest tests\n\n# Run a subset of tests\npytest tests/test_flows.py\n
Building the Prefect UI
If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
Examples of unacceptable behavior by participants include:
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"The Prefect CLI provides several helpful commands to aid development.
Start all services with hot-reloading on code changes (requires UI dependencies to be installed):
prefect dev start\n
Start a Prefect API that reloads on code changes:
prefect dev api\n
Start a Prefect worker that reloads on code changes:
prefect dev agent\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"Developing the Prefect UI requires that npm is installed.
Start a development UI that reloads on code changes:
prefect dev ui\n
Build the static UI (the UI served by prefect server start
):
prefect dev build-ui\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders
for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.
To build the docs:
mkdocs build\n
To serve the docs locally at http://127.0.0.1:8000/:
mkdocs serve\n
For additional mkdocs help and options:
mkdocs --help\n
We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.
Internal developers can install the production theme by running:
pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"Generate a manifest to deploy a development API to a local kubernetes cluster:
prefect dev kubernetes-manifest\n
To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward
command to forward a port on your local machine to an open port within the cluster. For example:
kubectl port-forward deployment/prefect-dev 4200:4200\n
This forwards port 4200 on the default internal loop IP for localhost to the Prefect server deployment.
To tell the local prefect
command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL
environment variable:
export PREFECT_API_URL=http://localhost:4200/api\n
Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding Database Migrations","text":"To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py
. For example, if you wanted to add a new column to the flow_run
table, you would add a new column to the FlowRun
model:
# src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n \"\"\"SQLAlchemy model of a flow run.\"\"\"\n ...\n new_column = Column(String, nullable=True) # <-- add this line\n
Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL
is set to. See here for how to set the database connection URL for each database type.
To generate a new migration file, run the following command:
prefect server database revision --autogenerate -m \"<migration name>\"\n
Try to make your migration name brief but descriptive. For example:
add_flow_run_new_column
add_flow_run_new_column_idx
rename_flow_run_old_column_to_new_column
The --autogenerate
flag will automatically generate a migration file based on the changes to the models.
Always inspect the output of --autogenerate
--autogenerate
will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.
The new migration can be found in the src/prefect/server/database/migrations/versions/
directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/
.
After you have inspected the migration file, you can apply the migration to your database by running the following command:
prefect server database upgrade -y\n
Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md
to document your additions.
Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"A brief collection of rules and guidelines for how imports should be handled in this repository.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in__init__
files","text":"Leave __init__
files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.
If importing objects from submodules, the __init__
file should use a relative import. This is required for type checkers to understand the exposed interface.
# Correct\nfrom .flows import flow\n
# Wrong\nfrom prefect.flows import flow\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"Generally, submodules should not be imported in the __init__
file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.
For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.
import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n
If exposing a submodule, use a relative import as you would when exposing an object.
# Correct\nfrom . import flows\n
# Wrong\nimport prefect.flows\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"Another use case for importing submodules is perform global side-effects that occur when they are imported.
Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.
We have a couple of acceptable use cases for this currently:
prefect.serializers
.prefect.cli
.The from
syntax should be reserved for importing objects from modules. Modules should not be imported using the from
syntax.
# Correct\nimport prefect.server.schemas # use with the full name\nimport prefect.server.schemas as schemas # use the shorter name\n
# Wrong\nfrom prefect.server import schemas\n
Unless in an __init__.py
file, relative imports should not be used.
# Correct\nfrom prefect.utilities.foo import bar\n
# Wrong\nfrom .utilities.foo import bar\n
Imports dependent on file location should never be used without explicit indication it is relative. This avoids confusion about the source of a module.
# Correct\nfrom . import test\n
# Wrong\nimport test\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.
## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n from prefect.context import get_profile_context\n\n ...\n
Attempt to avoid circular dependencies. This often reveals overentanglement in the design.
When performing deferred imports, they should all be placed at the top of the function.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"If you are just using the imported object for a type signature, you should use the TYPE_CHECKING
flag.
# Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n pass\n
Note that usage of the type within the module will need quotes e.g. \"State\"
since it is not available at runtime.
We do not have a best practice for this yet. See the kubernetes
, docker
, and distributed
implementations for now.
Sometimes, imports are slow. We'd like to keep the prefect
module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
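As an illustration of the lazy-import pattern (the helper and its matplotlib dependency are hypothetical, not part of Prefect):
def plot_flow_run_durations(durations: list):\n    # Deferred so that importing prefect stays fast; matplotlib is only\n    # loaded if and when this function is actually called.\n    import matplotlib.pyplot as plt\n\n    plt.hist(durations)\n    plt.xlabel(\"duration (s)\")\n    plt.show()\n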
Upon executing a command that creates an object, the output message should offer:
A short description of what the command just did.
A bullet point list, rehashing user inputs, if possible.
Next steps, like the next command to run, if applicable.
Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
A new line before the first line and after the last line.
Output Example:
$ prefect work-queue create testing\n\nCreated work queue with properties:\n name - 'abcde'\n uuid - 940f9828-c820-4148-9526-ea8107082bda\n tags - None\n deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n prefect agent start -q 'abcde'\n\nInspect the created work queue:\n prefect work-queue inspect 'abcde'\n
Additionally:
Wrap names and identifiers in quotes using the !r repr format.
Use textwrap.dedent
to remove extraneous spacing for strings that are written with triple quotes (\"\"\").
Create a work queue with tags:\n prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n
Dedent Example:
from textwrap import dedent\n...\noutput_msg = dedent(\n f\"\"\"\n Created work queue with properties:\n name - {name!r}\n uuid - {result}\n tags - {tags or None}\n deployment_ids - {deployment_ids or None}\n\n Start an agent to pick up flows from the created work queue:\n prefect agent start -q {name!r}\n\n Inspect the created work queue:\n prefect work-queue inspect {name!r}\n \"\"\"\n)\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code, (e.g. agents, and the Python client) or any consumer of Prefect metadata, (e.g. the Prefect UI, and CLI). The Prefect server stores this metadata and serves it via the REST API.
Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION
header with major, minor, and patch versions.
For example, a request with the X-PREFECT-API-VERSION=3.2.1
header has a major version of 3
, minor version 2
, and patch version 1
.
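As a sketch of how such a three-component header can be parsed and compared (illustrative only, not Prefect's actual implementation):
def parse_api_version(header: str) -> tuple:\n    major, minor, patch = (int(part) for part in header.split(\".\"))\n    return (major, minor, patch)\n\n# Tuples compare lexicographically, so version ordering falls out naturally\nassert parse_api_version(\"3.2.1\") == (3, 2, 1)\nassert parse_api_version(\"3.2.1\") < parse_api_version(\"3.10.0\")\n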
This version header can be changed by modifying the API_VERSION
constant in prefect.server.api.server
.
When making a breaking change to the API, we should consider whether the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.
In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved major version changes to denote changes that are significant in some way, such as a major release milestone, even when they are backwards compatible.
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and patch version of 0.
Occasionally, we will add a suffix to the version such as rc
, a
, or b
. These indicate pre-release versions that users can opt-into installing to test functionality before it is ready for release.
Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero.
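A small sketch of that reset rule (illustrative, not a Prefect utility):
def bump(version, part):\n    major, minor, patch = version\n    if part == \"major\":\n        return (major + 1, 0, 0)  # minor and patch reset to zero\n    if part == \"minor\":\n        return (major, minor + 1, 0)  # patch resets to zero\n    return (major, minor, patch + 1)\n\nassert bump((2, 5, 1), \"minor\") == (2, 6, 0)\n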
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.
Prefect will increase the minor version when:
Prefect will increase the patch version when:
A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.
At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.
Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.
For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.
","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.
","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"Prefect requires Python 3.8 or newer.
Python 3.12 support is experimental, as not all dependencies support it yet. If you encounter any errors, please open an issue.
We recommend installing Prefect using a Python virtual environment manager such as pipenv
, conda
, or virtualenv
/venv
.
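For example, a minimal setup with the built-in venv module might look like this (the environment name is arbitrary):
python -m venv prefect-env\nsource prefect-env/bin/activate  # on Windows: prefect-env\\Scripts\\activate\npip install -U prefect\n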
Windows and Linux requirements
See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"The following sections describe how to install Prefect in your development or execution environment.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"Prefect is published as a Python package. To install the latest release or upgrade an existing Prefect install, run the following command in your terminal:
pip install -U prefect\n
To install a specific version, specify the version number like this:
pip install -U \"prefect==2.16.2\"\n
See available release versions in the Prefect Release Notes.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-bleeding-edge","title":"Installing the bleeding edge","text":"If you'd like to test with the most up-to-date code, you can install directly off the main
branch on GitHub:
pip install -U git+https://github.com/PrefectHQ/prefect\n
The main
branch may not be stable
Please be aware that this method installs unreleased code and may not be stable.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-for-development","title":"Installing for development","text":"If you'd like to install a version of Prefect for development:
pip install -e
.$ git clone https://github.com/PrefectHQ/prefect.git\n$ cd prefect\n$ pip install -e \".[dev]\"\n$ pre-commit install\n
See our Contributing guide for more details about standards and practices for contributing to Prefect.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"To confirm that Prefect was installed correctly, run the command prefect version
to print the version and environment details to your console.
$ prefect version\n\nVersion: 2.10.21\nAPI version: 0.8.4\nPython version: 3.10.12\nGit commit: da816542\nBuilt: Thu, Jul 13, 2023 2:05 PM\nOS/Arch: darwin/arm64\nProfile: local\nServer type: ephemeral\nServer:\n Database: sqlite\n SQLite version: 3.42.0\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda
. After installation, you may need to manually add the Python local packages Scripts
folder to your Path
environment variable.
The Scripts
folder path looks something like this (the username and Python version may be different on your system):
C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n
Watch the pip install
output messages for the Scripts
folder path on your system.
If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"Linux is a popular operating system for running Prefect. You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.
For development, you can use SQLite 2.24 or newer as your database. Note that certain Linux versions of SQLite can be problematic. Compatible versions include Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.
Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda
virtual environment manager and configure a compatible SQLite version.
If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. You can add the certificate to your system bundle and pointing your tools to use that bundle by configuring the SSL_CERT_FILE
environment variable.
If the certificate is not part of your system bundle, you can set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY
to True
to disable certificate verification altogether.
Note: Disabling certificate validation is insecure and only suggested as an option for testing!
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"Prefect supports communicating via proxies through environment variables. Simply set HTTPS_PROXY
and SSL_CERT_FILE
in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately. Read more about using Prefect Cloud with proxies here.
You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.
By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled as a part of Python.
The Prefect CLI command prefect version
prints environment details to your console, including the server database. For example:
$ prefect version\nVersion: 2.10.21\nAPI version: 0.8.4\nPython version: 3.10.12\nGit commit: a46cbebb\nBuilt: Sat, Jul 15, 2023 7:59 AM\nOS/Arch: darwin/arm64\nProfile: default\nServer type: cloud\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"The following steps are needed to install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL). Note that some RHEL instances have no C compiler, so you may need to check for and install gcc
first:
yum install gcc\n
Download and extract the tarball for SQLite.
wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n
Move to the extracted SQLite directory, then build and install SQLite.
cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n
Add LD_LIBRARY_PATH
to your profile.
echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n
Restart your shell to register these changes.
Now you can install Prefect using pip
.
pip3 install prefect\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-prefect-in-an-environment-with-http-proxies","title":"Using Prefect in an environment with HTTP proxies","text":"If you are using Prefect Cloud or hosting your own Prefect server instance, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY
, HTTPS_PROXY
, or ALL_PROXY
environment variables. You may also use the NO_PROXY
environment variable to specify which hosts should not be sent through the proxy.
For more information about these environment variables, see the cURL documentation.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"Now that you have Prefect installed and your environment configured, you may want to check out the Tutorial to get more familiar with Prefect.
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/quickstart/","title":"Quickstart","text":"Prefect is an orchestration and observability platform that empowers developers to build and scale resilient code quickly, turning their Python scripts into resilient, recurring workflows.
In this quickstart, you'll see how you can schedule your code on remote infrastructure and observe the state of your workflows. With Prefect, you can go from a Python script to a production-ready workflow that runs remotely in a few minutes.
Let's get started!
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#setup","title":"Setup","text":"Here's a basic script that fetches statistics about the main Prefect GitHub repository.
import httpx\n\ndef get_repo_info():\n url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n response = httpx.get(url)\n repo = response.json()\n print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
How can we make this script schedulable, observable, resilient, and capable of running anywhere?
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-1-install-prefect","title":"Step 1: Install Prefect","text":"pip install -U prefect\n
See the install guide for more detailed installation instructions, if needed.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"Much of Prefect's functionality is backed by an API. Sign up for a forever free Prefect Cloud account or accept your organization's invite to join their Prefect Cloud account.
prefect cloud login
CLI command to log in to Prefect Cloud from your environment.prefect cloud login\n
Choose Log in with a web browser and click the Authorize button in the browser window that opens.
Self-hosted Prefect server instance
If you would like to host a Prefect server instance on your own infrastructure, see the tutorial and select the \"Self-hosted\" tab. Note that you will need to both host your own server and run your flows on your own infrastructure.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-3-turn-your-function-into-a-prefect-flow","title":"Step 3: Turn your function into a Prefect flow","text":"The fastest way to get started with Prefect is to add a @flow
decorator to your Python function. Flows are the core observable, deployable units in Prefect and are the primary entrypoint to orchestrated work.
import httpx # an HTTP client library and dependency of Prefect\nfrom prefect import flow, task\n\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n \"\"\"Get info about a repo - will retry twice after failing\"\"\"\n url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n api_response = httpx.get(url)\n api_response.raise_for_status()\n repo_info = api_response.json()\n return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n \"\"\"Get contributors for a repo\"\"\"\n contributors_url = repo_info[\"contributors_url\"]\n response = httpx.get(contributors_url)\n response.raise_for_status()\n contributors = response.json()\n return contributors\n\n\n@flow(log_prints=True)\ndef repo_info(repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"):\n \"\"\"\n Given a GitHub repository, logs the number of stargazers\n and contributors for that repo.\n \"\"\"\n repo_info = get_repo_info(repo_owner, repo_name)\n print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n contributors = get_contributors(repo_info)\n print(f\"Number of contributors \ud83d\udc77: {len(contributors)}\")\n\n\nif __name__ == \"__main__\":\n repo_info()\n
Note that we added a log_prints=True
argument to the @flow
decorator so that print
statements within the flow-decorated function will be logged. Also note that our flow calls two tasks, which are defined by the @task
decorator. Tasks are the smallest unit of observed and orchestrated work in Prefect.
python my_gh_workflow.py\n
Now when we run this script, Prefect will automatically track the state of the flow run and log the output where we can see it in the UI and CLI.
14:28:31.099 | INFO | prefect.engine - Created flow run 'energetic-panther' for flow 'repo-info'\n14:28:31.100 | INFO | Flow run 'energetic-panther' - View at https://app.prefect.cloud/account/123/workspace/abc/flow-runs/flow-run/xyz\n14:28:32.178 | INFO | Flow run 'energetic-panther' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n14:28:32.179 | INFO | Flow run 'energetic-panther' - Executing 'get_repo_info-0' immediately...\n14:28:32.584 | INFO | Task run 'get_repo_info-0' - Finished in state Completed()\n14:28:32.599 | INFO | Flow run 'energetic-panther' - Stars \ud83c\udf20 : 13609\n14:28:32.682 | INFO | Flow run 'energetic-panther' - Created task run 'get_contributors-0' for task 'get_contributors'\n14:28:32.682 | INFO | Flow run 'energetic-panther' - Executing 'get_contributors-0' immediately...\n14:28:33.118 | INFO | Task run 'get_contributors-0' - Finished in state Completed()\n14:28:33.134 | INFO | Flow run 'energetic-panther' - Number of contributors \ud83d\udc77: 30\n14:28:33.255 | INFO | Flow run 'energetic-panther' - Finished in state Completed('All states completed.')\n
You should see similar output in your terminal, with your own randomly generated flow run name and your own Prefect Cloud account URL.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-choose-a-remote-infrastructure-location","title":"Step 4: Choose a remote infrastructure location","text":"Let's get this workflow running on infrastructure other than your local machine! We can tell Prefect where we want to run our workflow by creating a work pool.
We can have Prefect Cloud run our flow code for us with a Prefect Managed work pool.
Let's create a Prefect Managed work pool so that Prefect can run our flows for us. We can create a work pool in the UI or from the CLI. Let's use the CLI:
prefect work-pool create my-managed-pool --type prefect:managed\n
You should see a message in the CLI that your work pool was created. Feel free to check out your new work pool on the Work Pools page in the UI.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-make-your-code-schedulable","title":"Step 4: Make your code schedulable","text":"We have a flow function and we have a work pool where we can run our flow remotely. Let's package both of these things, along with the location for where to find our flow code, into a deployment so that we can schedule our workflow to run remotely.
Deployments elevate flows to remotely configurable entities that have their own API.
Let's make a script to build a deployment with the name my-first-deployment and set it to run on a schedule.
create_deployment.pyfrom prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/discdiver/demos.git\",\n entrypoint=\"my_gh_workflow.py:repo_info\",\n ).deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"my-managed-pool\",\n cron=\"0 1 * * *\",\n )\n
Run the script to create the deployment on the Prefect Cloud server. Note that the cron
argument will schedule the deployment to run at 1am every day.
python create_deployment.py\n
You should see a message that your deployment was created, similar to the one below.
Successfully created/updated all deployments!\n\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 repo-info/my-first-deployment \u2502 applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n $ prefect deployment run 'repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: <https://app.prefect.cloud/account/abc/workspace/123/deployments/deployment/xyz>\n
Head to the Deployments page of the UI to check it out.
Code storage options
You can store your flow code in nearly any location. You just need to tell Prefect where to find it. In this example, we use a GitHub repository, but you could bake your code into a Docker image or store it in cloud provider storage. Read more here.
Push your code to GitHub
In the example above, we use an existing GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source
argument to point to your repository.
You can trigger a manual run of this deployment by either clicking the Run button in the top right of the deployment page in the UI, or by running the following CLI command in your terminal:
prefect deployment run 'repo-info/my-first-deployment'\n
The deployment is configured to run on a Prefect Managed work pool, so Prefect will automatically spin up the infrastructure to run this flow. It may take a minute to set up the Docker image in which the flow will run.
After a minute or so, you should see the flow run graph and logs on the Flow Run page in the UI.
Remove the schedule
Click the Remove button in the top right of the Deployment page so that the workflow is no longer scheduled to run once per day.
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#next-steps","title":"Next steps","text":"You've seen how to move from a Python script to a scheduled, observable, remotely orchestrated workflow with Prefect.
To learn how to run flows on your own infrastructure, see how to customize the Docker image where your flow runs, and see how to gain lots of orchestration and observation benefits check out the tutorial.
Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
Happy building!
","tags":["getting started","quickstart","overview"],"boost":2},{"location":"guides/","title":"How-to Guides","text":"This section of the documentation contains how-to guides for common workflows and use cases.
","tags":["guides","how to"],"boost":2},{"location":"guides/#development","title":"Development","text":"Title Description Hosting Host your own Prefect server instance. Profiles & Settings Configure Prefect and save your settings. Testing Easily test your workflows. Global Concurrency Limits Limit flow runs. Runtime Context Enable a flow to access metadata about itself and its context when it runs. Variables Store and retrieve configuration data. Prefect Client UsePrefectClient
to interact with the API server. Interactive Workflows Create human-in-the-loop workflows by pausing flow runs for input. Automations Configure actions that Prefect executes automatically based on trigger conditions. Webhooks Receive, observe, and react to events from other systems. Terraform Provider Use the Terraform Provider for Prefect Cloud for infrastructure as code. CI/CD Use CI/CD with Prefect. Prefect Recipes Common, extensible examples for setting up Prefect.","tags":["guides","how to"],"boost":2},{"location":"guides/#execution","title":"Execution","text":"Title Description Docker Deploy flows with Docker containers. State Change Hooks Execute code in response to state changes. Dask and Ray Scale your flows with parallel computing frameworks. Read and Write Data Read and write data to and from cloud provider storage. Big Data Handle large data with Prefect. Logging Configure Prefect's logger and aggregate logs from other tools. Troubleshooting Identify and resolve common issues with Prefect. Managed Execution Let prefect run your code.","tags":["guides","how to"],"boost":2},{"location":"guides/#work-pools","title":"Work Pools","text":"Title Description Deploying Flows to Work Pools and Workers Learn how to run you code with dynamic infrastructure. Upgrade from Agents to Workers Why and how to upgrade from agents to workers. Flow Code Storage Where to store your code for deployments. Kubernetes Deploy flows on Kubernetes. Serverless Push Work Pools Run flows on serverless infrastructure without a worker. Serverless Work Pools with Workers Run flows on serverless infrastructure with a worker. Daemonize Processes Set up a systemd service to run a Prefect worker or .serve process. Custom Workers Develop your own worker type. Overriding Work Pool Job Variables Override job variables for a work pool for a given deployment. Need help?
Get your questions answered by a Prefect Product Advocate! Book a Meeting
","tags":["guides","how to"],"boost":2},{"location":"guides/automations/","title":"Using Automations for Dynamic Responses","text":"From the Automations concept page, we saw what an automation can do and how to configure one within the UI.
In this guide, we will showcase the following common use cases:
Available only on Prefect Cloud
Automations are a Prefect Cloud feature.\n
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#prerequisites","title":"Prerequisites","text":"Please have the following before exploring the guide:
Automations allow you to take actions in response to triggering events recorded by Prefect.
For example, let's try to grab data from an API and send a notification based on the end state.
We can start by pulling hypothetical user data from an endpoint and then performing data cleaning and transformations.
Let's create a simple extract method, that pulls the data from a random user data generator endpoint.
from prefect import flow, task, get_run_logger\nimport requests\nimport json\n\n@task\ndef fetch(url: str):\n logger = get_run_logger()\n response = requests.get(url)\n raw_data = response.json()\n logger.info(f\"Raw response: {raw_data}\")\n return raw_data\n\n@task\ndef clean(raw_data: dict):\n print(raw_data.get('results')[0])\n results = raw_data.get('results')[0]\n logger = get_run_logger()\n logger.info(f\"Cleaned results: {results}\")\n return results['name']\n\n@flow\ndef build_names(num: int = 10):\n df = []\n url = \"https://randomuser.me/api/\"\n logger = get_run_logger()\n copy = num\n while num != 0:\n raw_data = fetch(url)\n df.append(clean(raw_data))\n num -= 1\n logger.info(f\"Built {copy} names: {df}\")\n return df\n\nif __name__ == \"__main__\":\n list_of_names = build_names()\n
The data cleaning workflow has visibility into each step, and we are sending a list of names to our next step of our pipeline.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#create-notification-block-within-the-ui","title":"Create notification block within the UI","text":"Now let's try to send a notification based off a completed state outcome. We can configure a notification to be sent so that we know when to look into our workflow logic.
Prior to creating the automation, let's confirm the notification location. We have to create a notification block to help define where the notification will be sent.
Let's navigate to the blocks page on the UI, and click into creating an email notification block.
Now that we created a notification block, we can go to the automations page to create our first automation.
Next we try to find the trigger type, in this case let's use a flow completion.
Finally, let's create the actions that will be done once the triggered is hit. In this case, let's create a notification to be sent out to showcase the completion.
Now the automation is ready to be triggered from a flow run completion. Let's run the file locally and see that the notification is sent to our inbox after the completion. It may take a few minutes for the notification to arrive.
No deployment created
Keep in mind, we did not need to create a deployment to trigger our automation, where a state outcome of a local flow run helped trigger this notification block. We are not required to create a deployment to trigger a notification.
Now that you've seen how to create an email notification from a flow run completion, let's see how we can kick off a deployment run in response to an event.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#event-based-deployment-automation","title":"Event-based deployment automation","text":"We can create an automation that can kick off a deployment instead of a notification. Let's explore how we can programmatically create this automation. We will take advantage of Prefect's REST API to help create this automation.
See the REST API documentation as a reference for interacting with the Prefect Cloud automation endpoints.
Let's create a deployment where we can kick off some work based on how long a flow is running. For example, if the build_names
flow is taking too long to execute, we can kick off a deployment of the with the same build_names
flow, but replace the count
value with a lower number - to speed up completion. You can create a deployment with a prefect.yaml
file or a Python file that uses flow.deploy
.
Create a prefect.yaml
file like this one for our flow build_names
:
# Welcome to your prefect.yaml file! You can use this file for storing and managing\n # configuration for deploying your flows. We recommend committing this file to source\n # control along with your flow code.\n\n # Generic metadata about this project\n name: automations-guide\n prefect-version: 2.13.1\n\n # build section allows you to manage and build docker images\n build: null\n\n # push section allows you to manage if and how this project is uploaded to remote locations\n push: null\n\n # pull section allows you to provide instructions for cloning this project in remote locations\n pull:\n - prefect.deployments.steps.set_working_directory:\n directory: /Users/src/prefect/Playground/automations-guide\n\n # the deployments section allows you to provide configuration for deploying flows\n deployments:\n - name: deploy-build-names\n version: null\n tags: []\n description: null\n entrypoint: test-automations.py:build_names\n parameters: {}\n work_pool:\n name: tutorial-process-pool\n work_queue_name: null\n job_variables: {}\n schedule: null\n
To follow a more Python based approach to create a deployment, you can use flow.deploy
as in the example below.
# .deploy only needs a name, valid work pool \n# and a reference to where the flow code exists\n\nif __name__ == \"__main__\":\nbuild_names.deploy(\n name=\"deploy-build-names\",\n work_pool_name=\"tutorial-process-pool\"\n image=\"my_registry/my_image:my_image_tag\",\n)\n
Now let's grab our deployment_id
from this deployment, and embed it in our automation. There are many ways to obtain the deployment_id
, but the CLI is a quick way to see all of your deployment ids.
Find deployment_id from the CLI
The quickest way to see the ID's associated with your deployment would be running prefect deployment ls
in an authenticated command prompt, and you will be able to see the id's associated with all of your deployments
prefect deployment ls\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 ID \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Extract islands/island-schedule \u2502 d9d7289c-7a41-436d-8313-80a044e61532 \u2502\n\u2502 build-names/deploy-build-names \u2502 8b10a65e-89ef-4c19-9065-eec5c50242f4 \u2502\n\u2502 ride-duration-prediction-backfill/backfill-deployment \u2502 76dc6581-1773-45c5-a291-7f864d064c57 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
We can create an automation via a POST call, where we can programmatically create the automation. Ensure you have your api_key
, account_id
, and workspace_id
. def create_event_driven_automation():\n api_url = f\"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}/automations/\"\n data = {\n \"name\": \"Event Driven Redeploy\",\n \"description\": \"Programmatically created an automation to redeploy a flow based on an event\",\n \"enabled\": \"true\",\n \"trigger\": {\n \"after\": [\n \"string\"\n ],\n \"expect\": [\n \"prefect.flow-run.Running\"\n ],\n \"for_each\": [\n \"prefect.resource.id\"\n ],\n \"posture\": \"Proactive\",\n \"threshold\": 30,\n \"within\": 0\n },\n \"actions\": [\n {\n \"type\": \"run-deployment\",\n \"source\": \"selected\",\n \"deployment_id\": \"YOUR-DEPLOYMENT-ID\", \n \"parameters\": \"10\"\n }\n ],\n \"owner_resource\": \"string\"\n }\n\n headers = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n response = requests.post(api_url, headers=headers, json=data)\n\n print(response.json())\n return response.json()\n
After running this function, you will see within the UI the changes that came from the post request. Keep in mind, the context will be \"custom\" on UI.
Let's run the underlying flow and see the deployment get kicked off after 30 seconds elapsed. This will result in a new flow run of build_names
, and we are able to see this new deployment get initiated with the custom parameters we outlined above.
In a few quick changes, we are able to programmatically create an automation that deploys workflows with custom parameters.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-an-underlying-yaml-file","title":"Using an underlying .yaml file","text":"We can extend this idea one step further by utilizing our own .yaml version of the automation, and registering that file with our UI. This simplifies the requirements of the automation by declaring it in its own .yaml file, and then registering that .yaml with the API.
Let's first start with creating the .yaml file that will house the automation requirements. Here is how it would look like:
name: Cancel long running flows\ndescription: Cancel any flow run after an hour of execution\ntrigger:\n match:\n \"prefect.resource.id\": \"prefect.flow-run.*\"\n match_related: {}\n after:\n - \"prefect.flow-run.Failed\"\n expect:\n - \"prefect.flow-run.*\"\n for_each:\n - \"prefect.resource.id\"\n posture: \"Proactive\"\n threshold: 1\n within: 30\nactions:\n - type: \"cancel-flow-run\"\n
We can then have a helper function that applies this YAML file with the REST API function.
import yaml\n\nfrom utils import post, put\n\ndef create_or_update_automation(path: str = \"automation.yaml\"):\n \"\"\"Create or update an automation from a local YAML file\"\"\"\n # Load the definition\n with open(path, \"r\") as fh:\n payload = yaml.safe_load(fh)\n\n # Find existing automations by name\n automations = post(\"/automations/filter\")\n existing_automation = [a[\"id\"] for a in automations if a[\"name\"] == payload[\"name\"]]\n automation_exists = len(existing_automation) > 0\n\n # Create or update the automation\n if automation_exists:\n print(f\"Automation '{payload['name']}' already exists and will be updated\")\n put(f\"/automations/{existing_automation[0]}\", payload=payload)\n else:\n print(f\"Creating automation '{payload['name']}'\")\n post(\"/automations/\", payload=payload)\n\nif __name__ == \"__main__\":\n create_or_update_automation()\n
You can find a complete repo with these APIs examples in this GitHub repository.
In this example, we managed to create the automation by registering the .yaml file with a helper function. This offers another experience when trying to create an automation.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#custom-webhook-kicking-off-an-automation","title":"Custom webhook kicking off an automation","text":"We can use webhooks to expose the events API which allows us to extend the functionality of deployments and ways to respond to changes in our workflow through a few easy steps.
By exposing a webhook endpoint, we can kick off workflows that can trigger deployments - all from a simple event created from an HTTP request.
Lets create a webhook within the UI. Here is the webhook we can use to create these dynamic events.
{\n \"event\": \"model-update\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.{{ body.model_id}}\",\n \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n \"run_count\": \"{{body.run_count}}\"\n }\n}\n
From a simple input, we can easily create an exposed webhook endpoint. Each webhook will correspond to a custom event created, where you can react to it downstream with a separate deployment or automation.
For example, we can create a curl request that sends the endpoint information such as a run count for our deployment.
curl -X POST https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA -d \"model_id=adhoc\" -d \"run_count=10\" -d \"friendly_name=test-user-input\"\n
From here, we can make a webhook that is connected to pulling in parameters on the curl command, and then it kicks off a deployment that uses these pulled parameters. Let us go into the event feed, and we can automate straight from this event.
This allows us to create automations that respond to these webhook events. From a few clicks in the UI, we are able to associate an external process with the Prefect events API, that can enable us to trigger downstream deployments.
In the next section, we will explore event triggers that automate the kickoff of a deployment run.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-triggers","title":"Using triggers","text":"Let's take this idea one step further, by creating a deployment that will be triggered when a flow run takes longer than expected. We can take advantage of Prefect's Marvin library that will use an LLM to classify our data. Marvin is great at embedding data science and data analysis applications within your pre-existing data engineering workflows. In this case, we can use Marvin'd AI functions to help make our dataset more information rich.
Install Marvin with pip install marvin
and set you OpenAI API key as shown here
We can add a trigger to run a deployment in response to a specific event.
Let's create an example with Marvin's AI functions. We will take in a pandas DataFrame and use the AI function to analyze it.
Here is an example of pulling in that data and classifying using Marvin AI. We can help create dummy data based on classifications we have already created.
from marvin import ai_classifier\nfrom enum import Enum\nimport pandas as pd\n\n@ai_fn\ndef generate_synthetic_user_data(build_of_names: list[dict]) -> list:\n \"\"\"\n Generate additional data for userID (numerical values with 6 digits), location, and timestamp as separate columns and append the data onto 'build_of_names'. Make userID the first column\n \"\"\"\n\n@flow\ndef create_fake_user_dataset(df):\n artifact_df = generate_synthetic_user_data(df)\n print(artifact_df)\n\n create_table_artifact(\n key=\"fake-user-data\",\n table=artifact_df,\n description= \"Dataset that is comprised of a mix of autogenerated data based on user data\"\n )\n\nif __name__ == \"__main__\":\n create_fake_artifact() \n
Let's kick off a deployment with a trigger defined in a prefect.yaml
file. Let's specify what we want to trigger when the event stays in a running state for longer than 30 seconds.
# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: automations-guide\nprefect-version: 2.13.1\n\n# build section allows you to manage and build docker images\nbuild: null\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n directory: /Users/src/prefect/Playground/marvin-extension\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: create-fake-user-dataset\n triggers:\n - enabled: true\n match:\n prefect.resource.id: \"prefect.flow-run.*\"\n after: \"prefect.flow-run.Running\",\n expect: [],\n for_each: [\"prefect.resource.id\"],\n parameters:\n param_1: 10\n posture: \"Proactive\"\n version: null\n tags: []\n description: null\n entrypoint: marvin-extension.py:create_fake_user_dataset\n parameters: {}\n work_pool:\n name: tutorial-process-pool\n work_queue_name: null\n job_variables: {}\n schedule: null\n
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#next-steps","title":"Next steps","text":"You've seen how to create automations via the UI, REST API, and a triggers defined in a prefect.yaml
deployment definition.
To learn more about events that can act as automation triggers, see the events docs. To learn more about event webhooks in particular, see the webhooks guide.
","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/big-data/","title":"Big Data with Prefect","text":"In this guide you'll learn tips for working with large amounts of data in Prefect.
Big data doesn't have a widely accepted, precise definition. In this guide, we'll discuss methods to reduce the processing time or memory utilization of Prefect workflows, without editing your Python code.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#optimizing-your-python-code-with-prefect-for-big-data","title":"Optimizing your Python code with Prefect for big data","text":"Depending upon your needs, you may want to optimize your Python code for speed, memory, compute, or disk space.
Prefect provides several options that we'll explore in this guide:
quote
to save time running your code.When a task is called from a flow, each argument is introspected by Prefect, by default. To speed up your flow runs, you can disable this behavior for a task by wrapping the argument using quote
.
To demonstrate, let's use a basic example that extracts and transforms some New York taxi data.
et_quote.pyfrom prefect import task, flow\nfrom prefect.utilities.annotations import quote\nimport pandas as pd\n\n\n@task\ndef extract(url: str):\n \"\"\"Extract data\"\"\"\n df_raw = pd.read_parquet(url)\n print(df_raw.info())\n return df_raw\n\n\n@task\ndef transform(df: pd.DataFrame):\n \"\"\"Basic transformation\"\"\"\n df[\"tip_fraction\"] = df[\"tip_amount\"] / df[\"total_amount\"]\n print(df.info())\n return df\n\n\n@flow(log_prints=True)\ndef et(url: str):\n \"\"\"ET pipeline\"\"\"\n df_raw = extract(url)\n df = transform(quote(df_raw))\n\n\nif __name__ == \"__main__\":\n url = \"https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-09.parquet\"\n et(url)\n
Introspection can take significant time when the object being passed is a large collection, such as dictionary or DataFrame, where each element needs to be visited. Note that using quote
reduces execution time at the expense of disabling task dependency tracking for the wrapped object.
By default, the results of task runs are stored in memory in your execution environment. This behavior makes flow runs fast for small data, but can be problematic for large data. You can save memory by writing results to disk. In production, you'll generally want to write results to a cloud provider storage such as AWS S3. Prefect lets you to use a storage block from a Prefect cloud integration library such as prefect-aws to save your configuration information. Learn more about blocks here.
Install the relevant library, register the block with the server, and create your storage block. Then you can reference the block in your flow like this:
...\nfrom prefect_aws.s3 import S3Bucket\n\nmy_s3_block = S3Bucket.load(\"MY_BLOCK_NAME\")\n\n...\n@task(result_storage=my_s3_block)\n
Now the result of the task will be written to S3, rather than stored in memory.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#save-data-to-disk-within-a-flow","title":"Save data to disk within a flow","text":"To save memory and time with big data, you don't need to pass results between tasks at all. Instead, you can write and read data to disk directly in your flow code. Prefect has integration libraries for each of the major cloud providers. Each library contains blocks with methods that make it convenient to read and write data to and from cloud object storage. The moving data guide has step-by-step examples for each cloud provider.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#cache-task-results","title":"Cache task results","text":"Caching allows you to avoid re-running tasks when doing so is unnecessary. Caching can save you time and compute. Note that caching requires task result persistence. Caching is discussed in detail in the tasks concept page.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#compress-results-written-to-disk","title":"Compress results written to disk","text":"If you're using Prefect's task result persistence, you can save disk space by compressing the results. You just need to specify the result type with compressed/
prefixed like this:
@task(result_serializer=\"compressed/json\")\n
Read about compressing results with Prefect for more details. The tradeoff of using compression is that it takes time to compress and decompress the data.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#use-a-task-runner-for-parallelizable-operations","title":"Use a task runner for parallelizable operations","text":"Prefect's task runners allow you to use the Dask and Ray Python libraries to run tasks in parallel and distributed across multiple machines. This can save you time and compute when operating on large data structures. See the guide to working with Dask and Ray Task Runners for details.
","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/ci-cd/","title":"CI/CD With Prefect","text":"Many organizations deploy Prefect workflows via their CI/CD process. Each organization has their own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect deployments. Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses GitHub Actions to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools.
Note that Prefect's primary ways for creating deployments, a .deploy
flow method or a prefect.yaml
configuration file, are both designed with building and pushing images to a Docker registry in mind.
In this example, you'll write a GitHub Actions workflow that will run each time you push to your repository's main
branch. This workflow will build and push a Docker image containing your flow code to Docker Hub, then deploy the flow to Prefect Cloud.
Your CI/CD process must be able to authenticate with Prefect in order to deploy flows.
Deploying flows securely and non-interactively in your CI/CD process can be accomplished by saving your PREFECT_API_URL
and PREFECT_API_KEY
as secrets in your repository's settings so they can be accessed in your CI/CD runner's environment without exposing them in any scripts or configuration files.
In this scenario, deploying flows involves building and pushing Docker images, so add DOCKER_USERNAME
and DOCKER_PASSWORD
as secrets to your repository as well.
You can create secrets for GitHub Actions in your repository under Settings -> Secrets and variables -> Actions -> New repository secret:
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#writing-a-github-workflow","title":"Writing a GitHub workflow","text":"To deploy your flow via GitHub Actions, you'll need a workflow YAML file. GitHub will look for workflow YAML files in the .github/workflows/
directory in the root of your repository. In their simplest form, GitHub workflow files are made up of triggers and jobs.
The on:
trigger is set to run the workflow each time a push occurs on the main
branch of the repository.
The deploy
job consists of four steps
:
Checkout
clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps.Log in to Docker Hub
authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. docker/login-action is an existing GitHub action maintained by Docker. with:
passes values into the Action, similar to passing parameters to a function.Setup Python
installs your selected version of Python.Prefect Deploy
installs the dependencies used in your flow, then deploys your flow. env:
makes the PREFECT_API_KEY
and PREFECT_API_URL
secrets from your repository available as environment variables during this step's execution.For reference, the examples below can be found on their respective branches of this repository.
.deploy\n.\n\u251c\u2500\u2500 .github/\n\u2502 \u2514\u2500\u2500 workflows/\n\u2502 \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u2514\u2500\u2500 requirements.txt\n
flow.py
from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n print(\"Hello!\")\n\nif __name__ == \"__main__\":\n hello.deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-work-pool\",\n image=\"my_registry/my_image:my_image_tag\",\n )\n
.github/workflows/deploy-prefect-flow.yaml
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n\n - name: Prefect Deploy\n env:\n PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n run: |\n pip install -r requirements.txt\n python flow.py\n
prefect.yaml\n.\n\u251c\u2500\u2500 .github/\n\u2502 \u2514\u2500\u2500 workflows/\n\u2502 \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 requirements.txt\n
flow.py
from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n print(\"Hello!\")\n
prefect.yaml
name: cicd-example\nprefect-version: 2.14.11\n\nbuild:\n - prefect_docker.deployments.steps.build_docker_image:\n id: build_image\n requires: prefect-docker>=0.3.1\n image_name: my_registry/my_image\n tag: my_image_tag\n dockerfile: auto\n\npush:\n - prefect_docker.deployments.steps.push_docker_image:\n requires: prefect-docker>=0.3.1\n image_name: \"{{ build_image.image_name }}\"\n tag: \"{{ build_image.tag }}\"\n\npull: null\n\ndeployments:\n - name: my-deployment\n entrypoint: flow.py:hello\n work_pool:\n name: my-work-pool\n work_queue_name: default\n job_variables:\n image: \"{{ build_image.image }}\"\n
.github/workflows/deploy-prefect-flow.yaml
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n\n - name: Prefect Deploy\n env:\n PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n run: |\n pip install -r requirements.txt\n prefect deploy -n my-deployment\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#running-a-github-workflow","title":"Running a GitHub workflow","text":"After pushing commits to your repository, GitHub will automatically trigger a run of your workflow. The status of running and completed workflows can be monitored from the Actions tab of your repository.
You can view the logs from each workflow step as they run. The Prefect Deploy
step will include output about your image build and push, and the creation/update of your deployment.
Successfully built image '***/cicd-example:latest'\n\nSuccessfully pushed image '***/cicd-example:latest'\n\nSuccessfully created/updated all deployments!\n\n Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 hello/my-deployment \u2502 applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#advanced-example","title":"Advanced example","text":"In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow:
This example repository demonstrates how each of these considerations can be addressed using a combination of Prefect's and GitHub's capabilities.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#deploying-to-multiple-workspaces","title":"Deploying to multiple workspaces","text":"Which deployment processes should run are automatically selected when changes are pushed depending on two conditions:
on:\n push:\n branches:\n - stg\n - main\n paths:\n - \"project_1/**\"\n
branches:
- which branch has changed. This will ultimately select which Prefect workspace a deployment is created or updated in. In this example, changes on the stg
branch will deploy flows to a staging workspace, and changes on the main
branch will deploy flows to a production workspace.paths:
- which project folders' files have changed. Since each project folder contains its own flows, dependencies, and prefect.yaml
, it represents a complete set of logic and configuration that can be deployed independently. Each project in this repository gets its own GitHub Actions workflow YAML file.The prefect.yaml
file in each project folder depends on environment variables that are dictated by the selected job in each CI/CD workflow, enabling external code storage for Prefect deployments that is clearly separated across projects and environments.
.\n \u251c\u2500\u2500 cicd-example-workspaces-prod # production bucket\n \u2502 \u251c\u2500\u2500 project_1\n \u2502 \u2514\u2500\u2500 project_2\n \u2514\u2500\u2500 cicd-example-workspaces-stg # staging bucket\n \u251c\u2500\u2500 project_1\n \u2514\u2500\u2500 project_2 \n
Since the deployments in this example use S3 for code storage, it's important that push steps place flow files in separate locations depending upon their respective environment and project so no deployment overwrites another deployment's files.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#caching-build-dependencies","title":"Caching build dependencies","text":"Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps.
The setup-python
action offers caching options so Python packages do not have to be downloaded on repeat workflow runs.
- name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: \"3.11\"\n cache: \"pip\"\n
Using cached prefect-2.16.1-py3-none-any.whl (2.9 MB)\nUsing cached prefect_aws-0.4.10-py3-none-any.whl (61 kB)\n
The build-push-action
for building Docker images also offers caching options for GitHub Actions. If you are not using GitHub, other remote cache backends are available as well.
- name: Build and push\n id: build-docker-image\n env:\n GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }}\n uses: docker/build-push-action@v5\n with:\n context: ${{ env.PROJECT_NAME }}/\n push: true\n tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg\n cache-from: type=gha\n cache-to: type=gha,mode=max\n
importing cache manifest from gha:***\nDONE 0.1s\n\n[internal] load build context\ntransferring context: 70B done\nDONE 0.0s\n\n[2/3] COPY requirements.txt requirements.txt\nCACHED\n\n[3/3] RUN pip install -r requirements.txt\nCACHED\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#prefect-github-actions","title":"Prefect GitHub Actions","text":"Prefect provides its own GitHub Actions for authentication and deployment creation. These actions can simplify deploying with CI/CD when using prefect.yaml
, especially in cases where a repository contains flows that are used in multiple deployments across multiple Prefect Cloud workspaces.
Here's an example of integrating these actions into the workflow we created above:
name: Deploy Prefect flow\n\non:\n push:\n branches:\n - main\n\njobs:\n deploy:\n name: Deploy\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout\n uses: actions/checkout@v4\n\n - name: Log in to Docker Hub\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_USERNAME }}\n password: ${{ secrets.DOCKER_PASSWORD }}\n\n - name: Setup Python\n uses: actions/setup-python@v5\n with:\n python-version: \"3.11\"\n\n - name: Prefect Auth\n uses: PrefectHQ/actions-prefect-auth@v1\n with:\n prefect-api-key: ${{ secrets.PREFECT_API_KEY }}\n prefect-workspace: ${{ secrets.PREFECT_WORKSPACE }}\n\n - name: Run Prefect Deploy\n uses: PrefectHQ/actions-prefect-deploy@v3\n with:\n deployment-names: my-deployment\n requirements-file-paths: requirements.txt\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#authenticating-to-other-docker-image-registries","title":"Authenticating to other Docker image registries","text":"The docker/login-action
GitHub Action supports pushing images to a wide variety of image registries.
For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the registry
key in the with:
part of the action and use an AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
as your username
and password
.
- name: Login to ECR\n uses: docker/login-action@v3\n with:\n registry: <aws-account-number>.dkr.ecr.<region>.amazonaws.com\n username: ${{ secrets.AWS_ACCESS_KEY_ID }}\n password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#other-resources","title":"Other resources","text":"Check out the Prefect Cloud Terraform provider if you're using Terraform to manage your infrastructure.
","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/creating-interactive-workflows/","title":"Creating Interactive Workflows","text":"Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI. Flows can also send and receive type-checked input at any time while running, without pausing or suspending. This guide will show you how to use these features to build interactive workflows.
A note on async Python syntax
Most of the example code in this section uses async Python functions and await
. However, as with other Prefect features, you can call these functions with or without await
.
You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. Such workflows are often called human-in-the-loop (HITL) systems.
What is human-in-the-loop interactivity used for?
Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of machine learning training and artificial intelligence workflows benefit from incorporating HITL design.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#waiting-for-input","title":"Waiting for input","text":"To receive input while paused or suspended use the wait_for_input
parameter in the pause_flow_run
or suspend_flow_run
functions. This parameter accepts one of the following:
int
or str
, or a built-in collection like List[int]
pydantic.BaseModel
subclassprefect.input.RunInput
When to use a RunModel
or BaseModel
instead of a built-in type
There are a few reasons to use a RunModel
or BaseModel
. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users will see in Prefect's UI when they click \"Resume\" on a flow run is named value
and has no help text to suggest what the field is. If you create a RunInput
or BaseModel
, you can change details like the field name, help text, and default value, and users will see those reflected in the \"Resume\" form.
The simplest way to pause or suspend and wait for input is to pass a built-in type:
from prefect import flow, pause_flow_run, get_run_logger\n\n@flow\ndef greet_user():\n logger = get_run_logger()\n\n user = pause_flow_run(wait_for_input=str)\n\n logger.info(f\"Hello, {user}!\")\n
In this example, the flow run will pause until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form.
What types can you pass for wait_for_input
?
When you pass a built-in type such as int
as an argument for the wait_for_input
parameter to pause_flow_run
or suspend_flow_run
, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use any type annotation that Pydantic accepts for model fields with these functions.
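For example, a collection annotation works the same way; here's a minimal sketch (the List[int] field is illustrative):
from typing import List\n\nfrom prefect import flow, get_run_logger, pause_flow_run\n\n@flow\ndef collect_numbers():\n    logger = get_run_logger()\n\n    # Prefect generates a one-field Pydantic model for List[int];\n    # the resumed value is a plain list of integers.\n    numbers = pause_flow_run(wait_for_input=List[int])\n\n    logger.info(f\"Received {numbers}!\")\n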
Instead of a built-in type, you can pass in a pydantic.BaseModel
class. This is useful if you already have a BaseModel
you want to use:
from prefect import flow, pause_flow_run, get_run_logger\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n name: str\n age: int\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user = await pause_flow_run(wait_for_input=User)\n\n logger.info(f\"Hello, {user.name}!\")\n
BaseModel
classes are upgraded to RunInput
classes automatically
When you pass a pydantic.BaseModel
class as the wait_for_input
argument to pause_flow_run
or suspend_flow_run
, Prefect automatically creates a RunInput
class with the same behavior as your BaseModel
and uses that instead.
RunInput
classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference!
Finally, for advanced use cases like overriding how Prefect stores flow run inputs, you can create a RunInput
class:
from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n # Imagine overridden methods here!\n def override_something(self, *args, **kwargs):\n super().override_something(*args, **kwargs)\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user = await pause_flow_run(wait_for_input=UserInput)\n\n logger.info(f\"Hello, {user.name}!\")\n
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-initial-data","title":"Providing initial data","text":"You can set default values for fields in your model by using the with_initial_data
method. This is useful when you want to provide default values for the fields in your own RunInput
class.
Expanding on the example above, you could make the name
field default to \"anonymous\":
from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n\n user_input = await pause_flow_run(\n wait_for_input=UserInput.with_initial_data(name=\"anonymous\")\n )\n\n if user_input.name == \"anonymous\":\n logger.info(\"Hello, stranger!\")\n else:\n logger.info(f\"Hello, {user_input.name}!\")\n
When a user sees the form for this input, the name field will contain \"anonymous\" as the default.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-a-description-with-runtime-data","title":"Providing a description with runtime data","text":"You can provide a dynamic, markdown description that will appear in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. Building on the example above:
from datetime import datetime\nfrom prefect import flow, pause_flow_run, get_run_logger\nfrom prefect.input import RunInput\n\n\nclass UserInput(RunInput):\n name: str\n age: int\n\n\n@flow\nasync def greet_user():\n logger = get_run_logger()\n current_date = datetime.now().strftime(\"%B %d, %Y\")\n\n description_md = f\"\"\"\n**Welcome to the User Greeting Flow!**\nToday's Date: {current_date}\n\nPlease enter your details below:\n- **Name**: What should we call you?\n- **Age**: Just a number, nothing more.\n\"\"\"\n\n user_input = await pause_flow_run(\n wait_for_input=UserInput.with_initial_data(\n description=description_md, name=\"anonymous\"\n )\n )\n\n if user_input.name == \"anonymous\":\n logger.info(\"Hello, stranger!\")\n else:\n logger.info(f\"Hello, {user_input.name}!\")\n
When a user sees the form for this input, the given markdown will appear above the input fields.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#handling-custom-validation","title":"Handling custom validation","text":"Prefect uses the fields and type hints on your RunInput
or BaseModel
class to validate the general structure of input your flow receives, but you might require more complex validation. If you do, you can use Pydantic validators.
Custom validation runs after the flow resumes
Prefect transforms the type annotations in your RunInput
or BaseModel
class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running Python logic defined in your RunInput
class. Because of this, validation happens after the flow resumes, so you'll want to handle it explicitly in your flow. Continue reading for an example best practice.
The following is an example RunInput
class that uses a custom field validator:
from typing import Literal\n\nimport pydantic\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n color: Literal[\"red\", \"green\", \"black\"]\n\n @pydantic.validator(\"color\")\n def validate_color(cls, value, values, **kwargs):\n if value == \"green\" and values[\"size\"] == \"small\":\n raise ValueError(\n \"Green is only in-stock for medium, large, and XL sizes.\"\n )\n\n return value\n
In the example, we use Pydantic's validator
decorator to define a custom validation method for the color
field. We can use it in a flow like this:
from typing import Literal\n\nimport pydantic\nfrom prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n color: Literal[\"red\", \"green\", \"black\"]\n\n @pydantic.validator(\"color\")\n def validate_color(cls, value, values, **kwargs):\n if value == \"green\" and values[\"size\"] == \"small\":\n raise ValueError(\n \"Green is only in-stock for medium, large, and XL sizes.\"\n )\n\n return value\n\n\n@flow\ndef get_shirt_order():\n shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n
If a user chooses any size and color combination other than small
and green
, the flow run will resume successfully. However, if the user chooses size small
and color green
, the flow run will resume, and pause_flow_run
will raise a ValidationError
exception. This will cause the flow run to fail and log the error.
However, what if you don't want the flow run to fail? One way to handle this case is to use a while
loop and pause again if the ValidationError
exception is raised:
from typing import Literal\n\nimport pydantic\nfrom prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n color: Literal[\"red\", \"green\", \"black\"]\n\n @pydantic.validator(\"color\")\n def validate_color(cls, value, values, **kwargs):\n if value == \"green\" and values[\"size\"] == \"small\":\n raise ValueError(\n \"Green is only in-stock for medium, large, and XL sizes.\"\n )\n\n return value\n\n\n@flow\ndef get_shirt_order():\n logger = get_run_logger()\n shirt_order = None\n\n while shirt_order is None:\n try:\n shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n except pydantic.ValidationError as exc:\n logger.error(f\"Invalid size and color combination: {exc}\")\n\n logger.info(\n f\"Shirt order: {shirt_order.size}, {shirt_order.color}\"\n )\n
This code will cause the flow run to continually pause until the user enters a valid size and color combination.
As an additional step, you may want to use an automation or notification to alert the user to the error.
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-and-receiving-input-at-runtime","title":"Sending and receiving input at runtime","text":"Use the send_input
and receive_input
functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input.
Why would you send or receive input without pausing or suspending?
You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For instance, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another use is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread.
The most important parameter to the send_input
and receive_input
functions is run_input
, which should be one of the following:
int
or str
pydantic.BaseModel
classprefect.input.RunInput
classWhen to use a BaseModel
or RunInput
instead of a built-in type
Most built-in types and collections of built-in types should work with send_input
and receive_input
, but there is a caveat with nested collection types, such as lists of tuples, e.g. List[Tuple[str, float]])
. In this case, validation may happen after your flow receives the data, so calling receive_input
may raise a ValidationError
. You can plan to catch this exception, but also, consider placing the field in an explicit BaseModel
or RunInput
so that your flow only receives exact type matches.
Let's look at some examples! We'll check out receive_input
first, followed by send_input
, and then we'll see the two functions working together.
The following flow uses receive_input
to continually receive names and print a personalized greeting for each name it receives:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(str, timeout=None):\n # Prints \"Hello, andrew!\" if another flow sent \"andrew\"\n print(f\"Hello, {name_input}!\")\n
When you pass a type such as str
into receive_input
, Prefect creates a RunInput
class to manage your input automatically. When a flow sends input of this type, Prefect uses the RunInput
class to validate the input. If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable name_input
would contain the string value.
If, instead, you pass a BaseModel
, Prefect upgrades your BaseModel
to a RunInput
class, and the variable your flow sees \u2014 in this case, name_input
\u2014 is a RunInput
instance that behaves like a BaseModel
. Of course, if you pass in a RunInput
class, no upgrade is needed, and you'll get a RunInput
instance.
If you prefer to keep things simple and pass types such as str
into receive_input
, you can do so. If you need access to the generated RunInput
that contains the received value, pass with_metadata=True
to receive_input
:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(\n str,\n timeout=None,\n with_metadata=True\n ):\n # Input will always be in the field \"value\" on this object.\n print(f\"Hello, {name_input.value}!\")\n
Why would you need to use with_metadata=True
?
The primary uses of accessing the RunInput
object for a receive input are to respond to the sender with the RunInput.respond()
function or to access the unique key for an input. Later in this guide, we'll discuss how and why you might use these features.
Notice that we are now printing name_input.value
. When Prefect generates a RunInput
for you from a built-in type, the RunInput
class has a single field, value
, that uses a type annotation matching the type you specified. So if you call receive_input
like this: receive_input(str, with_metadata=True)
, that's equivalent to manually creating the following RunInput
class and receive_input
call:
from prefect import flow\nfrom prefect.input.run_input import RunInput\n\nclass GreeterInput(RunInput):\n value: str\n\n@flow\nasync def greeter_flow():\n async for name_input in receive_input(GreeterInput, timeout=None):\n print(f\"Hello, {name_input.value}!\")\n
The type used in receive_input
and send_input
must match
For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving GreeterInput
, the sender must send GreeterInput
. If the receiver is receiving GreeterInput
and the sender sends str
input that Prefect automatically upgrades to a RunInput
class, the types won't match, so the receiving flow run won't receive the input. However, the input will be waiting if the flow ever calls receive_input(str)
!
By default, each time you call receive_input
, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator will keep track of your current position as you iterate over it, or you can call next()
to explicitly get the next input. If you're using the iterator in a loop, you should probably assign it to a variable:
from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n client = get_client()\n\n # Assigning the `receive_input` iterator to a variable\n # outside of the `while True` loop allows us to continue\n # iterating over inputs in subsequent passes through the\n # while loop without losing our position.\n receiver = receive_input(\n str,\n with_metadata=True,\n timeout=None,\n poll_interval=0.1\n )\n\n while True:\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n\n # Saving the iterator outside of the while loop and\n # calling next() on each iteration of the loop ensures\n # that we're always getting the newest greeting. If we\n # had instead called `receive_input` here, we would\n # always get the _first_ greeting this flow received,\n # print it, and then ask for a new name.\n greeting = await receiver.next()\n print(greeting)\n
So, an iterator helps to keep track of the inputs your flow has already received. But what if you want your flow to suspend and then resume later, picking up where it left off? In that case, you will need to save the keys of the inputs you've seen so that the flow can read them back out when it resumes. You might use a Block, such as a JSON Block
.
The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure:
from prefect import flow, get_run_logger, suspend_flow_run\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.input.run_input import receive_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n logger = get_run_logger()\n run_context = get_run_context()\n assert run_context.flow_run, \"Could not see my flow run ID\"\n\n block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n try:\n seen_keys_block = await JSON.load(block_name)\n except ValueError:\n seen_keys_block = JSON(\n value=[],\n )\n\n try:\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=30,\n exclude_keys=seen_keys_block.value\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n\n seen_keys_block.value.append(name_input.metadata.key)\n await seen_keys_block.save(\n name=block_name,\n overwrite=True\n )\n except TimeoutError:\n logger.info(\"Suspending greeter after 30 seconds of idle time\")\n await suspend_flow_run(timeout=10000)\n
As this flow processes name input, it adds the key of the flow run input to the seen_keys_block
. When the flow later suspends and then resumes, it reads the keys it has already seen out of the JSON Block and passes them as the exclude_keys
parameter to receive_input
.
When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the respond
method on the RunInput
instance the flow received. There are a couple of requirements:
BaseModel
or RunInput
, or use with_metadata=True
The respond
method is equivalent to calling send_input(..., flow_run_id=sending_flow_run.id)
, but with respond
, your flow doesn't need to know the sending flow run's ID.
Now that we know about respond
, let's make our greeter_flow
respond to name inputs instead of printing them:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter():\n async for name_input in receive_input(\n str,\n with_metadata=True,\n timeout=None\n ):\n await name_input.respond(f\"Hello, {name_input.value}!\")\n
Cool! There's one problem left: this flow runs forever! We need a way to signal that it should exit. Let's keep things simple and teach it to look for a special string:
from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=None\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n
With a greeter
flow in place, we're ready to create the flow that sends greeter
names!
You can send input to a flow with the send_input
function. This works similarly to receive_input
and, like that function, accepts the same run_input
argument, which can be a built-in type such as str
, or else a BaseModel
or RunInput
subclass.
When can you send input to a flow run?
You can send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send input. If you send a flow input before it is receiving, it will see your input when it calls receive_input
(as long as the types in the send_input
and receive_input
calls match!)
Next, we'll create a sender
flow that starts a greeter
flow run and then enters a loop, continuously getting input from the terminal and sending it to the greeter flow:
from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n receiver = receive_input(str, timeout=None, poll_interval=0.1)\n client = get_client()\n\n while True:\n flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n if not flow_run.state or not flow_run.state.is_running():\n continue\n\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n greeting = await receiver.next()\n print(greeting)\n
There's more going on here than in greeter
, so let's take a closer look at the pieces.
First, we use run_deployment
to start a greeter
flow run. This means we must have a worker or flow.serve()
running in a separate process. That process will begin running greeter
while sender
continues to execute. Calling run_deployment(..., timeout=0)
ensures that sender
won't wait for the greeter
flow run to complete, because it's running a loop and will only exit when we send EXIT_SIGNAL
.
Next, we capture the iterator returned by receive_input
as receiver
. This flow works by entering a loop, and on each iteration of the loop, the flow asks for terminal input, sends that to the greeter
flow, and then runs receiver.next()
to wait until it receives the response from greeter
.
Next, we let the terminal user who ran this flow exit by entering the string q
or quit
. When that happens, we send the greeter
flow an exit signal so it will shut down too.
Finally, we send the new name to greeter
. We know that greeter
is going to send back a greeting as a string, so we immediately wait for new string input. When we receive the greeting, we print it and continue the loop that gets terminal input.
Finally, let's see a complete example of using send_input
and receive_input
. Here is what the greeter
and sender
flows look like together:
import asyncio\nimport sys\nfrom prefect import flow, get_client\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n run_context = get_run_context()\n assert run_context.flow_run, \"Could not see my flow run ID\"\n\n block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n try:\n seen_keys_block = await JSON.load(block_name)\n except ValueError:\n seen_keys_block = JSON(\n value=[],\n )\n\n async for name_input in receive_input(\n str,\n with_metadata=True,\n poll_interval=0.1,\n timeout=None\n ):\n if name_input.value == EXIT_SIGNAL:\n print(\"Goodbye!\")\n return\n await name_input.respond(f\"Hello, {name_input.value}!\")\n\n seen_keys_block.value.append(name_input.metadata.key)\n await seen_keys_block.save(\n name=block_name,\n overwrite=True\n )\n\n\n@flow\nasync def sender():\n greeter_flow_run = await run_deployment(\n \"greeter/send-receive\", timeout=0, as_subflow=False\n )\n receiver = receive_input(str, timeout=None, poll_interval=0.1)\n client = get_client()\n\n while True:\n flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n if not flow_run.state or not flow_run.state.is_running():\n continue\n\n name = input(\"What is your name? \")\n if not name:\n continue\n\n if name == \"q\" or name == \"quit\":\n await send_input(\n EXIT_SIGNAL,\n flow_run_id=greeter_flow_run.id\n )\n print(\"Goodbye!\")\n break\n\n await send_input(name, flow_run_id=greeter_flow_run.id)\n greeting = await receiver.next()\n print(greeting)\n\n\nif __name__ == \"__main__\":\n if sys.argv[1] == \"greeter\":\n asyncio.run(greeter.serve(name=\"send-receive\"))\n elif sys.argv[1] == \"sender\":\n asyncio.run(sender())\n
To run the example, you'll need a Python environment with Prefect installed, pointed at either an open-source Prefect server instance or Prefect Cloud.
With your environment set up, start a flow runner in one terminal with the following command:
python my_file_name.py greeter\n
For example, with Prefect Cloud, you should see output like this:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Your flow 'greeter' is being served and polling for scheduled runs! \u2502\n\u2502 \u2502\n\u2502 To trigger a run for this flow, use the following command: \u2502\n\u2502 \u2502\n\u2502 $ prefect deployment run 'greeter/send-receive' \u2502\n\u2502 \u2502\n\u2502 You can also run your flow via the Prefect UI: \u2502\n\u2502 https://app.prefect.cloud/account/...(a URL for your account) \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n
Then start the sender process in another terminal:
python my_file_name.py sender\n
You should see output like this:
11:38:41.800 | INFO | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender'\n11:38:41.802 | INFO | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/...\nWhat is your name?\n
Type a name and press the enter key to see a greeting, and you'll see sending and receiving in action:
What is your name? andrew\nHello, andrew!\n
","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray Task Runners","text":"Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.
The default task runner is the ConcurrentTaskRunner
.
Use .submit
to run your tasks asynchronously
To run tasks asynchronously use the .submit
method when you call them. If you call a task as you would normally in Python code it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner
, DaskTaskRunner
, or RayTaskRunner
.
Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.
DaskTaskRunner
runs tasks requiring parallel execution using dask.distributed
.RayTaskRunner
runs tasks requiring parallel execution using Ray.These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.
Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.
To show you how they work, let's start small.
Remote storage
We recommend configuring remote file storage for task execution with DaskTaskRunner
or RayTaskRunner
. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.
Let's start with the SequentialTaskRunner
. This task runner runs all tasks synchronously and may be useful when used as a debugging tool in conjunction with async code.
Let's start with this simple flow. We import the SequentialTaskRunner
, specify a task_runner
on the flow, and call the tasks with .submit()
.
from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Save this as sequential_flow.py
and run it in a terminal. You'll see output similar to the following:
$ python sequential_flow.py\n16:51:17.967 | INFO | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n
If we take out the log messages and just look at the printed output of the tasks, you see they're executed in sequential order:
$ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner
.
To configure your flow to use the DaskTaskRunner
:
prefect-dask
collection is installed by running pip install prefect-dask
.DaskTaskRunner
from prefect_dask.task_runners
.task_runner=DaskTaskRunner
argument..submit
method when calling functions.This is the same flow as above, with a few minor changes to use DaskTaskRunner
where we previously configured SequentialTaskRunner
. Install prefect-dask
, made these changes, then save the updated code as dask_flow.py
.
from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Note that, because you're using DaskTaskRunner
in a script, you must use if __name__ == \"__main__\":
or you'll see warnings and errors.
Now run dask_flow.py
. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.
$ python dask_flow.py\n16:54:18.465 | INFO | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:54:19.881 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n
DaskTaskRunner
automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.
Notice what happens if you do not use the submit
method when calling tasks:
from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello(name)\n say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
$ python dask_flow.py\n\n16:57:34.534 | INFO | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:57:35.787 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n
The tasks are not submitted to the DaskTaskRunner
and are run sequentially.
To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner
where we previously configured DaskTaskRunner
.
To configure your flow to use the RayTaskRunner
:
prefect-ray
collection is installed by running pip install prefect-ray
.RayTaskRunner
from prefect_ray.task_runners
.task_runner=RayTaskRunner
argument.Ray environment limitations
While we're excited about parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:
pip
alone and will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda
. See the Ray documentation for instructions.See the Ray installation documentation for further compatibility information.
Save this code in ray_flow.py
.
from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
Now run ray_flow.py
RayTaskRunner
automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide the address as a parameter to run tasks in the instance. See Running tasks on Ray for details.
Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.
Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.
This example uses the same tasks as the previous examples, but on the parent flow greetings()
we use the default ConcurrentTaskRunner
. Then we call a ray_greetings()
subflow that uses the RayTaskRunner
to execute the same tasks in a Ray instance.
from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n for name in names:\n say_hello.submit(name)\n say_goodbye.submit(name)\n ray_greetings(names)\n\nif __name__ == \"__main__\":\n greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
If you save this as ray_subflow.py
and run it, you'll see that the flow greetings
runs as you'd expect for a concurrent flow, then flow ray-greetings
spins up a Ray instance to run the tasks again.
In the Deployments tutorial, we looked at serving a flow that enables scheduling or creating flow runs via the Prefect API.
With our Python script in hand, we can build a Docker image for our script, allowing us to serve our flow in various remote environments. We'll use Kubernetes in this guide, but you can use any Docker-compatible infrastructure.
In this guide we'll:
Note that in this guide we'll create a Dockerfile from scratch. Alternatively, Prefect makes it convenient to build a Docker image as part of deployment creation. You can even include environment variables and specify additional Python packages to install at runtime.
If creating a deployment with a prefect.yaml
file, the build step makes it easy to customize your Docker image and push it to the registry of your choice. See an example here.
Deployment creation with a Python script that includes flow.deploy
similarly allows you to customize your Docker image with keyword arguments as shown below.
...\n\nif __name__ == \"__main__\":\n hello_world.deploy(\n name=\"my-first-deployment\",\n work_pool_name=\"above-ground\",\n image='my_registry/hello_world:demo',\n job_variables={\"env\": { \"EXTRA_PIP_PACKAGES\": \"boto3\" } }\n )\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prerequisites","title":"Prerequisites","text":"To complete this guide, you'll need the following:
Docker installed and running on your machine, and a free Prefect Cloud account or a self-hosted Prefect server instance started with prefect server start
.First let's make a clean directory to work from, prefect-docker-guide
.
mkdir prefect-docker-guide\ncd prefect-docker-guide\n
In this directory, we'll create a sub-directory named flows
and put our flow script from the Deployments tutorial in it.
mkdir flows\ncd flows\ntouch prefect-docker-guide-flow.py\n
Here's the flow code for reference:
prefect-docker-guide-flow.pyimport httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.serve(name=\"prefect-docker-guide\")\n
The next file we'll add to the prefect-docker-guide
directory is a requirements.txt
. We'll include all dependencies required for our prefect-docker-guide-flow.py
script in the Docker image we'll build.
# ensure you run this line from the top level of the `prefect-docker-guide` directory\ntouch requirements.txt\n
Here's what we'll put in our requirements.txt
file:
prefect>=2.12.0\nhttpx\n
Next, we'll create a Dockerfile
that we'll use to create a Docker image that will also store the flow code.
touch Dockerfile\n
We'll add the following content to our Dockerfile
:
# We're using the latest version of Prefect with Python 3.10\nFROM prefecthq/prefect:2-python3.10\n\n# Add our requirements.txt file to the image and install dependencies\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\n\n# Add our flow code to the image\nCOPY flows /opt/prefect/flows\n\n# Run our flow script when the container starts\nCMD [\"python\", \"flows/prefect-docker-guide-flow.py\"]\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-a-docker-image","title":"Building a Docker image","text":"Now that we have a Dockerfile we can build our image by running:
docker build -t prefect-docker-guide-image .\n
We can check that our build worked by running a container from our new image.
Cloud: Our container will need an API URL and an API key to communicate with Prefect Cloud.
You can get an API key from the API Keys section of the user settings in the Prefect UI.
You can get your API URL by running prefect config view
and copying the PREFECT_API_URL
value.
We'll provide both these values to our container by passing them as environment variables with the -e
flag.
docker run -e PREFECT_API_URL=YOUR_PREFECT_API_URL -e PREFECT_API_KEY=YOUR_API_KEY prefect-docker-guide-image\n
After running the above command, the container should start up and serve the flow within the container!
Self-hosted: Our container will need an API URL and network access to communicate with the Prefect API.
For this guide, we'll assume the Prefect API is running on the same machine that we'll run our container on and the Prefect API was started with prefect server start
. If you're running a different setup, check out the Hosting a Prefect server guide for information on how to connect to your Prefect API instance.
To ensure that our flow container can communicate with the Prefect API, we'll set our PREFECT_API_URL
to http://host.docker.internal:4200/api
. If you're running Linux, you'll need to set your PREFECT_API_URL
to http://localhost:4200/api
and use the --network=\"host\"
option instead.
docker run --network=\"host\" -e PREFECT_API_URL=http://host.docker.internal:4200/api prefect-docker-guide-image\n
After running the above command, the container should start up and serve the flow within the container!
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-to-a-remote-environment","title":"Deploying to a remote environment","text":"Now that we have a Docker image with our flow code embedded, we can deploy it to a remote environment!
For this guide, we'll simulate a remote environment by using Kubernetes locally with Docker Desktop. You can use the instructions provided by Docker to set up Kubernetes locally.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#creating-a-kubernetes-deployment-manifest","title":"Creating a Kubernetes deployment manifest","text":"To ensure the process serving our flow is always running, we'll create a Kubernetes deployment. If our flow's container ever crashes, Kubernetes will automatically restart it, ensuring that we won't miss any scheduled runs.
First, we'll create a deployment-manifest.yaml
file in our prefect-docker-guide
directory:
touch deployment-manifest.yaml\n
And we'll add the following content to our deployment-manifest.yaml
file:
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: prefect-docker-guide\nspec:\n replicas: 1\n selector:\n matchLabels:\n flow: get-repo-info\n template:\n metadata:\n labels:\n flow: get-repo-info\n spec:\n containers:\n - name: flow-container\n image: prefect-docker-guide-image:latest\n env:\n - name: PREFECT_API_URL\n value: YOUR_PREFECT_API_URL\n - name: PREFECT_API_KEY\n value: YOUR_API_KEY\n # Never pull the image because we're using a local image\n imagePullPolicy: Never\n
Keep your API key secret
In the above manifest we are passing in the Prefect API URL and API key as environment variables. This approach is simple, but it is not secure. If you are deploying your flow to a remote cluster, you should use a Kubernetes secret to store your API key.
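A minimal sketch of the more secure pattern: create the secret first, for example with kubectl create secret generic prefect-api-key --from-literal=key=YOUR_API_KEY (the secret and key names here are illustrative), then reference it from the container spec instead of a literal value:
env:\n  - name: PREFECT_API_URL\n    value: YOUR_PREFECT_API_URL\n  - name: PREFECT_API_KEY\n    valueFrom:\n      secretKeyRef:\n        name: prefect-api-key\n        key: key\n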
deployment-manifest.yamlapiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: prefect-docker-guide\nspec:\n replicas: 1\n selector:\n matchLabels:\n flow: get-repo-info\n template:\n metadata:\n labels:\n flow: get-repo-info\n spec:\n containers:\n - name: flow-container\n image: prefect-docker-guide-image:latest\n env:\n - name: PREFECT_API_URL\n value: <http://host.docker.internal:4200/api>\n # Never pull the image because we're using a local image\n imagePullPolicy: Never\n
Linux users
If you're running Linux, you'll need to set your PREFECT_API_URL
to use the IP address of your machine instead of host.docker.internal
.
This manifest defines how our image will run when deployed in our Kubernetes cluster. Note that we will be running a single replica of our flow container. If you want to run multiple replicas of your flow container to keep up with an active schedule, or because your flow is resource-intensive, you can increase the replicas
value.
Now that we have a deployment manifest, we can deploy our flow to the cluster by running:
kubectl apply -f deployment-manifest.yaml\n
We can monitor the status of our Kubernetes deployment by running:
kubectl get deployments\n
Once the deployment has successfully started, we can check the logs of our flow container by running the following:
kubectl logs -l flow=get-repo-info\n
Now that we're serving our flow in our cluster, we can trigger a flow run by running:
prefect deployment run get-repo-info/prefect-docker-guide\n
If we navigate to the URL provided by the prefect deployment run
command, we can follow the flow run via the logs in the Prefect UI!
Every release of Prefect results in several new Docker images. These images are all named prefecthq/prefect and their tags identify their differences.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#image-tags","title":"Image tags","text":"When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.
For example, when release 2.11.5
is published:
prefect:2.11.5-python3.10
and prefect:2.11.5-python3.10-conda
.sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10
2.11.x
release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect:2.11-python3.10
.2.x.y
release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect:2-python3.10
2.x.y
release without specifying a Python version, we update 2-latest
to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.11.5-python3.10
.Choose image versions carefully
It's a good practice to use Docker images with specific Prefect versions in production.
Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.11
or prefecthq/prefect:2-latest
).
Standard Python images are based on the official Python slim
images, e.g. python:3.10-slim
.
Conda flavored images are based on continuumio/miniconda3
. Prefect is installed into a conda environment named prefect
.
If your flow relies on dependencies not found in the default prefecthq/prefect
images, you may want to build your own image. You can either base it off of one of the provided prefecthq/prefect
images, or build your own from scratch. See the Work pool deployment guide for discussion of how Prefect can help you build custom images with dependencies specified in a requirements.txt
file.
By default, Prefect work pools that use containers refer to the 2-latest
image. You can specify another image at work pool creation. The work pool image choice can be overridden in individual deployments.
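For example, a deployment created in Python can pin its own image. Here is a minimal sketch; the work pool name is hypothetical, and build=False assumes the image already exists with the flow code baked in:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from a pinned image!\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"pinned-image-deployment\",\n        work_pool_name=\"my-docker-pool\",  # hypothetical work pool\n        image=\"prefecthq/prefect:2.11.5-python3.10\",  # pin a specific version\n        build=False,  # use the existing image rather than building a new one\n    )\n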
prefecthq/prefect
image manually","text":"Here we provide an example Dockerfile
for building an image based on prefecthq/prefect:2-latest
, but with scikit-learn
installed.
FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#choosing-an-image-strategy","title":"Choosing an image strategy","text":"The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:
If your flow only makes use of tasks defined in the same file as the flow, or tasks that are part of prefect
itself, then you can rely on the default provided prefecthq/prefect
image.
If your flow requires a few extra dependencies found on PyPI, you can use the default prefecthq/prefect
image and set prefect.deployments.steps.pip_install_requirements:
in the pull
step to install these dependencies at runtime.
If the installation process requires compiling code or other expensive operations, you may be better off building a custom image instead.
If your flow (or flows) require extra dependencies or shared libraries, we recommend building a shared custom image with all the extra dependencies and shared task definitions you need. Your flows can then all rely on the same image, but have their source stored externally. This option can ease development, as the shared image only needs to be rebuilt when dependencies change, not when the flow source changes.
We only served a single flow in this guide, but you can extend this setup to serve multiple flows in a single Docker image by updating your Python script to using flow.to_deployment
and serve
to serve multiple flows or the same flow with different configuration.
To learn more about deploying flows, check out the Deployments concept doc!
For advanced infrastructure requirements, such as executing each flow run within its own dedicated Docker container, learn more in the Work pool deployment guide.
","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/global-concurrency-limits/","title":"Global Concurrency Limits and Rate Limits","text":"Global concurrency limits allow you to manage execution efficiently, controlling how many tasks, flows, or other operations can run simultaneously. They are ideal when optimizing resource usage, preventing bottlenecks, and customizing task execution are priorities.
Clarification on use of the term 'tasks'
In the context of global concurrency and rate limits, \"tasks\" refers not specifically to Prefect tasks, but to concurrent units of work in general, such as those managed by an event loop or TaskGroup
in asynchronous programming. These general \"tasks\" could include Prefect tasks when they are part of an asynchronous execution environment.
Rate Limits ensure system stability by governing the frequency of requests or operations. They are suitable for preventing overuse, ensuring fairness, and handling errors gracefully.
When selecting between Concurrency and Rate Limits, consider your primary goal. Choose Concurrency Limits for resource optimization and task management. Choose Rate Limits to maintain system stability and fair access to services.
The core difference between a rate limit and a concurrency limit is the way in which slots are released. With a rate limit, slots are released at a controlled rate, controlled by slot_decay_per_second
whereas with a concurrency limit, slots are released when the concurrency manager is exited.
You can create, read, edit, and delete concurrency limits via the Prefect UI.
When creating a concurrency limit, you can specify the following parameters:
/
, %
, &
, >
, <
, are not allowed.rate_limit
function.Global concurrency limits can be in either an active
or inactive
state.
Global concurrency limits can be configured with slot decay. This is used when the concurrency limit is used as a rate limit, and it governs the pace at which slots are released or become available for reuse after being occupied. These slots effectively represent the concurrency capacity within a specific concurrency limit. The concept is best understood as the rate at which these slots \"decay\" or refresh.
To configure slot decay, you can set the slot_decay_per_second
parameter when defining or adjusting a concurrency limit.
For practical use, consider the following:
Higher values: Setting slot_decay_per_second
to a higher value, such as 5.0, results in slots becoming available relatively quickly. In this scenario, a slot that was occupied by a task will free up after just 0.2
(1.0 / 5.0
) seconds.
Lower values: Conversely, setting slot_decay_per_second
to a lower value, like 0.1, causes slots to become available more slowly. In this scenario it would take 10
(1.0 / 0.1
) seconds for a slot to become available again after occupancy.
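As a quick sanity check, the wait for a single slot is simply the reciprocal of the decay rate. This throwaway helper (not part of Prefect) makes the relationship concrete:
def seconds_until_slot_available(slot_decay_per_second: float) -> float:\n    # Approximate time for one occupied slot to refresh\n    return 1.0 / slot_decay_per_second\n\nprint(seconds_until_slot_available(5.0))  # 0.2\nprint(seconds_until_slot_available(0.1))  # 10.0\n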
Slot decay provides fine-grained control over the availability of slots, enabling you to optimize the rate of your workflow based on your specific requirements.
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-the-concurrency-context-manager","title":"Using theconcurrency
context manager","text":"The concurrency
context manager allows control over the maximum number of concurrent operations. You can select either the synchronous (sync
) or asynchronous (async
) version, depending on your use case. Here's how to use it:
Concurrency limits are implicitly created
When using the concurrency
context manager, the concurrency limit you use will be created, in an inactive state, if it does not already exist.
Sync
from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\n\n\n@task\ndef process_data(x, y):\n with concurrency(\"database\", occupy=1):\n return x + y\n\n\n@flow\ndef my_flow():\n for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n my_flow()\n
Async
import asyncio\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import concurrency\n\n\n@task\nasync def process_data(x, y):\n async with concurrency(\"database\", occupy=1):\n return x + y\n\n\n@flow\nasync def my_flow():\n for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n await process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n asyncio.run(my_flow())\n
prefect.concurrency.sync
module for sync usage and the prefect.concurrency.asyncio
module for async usage.process_data
task, taking x
and y
as input arguments. Inside this task, the concurrency context manager controls concurrency, using the database
concurrency limit and occupying one slot. If another task attempts to run with the same limit and no slots are available, that task will be blocked until a slot becomes available.my_flow
is defined. Within this flow, it iterates through a list of tuples, each containing pairs of x and y values. For each pair, the process_data
task is submitted with the corresponding x and y values for processing.rate_limit
","text":"The Rate Limit feature provides control over the frequency of requests or operations, ensuring responsible usage and system stability. Depending on your requirements, you can utilize rate_limit
to govern both synchronous (sync) and asynchronous (async) operations. Here's how to make the most of it:
Slot decay
When using the rate_limit
function the concurrency limit you use must have a slot decay configured.
Sync
from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef make_http_request():\n rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n@flow\ndef my_flow():\n for _ in range(10):\n make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n my_flow()\n
Async
import asyncio\n\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import rate_limit\n\n\n@task\nasync def make_http_request():\n await rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n@flow\nasync def my_flow():\n for _ in range(10):\n await make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n asyncio.run(my_flow())\n
rate_limit
function. Use the prefect.concurrency.sync
module for sync usage and the prefect.concurrency.asyncio
module for async usage.make_http_request
task. Inside this task, the rate_limit
function is used to ensure that the requests are made at a controlled pace.my_flow
is defined. Within this flow the make_http_request
task is submitted 10 times.concurrency
and rate_limit
outside of a flow","text":"concurrency
and rate_limit
can be used outside of a flow to control concurrency and rate limits for any operation.
import asyncio\n\nfrom prefect.concurrency.asyncio import rate_limit\n\n\nasync def main():\n for _ in range(10):\n await rate_limit(\"rate-limited-api\")\n print(\"Making an HTTP request...\")\n\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
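The concurrency context manager works outside of a flow as well; a minimal async sketch:
import asyncio\n\nfrom prefect.concurrency.asyncio import concurrency\n\n\nasync def main():\n    # Hold one slot on the 'database' limit while doing arbitrary work\n    async with concurrency(\"database\", occupy=1):\n        print(\"Doing work while holding one slot...\")\n        await asyncio.sleep(1)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n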
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#use-cases","title":"Use cases","text":"","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#throttling-task-submission","title":"Throttling task submission","text":"Throttling task submission to avoid overloading resources, to comply with external rate limits, or ensure a steady, controlled flow of work.
In this scenario the rate_limit
function is used to throttle the submission of tasks. The rate limit acts as a bottleneck, ensuring that tasks are submitted at a controlled rate, governed by the slot_decay_per_second
setting on the associated concurrency limit.
from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef my_task(i):\n    return i\n\n\n@flow\ndef my_flow():\n    for i in range(100):\n        # Each iteration waits for a slot on the 'slow-my-flow' limit\n        rate_limit(\"slow-my-flow\", occupy=1)\n        my_task.submit(i)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-database-connections","title":"Managing database connections","text":"Managing the maximum number of concurrent database connections to avoid exhausting database resources.
In this scenario we've set up a concurrency limit named database
and given it a maximum concurrency limit that matches the maximum number of database connections we want to allow. We then use the concurrency
context manager to control the number of database connections allowed at any one time.
from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\nimport psycopg2\n\n@task\ndef database_query(query):\n    # Request a single slot on the 'database' concurrency limit. This\n    # blocks when all of the database connections are in use, ensuring\n    # that we never exceed the maximum number of database connections.\n    with concurrency(\"database\", occupy=1):\n        connection = psycopg2.connect(\"<connection_string>\")\n        cursor = connection.cursor()\n        cursor.execute(query)\n        result = cursor.fetchall()\n        connection.close()\n        return result\n\n@flow\ndef my_flow():\n    queries = [\"SELECT * FROM table1\", \"SELECT * FROM table2\", \"SELECT * FROM table3\"]\n\n    for query in queries:\n        database_query.submit(query)\n\nif __name__ == \"__main__\":\n    my_flow()\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#parallel-data-processing","title":"Parallel data processing","text":"Limiting the maximum number of parallel processing tasks.
In this scenario we want to limit the number of process_data
tasks to five at any one time. We do this by using the concurrency
context manager to request five slots on the data-processing
concurrency limit. This will block until five slots are free and then submit five more tasks, ensuring that we never exceed the maximum number of parallel processing tasks.
import asyncio\nfrom prefect.concurrency.asyncio import concurrency\n\n\nasync def process_data(data):\n    print(f\"Processing: {data}\")\n    await asyncio.sleep(1)\n    return f\"Processed: {data}\"\n\n\nasync def main():\n    data_items = list(range(100))\n    processed_data = []\n\n    while data_items:\n        # Use the async context manager in async code; this blocks until\n        # five slots are free on the 'data-processing' limit\n        async with concurrency(\"data-processing\", occupy=5):\n            chunk = [data_items.pop() for _ in range(5)]\n            processed_data += await asyncio.gather(\n                *[process_data(item) for item in chunk]\n            )\n\n    print(processed_data)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/host/","title":"Hosting a Prefect server instance","text":"After you install Prefect you have a Python SDK client that can communicate with Prefect Cloud, the platform hosted by Prefect. You also have an API server instance backed by a database and a UI.
In this section you'll learn how to host your own Prefect server instance. If you would like to host a Prefect server instance on Kubernetes, check out the prefect-server Helm chart.
Spin up a local Prefect server UI by running the prefect server start
CLI command in the terminal:
prefect server start\n
Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.
Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#differences-between-a-self-hosted-prefect-server-instance-and-prefect-cloud","title":"Differences between a self-hosted Prefect server instance and Prefect Cloud","text":"
A self-hosted Prefect server instance and Prefect Cloud share a common set of features. Prefect Cloud includes the following additional features:
You can read more about Prefect Cloud in the Cloud section.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configuring-a-prefect-server-instance","title":"Configuring a Prefect server instance","text":"Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:
prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n
PREFECT_API_URL
required when running Prefect inside a container
You must set the API server address to use Prefect within a container, such as a Docker container.
You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint will be be at that address.
See Profiles & Configuration for more information on profiles and configurable Prefect settings.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#prefect-database","title":"Prefect database","text":"The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:
Currently Prefect supports the following databases:
pg_trgm
extension, so it must be installed and enabled.A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db
by default.
To reset your database, run the CLI command:
prefect server database reset -y\n
This command will clear all data and reapply the schema.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#database-settings","title":"Database settings","text":"Prefect provides several settings for configuring the database. Here are the default settings:
PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n
You can save a setting to your active Prefect profile with prefect config set
.
To connect Prefect to a PostgreSQL database, you can set the following environment variable:
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
The above environment variable assumes that:
postgres
yourTopSecretPassword
localhost
5432
prefect
To quickly start a PostgreSQL instance that can be used as your Prefect database, use the following command, which will start a Docker container running PostgreSQL:
docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n
The above command:
postgres
Docker image, which is compatible with Prefect.prefect-postgres
.prefect
with a user postgres
and yourTopSecretPassword
password.prefectdb
to provide persistence if you ever have to restart or rebuild that container.Then you'll want to run the command above to set your current Prefect Profile to the PostgreSQL database instance running in your Docker container.
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#confirming-your-postgresql-database-configuration","title":"Confirming your PostgreSQL database configuration","text":"Inspect your Prefect profile to confirm that the environment variable has been set properly:
prefect config view --show-sources\n
You should see output similar to the following:\n\nPREFECT_PROFILE='my_profile'\nPREFECT_API_DATABASE_CONNECTION_URL='********' (from profile)\nPREFECT_API_URL='http://127.0.0.1:4200/api' (from profile)\n
Start the Prefect server and it should begin to use your PostgreSQL database instance:
prefect server start\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#in-memory-database","title":"In-memory database","text":"One of the benefits of SQLite is in-memory database support.
To use an in-memory SQLite database, set the following environment variable:
prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n
Use SQLite database for testing only
SQLite is only supported by Prefect for testing purposes and is not compatible with multiprocessing.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#migrations","title":"Migrations","text":"Prefect uses Alembic to manage database migrations. Alembic is a database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.
To apply migrations to your database you can run the following commands:
To upgrade:
prefect server database upgrade -y\n
To downgrade:
prefect server database downgrade -y\n
You can use the -r
flag to specify a specific migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version you can run:
prefect server database downgrade -y -r -1\n
or to downgrade to a specific revision:
prefect server database downgrade -y -r d20618ce678e\n
To downgrade all migrations, use the base
revision.
See the contributing docs for information on how to create new database migrations.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#notifications","title":"Notifications","text":"When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.
Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs changes state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.
Prefect supports sending notifications via:
Notifications in Prefect Cloud
Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work pool status.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-notifications","title":"Configure notifications","text":"To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.
Notifications are structured just as you would describe them to someone. You can choose:
For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.
For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.
For example, to get a Slack message if a flow with a daily-etl
tag fails, the notification will read:
If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook
When the conditions of the notification are triggered, you\u2019ll receive a message:
The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.
On the Notifications page you can pause, edit, or delete any configured notification.
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/logs/","title":"Logging","text":"Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.
Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start
.
You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.
Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-overview","title":"Logging overview","text":"Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.
For example, say you created a simple flow in a file flow.py
. If you create a local flow run with python flow.py
, you'll see an example of the log messages created automatically by Prefect:
16:45:44.534 | INFO | prefect.engine - Created flow run 'gray-dingo' for flow\n'hello-flow'\n16:45:44.534 | INFO | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0'\nfor task 'hello-task'\nHello world!\n16:45:44.650 | INFO | Task run 'hello-task-54135dc1-0' - Finished in state\nCompleted(None)\n16:45:44.672 | INFO | Flow run 'gray-dingo' - Finished in state\nCompleted('All states completed.')\n
You can see logs for a flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.
These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.
Prefect supports the standard Python logging levels CRITICAL
, ERROR
, WARNING
, INFO
, and DEBUG
. By default, Prefect displays INFO
-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.
Prefect provides several settings for configuring logging level and loggers.
By default, Prefect displays INFO
-level and above logging records. You may change this level to DEBUG
and DEBUG
-level logs created by Prefect will be shown as well. You may need to change the log level used by loggers from other libraries to see their log records.
You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY]
, with [PATH]_[TO]_[KEY]
corresponding to the nested address of any setting.
For example, to change the default logging levels for Prefect to DEBUG
, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\"
.
You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING
level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL
.
You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR
to have only ERROR
logs reported to the Prefect API. The console handlers will still default to level INFO
.
There is a logging.yml
file packaged with Prefect that defines the default logging configuration.
You can customize logging configuration by creating your own version of logging.yml
with custom settings, by either creating the file at the default location (/.prefect/logging.yml
) or by specifying the path to the file with PREFECT_LOGGING_SETTINGS_PATH
. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)
See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml
.
To access the Prefect logger, import from prefect import get_run_logger
. You can send messages to the logger in both flows and tasks.
To log from a flow, retrieve a logger instance with get_run_logger()
, then call the standard Python logging methods.
from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n logger = get_run_logger()\n logger.info(\"INFO level log message.\")\n
Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.
15:35:17.304 | INFO | Flow run 'mottled-marten' - INFO level log message.\n
The default flow run log formatter uses the flow run name for log messages.
Note
Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore` or have the logger raise an error by setting the value to `error`.\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-tasks","title":"Logging in tasks","text":"Logging in tasks works much as logging in flows: retrieve a logger instance with get_run_logger()
, then call the standard Python logging methods.
from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n logger = get_run_logger()\n logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n logger_task()\n
Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.
15:33:47.179 | INFO | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n
The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-print-statements","title":"Logging print statements","text":"Prefect provides the log_prints
option to enable the logging of print
statements at the task or flow level. When log_prints=True
for a given task or flow, the Python builtin print
will be patched to redirect to the Prefect logger for the scope of that task or flow.
By default, tasks and subflows will inherit the log_prints
setting from their parent flow, unless opted out with their own explicit log_prints
setting.
from prefect import task, flow\n\n@task\ndef my_task():\n print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n print(\"we're logging print statements from a flow\")\n my_task()\n
Will output:
15:52:11.244 | INFO | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
from prefect import task, flow\n\n@task\ndef my_task(log_prints=False):\n print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n print(\"we're logging print statements from a flow\")\n my_task()\n
Using log_prints=False
at the task level will output:
15:52:11.244 | INFO | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n
You can also configure this behavior globally for all Prefect flows, tasks, and subflows.
prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#formatters","title":"Formatters","text":"Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml
. For example, the default formatting for task run log records is:
\"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n
The variables available to interpolate in log messages varies by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.
The flow run logger has the following:
flow_run_name
flow_run_id
flow_name
The task run logger has the following:
task_run_id
flow_run_id
task_run_name
task_name
flow_run_name
flow_name
You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml
file as described earlier. For example, to change the formatting for the flow runs formatter:
PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n
The resulting messages, using the flow run ID instead of name, would look like this:
10:40:01.211 | INFO | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run\n'othertask-1c085beb-3' for task 'othertask'\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#styles","title":"Styles","text":"By default, Prefect highlights specific keywords in the console logs with a variety of colors.
Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS
setting, e.g.
PREFECT_LOGGING_COLORS=False\n
You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml
file. Below lists the specific keys built-in to the PrefectConsoleHighlighter
.
URLs:
log.web_url
log.local_url
Log levels:
log.info_level
log.warning_level
log.error_level
log.critical_level
State types:
log.pending_state
log.running_state
log.scheduled_state
log.completed_state
log.cancelled_state
log.failed_state
log.crashed_state
Flow (run) names:
log.flow_run_name
log.flow_name
Task (run) names:
log.task_run_name
log.task_name
You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:
my_package_or_module.py
(rename as needed) in the same directory as the flow run script, or ideally part of a Python package so it's available in site-packages
to be accessed anywhere within your environment.import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n base_style = \"log.\"\n highlights = PrefectConsoleHighlighter.highlights + [\n # ?P<email> is naming this expression as `email`\n r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n def __init__(\n self,\n highlighter: Highlighter = CustomConsoleHighlighter,\n styles: Dict[str, str] = None,\n level: Union[int, str] = logging.NOTSET,\n ):\n super().__init__(highlighter=highlighter, styles=styles, level=level)\n
/.prefect/logging.yml
to use my_package_or_module.CustomConsoleHandler
and additionally reference the base_style and named expression: log.email
. console_flow_runs:\n level: 0\n class: my_package_or_module.CustomConsoleHandler\n formatter: flow_runs\n styles:\n log.email: magenta\n # other styles can be appended here, e.g.\n # log.completed_state: green\n
my@email.com
is colored in magenta here.from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n logger = get_run_logger()\n logger.info(\"my@email.com\")\n\nlog_email_flow()\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP
.
PREFECT_LOGGING_MARKUP=True\n
Then, the following will highlight \"fancy\" in red.
from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n logger = get_run_logger()\n logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n
Inaccurate logs could result
Although this can be convenient, the downside is, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable];\"
outputs DROP TABLE .[SomeTable];
.
Logged events are also persisted to the Prefect database. A log record includes the following data:
id: Primary key ID of the log record.
created: Timestamp specifying when the record was created.
updated: Timestamp specifying when the record was updated.
name: String specifying the name of the logger.
level: Integer representation of the logging level.
flow_run_id: ID of the flow run associated with the log record. If the log record is for a task run, this is the parent flow of the task.
task_run_id: ID of the task run associated with the log record. Null if logging a flow run event.
message: Log message.
timestamp: The client-side timestamp of this logged statement.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS
setting.
To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:
PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n
You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/managed-execution/","title":"Managed Execution","text":"Prefect Cloud can run your flows on your behalf with Prefect Managed work pools. Flows run with this work pool do not require a worker or cloud provider account. Prefect handles the infrastructure and code execution for you.
Managed execution is a great option for users who want to get started quickly, with no infrastructure setup.
Managed Execution is in beta
Managed Execution is currently in beta. Features are likely to change without warning.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-guide","title":"Usage guide","text":"Run a flow with managed infrastructure in three steps.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-1","title":"Step 1","text":"Create a new work pool of type Prefect Managed in the UI or the CLI. Here's the command to create a new work pool using the CLI:
prefect work-pool create my-managed-pool --type prefect:managed\n
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-2","title":"Step 2","text":"Create a deployment using the flow deploy
method or prefect.yaml
.
Specify the name of your managed work pool, as shown in this example that uses the deploy
method:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/desertaxle/demo.git\",\n entrypoint=\"flow.py:my_flow\",\n ).deploy(\n name=\"test-managed-flow\",\n work_pool_name=\"my-managed-pool\",\n )\n
With your CLI authenticated to your Prefect Cloud workspace, run the script to create your deployment:
python managed-execution.py\n
Note that this deployment uses flow code stored in a GitHub repository.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-3","title":"Step 3","text":"Run the deployment from the UI or from the CLI.
That's it! You ran a flow on remote infrastructure without any infrastructure setup, starting a worker, or needing a cloud provider account.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#adding-dependencies","title":"Adding dependencies","text":"Prefect can install Python packages in the container that runs your flow at runtime. You can specify these dependencies in the Pip Packages field in the UI, or by configuring job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}
in your deployment creation like this:
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/desertaxle/demo.git\",\n entrypoint=\"flow.py:my_flow\",\n ).deploy(\n name=\"test-managed-flow\",\n work_pool_name=\"my-managed-pool\",\n job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}\n )\n
Alternatively, you can create a requirements.txt
file and reference it in your prefect.yaml
pull_step
.
Managed execution requires Prefect 2.14.4 or newer.
All limitations listed below may change without warning during the beta period. We will update this page as we make changes.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#concurrency-work-pools","title":"Concurrency & work pools","text":"Free tier accounts are limited to:
prefect:managed
pools.Pro tier and above accounts are limited to:
prefect:managed
pools.At this time, managed execution requires that you run the official Prefect Docker image: prefecthq/prefect:2-latest
. However, as noted above, you can install Python package dependencies at runtime. If you need to use your own image, we recommend using another type of work pool.
Flow code must be stored in an accessible remote location. This means git-based cloud providers such as GitHub, Bitbucket, or GitLab are supported. Remote block-based storage is also supported, so S3, GCS, and Azure Blob are additional code storage options.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#resources","title":"Resources","text":"Memory is limited to 2GB of RAM, which includes all operations such as dependency installation. Maximum job run time is 24 hours.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-limits","title":"Usage limits","text":"Free tier accounts are limited to ten compute hours per workspace per month. Pro tier and above accounts are limited to 250 hours per workspace per month. You can view your compute hours quota usage on the Work Pools page in the UI.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#next-steps","title":"Next steps","text":"Read more about creating deployments in the deployment guide.
If you find that you need more control over your infrastructure, such as the ability to run custom Docker images, serverless push work pools might be a good option. Read more here.
","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"Prefect 2 still:
Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:
Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:
Parameter
tasks: in Prefect 2, inputs to your flow function are automatically treated as parameters of your flow. You can define the parameter values in your flow code when you create your Deployment
, or when you schedule an ad-hoc flow run. One benefit of Prefect parametrization is built-in type validation with pydantic.state_handlers
: in Prefect 2, you can build custom logic that reacts to task-run states within your flow function without the need for state_handlers
. The page \" How to take action on a state change of a task run\" provides a further explanation and code examples.signals
, Prefect 2 allows you to raise an arbitrary exception in your task or flow and return a custom state. For more details and examples, see How can I stop the task run based on a custom logic.case
are no longer required. Use Python native if...else
statements to build a conditional logic. The Discourse tag \"conditional-logic\" provides more resources.resource_manager
is no longer necessary. As long as you point to your flow script in your Deployment
, you can share database connections and any other resources between tasks in your flow. The Discourse page How to clean up resources used in a flow provides a full example.The changes listed below require you to modify your workflow code. The following table shows how Prefect 1 concepts have been implemented in Prefect 2. The last column contains references to additional resources that provide more details and examples.
Concept Prefect 1 Prefect 2 Reference links Flow definition.with Flow(\"flow_name\") as flow:
@flow(name=\"flow_name\")
How can I define a flow? Flow executor that determines how to execute your task runs. Executor such as LocalExecutor
. Task runner such as ConcurrentTaskRunner
. What is the default TaskRunner (executor)? Configuration that determines how and where to execute your flow runs. Run configuration such as flow.run_config = DockerRun()
. Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. How can I run my flow in a Docker container? Assignment of schedules and default parameter values. Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. Schedules and default parameters are assigned to a flow\u2019s Deployment
, rather than to a Flow object. How can I attach a schedule to a flow? Retries @task(max_retries=2, retry_delay=timedelta(seconds=5))
@task(retries=2, retry_delay_seconds=5)
How can I specify the retry behavior for a specific task? Logger syntax. Logger is retrieved from prefect.context
and can only be used within tasks. In Prefect 2, you can log not only from tasks, but also within flows. To get the logger object, use: prefect.get_run_logger()
. How can I add logs to my flow? The syntax and contents of Prefect context. Context is a thread-safe way of accessing variables related to the flow run and task run. The syntax to retrieve it: prefect.context
. Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs. The syntax to retrieve it: prefect.context.get_run_context()
. How to access Prefect context values? Task library. Included in the main Prefect Core repository. Separated into individual repositories per system, cloud provider, or technology. How to migrate Prefect 1 tasks to Prefect 2 integrations.","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.
Completed
, while in Prefect 1, this flow run has a Success
state. You can find more about that topic here.To deploy your Prefect 1 flows, you have to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:
DockerContainer
, KubernetesJob
, or ECSTask
).Interval
, Cron
, or RRule
schedule).parameters
, flow deployment name
, and more).default
is used.The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.
In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and\u00a0tags. This page provides more details on how to assign tags to various Prefect 2 objects.
The role of agents has changed:
The following new components and capabilities are enabled by Prefect 2.
async
support.pydantic
validation.subflows
concept: Prefect 1 only allowed the flow-of-flows orchestrator pattern. With Prefect 2 subflows, you gain a natural and intuitive way of organizing your flows into modular sub-components. For more details, see the following list of resources about subflows.Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.
Every time you run a flow, whether it is tracked by the API server or ad-hoc through a Python script, it is on the same UI page for easier debugging and observability.
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"With Prefect 2, your functions\u00a0are\u00a0your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow
decorator to your main function to transform any Python script into a Prefect workflow.
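For example, a minimal sketch (the function name is illustrative):
from prefect import flow\n\n@flow(log_prints=True)\ndef my_script_entrypoint():\n print(\"This function now runs as a Prefect flow\")\n\nif __name__ == \"__main__\":\n my_script_entrypoint()\n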
The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see those runs in the UI. You don't even need to start an agent.
Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.
Prefect 2 eliminates ambiguities in many ways. For example, there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open source product. This product is also much easier to deploy with no requirement for Docker or docker-compose.
If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.
In Prefect 1, there were several confusing ways to implement caching
. Prefect 2 resolves those ambiguities by providing a single cache_key_fn
function paired with cache_expiration
, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for
, cache_validator
, or file-based caching using targets
.
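A minimal sketch of the Prefect 2 approach, using the built-in task_input_hash key function (the task and its inputs are illustrative):
from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n# The cached result is reused for an hour whenever the inputs are unchanged\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef fetch_data(url: str):\n ...\n\n@flow\ndef my_flow():\n fetch_data(\"https://example.com/data\")\n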
For more details on how to configure caching, check out the following resources:
A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks
keyword argument or the imperative methods such as task.set_upstream()
, task.set_downstream()
, or flow.set_dependencies()
. In Prefect 2, there is only the functional API.
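For example, a state dependency with no data exchange can be expressed with the wait_for keyword argument (the task names are illustrative):
from prefect import flow, task\n\n@task\ndef extract():\n return [1, 2, 3]\n\n@task\ndef notify():\n print(\"Extraction finished\")\n\n@flow\ndef pipeline():\n upstream = extract.submit()\n # notify starts only after extract finishes, even though no data is passed\n notify.submit(wait_for=[upstream])\n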
We know migrations can be tough. We encourage you to take it step by step and experiment with the new features.
To make the migration process easier for you:
Happy Engineering!
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/moving-data/","title":"Read and Write Data to and from Cloud Provider Storage","text":"Writing data to cloud-based storage and reading data from that storage is a common task in data engineering. In this guide we'll learn how to use Prefect to move data to and from AWS, Azure, and GCP blob storage.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#prerequisites","title":"Prerequisites","text":"In the CLI, install the Prefect integration library for your cloud provider:
AWSAzureGCPprefect-aws provides blocks for interacting with AWS services.
pip install -U prefect-aws\n
prefect-azure provides blocks for interacting with Azure services.
pip install -U prefect-azure\n
prefect-gcp provides blocks for interacting with GCP services.
pip install -U prefect-gcp\n
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#register-the-block-types","title":"Register the block types","text":"Register the new block types with Prefect Cloud (or with your self-hosted Prefect server instance):
AWSAzureGCP
prefect block register -m prefect_aws \n
prefect block register -m prefect_azure \n
prefect block register -m prefect_gcp\n
We should see a message in the CLI that several block types were registered. If we check the UI, we should see the new block types listed.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-bucket","title":"Create a storage bucket","text":"Create a storage bucket in the cloud provider account. Ensure the bucket is publicly accessible or create a user or service account with the appropriate permissions to fetch and write data to the bucket.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-credentials-block","title":"Create a credentials block","text":"If the bucket is private, there are several options to authenticate:
If saving credential details in a block we can use a credentials block specific to the cloud provider or use a more generic secret block. We can create blocks via the UI or Python code. Below we'll use Python code to create a credentials block for our cloud provider.
Credentials safety
Reminder, don't store credential values in public locations such as public git platform repositories. In the examples below we use environment variables to store credential values.
AWSAzureGCPimport os\nfrom prefect_aws import AwsCredentials\n\nmy_aws_creds = AwsCredentials(\n aws_access_key_id=\"123abc\",\n aws_secret_access_key=os.environ.get(\"MY_AWS_SECRET_ACCESS_KEY\"),\n)\nmy_aws_creds.save(name=\"my-aws-creds-block\", overwrite=True)\n
import os\nfrom prefect_azure import AzureBlobStorageCredentials\n\nmy_azure_creds = AzureBlobStorageCredentials(\n connection_string=os.environ.get(\"MY_AZURE_CONNECTION_STRING\"),\n)\nmy_azure_creds.save(name=\"my-azure-creds-block\", overwrite=True)\n
We recommend specifying the service account key file contents as a string, rather than the path to the file, because that file might not be available in your production environments.
import os\nfrom prefect_gcp import GcpCredentials\n\nmy_gcp_creds = GcpCredentials(\n service_account_info=os.environ.get(\"GCP_SERVICE_ACCOUNT_KEY_FILE_CONTENTS\"),\n)\nmy_gcp_creds.save(name=\"my-gcp-creds-block\", overwrite=True)\n
Run the code to create the block. We should see a message that the block was created.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-block","title":"Create a storage block","text":"Let's create a block for the chosen cloud provider using Python code or the UI. In this example we'll use Python code.
AWSAzureGCPNote that the S3Bucket
block is not the same as the S3
block that ships with Prefect. The S3Bucket
block we use in this example is part of the prefect-aws
library and provides additional functionality.
We'll reference the credentials block created above.
from prefect_aws import AwsCredentials, S3Bucket\n\ns3bucket = S3Bucket(\n bucket_name=\"my-bucket-name\",\n credentials=AwsCredentials.load(\"my-aws-creds-block\")\n)\ns3bucket.save(name=\"my-s3-bucket-block\", overwrite=True)\n
Note that the AzureBlobStorageCredentials
block is not the same as the Azure block that ships with Prefect. The AzureBlobStorageCredentials
block we use in this example is part of the prefect-azure
library and provides additional functionality.
Azure Blob Storage doesn't require a separate block; the connection string used in the AzureBlobStorageCredentials
block can encode the information needed.
Note that the GcsBucket
block is not the same as the GCS
block that ships with Prefect. The GcsBucket
block is part of the prefect-gcp
library and provides additional functionality. We'll use it here.
We'll reference the credentials block created above.
from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcsbucket = GcsBucket(\n bucket=\"my-bucket-name\",\n credentials=GcpCredentials.load(\"my-gcp-creds-block\")\n)\ngcsbucket.save(name=\"my-gcs-bucket-block\", overwrite=True)\n
Run the code to create the block. We should see a message that the block was created.
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#write-data","title":"Write data","text":"Use your new block inside a flow to write data to your cloud provider.
AWSAzureGCPfrom pathlib import Path\nfrom prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\n@flow()\ndef upload_to_s3():\n \"\"\"Flow function to upload data\"\"\"\n path = Path(\"my_path_to/my_file.parquet\")\n aws_block = S3Bucket.load(\"my-s3-bucket-block\")\n aws_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n upload_to_s3()\n
from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef upload_to_azure():\n \"\"\"Flow function to upload data\"\"\"\n blob_storage_credentials = AzureBlobStorageCredentials.load(\n name=\"my-azure-creds-block\"\n )\n\n with open(\"my_path_to/my_file.parquet\", \"rb\") as f:\n blob_storage_upload(\n data=f.read(),\n container=\"my_container\",\n blob=\"my_path_to/my_file.parquet\",\n blob_storage_credentials=blob_storage_credentials,\n )\n\nif __name__ == \"__main__\":\n upload_to_azure()\n
from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow()\ndef upload_to_gcs():\n \"\"\"Flow function to upload data\"\"\"\n path = Path(\"my_path_to/my_file.parquet\")\n gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n gcs_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n upload_to_gcs()\n
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#read-data","title":"Read data","text":"Use your block to read data from your cloud provider inside a flow.
AWSAzureGCPfrom prefect import flow\nfrom prefect_aws import S3Bucket\n\n@flow\ndef download_from_s3():\n \"\"\"Flow function to download data\"\"\"\n s3_block = S3Bucket.load(\"my-s3-bucket-block\")\n s3_block.get_directory(\n from_path=\"my_path_to/my_file.parquet\", \n local_path=\"my_path_to/my_file.parquet\"\n )\n\nif __name__ == \"__main__\":\n download_from_s3()\n
from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef download_from_azure():\n \"\"\"Flow function to download data\"\"\"\n blob_storage_credentials = AzureBlobStorageCredentials.load(\n name=\"my-azure-creds-block\"\n )\n blob_storage_download(\n blob=\"my_path_to/my_file.parquet\",\n container=\"my_container\",\n blob_storage_credentials=blob_storage_credentials,\n )\n\nif __name__ == \"__main__\":\n download_from_azure()\n
from prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow\ndef download_from_gcs():\n gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n gcs_block.get_directory(\n from_path=\"my_path_to/my_file.parquet\", \n local_path=\"my_path_to/my_file.parquet\"\n )\n\nif __name__ == \"__main__\":\n download_from_gcs()\n
In this guide we've seen how to use Prefect to read data from and write data to cloud providers!
","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#next-steps","title":"Next steps","text":"Check out the prefect-aws
, prefect-azure
, and prefect-gcp
docs to see additional methods for interacting with cloud storage providers. Each library also contains blocks for interacting with other cloud-provider services.
In this guide, we will configure a deployment that uses a work pool for dynamically provisioned infrastructure.
All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or on a remote environment and it will be tracked.
A deployment turns your workflow into an application that can be interacted with and managed via the Prefect API. A deployment enables you to:
Deployments created with .serve
A deployment created with the Python flow.serve
method or the serve
function runs flows in a subprocess on the same machine where the deployment is created. It does not use a work pool or worker.
A work pool-based deployment is useful when you want to dynamically scale the infrastructure where your flow code runs. Work pool-based deployments contain information about the infrastructure type and configuration for your workflow execution.
Work pool-based deployment infrastructure options include the following:
.serve
.The following diagram provides a high-level overview of the conceptual elements involved in defining a work-pool based deployment that is polled by a worker and executes a flow run based on that deployment.
%%{\n init: {\n 'theme': 'base',\n 'themeVariables': {\n 'fontSize': '19px'\n }\n }\n}%%\n\nflowchart LR\n F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n end\n subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n end\n subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n end\n\n A --> D\n D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Worker</div>\"):::red\n B -.-> E\n A -.-> B\n E -.-> G\n\n classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:black\n classDef yellow fill:gold,stroke:gold,stroke-width:4px,color:black\n classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n classDef green fill:green,stroke:green,stroke-width:4px,color:white\n classDef red fill:red,stroke:red,stroke-width:4px,color:white\n classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white
The work pool types above require a worker to be running on your infrastructure to poll a work pool for scheduled flow runs.
Additional work pool options available with Prefect Cloud
Prefect Cloud offers other flavors of work pools that don't require a worker:
Push Work Pools - serverless cloud options that don't require a worker because Prefect Cloud submits your flow runs to your serverless cloud infrastructure on your behalf. Prefect can auto-provision that infrastructure for you and set it up to use your work pool.
Managed Execution - Prefect Cloud submits and runs your deployment on serverless infrastructure. No cloud provider account is required.
In this guide, we focus on deployments that require a worker.
Work pool-based deployments that use a worker also allow you to assign a work queue name to prioritize work and allow you to limit concurrent runs at the work pool level.
When creating a deployment that uses a work pool and worker, we must answer two basic questions:
The tutorial shows how you can create a deployment with a long-running process using .serve
and how to move to a work-pool-based deployment setup with .deploy
. See the discussion of when you might want to move to work-pool-based deployments there.
Next, we'll explore how to use .deploy
to create deployments with Python code. If you'd prefer to learn about using a YAML-based alternative for managing deployment configuration, skip to the later section on prefect.yaml
.
.deploy
","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-bake-your-code-into-a-docker-image","title":"Automatically bake your code into a Docker image","text":"You can create a deployment from Python code by calling the .deploy
method on a flow.
from prefect import flow\n\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_registry/my_image:my_image_tag\"\n )\n
Make sure you have the work pool created in the Prefect Cloud workspace you are authenticated to or on your running self-hosted server instance. Then run the script to create a deployment (in future examples this step will be omitted for brevity):
python buy.py\n
You should see messages in your terminal that Docker is building your image. When the deployment build succeeds you will see helpful information in your terminal showing you how to start a worker for your deployment and how to run your deployment. Your deployment will be visible on the Deployments
page in the UI.
By default, .deploy
will build a Docker image with your flow code baked into it and push the image to the Docker Hub registry specified in the image
argument`.
Authentication to Docker Hub
You need your environment to be authenticated to your Docker registry to push an image to it.
You can specify a registry other than Docker Hub by providing the full registry path in the image
argument.
Warning
If building a Docker image, the environment in which you are creating the deployment needs to have Docker installed and running.
To avoid pushing to a registry, set push=False
in the .deploy
method.
if __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_registry/my_image:my_image_tag\",\n push=False\n )\n
To avoid building an image, set build=False
in the .deploy
method.
if __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"discdiver/no-build-image:1.0\",\n build=False\n )\n
The specified image will need to be available in your deployment's execution environment for your flow code to be accessible.
Prefect generates a Dockerfile for you that will build an image based off of one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt
file.
If you want to use a custom Dockerfile, you can specify the path to the Dockerfile with the DeploymentImage
class:
from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n buy.deploy(\n name=\"my-custom-dockerfile-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=DeploymentImage(\n name=\"my_image\",\n tag=\"deploy-guide\",\n dockerfile=\"Dockerfile\"\n ),\n push=False\n)\n
The DeploymentImage
object allows for a great deal of image customization.
For example, you can install a private Python package from GCP's artifact registry like this:
Create a custom base Dockerfile.
FROM python:3.10\n\nARG AUTHED_ARTIFACT_REG_URL\nCOPY ./requirements.txt /requirements.txt\n\nRUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt\n
Create our deployment by leveraging the DeploymentImage class.
private-package.pyfrom prefect import flow\nfrom prefect.deployments.runner import DeploymentImage\nfrom prefect.blocks.system import Secret\nfrom my_private_package import do_something_cool\n\n\n@flow(log_prints=True)\ndef my_flow():\n do_something_cool()\n\n\nif __name__ == \"__main__\":\n artifact_reg_url: Secret = Secret.load(\"artifact-reg-url\")\n\n my_flow.deploy(\n name=\"my-deployment\",\n work_pool_name=\"k8s-demo\",\n image=DeploymentImage(\n name=\"my-image\",\n tag=\"test\",\n dockerfile=\"Dockerfile\",\n buildargs={\"AUTHED_ARTIFACT_REG_URL\": artifact_reg_url.get()},\n ),\n )\n
Note that we used a Prefect Secret block to load the URL configuration for the artifact registry above.
See all the optional keyword arguments for the DeploymentImage class here.
Default Docker namespace
You can set the PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE
setting to append a default Docker namespace to all images you build with .deploy
. This is great if you use a private registry to store your images.
To set a default Docker namespace for your current profile run:
prefect config set PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE=<docker-registry-url>/<organization-or-username>\n
Once set, you can omit the namespace from your image name when creating a deployment:
with_default_docker_namespace.pyif __name__ == \"__main__\":\n buy.deploy(\n name=\"my-code-baked-into-an-image-deployment\", \n work_pool_name=\"my-docker-pool\", \n image=\"my_image:my_image_tag\"\n )\n
The above code will build an image with the format <docker-registry-url>/<organization-or-username>/my_image:my_image_tag
when PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE
is set.
While baking code into Docker images is a popular deployment option, many teams decide to store their workflow code in git-based storage, such as GitHub, Bitbucket, or GitLab. Let's see how to do that next.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-git-based-cloud-storage","title":"Store your code in git-based cloud storage","text":"If you don't specify an image
argument for .deploy
, then you need to specify where to pull the flow code from at runtime with the from_source
method.
Here's how we can pull our flow code from a GitHub repository.
git_storage.pyfrom prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n \"https://github.com/my_github_account/my_repo.git\",\n entrypoint=\"flows/no-image.py:hello_world\",\n ).deploy(\n name=\"no-image-deployment\",\n work_pool_name=\"my_pool\",\n build=False\n )\n
The entrypoint
is the path to the file where the flow is defined and the name of the flow function, separated by a colon.
Alternatively, you could specify a git-based cloud storage URL for a Bitbucket or GitLab repository.
Note
If you don't specify an image as part of your deployment creation, the image specified in the work pool will be used to run your flow.
After creating a deployment you might change your flow code. Generally, you can just push your code to GitHub without rebuilding your deployment. The exception is when something the server needs to know about changes, such as the flow entrypoint parameters. Rerunning the Python script with .deploy
will update your deployment on the server with the new flow code.
If you need to provide additional configuration, such as specifying a private repository, you can provide a GitRepository
object instead of a URL:
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitRepository(\n url=\"https://github.com/org/private-repo.git\",\n branch=\"dev\",\n credentials={\n \"access_token\": Secret.load(\"github-access-token\")\n }\n ),\n entrypoint=\"flows/no-image.py:hello_world\",\n ).deploy(\n name=\"private-git-storage-deployment\",\n work_pool_name=\"my_pool\",\n build=False\n )\n
Note the use of the Secret block to load the GitHub access token. Alternatively, you could provide a username and password to the username
and password
fields of the credentials
argument.
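A sketch of that variant, assuming a Secret block named my-git-password has already been created (the URL and username are illustrative):
from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitRepository(\n url=\"https://gitlab.com/org/private-repo.git\",\n credentials={\n \"username\": \"my-username\",\n \"password\": Secret.load(\"my-git-password\")\n }\n ),\n entrypoint=\"flows/no-image.py:hello_world\",\n ).deploy(\n name=\"private-git-username-password-deployment\",\n work_pool_name=\"my_pool\",\n build=False\n )\n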
Another option for flow code storage is any fsspec-supported storage location, such as AWS S3, GCP GCS, or Azure Blob Storage.
For example, you can pass the S3 bucket path to source
.
from prefect import flow\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=\"s3://my-bucket/my-folder\",\n entrypoint=\"flows.py:my_flow\",\n ).deploy(\n name=\"deployment-from-aws-flow\",\n work_pool_name=\"my_pool\",\n )\n
In the example above your credentials will be auto-discovered from your deployment creation environment and credentials will need to be available in your runtime environment.
If you need additional configuration for your cloud-based storage - for example, with a private S3 Bucket - we recommend using a storage block. A storage block also ensures your credentials will be available in both your deployment creation environment and your execution environment.
Here's an example that uses an S3Bucket
block from the prefect-aws library.
from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=S3Bucket.load(\"my-code-storage\"), entrypoint=\"my_file.py:my_flow\"\n ).deploy(name=\"test-s3\", work_pool_name=\"my_pool\")\n
If you are familiar with the deployment creation mechanics with .serve
, you will notice that .deploy
is very similar. .deploy
just requires a work pool name and has a number of parameters dealing with flow-code storage for Docker images.
Unlike .serve
, if you don't specify an image to use for your flow, you must specify where to pull the flow code from at runtime with the from_source
method, whereas from_source
is optional with .serve
.
.deploy
","text":"Our examples thus far have explored options for where to store flow code. Let's turn our attention to other deployment configuration options.
To pass parameters to your flow, you can use the parameters
argument in the .deploy
method. Just pass in a dictionary of key-value pairs.
from prefect import flow\n\n@flow\ndef hello_world(name: str):\n print(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n hello_world.deploy(\n name=\"pass-params-deployment\",\n work_pool_name=\"my_pool\",\n parameters=dict(name=\"Prefect\"),\n image=\"my_registry/my_image:my_image_tag\",\n )\n
The job_variables
parameter allows you to fine-tune the infrastructure settings for a deployment. The values passed in override default values in the specified work pool's base job template.
You can override environment variables, such as image_pull_policy
and image
, for a specific deployment with the job_variables
argument.
if __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-deployment-never-pull\", \n work_pool_name=\"my-docker-pool\", \n job_variables={\"image_pull_policy\": \"Never\"},\n image=\"my-image:my-tag\"\",\n push=False\n )\n
Similarly, you can override the environment variables specified in a work pool through the job_variables
parameter:
if __name__ == \"__main__\":\n get_repo_info.deploy(\n name=\"my-deployment-never-pull\", \n work_pool_name=\"my-docker-pool\", \n job_variables={\"env\": {\"EXTRA_PIP_PACKAGES\": \"boto3\"} },\n image=\"my-image:my-tag\"\",\n push=False\n )\n
The dictionary key \"EXTRA_PIP_PACKAGES\" denotes a special environment variable that Prefect will use to install additional Python packages at runtime. This approach is an alternative to building an image with a custom requirements.txt
copied into it.
For more information on overriding job variables see this guide.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-deploy","title":"Working with multiple deployments withdeploy
","text":"You can create multiple deployments from one or more Python files that use .deploy
. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.
To create multiple work pool-based deployments at once you can use the deploy
function, which is analogous to the serve
function.
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n deploy(\n buy.to_deployment(name=\"dev-deploy\", work_pool_name=\"my-dev-work-pool\"),\n buy.to_deployment(name=\"prod-deploy\", work_pool_name=\"my-prod-work-pool\"),\n image=\"my-registry/my-image:dev\",\n push=False,\n )\n
Note that in the example above we created two deployments from the same flow, but with different work pools. Alternatively, we could have created two deployments from different flows.
from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n print(\"Buying securities.\")\n\n@flow(log_prints=True)\ndef sell():\n print(\"Selling securities.\")\n\n\nif __name__ == \"__main__\":\n deploy(\n buy.to_deployment(name=\"buy-deploy\"),\n sell.to_deployment(name=\"sell-deploy\"),\n work_pool_name=\"my-dev-work-pool\",\n image=\"my-registry/my-image:dev\",\n push=False,\n )\n
In the example above the code for both flows gets baked into the same image.
We can specify that one or more flows should be pulled from a remote location at runtime by using the from_source
method. Here's an example of deploying two flows, one defined locally and one defined in a remote repository:
from prefect import deploy, flow\n\n\n@flow(log_prints=True)\ndef local_flow():\n print(\"I'm a flow!\")\n\nif __name__ == \"__main__\":\n deploy(\n local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n flow.from_source(\n source=\"https://github.com/org/repo.git\",\n entrypoint=\"flows.py:my_flow\",\n ).to_deployment(\n name=\"example-deploy-remote-flow\",\n ),\n work_pool_name=\"my-work-pool\",\n image=\"my-registry/my-image:dev\",\n )\n
You could pass any number of flows to the deploy
function. This behavior is useful if using a monorepo approach to your workflows.
The prefect.yaml
file is a YAML file describing base settings for your deployments, procedural steps for preparing deployments, and instructions for preparing the execution environment for a deployment run.
You can initialize your deployment configuration, which creates the prefect.yaml
file, by running the CLI command prefect init
in any directory or repository that stores your flow code.
Deployment configuration recipes
Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml
file; run prefect init
to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe
flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git
repository, Prefect will use the git
recipe).
The prefect.yaml
file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).
Any deployment configuration can be overridden via options available on the prefect deploy
CLI command when creating a deployment.
prefect.yaml
file flexibility
In older versions of Prefect, this file had to be in the root of your repository or project directory and named prefect.yaml
. Now this file can be located in a directory outside the project or a subdirectory inside the project. It can be named differently, provided the filename ends in .yaml
. You can even have multiple prefect.yaml
files with the same name in different directories. By default, prefect deploy
will use a prefect.yaml
file in the project's root directory. To use a custom deployment configuration file, supply the new --prefect-file
CLI argument when running the deploy
command from the root of your project directory:
prefect deploy --prefect-file path/to/my_file.yaml
The base structure for prefect.yaml
is as follows:
# generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n name: null\n version: null\n tags: []\n description: null\n schedule: null\n\n # flow-specific fields\n entrypoint: null\n parameters: {}\n\n # infra-specific fields\n work_pool:\n name: null\n work_queue_name: null\n job_variables: {}\n
The metadata fields are always pre-populated for you. These fields are for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.
You can create deployments via the CLI command prefect deploy
without ever needing to alter the deployments
section of your prefect.yaml
file \u2014 the prefect deploy
command will help in deployment creation via interactive prompts. The prefect.yaml
file facilitates version-controlling your deployment configuration and managing multiple deployments.
Deployment actions defined in your prefect.yaml
file control the lifecycle of the creation and execution of your deployments. The three actions available are build
, push
, and pull
. pull
is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.
Each action is defined as a list of steps that execute in sequence.
Each step has the following format:
section:\n- prefect_package.path.to.importable.step:\n id: \"step-id\" # optional\n requires: \"pip-installable-package-spec\" # optional\n kwarg1: value\n kwarg2: more-values\n
Every step can optionally provide a requires
field that Prefect will use to auto-install in the event that the step cannot be found in the current environment. Each step can also specify an id
for the step which is used when referencing step outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order that they are provided within the prefect.yaml
file.
Deployment Instruction Overrides
build
, push
, and pull
sections can all be overridden on a per-deployment basis by defining build
, push
, and pull
fields within a deployment definition in the prefect.yaml
file.
The prefect deploy
command will use any build
, push
, or pull
instructions provided in a deployment's definition in the prefect.yaml
file.
This capability is useful with multiple deployments that require different deployment instructions.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-build-action","title":"The build action","text":"The build section of prefect.yaml
is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:
prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n
Use --field
to avoid the interactive experience
We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control. However, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field
flag:
prefect init --recipe docker \\\n --field image_name=my-repo/my-image \\\n --field tag=my-tag\n
build:\n- prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker>=0.3.0\n image_name: my-repo/my-image\n tag: my-tag\n dockerfile: auto\n push: true\n
Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the prefect-docker
package documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml
as template values. It is best practice to use {{ image }}
within prefect.yaml
(specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values.
Note
Note that in the build step example above, we relied on the prefect-docker
package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.
Pass output to downstream steps
Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script
step and feed the output into the build_docker_image
step:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker\n image_name: my-image\n tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Note that the id
field is used in the run_shell_script
step so that its output can be referenced in the next step.
The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed and pulled from a Cloud storage bucket of some kind (e.g., S3, GCS, Azure Blobs, etc.). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.
For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3
recipe:
prefect init --recipe s3\n>> bucket: < insert bucket name here >\n
Inspecting our newly created prefect.yaml
file we find that the push
and pull
sections have been templated out for us as follows:
push:\n- prefect_aws.deployments.steps.push_to_s3:\n id: push-code\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: project-name\n credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: \"{{ push-code.folder }}\"\n credentials: null\n
The bucket has been populated with our provided value (which also could have been provided with the --field
flag); note that the folder
property of the push
step is a template - the pull_from_s3
step outputs both a bucket
value as well as a folder
value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.
As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:
push:\n- prefect_aws.deployments.steps.push_to_s3:\n requires: prefect-aws>=0.3.0\n bucket: my-bucket\n folder: project-name\n credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n
Anytime you run prefect deploy
, this push
section will be executed upon successful completion of your build
section. For more information on the mechanics of steps, see below.
The pull section is the most important section within the prefect.yaml
file. It contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.
There are three main types of steps that typically show up in a pull
section:
set_working_directory
: this step simply sets the working directory for the process prior to importing your flowgit_clone
: this step clones the provided repository on the provided branchpull_from_{cloud}
: this step pulls the working directory from a Cloud storage location (e.g., S3)Use block and variable references
All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.
Below is an example of how to use an existing GitHubCredentials
block to clone a private GitHub repository:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/org/repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n
Alternatively, you can specify a BitBucketCredentials
or GitLabCredentials
block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/repo.git\n access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#utility-steps","title":"Utility steps","text":"Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:
run_shell_script
allows for the execution of one or more shell commands in a subprocess, and returns the standard output and standard error of the script. This step is useful for scripts that require execution in a specific environment, or those which have specific input and output requirements.Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:
build:\n - prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n - prefect_docker.deployments.steps.build_docker_image:\n requires: prefect-docker>=0.3.0\n image_name: my-image\n tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n
Provided environment variables are not expanded by default
To expand environment variables in your shell script, set expand_env_vars: true
in your run_shell_script
step. For example:
- prefect.deployments.steps.run_shell_script:\n id: get-user\n script: echo $USER\n stream_output: true\n expand_env_vars: true\n
Without expand_env_vars: true
, the above step would return a literal string $USER
instead of the current user.
pip_install_requirements
installs dependencies from a requirements.txt
file within a specified directory.Below is an example of installing dependencies from a requirements.txt
file after cloning:
pull:\n - prefect.deployments.steps.git_clone:\n id: clone-step # needed in order to be referenced in subsequent steps\n repository: https://github.com/org/repo.git\n - prefect.deployments.steps.pip_install_requirements:\n directory: {{ clone-step.directory }} # `clone-step` is a user-provided `id` field\n requirements_file: requirements.txt\n
Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:
pull:\n- prefect.deployments.steps.run_shell_script:\n id: get-access-token\n script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n stream_output: false\n- prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/samples/deployments.git\n branch: master\n access_token: \"{{ get-access-token.stdout }}\"\n
You can also run custom steps by packaging them. In the example below, retrieve_secrets
is a custom python module that has been packaged into the default working directory of a Docker image (which is /opt/prefect by default). main
is the function entry point, which returns an access token (e.g. return {\"access_token\": access_token}
) like the preceding example, but utilizing the Azure Python SDK for retrieval.
- retrieve_secrets.main:\n id: get-access-token\n- prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/samples/deployments.git\n branch: master\n access_token: '{{ get-access-token.access_token }}'\n
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#templating-options","title":"Templating options","text":"Values that you place within your prefect.yaml
file can reference dynamic values in several different ways:
build
and push
produce named fields such as image_name
; you can reference these fields within prefect.yaml
and prefect deploy
will populate them with each call. References must be enclosed in double brackets and be of the form \"{{ field_name }}\"
{{ prefect.blocks.block_type.block_slug }}
. It is highly recommended that you use block references for any sensitive information (such as a GitHub access token or any credentials) to avoid hardcoding these values in plaintext{{ prefect.variables.variable_name }}
. Variables can be used to reference non-sensitive, reusable pieces of information such as a default image name or a default work pool name.{{ $MY_ENV_VAR }}
. This is especially useful for referencing environment variables that are set at runtime.As an example, consider the following prefect.yaml
file:
build:\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.3.0\n image_name: my-repo/my-image\n tag: my-tag\n dockerfile: auto\n push: true\n\ndeployments:\n- # base metadata\n name: null\n version: \"{{ build-image.tag }}\"\n tags:\n - \"{{ $my_deployment_tag }}\"\n - \"{{ prefect.variables.some_common_tag }}\"\n description: null\n schedule: null\n\n # flow-specific fields\n entrypoint: null\n parameters: {}\n\n # infra-specific fields\n work_pool:\n name: \"my-k8s-work-pool\"\n work_queue_name: null\n job_variables:\n image: \"{{ build-image.image }}\"\n cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n
So long as our build
steps produce fields called image_name
and tag
, every time we deploy a new version of our deployment, the {{ build-image.image }}
variable will be dynamically populated with the relevant values.
Docker step
The most commonly used build step is prefect_docker.deployments.steps.build_docker_image
which produces both the image_name
and tag
fields.
For an example, check out the deployments tutorial.
A prefect.yaml
file can have multiple deployment configurations that control the behavior of several deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.
Prefect supports multiple deployment declarations within the prefect.yaml
file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.
New deployment declarations can be added to the prefect.yaml
file by adding a new entry to the deployments
list. Each deployment declaration must have a unique name
field which is used to select deployment declarations when using the prefect deploy
command.
Warning
When using a prefect.yaml
file that is in another directory or differently named, remember that the value for the deployment entrypoint
must be relative to the root directory of the project.
For example, consider the following prefect.yaml
file:
build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n entrypoint: flows/hello.py:my_flow\n parameters:\n number: 42,\n message: Don't panic!\n work_pool:\n name: my-process-work-pool\n work_queue_name: primary-queue\n\n- name: deployment-2\n entrypoint: flows/goodbye.py:my_other_flow\n work_pool:\n name: my-process-work-pool\n work_queue_name: secondary-queue\n\n- name: deployment-3\n entrypoint: flows/hello.py:yet_another_flow\n work_pool:\n name: my-docker-work-pool\n work_queue_name: tertiary-queue\n
This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name
field and can be deployed individually by using the --name
flag when deploying.
For example, to deploy deployment-1
you would run:
prefect deploy --name deployment-1\n
To deploy multiple deployments you can provide multiple --name
flags:
prefect deploy --name deployment-1 --name deployment-2\n
To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:
prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n
To deploy all deployments you can use the --all
flag:
prefect deploy --all\n
To deploy deployments that match a pattern you can run:
prefect deploy -n my-flow/* -n *dev/my-deployment -n dep*prod\n
The above command will deploy all deployments from the flow my-flow
, all flows ending in dev
with a deployment named my-deployment
, and all deployments starting with dep
and ending in prod
.
CLI Options When Deploying Multiple Deployments
When deploying more than one deployment with a single prefect deploy
command, any additional attributes provided via the CLI will be ignored.
To provide overrides to a deployment via the CLI, you must deploy that deployment individually.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#reusing-configuration-across-deployments","title":"Reusing configuration across deployments","text":"Because a prefect.yaml
file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.
This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.
You can declare a YAML alias by using the &{alias_name}
syntax and insert that alias elsewhere in the file with the *{alias_name}
syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name}
syntax and adding additional fields below.
We recommend adding a definitions
section to your prefect.yaml
file at the same level as the deployments
section to store your aliases.
For example, consider the following prefect.yaml
file:
build: ...\npush: ...\npull: ...\n\ndefinitions:\n work_pools:\n my_docker_work_pool: &my_docker_work_pool\n name: my-docker-work-pool\n work_queue_name: default\n job_variables:\n image: \"{{ build-image.image }}\"\n schedules:\n every_ten_minutes: &every_10_minutes\n interval: 600\n actions:\n docker_build: &docker_build\n - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n id: build-image\n requires: prefect-docker>=0.3.0\n image_name: my-example-image\n tag: dev\n dockerfile: auto\n push: true\n\ndeployments:\n- name: deployment-1\n entrypoint: flows/hello.py:my_flow\n schedule: *every_10_minutes\n parameters:\n number: 42,\n message: Don't panic!\n work_pool: *my_docker_work_pool\n build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n entrypoint: flows/goodbye.py:my_other_flow\n work_pool: *my_docker_work_pool\n build:\n - prefect_docker.deployments.steps.build_docker_image:\n <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n dockerfile: Dockerfile.custom\n\n- name: deployment-3\n entrypoint: flows/hello.py:yet_another_flow\n schedule: *every_10_minutes\n work_pool:\n name: my-process-work-pool\n work_queue_name: primary-queue\n
In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments:
deployment-1
and deployment-2
are using the same work pool configurationdeployment-1
and deployment-3
are using the same scheduledeployment-1
and deployment-2
are using the same build deployment action, but deployment-2
is overriding the dockerfile
field to use a custom DockerfileBelow are fields that can be added to each deployment declaration.
Property Descriptionname
The name to give to the created deployment. Used with the prefect deploy
command to create or update specific deployments. version
An optional version for the deployment. tags
A list of strings to assign to the deployment as tags. description
An optional description for the deployment. schedule
An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section. triggers
An optional array of triggers to assign to the deployment entrypoint
Required path to the .py
file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name
. parameters
Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs. enforce_parameter_schema
Boolean flag that determines whether the API should validate the parameters passed to a flow run against the parameter schema generated for the deployed flow. work_pool
Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#schedule-fields","title":"Schedule fields","text":"Below are fields that can be added to a deployment declaration's schedule
section.

interval: Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule.
anchor_date: Datetime string indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval.
timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones.
cron: A valid cron string. Cannot be used in conjunction with interval or rrule.
day_or: Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True.
rrule: String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.

For more information about schedules, see the Schedules concept doc.
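For example, a sketch of a deployment's schedule section running every 10 minutes (the timezone value is illustrative):
schedule:\n interval: 600\n timezone: America/Chicago\n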
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-fields","title":"Work pool fields","text":"Below are fields that can be added to a deployment declaration's work_pool
section.
name
The name of the work pool to schedule flow runs in for the deployment. work_queue_name
The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used. job_variables
Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployments infra_overrides
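For example, a sketch of a deployment's work_pool section (the pool name and image are illustrative):
work_pool:\n name: my-docker-work-pool\n work_queue_name: default\n job_variables:\n image: my-registry/my-image:latest\n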
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-mechanics","title":"Deployment mechanics","text":"Anytime you run prefect deploy
in a directory that contains a prefect.yaml
file, the following actions are taken in order:

1. The prefect.yaml file is loaded. First, the build section is loaded and all variable and block references are resolved. The steps are then run in the order provided.
2. Next, the push section is loaded and all variable and block references are resolved; the steps within this section are then run in the order provided.
3. Next, the pull section is templated with any step outputs but is not run. Note that block references are not hydrated for security purposes - block references are always resolved at runtime.
4. Lastly, any flags provided via the prefect deploy CLI are then overlaid on the values loaded from the file.

Deployment Instruction Overrides
The build
, push
, and pull
sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml
.
Each time a step is run, the following actions are taken in order:
1. The step's inputs and block / variable references are resolved.
2. The step's function is imported; if it cannot be found, the special requires keyword is used to install the necessary packages.
3. The step's function is called with the resolved inputs.
4. The step's output is returned and used to resolve inputs for subsequent steps.

Now that you are familiar with creating deployments, you may want to explore infrastructure options for running your deployments.
Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, such as which flow your task was called from.
The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can be accessed only when a run is happening.
Mock values via environment variable
Oftentimes, you may want to mock certain values for testing purposes - for example, manually setting an ID or a scheduled start time to ensure your code functions properly. Starting in version 2.10.3
, you can mock values in runtime via environment variable using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value
:
$ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n
If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool
, int
, float
and str
) and pendulum.DateTime
. For complex types like list
or dict
, we suggest mocking them using monkeypatch or a similar tool.
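As an illustrative sketch, mocking a datetime-typed attribute (the value here is arbitrary):
$ export PREFECT__RUNTIME__FLOW_RUN__SCHEDULED_START_TIME='2020-01-01 00:00:00'\n$ python -c 'from prefect.runtime import flow_run; print(flow_run.scheduled_start_time)' # parsed as a pendulum.DateTime\n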
The prefect.runtime
module is the home for all runtime context access. Each major runtime concept has its own submodule:
deployment
: Access information about the deployment for the current runflow_run
: Access information about the current flow runtask_run
: Access information about the current task runFor example:
my_runtime_info.pyfrom prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n print(\"My name is\", runtime.flow_run.name)\n print(\"I belong to deployment\", runtime.deployment.name)\n my_task(2)\n\n@task\ndef my_task(y):\n print(\"My name is\", runtime.task_run.name)\n print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n
Running this file will produce output similar to the following:
10:08:02.948 | INFO | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n
Above, we demonstrated access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py
), you should see \"I belong to deployment None\"
logged. When information is not available, the runtime will always return an empty value. Because this flow was run outside of a deployment, there is no deployment data. If this flow was run as part of a deployment, we'd see the name of the deployment instead.
See the runtime API reference for a full list of available attributes.
","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"The current run context can be accessed with prefect.context.get_run_context()
. This function will raise an exception if no run context is available, meaning you are not in a flow or task run. If a task run context is available, it will be returned even if a flow run context is available.
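For example, a minimal sketch inside a flow (the flow body is illustrative):
from prefect import flow\nfrom prefect.context import get_run_context\n\n@flow\ndef my_flow():\n # inside a flow run, a context is available\n ctx = get_run_context()\n print(ctx.flow_run.id)\n\nmy_flow()\n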
Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run.
Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.
from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n
Unlike get_run_context
, these method calls will not raise an error if the context is not available. Instead, they will return None
.
Prefect's local settings are documented and type-validated.
By modifying the default settings, you can customize various aspects of the system. You can override a setting with an environment variable or by updating the setting in a Prefect profile.
Prefect profiles are persisted groups of settings on your local machine. A single profile is always active.
Initially, a default profile named default
is active and contains no settings overrides.
All currently active settings can be viewed from the command line by running the following command:
prefect config view --show-defaults\n
When you switch to a different profile, all of the settings configured in the newly activated profile are applied.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"This section describes some commonly configured settings. See Configuring settings for details on setting and unsetting configuration values.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"The PREFECT_API_KEY
value specifies the API key used to authenticate with Prefect Cloud.
PREFECT_API_KEY=\"[API-KEY]\"\n
Generally, you will set the PREFECT_API_URL
and PREFECT_API_KEY
for your active profile by running prefect cloud login
. If you're curious, read more about managing API keys.
The PREFECT_API_URL
value specifies the API endpoint of your Prefect Cloud workspace or a self-hosted Prefect server instance.
For example, if using Prefect Cloud:
PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n
You can view your Account ID and Workspace ID in your browser URL when at a Prefect Cloud workspace page. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
If using a local Prefect server instance, set your API URL like this:
PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n
PREFECT_API_URL
setting for workers
If using a worker (agent and block-based deployments are legacy) that can create flow runs for deployments in remote environments, PREFECT_API_URL
must be set for the environment in which your worker is running.
If you want the worker to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL
in that environment.
Running the Prefect UI behind a reverse proxy
When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server instance also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL
should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api
for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL
to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL
is not set.
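For example, using the proxy URL above, a sketch of the corresponding configuration:
prefect config set PREFECT_UI_API_URL=\"https://prefect-server.example.com/api\"\n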
The PREFECT_HOME
value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.
PREFECT_HOME='~/.prefect'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"The PREFECT_LOCAL_STORAGE_PATH
value specifies the default location of local storage for flow runs.
PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#database-settings","title":"Database settings","text":"If running a self-hosted Prefect server instance, there are several database configuration settings you can read about here.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#logging-settings","title":"Logging settings","text":"Prefect provides several logging configuration settings that you can read about in the logging docs.
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuring-settings","title":"Configuring settings","text":"The prefect config
CLI commands enable you to view, set, and unset settings.
The prefect config view
command will display settings that override default values.
$ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n
You can show the sources of values with --show-sources
:
$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n
You can also include default values with --show-defaults
:
$ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"The prefect config set
command lets you change the value of a default setting.
A commonly used example is setting the PREFECT_API_URL
, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.
# use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n
If you want to configure a setting to use its default value, use the prefect config unset
command.
prefect config unset PREFECT_API_URL\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"All settings have keys that match the environment variable that can be used to override them.
For example, configuring the home directory:
# environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
# python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value() # PosixPath('/path/to/home')\n
Configuring a server instance's port:
# environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
# python\nprefect.settings.PREFECT_SERVER_API_PORT.value() # 4242\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuration-profiles","title":"Configuration profiles","text":"Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.
The prefect profile
CLI commands enable you to create, review, and manage profiles.
If you configured settings for a profile, prefect profile inspect
displays those settings:
$ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n
You can pass the name of a profile to view its settings:
$ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"Create a new profile with no settings:
$ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n
Create a new profile foo
with settings cloned from an existing default
profile:
$ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n
Rename a profile:
$ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n
Remove a profile:
$ prefect profile delete test\nRemoved profile 'test'.\n
Removing the default profile resets it:
$ prefect profile delete default\nReset profile 'default'.\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#change-values-in-profiles","title":"Change values in profiles","text":"Set a value in the current profile:
$ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n
Set multiple values in the current profile:
$ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n
You can set a value in another profile by passing the --profile NAME
option to a CLI command:
$ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n
Unset values in the current profile to restore the defaults:
$ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#inspecting-profiles","title":"Inspecting profiles","text":"See a list of available profiles:
$ prefect profile ls\n* default\ncloud\ntest\nlocal\n
View all settings for a profile:
$ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\nx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' \n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#using-profiles","title":"Using profiles","text":"The profile named default
is used by default. There are several methods to switch to another profile.
The recommended method is to use the prefect profile use
command with the name of the profile:
$ prefect profile use foo\nProfile 'foo' now active.\n
Alternatively, you can set the environment variable PREFECT_PROFILE
to the name of the profile:
export PREFECT_PROFILE=foo\n
Or, specify the profile in the CLI command for one-time usage:
prefect --profile \"foo\" ...\n
Note that this option must come before the subcommand. For example, to list flow runs using the profile foo
:
prefect --profile \"foo\" flow-run ls\n
You may use the -p
flag as well:
prefect -p \"foo\" flow-run ls\n
You may also create an 'alias' to automatically use your profile:
$ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view \n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"If setting the profile from the CLI with --profile
, environment variables that conflict with settings in the profile will be ignored.
In all other cases, environment variables will take precedence over the value in the profile.
For example, a value set in a profile will be used by default:
$ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
But, setting an environment variable will override the profile setting:
$ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n
Unless the profile is explicitly requested when using the CLI:
$ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#profile-files","title":"Profile files","text":"Profiles are persisted to the file location specified by PREFECT_PROFILES_PATH
. The default location is a profiles.toml
file in the PREFECT_HOME
directory:
$ prefect config view --show-defaults\n...\nPREFECT_PROFILES_PATH='${PREFECT_HOME}/profiles.toml'\n...\n
The TOML format is used to store profile data.
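An illustrative sketch of what a profiles.toml might contain (profile names and values are examples, not defaults):
active = \"default\"\n\n[profiles.default]\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n\n[profiles.cloud]\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\n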
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. This guide provides examples of real-world use cases.
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed
state. Let's run a client-side hook upon a flow run entering a Failed
state.
from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n slack_webhook_block = Block.load(\n \"slack-webhook/my-slack-webhook\"\n )\n\n slack_webhook_block.notify(\n (\n f\"Your job {flow_run.name} entered {state.name} \"\n f\"with message:\\n\\n\"\n f\"See <{PREFECT_API_URL.value()}/flow-runs/\"\n f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n f\"Tags: {flow_run.tags}\\n\\n\"\n f\"Scheduled start: {flow_run.expected_start_time}\"\n )\n )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n failing_flow()\n
Note that because we've configured retries in this example, the on_failure
hook will not run until all retries
have completed, when the flow run enters a Failed
state.
State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly, resulting in the potential omission of cleanup logic within the tasks. State change hooks can be used to ensure infrastructure is properly cleaned up even when a flow run enters a Crashed
state!
Let's create a hook that deletes a Cloud Run job if the flow run crashes.
import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n \"\"\"Flow run state change hook that deletes a Cloud Run Job if\n the flow run crashes.\"\"\"\n\n # retrieve Cloud Run job name\n cloud_run_job_name = await String.load(\n name=\"crashing-flow-cloud-run-job\"\n )\n\n # delete Cloud Run job\n delete_job_command = f\"yes | gcloud beta run jobs delete {cloud_run_job_name.value} --region us-central1\"\n os.system(delete_job_command)\n\n # clean up the Cloud Run job string block as well\n async with get_client() as client:\n block_document = await client.read_block_document_by_name(\n \"crashing-flow-cloud-run-job\", block_type_slug=\"string\"\n )\n await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n raise SystemExit(\"Crashing on purpose!\")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n \"\"\"Save the flow run name (i.e. Cloud Run job name) as a \n String block. It then executes a task that ends up crashing.\"\"\"\n flow_run_name = prefect.runtime.flow_run.name\n cloud_run_job_name = String(value=flow_run_name)\n cloud_run_job_name.save(\n name=\"crashing-flow-cloud-run-job\", overwrite=True\n )\n\n my_task_that_crashes()\n\nif __name__ == \"__main__\":\n crashing_flow()\n
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"Once you have some awesome flows, you probably want to test them!
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.
from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n return 42\n\ndef test_my_favorite_flow():\n with prefect_test_harness():\n # run the flow against a temporary testing database\n assert my_favorite_flow() == 42\n
For more extensive testing, you can leverage prefect_test_harness
as a fixture in your unit testing framework. For example, when using pytest
:
from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n with prefect_test_harness():\n yield\n\n@flow\ndef my_favorite_flow():\n return 42\n\ndef test_my_favorite_flow():\n assert my_favorite_flow() == 42\n
Note
In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"To test an individual task, you can access the original function using .fn
:
from prefect import flow, task\n\n@task\ndef my_favorite_task():\n return 42\n\n@flow\ndef my_favorite_flow():\n val = my_favorite_task()\n return val\n\ndef test_my_favorite_task():\n assert my_favorite_task.fn() == 42\n
Disable logger
If your task uses a logger, you can disable the logger in order to avoid the RuntimeError
raised from a missing flow context.
from prefect.logging import disable_run_logger\n\ndef test_my_favorite_task():\n with disable_run_logger():\n assert my_favorite_task.fn() == 42\n
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:
Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:
pip install --upgrade prefect\n
Different components may use different versions of Prefect:
Integration Versions
Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.
","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.
There are two types of logs:
If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your worker logs for more details.
If there is no clear indication of what went wrong, try updating the logging level from the default INFO
level to the DEBUG
level. Settings such as the logging level are propagated from the worker environment to the flow run environment and can be set via environment variables or the prefect config set
CLI:
# Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n
The DEBUG
logging level produces a high volume of logs so consider setting it back to INFO
once any issues are resolved.
When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials
. Use the following command to check your authentication, replacing $PREFECT_API_KEY
with your API key:
curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n
Users vs Service Accounts
Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as workers and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_
and service account API keys start with pnb_
.
Supposing the response succeeds, let's check our authorization. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will result in a 404 response. Use the following command to check your actor's workspace memberships:
curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n
Formatting JSON
Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool
Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role which grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role, will result in errors.
","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"Prefect flows can be executed locally by the user, or remotely by a worker or agent. Local execution generally means that you - the user - run your flow directly with a command like python flow.py
. Remote execution generally means that a worker runs your flow via a deployment, optionally on different infrastructure.
With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. For flow runs to execute, a worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled
to Late
. Ensure that your work pool and work queue have a subscribed worker.
Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m
flag (i.e. python -m flow
instead of python flow.py
). Read more about -m
here.
Summary: Requests require a trailing /
in the request URL.
If you write a test that does not include a trailing /
when making a request to a specific endpoint:
async def test_example(client):\n response = await client.post(\"/my_route\")\n assert response.status_code == 201\n
You'll see a failure like:
E assert 307 == 201\nE + where 307 = <Response [307 Temporary Redirect]>.status_code\n
To resolve this, include the trailing /
:
async def test_example(client):\n response = await client.post(\"/my_route/\")\n assert response.status_code == 201\n
Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:
async def test_nested_example(client):\n response = await client.post(\"/my_route/filter/\")\n assert response.status_code == 307\n\n response = await client.post(\"/my_route/filter\")\n assert response.status_code == 200\n
Reference: \"HTTPX disabled redirect following by default\" in 0.22.0
.
pytest.PytestUnraisableExceptionWarning
or ResourceWarning
","text":"As you're working with one of the FlowRunner
implementations, you may get an error like this one:
E pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE Traceback (most recent call last):\nE File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n
This error is saying that your test suite (or the prefect
library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.
It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ...
so that Python can display more of the stack trace where the connection was opened.
Upgrading from agents to workers significantly enhances the experience of deploying flows. It simplifies the specification of each flow's infrastructure and runtime environment.
A worker is the fusion of an agent with an infrastructure block. Like agents, workers poll a work pool for flow runs that are scheduled to start. Like infrastructure blocks, workers are typed - they work with only one kind of infrastructure, and they specify the default configuration for jobs submitted to that infrastructure.
Accordingly, workers are not a drop-in replacement for agents. Using workers requires deploying flows differently. In particular, deploying a flow with a worker does not involve specifying an infrastructure block. Instead, infrastructure configuration is specified on the work pool and passed to each worker that polls work from that pool.
This guide provides an overview of the differences between agents and workers. It also describes how to upgrade from agents to workers in just a few quick steps.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#enhancements","title":"Enhancements","text":"","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#workers","title":"Workers","text":".deploy()
or the alternative deployment experience with prefect.yaml
are more flexible and easier to use than block and agent-based deployments.Deployment CLI and Python SDK:
prefect deployment build <entrypoint>
/prefect deployment apply
--> prefect deploy
Prefect will now automatically detect flows in your repo and provide a wizard \ud83e\uddd9 to guide you through setting required attributes for your deployments.
Deployment.build_from_flow
--> flow.deploy
Configuring remote flow code storage:
storage blocks --> pull action
When using the YAML-based deployment API, you can configure a pull action in your prefect.yaml
file to specify how to retrieve flow code for your deployments. You can use configuration from your existing storage blocks to define your pull action via templating.
When using the Python deployment API, you can pass any storage block to the flow.deploy
method to specify how to retrieve flow code for your deployment.
Configuring flow run infrastructure:
infrastructure blocks --> typed work pool
Default infrastructure config is now set on the typed work pool, and can be overwritten by individual deployments.
Managing multiple deployments:
Create and/or update many deployments at once through a prefect.yaml
file or use the deploy
function.
prefect.yaml
file.Deployment-level infrastructure overrides operate in much the same way.
infra_override
-> job_variable
The process for starting an agent and starting a worker in your environment are virtually identical.
prefect agent start --pool <work pool name>
--> prefect worker start --pool <work pool name>
Worker Helm chart
If you host your agents in a Kubernetes cluster, you can use the Prefect worker Helm chart to host workers in your cluster.
If you have existing deployments that use infrastructure blocks, you can quickly upgrade them to be compatible with workers by following these steps:
This new work pool will replace your infrastructure block.
You can use the .publish_as_work_pool
method on any infrastructure block to create a work pool with the same configuration.
For example, if you have a KubernetesJob
infrastructure block named 'my-k8s-job', you can create a work pool with the same configuration with this script:
from prefect.infrastructure import KubernetesJob\n\nKubernetesJob.load(\"my-k8s-job\").publish_as_work_pool()\n
Running this script will create a work pool named 'my-k8s-job' with the same configuration as your infrastructure block.
Serving flows
If you are using a Process
infrastructure block and a LocalFilesystem
storage block (or aren't using an infrastructure and storage block at all), you can use flow.serve
to create a deployment without needing to specify a work pool name or start a worker.
This is a quick way to create a deployment for a flow and is a great way to manage your deployments if you don't need the dynamic infrastructure creation or configuration offered by workers.
Check out our Docker guide for how to build a served flow into a Docker image and host it in your environment.
This worker will replace your agent and poll your new work pool for flow runs to execute.
prefect worker start -p <work pool name>\n
To deploy your flows to the new work pool, you can use flow.deploy
for a Pythonic deployment experience or prefect deploy
for a YAML-based deployment experience.
If you currently use Deployment.build_from_flow
, we recommend using flow.deploy
.
If you currently use prefect deployment build
and prefect deployment apply
, we recommend using prefect deploy
.
flow.deploy
","text":"If you have a Python script that uses Deployment.build_from_flow
, you can replace it with flow.deploy
.
Most arguments to Deployment.build_from_flow
can be translated directly to flow.deploy
, but here are some changes that you may need to make:
infrastructure
with work_pool_name
..publish_as_work_pool
method on your infrastructure block, use the name of the created work pool.infra_overrides
with job_variables
.storage
with a call to flow.from_source
.flow.from_source
will load your flow from a remote storage location and make it deployable. Your existing storage block can be passed to the source
argument of flow.from_source
.Below are some examples of how to translate Deployment.build_from_flow
to flow.deploy
.
If you aren't using any blocks:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
You can replace Deployment.build_from_flow
with flow.serve
:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n my_flow.serve(\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
This will start a process that will serve your flow and execute any flow runs that are scheduled to start.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-a-storage-block","title":"Deploying using a storage block","text":"If you currently use a storage block to load your flow code but no infrastructure block:
from prefect import flow\nfrom prefect.storage import GitHub\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n storage=GitHub.load(\"demo-repo\"),\n parameters=dict(name=\"Marvin\"),\n )\n
you can use flow.from_source
to load your flow from the same location and flow.serve
to create a deployment:
from prefect import flow\nfrom prefect.storage import GitHub\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\"\n ).serve(\n name=\"my-deployment\",\n parameters=dict(name=\"Marvin\"),\n )\n
This will allow you to execute scheduled flow runs without starting a worker. Additionally, the process serving your flow will regularly check for updates to your flow code and automatically update the flow if it detects any changes to the code.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-an-infrastructure-and-storage-block","title":"Deploying using an infrastructure and storage block","text":"For the code below, we'll need to create a work pool from our infrastructure block and pass it to flow.deploy
as the work_pool_name
argument. We'll also need to pass our storage block to flow.from_source
as the source
argument.
from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\nfrom prefect.infrastructure.kubernetes import KubernetesJob\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\n\nif __name__ == \"__main__\":\n Deployment.build_from_flow(\n my_flow,\n name=\"my-deployment\",\n storage=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\",\n infrastructure=KubernetesJob.load(\"my-k8s-job\"),\n infra_overrides=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
The equivalent deployment code using flow.deploy
would look like this:
from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n flow.from_source(\n source=GitHub.load(\"demo-repo\"),\n entrypoint=\"example.py:my_flow\"\n ).deploy(\n name=\"my-deployment\",\n work_pool_name=\"my-k8s-job\",\n job_variables=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
Note that when using flow.from_source(...).deploy(...)
, the flow you're deploying does not need to be available locally before running your script.
If you currently bake your flow code into a Docker image before deploying, you can use the image
argument of flow.deploy
to build a Docker image as part of your deployment process:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow from a Docker image!\")\n\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n name=\"my-deployment\",\n image=\"my-repo/my-image:latest\",\n work_pool_name=\"my-k8s-job\",\n job_variables=dict(pull_policy=\"Never\"),\n parameters=dict(name=\"Marvin\"),\n )\n
You can skip a flow.from_source
call when building an image with flow.deploy
. Prefect will keep track of the flow's source code location in the image and load it from that location when the flow is executed.
prefect deploy
","text":"Always run prefect deploy
commands from the root level of your repo!
With agents, you might have had multiple deployment.yaml
files, but under worker deployment patterns, each repo will have a single prefect.yaml
file located at the root of the repo that contains deployment configuration for all flows in that repo.
To set up a new prefect.yaml
file for your deployments, run the following command from the root level of your repo:
perfect deploy\n
This will start a wizard that will guide you through setting up your deployment.
For step 4, select y
on the last prompt to save the configuration for the deployment.
Saving the configuration for your deployment will result in a prefect.yaml
file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.
You can add more deployments to the deployments
list in your prefect.yaml
file and/or by continuing to use the deployment creation wizard.
For more information on deployments, check out our in-depth guide for deploying flows to work pools.
","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/using-the-client/","title":"Using the Prefect Orchestration Client","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#overview","title":"Overview","text":"In the API reference for the PrefectClient
, you can find many useful client methods that make it simpler to do things such as:
N
completed flow runs from my workspaceThe PrefectClient
is an async context manager, so you can use it like this:
from prefect import get_client\n\nasync with get_client() as client:\n response = await client.hello()\n print(response.json()) # \ud83d\udc4b\n
","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#examples","title":"Examples","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#rescheduling-late-flow-runs","title":"Rescheduling late flow runs","text":"Sometimes, you may need to bulk reschedule flow runs that are late - for example, if you've accidentally scheduled many flow runs of a deployment to an inactive work pool.
To do this, we can delete late flow runs and create new ones in a Scheduled
state with a delay.
This example reschedules the last 3 late flow runs of a deployment named healthcheck-storage-test
to run 6 hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment.
import asyncio\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n DeploymentFilter, FlowRunFilter\n)\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\nfrom prefect.states import Scheduled\n\nasync def reschedule_late_flow_runs(\n deployment_name: str,\n delay: timedelta,\n most_recent_n: int,\n delete_remaining: bool = True,\n states: Optional[list[str]] = None\n) -> list[FlowRun]:\n if not states:\n states = [\"Late\"]\n\n async with get_client() as client:\n flow_runs = await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state=dict(name=dict(any_=states)),\n expected_start_time=dict(\n before_=datetime.now(timezone.utc)\n ),\n ),\n deployment_filter=DeploymentFilter(\n name={'like_': deployment_name}\n ),\n sort=FlowRunSort.START_TIME_DESC,\n limit=most_recent_n if not delete_remaining else None\n )\n\n if not flow_runs:\n print(f\"No flow runs found in states: {states!r}\")\n return []\n\n rescheduled_flow_runs = []\n for i, run in enumerate(flow_runs):\n await client.delete_flow_run(flow_run_id=run.id)\n if i < most_recent_n:\n new_run = await client.create_flow_run_from_deployment(\n deployment_id=run.deployment_id,\n state=Scheduled(\n scheduled_time=run.expected_start_time + delay\n ),\n )\n rescheduled_flow_runs.append(new_run)\n\n return rescheduled_flow_runs\n\nif __name__ == \"__main__\":\n rescheduled_flow_runs = asyncio.run(\n reschedule_late_flow_runs(\n deployment_name=\"healthcheck-storage-test\",\n delay=timedelta(hours=6),\n most_recent_n=3,\n )\n )\n\n print(f\"Rescheduled {len(rescheduled_flow_runs)} flow runs\")\n\n assert all(\n run.state.is_scheduled() for run in rescheduled_flow_runs\n )\n assert all(\n run.expected_start_time > datetime.now(timezone.utc)\n for run in rescheduled_flow_runs\n )\n
","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#get-the-last-n-completed-flow-runs-from-my-workspace","title":"Get the last N
completed flow runs from my workspace","text":"To get the last N
completed flow runs from our workspace, we can make use of read_flow_runs
and prefect.client.schemas
.
This example gets the last three completed flow runs from our workspace:
import asyncio\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_most_recent_flow_runs(\n n: int = 3,\n states: Optional[list[str]] = None\n) -> list[FlowRun]:\n if not states:\n states = [\"COMPLETED\"]\n\n async with get_client() as client:\n return await client.read_flow_runs(\n flow_run_filter=FlowRunFilter(\n state={'type': {'any_': states}}\n ),\n sort=FlowRunSort.END_TIME_DESC,\n limit=n,\n )\n\nif __name__ == \"__main__\":\n last_3_flow_runs: list[FlowRun] = asyncio.run(\n get_most_recent_flow_runs()\n )\n print(last_3_flow_runs)\n\n assert all(\n run.state.is_completed() for run in last_3_flow_runs\n )\n assert (\n end_times := [run.end_time for run in last_3_flow_runs]\n ) == sorted(end_times, reverse=True)\n
Instead of the last three from the whole workspace, you could also use the DeploymentFilter
like the previous example to get the last three completed flow runs of a specific deployment.
There are other ways to filter objects like flow runs
See the filters API reference
for more ways to filter flow runs and other objects in your Prefect ecosystem.
Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.
Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.
While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts, at any time, such that they can be used to pass configuration information to Prefect configuration files, such as deployment steps.
Variables are not Encrypted
Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#managing-variables","title":"Managing variables","text":"You can create, read, edit and delete variables via the Prefect UI, API, and CLI. Names must adhere to traditional variable naming conventions:
Values must:
Optionally, you can add tags to the variable.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.
To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-rest-api","title":"Via the REST API","text":"Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-cli","title":"Via the CLI","text":"You can list, inspect, and delete variables via the command line interface with the prefect variable ls
, prefect variable inspect <name>
, and prefect variable delete <name>
commands, respectively.
In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-python-code","title":"In Python code","text":"You can access any variable via the Python SDK via the .get()
method. If you attempt to reference a variable that does not exist, the method will return None
.
from prefect import variables\n\n# from a synchronous context\nanswer = variables.get('the_answer')\nprint(answer)\n# 42\n\n# from an asynchronous context\nanswer = await variables.get('the_answer')\nprint(answer)\n# 42\n\n# without a default value\nanswer = variables.get('not_the_answer')\nprint(answer)\n# None\n\n# with a default value\nanswer = variables.get('not_the_answer', default='42')\nprint(answer)\n# 42\n
","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-prefectyaml-deployment-steps","title":"In prefect.yaml
deployment steps","text":"In .yaml
files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\"
. You can use variables to templatize deployment steps by referencing them in the prefect.yaml
file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull
step:
pull:\n- prefect.deployments.steps.git_clone:\n repository: https://github.com/PrefectHQ/hello-projects.git\n branch: \"{{ prefect.variables.deployment_branch }}\"\n
The deployment_branch
variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.
Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.
Webhooks are defined by two essential components: a unique URL and a template that translates incoming web requests to a Prefect event.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"Webhooks can be created and managed from the Prefect Cloud UI.
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"Webhooks can be managed and interacted with via the prefect cloud webhook
command group.
prefect cloud webhook --help\n
You can create your first webhook by invoking create
:
prefect cloud webhook create your-webhook-name \\\n --description \"Receives webhooks from your system\" \\\n --template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n
Note the template string, which is discussed in greater detail below
You can retrieve details for a specific webhook by ID using get
, or optionally query all webhooks in your workspace via ls
:
# get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\n\nprefect cloud webhook ls\n
If you need to disable an existing webhook without deleting it, use toggle
:
prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n
If you are concerned that your webhook endpoint may have been compromised, use rotate
to generate a new, random endpoint
prefect cloud webhook rotate <webhook-url-slug>\n
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/
. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ
. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.
All webhooks may accept requests via the most common HTTP methods:
GET
, HEAD
, and DELETE
may be used for webhooks that define a static event template, or a template that does not depend on the body of the HTTP request. The headers of the request will be available for templates.POST
, PUT
, and PATCH
may be used when the webhook request will include a body. See How HTTP request components are handled for more details on how the body is parsed.Prefect Cloud webhooks are deliberately quiet to the outside world, and will only return a 204 No Content
response when they are successful, and a 400 Bad Request
error when there is any error interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.
The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.
As with the templates available in Prefect Cloud Automation for defining notifications and other parameters, you will write templates in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions
package.
Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event
name and the resource[\"prefect.resource.id\"]
, which are required of all events. The simplest template is one in which these are statically defined.
Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations
machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron
for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl
to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:
{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.recommendations\",\n \"prefect.resource.name\": \"Recommendations [Products]\",\n \"producing-team\": \"Data Science\"\n }\n}\n
Make sure to produce valid JSON
The output of your template, when rendered, should be a valid string that can be parsed, for example, with json.loads
.
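One way to check this before saving a webhook is to render the template locally with Jinja2 and parse the result. This is a rough local sketch, not Prefect Cloud's actual rendering pipeline, and the sample template and body are hypothetical:
import json\nfrom jinja2 import Template\n\n# A hypothetical template and request body, for local validation only\ntemplate = \"\"\"\n{\n  \"event\": \"model.refreshed\",\n  \"resource\": {\n    \"prefect.resource.id\": \"product.models.{{ body.model }}\"\n  }\n}\n\"\"\"\n\nrendered = Template(template).render(body={\"model\": \"recommendations\"})\nevent = json.loads(rendered)  # raises json.JSONDecodeError if the output is not valid JSON\nprint(event[\"resource\"][\"prefect.resource.id\"])\n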
A webhook with this template may be invoked via any of the HTTP methods, including a GET
request with no body, so the team you are integrating with can include this line at the end of their daily script:
curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n
Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.
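If the producing script were written in Python instead of shell, a minimal equivalent using the requests library might look like this (same example endpoint as above):
import requests\n\n# Example endpoint from above; substitute your own webhook URL\nresponse = requests.get(\"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\")\nresponse.raise_for_status()  # success responses are 204 No Content\n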
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"You may notice that you only had to provide the event
and resource
definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred
and id
, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.
If your template does not produce a payload
field, the payload
will default to a standard set of debugging information, including the HTTP method, headers, and body.
Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.
Your colleagues on the team have adjusted their daily cron
scripts to POST
a small body that includes the ID and name of the model that was updated:
curl \\\n -d \"model=recommendations\" \\\n -d \"friendly_name=Recommendations%20[Products]\" \\\n -X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n
This script will send a POST
request and the body will include a traditional URL-encoded form with two fields describing the model that was updated: model
and friendly_name
. Here's a webhook template that uses Jinja to receive these values and produce different events for the different models:
{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n \"producing-team\": \"Data Science\"\n }\n}\n
All subsequent POST
requests will produce events with those variable resource IDs and names. The other statically-defined parts, such as event
or the producing-team
label you included earlier, will still be used.
Use Jinja2's default
filter to handle missing values
Jinja2 has a helpful default
filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}
.
The Jinja2 template context includes the three parts of the incoming HTTP request:
method
is the uppercased string of the HTTP method, like GET
or POST
.headers
is a case-insensitive dictionary of the HTTP headers included with the request. To prevent accidental disclosures, the Authorization
header is removed.body
represents the body that was posted to the webhook, with a best-effort approach to parse it into an object you can access.HTTP headers are available without any alteration as a dict
-like object, but you may access them with header names in any case. For example, these template expressions all return the value of the Content-Length
header:
{{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n
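When testing templates outside of Prefect Cloud, you can approximate this behavior with a case-insensitive mapping; one readily available option is the one bundled with the requests library (a sketch, not Prefect Cloud's implementation):
from requests.structures import CaseInsensitiveDict\n\n# All three lookups below return the same value, regardless of key casing\nheaders = CaseInsensitiveDict({\"Content-Length\": \"228\"})\nassert headers[\"Content-Length\"] == headers[\"content-length\"] == headers[\"CoNtEnt-LeNgTh\"]\n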
The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type
of the request is application/json
, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type
is application/x-www-form-urlencoded
(as in our example above), the body is parsed into a flat dict
-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:
{{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n
Only for Python identifiers
Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name
will not work. Use body['friendly-name']
in those cases.
You may not have much control over the client invoking your webhook, but would still like for bodies that look like JSON to be parsed as such. Prefect Cloud will attempt to parse any other content type (like text/plain
) as if it were JSON first. In any case where the body cannot be transformed into JSON, it will be made available to your templates as a Python str
.
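A rough approximation of this preprocessing, useful for exercising templates locally, might look like the sketch below; the actual parsing is internal to Prefect Cloud and may differ in detail:
import json\nfrom urllib.parse import parse_qsl\n\ndef parse_body(raw: str, content_type: str):\n    \"\"\"Best-effort body parsing, mimicking the behavior described above.\"\"\"\n    if content_type == \"application/x-www-form-urlencoded\":\n        return dict(parse_qsl(raw))  # flat dict of key-value pairs\n    try:\n        return json.loads(raw)  # JSON, or any other content type that looks like JSON\n    except ValueError:\n        return raw  # fall back to the raw string\n\nprint(parse_body(\"model=recommendations\", \"application/x-www-form-urlencoded\"))\nprint(parse_body('{\"model\": \"recommendations\"}', \"text/plain\"))\n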
In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:
{{ body|tojson }}\n
This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST
of a partial Prefect event as in this example:
POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n \"event\": \"model.refreshed\",\n \"resource\": {\n \"prefect.resource.id\": \"product.models.recommendations\",\n \"prefect.resource.name\": \"Recommendations [Products]\",\n \"producing-team\": \"Data Science\"\n }\n}\n
The resulting event will be filled out with the default values for occurred
, id
, and other fields as described above.
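In Python, posting the same partial event through a pass-through template might look like this sketch (again using the example endpoint from above):
import requests\n\nevent = {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n\n# json= serializes the body and sets the Content-Type: application/json header\nresponse = requests.post(\"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\", json=event)\nresponse.raise_for_status()\n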
The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:
{{ body|from_cloud_event(headers) }}\n
The resulting event will use the CloudEvent's subject
as the resource (or the source
if no subject
is available). The CloudEvent's data
attribute will become the Prefect event's payload['data']
, and the other CloudEvent metadata will be at payload['cloudevents']
. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body
.
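For reference, a client delivering a CloudEvent in the structured HTTP mode might look like the sketch below; the attribute values are hypothetical, and the CloudEvents specification defines which attributes are required:
import requests\n\n# A minimal structured-mode CloudEvent with hypothetical values\ncloud_event = {\n    \"specversion\": \"1.0\",\n    \"type\": \"com.example.model.refreshed\",\n    \"source\": \"https://example.com/models\",\n    \"subject\": \"product.models.recommendations\",\n    \"id\": \"unique-event-id-0001\",\n    \"data\": {\"model\": \"recommendations\"},\n}\n\nresponse = requests.post(\n    \"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\",\n    json=cloud_event,\n    headers={\"Content-Type\": \"application/cloudevents+json\"},\n)\nresponse.raise_for_status()\n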
The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.
When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed
event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.
Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"To follow this quickstart, you'll need the following:
Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create a new one by running the az group create
command. For example, the following command creates a resource group called prefect-agents
in the eastus
region:
az group create --name prefect-agents --location eastus\n
Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table
to see all available resource group locations for your account.
Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10
includes the latest release version of Prefect and Python 3.10.
To create the container instance, use the az container create
command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID]
,[WORKSPACE-ID]
, [API-KEY]
, and any dependencies you need to pip install
on the instance. These options are discussed below.
az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n
When the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see work queues configured on this work pool. Once the agent has started, the test
work queue displays \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test
queue.
Agents and queues
The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test
queue on the default-agent-pool
work pool.
Let's break down the details of the az container create
command used here.
The az container create command
creates a new ACI container.
--resource-group prefect-agents
tells Azure which resource group the new container is created in. Here, the examples uses the prefect-agents
resource group created earlier.
--name prefect-agent-example
determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.
--image prefecthq/prefect:2-python3.10
tells ACI which Docker image to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.
--secure-environment-variables
sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:
PREFECT_API_KEY
value specifying the API key used to authenticate with your Prefect Cloud workspace. (Pro and Enterprise tier accounts can use a service account API key.) PREFECT_API_URL
value specifying the API endpoint of your Prefect Cloud workspace.--command-line
lets you override the container\u2019s normal entry point and run a command instead. The script above uses this option to install the adlfs
pip package so it can read flow code from Azure Blob Storage, along with s3fs
, pandas
, and requests
. It then runs the Prefect agent, in this case using the default work pool and a test
work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.
Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.
In an environment where you have installed Prefect, create a new folder called health_test
, and within it create a new file called health_flow.py
containing the following code.
import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n logger = get_run_logger()\n logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n import platform\n import sys\n from prefect.server.api.server import SERVER_API_VERSION\n\n logger = get_run_logger()\n logger.info(\"Host's network name = %s\", platform.node())\n logger.info(\"Python version = %s\", platform.python_version())\n logger.info(\"Platform information (instance type) = %s \", platform.platform())\n logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n hi = say_hi()\n log_platform_info(wait_for=[hi])\n
Now create a deployment for this flow script, making sure that it's configured to use the test
queue on the default-agent-pool
work pool.
prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n
Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.
Infrastructure and storage
This Prefect deployment example was built using the Process
infrastructure type and Azure Blob Storage.
You might wonder why your deployment needs process infrastructure rather than DockerContainer
infrastructure when you are deploying a Docker image to ACI.
A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer
infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want the agent to run flows by starting a process in your ACI container.
You can use any storage type as long as you've configured a block for it before creating the deployment.
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.
To stop a container, use the az container stop
command:
az container stop --resource-group prefect-agents --name prefect-agent-example\n
To delete a container, use the az container delete
command:
az container delete --resource-group prefect-agents --name prefect-agent-example\n
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/daemonize/","title":"Daemonize Processes for Prefect Deployments","text":"When running workflow applications, it can be helpful to create long-running processes that run at startup and are robust to failure. In this guide you'll learn how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs.
A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server. We will leverage systemd and see how to automatically start a Prefect worker or long-lived serve
process when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.
In this guide we will:
.serve
processsudo
commands).If using an AWS t2-micro EC2 instance with an AWS Linux image, you can install Python and pip with sudo yum install -y python3 python3-pip
.
Create a user account on your Linux system for the Prefect process. While you can run a worker or serve process as root, it's good security practice to avoid doing so unless you are sure you need to.
In a terminal, run:
sudo useradd -m prefect\nsudo passwd prefect\n
When prompted, enter a password for the prefect
account.
Next, log in to the prefect
account by running:
sudo su prefect\n
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-2-install-prefect","title":"Step 2: Install Prefect","text":"Run:
pip3 install prefect\n
This guide assumes you are installing Prefect globally, not in a virtual environment. If running a systemd service in a virtual environment, you'll just need to change the ExecStart path. For example, if using venv, change ExecStart to target the prefect
application in the bin
subdirectory of your virtual environment.
Next, set up your environment so that the Prefect client will know which server to connect to.
If connecting to Prefect Cloud, follow the instructions to obtain an API key and then run the following:
prefect cloud login -k YOUR_API_KEY\n
When prompted, choose the Prefect workspace you'd like to log in to.
If connecting to a self-hosted Prefect server instance instead of Prefect Cloud, run the following and substitute the IP address of your server:
prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200\n
Finally, run the exit
command to sign out of the prefect
Linux account. This command switches you back to your sudo-enabled account so you can run the commands in the next section.
See the section below if you are setting up a Prefect worker. Skip to the next section if you are setting up a Prefect .serve
process.
Move into the /etc/systemd/system
folder and open a file for editing. We use the Vim text editor below.
cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
my-prefect-service.service[Unit]\nDescription=Prefect worker\n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
Make sure you substitute your own work pool name.
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-serve","title":"Setting up a systemd service for.serve
","text":"Copy your flow entrypoint Python file and any other files needed for your flow to run into the /home
directory (or the directory of your choice).
Here's a basic example flow:
my_file.pyfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef say_hi():\n print(\"Hello!\")\n\nif __name__==\"__main__\":\n say_hi.serve(name=\"Greeting from daemonized .serve\")\n
If you want to make changes to your flow code without restarting your process, you can push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use flow.from_source().serve()
, as in the example below.
if __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"path/to/my_remote_flow_code_file.py:say_hi\",\n    ).serve(name=\"deployment-with-github-storage\")\n
Make sure you substitute your own flow code entrypoint path.
Note that if you change the flow entrypoint parameters, you will need to restart the process.
Move into the /etc/systemd/system
folder and open a file for editing. We use the Vim text editor below.
cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
my-prefect-service.service[Unit]\nDescription=Prefect serve\n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=python3 my_file.py\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#save-enable-and-start-the-service","title":"Save, enable, and start the service","text":"To save the file and exit Vim hit the escape key, type :wq!
, then press the return key.
Next, run sudo systemctl daemon-reload
to make systemd aware of your new service.
Then, run sudo systemctl enable my-prefect-service
to enable the service. This command will ensure it runs when your system boots.
Next, run sudo systemctl start my-prefect-service
to start the service.
Run your deployment from the UI and check out the logs on the Flow Runs page.
You can see if your daemonized Prefect worker or serve process is running and see the Prefect logs with systemctl status my-prefect-service
.
That's it! You now have a systemd service that starts when your system boots, and will restart if it ever crashes.
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#next-steps","title":"Next steps","text":"If you want to set up a long-lived process on a Windows machine the pattern is similar. Instead of systemd, you can use NSSM.
Check out other Prefect guides to see what else you can do with Prefect!
","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"Advanced Topic
This tutorial is for users who want to extend the Prefect framework and completing this successfully will require deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.
Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.
A list of available workers can be found here. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker configuration","text":"When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.
How are the configuration values populated?
The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values populate for each flow run.
The keys in the job_configuration
section of this base job template match the worker's configuration class attributes. The values in the job_configuration
section of the base job template are used to populate the attributes of the worker's configuration class.
The work pool creator gets to decide how they want to populate the values in the job_configuration
section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.
BaseJobConfiguration
subclass","text":"A worker developer defines their worker's configuration to function with a class extending BaseJobConfiguration
.
BaseJobConfiguration
has attributes that are common to all workers:
name
The name to assign to the created execution environment. env
Environment variables to set in the created execution environment. labels
The labels assigned to the created execution environment for metadata purposes. command
The command to use when starting a flow run. Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run
method.
Here's an example prepare_for_flow_run
method that adds a label to the execution environment:
def prepare_for_flow_run(\n    self, flow_run, deployment=None, flow=None,\n):\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n    self.labels.append(\"my-custom-label\")\n
A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n memory: int = Field(\n default=1024,\n description=\"Memory allocation for the execution environment.\"\n )\n cpu: int = Field(\n default=500, \n description=\"CPU allocation for the execution environment.\"\n )\n
This configuration class will populate the job_configuration
section of the resulting base job template.
For this example, the base job template would look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory }}\"\n cpu: \"{{ cpu }}\"\nvariables:\n type: object\n properties:\n name:\n title: Name\n description: Name given to infrastructure created by a worker.\n type: string\n env:\n title: Environment Variables\n description: Environment variables to set when starting a flow run.\n type: object\n additionalProperties:\n type: string\n labels:\n title: Labels\n description: Labels applied to infrastructure created by a worker.\n type: object\n additionalProperties:\n type: string\n command:\n title: Command\n description: The command to use when starting a flow run. In most cases,\n this should be left blank and the command will be automatically generated\n by the worker.\n type: string\n memory:\n title: Memory\n description: Memory allocation for the execution environment.\n type: integer\n default: 1024\n cpu:\n title: CPU\n description: CPU allocation for the execution environment.\n type: integer\n default: 500\n
This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.
Notice that each attribute for the class was added in the job_configuration
section with placeholders whose name matches the attribute name. The variables
section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.
You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory
attribute, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n        default=\"1024Mi\",\n        description=\"Memory allocation for the execution environment.\",\n        template=\"{{ memory_request }}Mi\"\n    )\n    cpu: str = Field(\n        default=\"500m\",\n        description=\"CPU allocation for the execution environment.\",\n        template=\"{{ cpu_request }}m\"\n    )\n
Notice that we changed the type of each attribute to str
to accommodate the units, and we added a new template
attribute to each attribute. The template
attribute is used to populate the job_configuration
section of the resulting base job template.
For this example, the job_configuration
section of the resulting base job template would look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory_request }}Mi\"\n cpu: \"{{ cpu_request }}m\"\n
Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for template variable interpolation","text":"When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:
job_configuration
section, the key will be replaced with the value template variable.job_configuration
section and no value is provided for the template variable, the key will be removed from the job_configuration
section.These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#template-variable-usage-strategies","title":"Template variable usage strategies","text":"Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization.
There are two patterns that are represented in current worker implementations:
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-through","text":"In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.
This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.
The Docker worker is an example of a worker that uses this pattern.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as code templating","text":"Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).
In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form.
This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.
The Kubernetes worker is an example of a worker that uses this pattern.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring credentials","text":"When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.
For example, if you want to allow the user to configure AWS credentials, you can do so like this:
from prefect_aws import AwsCredentials\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n aws_credentials: AwsCredentials = Field(\n default=None,\n description=\"AWS credentials to use when creating AWS resources.\"\n )\n
Users can create and assign a block to the aws_credentials
attribute in the UI and the worker will use these credentials when interacting with AWS resources.
Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.
Default template variables for a worker are defined by implementing the BaseVariables
class. Like the BaseJobConfiguration
class, the BaseVariables
class has attributes that are common to all workers:
name
The name to assign to the created execution environment. env
Environment variables to set in the created execution environment. labels
The labels assigned to the created execution environment for metadata purposes. command
The command to use when starting a flow run. Additional attributes can be added to the BaseVariables
class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:
from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n memory_request: int = Field(\n default=1024,\n description=\"Memory allocation for the execution environment.\"\n )\n cpu_request: int = Field(\n default=500, \n description=\"CPU allocation for the execution environment.\"\n )\n
When MyWorkerTemplateVariables
is used in conjunction with MyWorkerConfiguration
from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:
job_configuration:\n name: \"{{ name }}\"\n env: \"{{ env }}\"\n labels: \"{{ labels }}\"\n command: \"{{ command }}\"\n memory: \"{{ memory_request }}Mi\"\n cpu: \"{{ cpu_request }}m\"\nvariables:\n type: object\n properties:\n name:\n title: Name\n description: Name given to infrastructure created by a worker.\n type: string\n env:\n title: Environment Variables\n description: Environment variables to set when starting a flow run.\n type: object\n additionalProperties:\n type: string\n labels:\n title: Labels\n description: Labels applied to infrastructure created by a worker.\n type: object\n additionalProperties:\n type: string\n command:\n title: Command\n description: The command to use when starting a flow run. In most cases,\n this should be left blank and the command will be automatically generated\n by the worker.\n type: string\n memory_request:\n title: Memory Request\n description: Memory allocation for the execution environment.\n type: integer\n default: 1024\n cpu_request:\n title: CPU Request\n description: CPU allocation for the execution environment.\n type: integer\n default: 500\n
Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables
section of a base job template and validate the template variables provided by the user.
We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
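For example, run-time validation can live on the configuration class as a Pydantic validator; this is a sketch assuming the Pydantic v1 API that Prefect workers were built against at the time:
from pydantic import Field, validator\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n        default=1024,\n        description=\"Memory allocation for the execution environment.\"\n    )\n\n    @validator(\"memory\")\n    def memory_must_be_positive(cls, value):\n        # Reject nonsensical values at run time, regardless of the work pool's template\n        if value <= 0:\n            raise ValueError(\"memory must be a positive number of megabytes\")\n        return value\n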
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker implementation","text":"Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"To implement a worker, you must implement the BaseWorker
class and provide it with the following attributes:
type
The type of the worker. Yes job_configuration
The configuration class for the worker. Yes job_configuration_variables
The template variables class for the worker. No _documentation_url
Link to documentation for the worker. No _logo_url
Link to a logo for the worker. No _description
A description of the worker. No","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run
","text":"In addition to the attributes above, you must also implement a run
method. The run
method is called for each flow run the worker receives for execution from the work pool.
The run
method has the following signature:
async def run(\n self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> BaseWorkerResult:\n ...\n
The run
method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully.
The run
method must also return a BaseWorkerResult
object. The BaseWorkerResult
object returned contains information about the flow run execution. For the most part, you can implement the BaseWorkerResult
with no modifications like so:
from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n \"\"\"Result returned by the MyWorker.\"\"\"\n
If you would like to return more information about a flow run, then additional attributes can be added to the BaseWorkerResult
class.
kill_infrastructure
","text":"Workers must implement a kill_infrastructure
method to support flow run cancellation. The kill_infrastructure
method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.
The infrastructure_pid
passed to the kill_infrastructure
method is the same identifier used to mark a flow run execution as started in the run
method. The infrastructure_pid
must be a string, but it can take on any format you choose.
The infrastructure_pid
should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration
passed to the kill_infrastructure
method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure
command should raise an InfrastructureNotFound
or InfrastructureNotAvailable
exception.
Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.
from prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n memory: str = Field(\n default=\"1024Mi\",\n description=\"Memory allocation for the execution environment.\"\n template=\"{{ memory_request }}Mi\"\n )\n cpu: str = Field(\n default=\"500m\", \n description=\"CPU allocation for the execution environment.\"\n template=\"{{ cpu_request }}m\"\n )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n memory_request: int = Field(\n default=1024,\n description=\"Memory allocation for the execution environment.\"\n )\n cpu_request: int = Field(\n default=500, \n description=\"CPU allocation for the execution environment.\"\n )\n\nclass MyWorkerResult(BaseWorkerResult):\n \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n type = \"my-worker\"\n job_configuration = MyWorkerConfiguration\n job_configuration_variables = MyWorkerTemplateVariables\n _documentation_url = \"https://example.com/docs\"\n _logo_url = \"https://example.com/logo\"\n _description = \"My worker description.\"\n\n async def run(\n self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n ) -> BaseWorkerResult:\n # Create the execution environment and start execution\n job = await self._create_and_start_job(configuration)\n\n if task_status:\n # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n # if the flow run is cancelled.\n task_status.started(job.id) \n\n # Monitor the execution\n job_status = await self._watch_job(job, configuration)\n\n exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting\n return MyWorkerResult(\n status_code=exit_code,\n identifier=job.id,\n )\n\n async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n # Tear down the execution environment\n await self._kill_job(infrastructure_pid, configuration)\n
Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run
method is: 1. Create the execution environment and start the flow run execution 2. Mark the flow run as started via the passed task_status
object 3. Monitor the execution 4. Get the execution's final status from the infrastructure and return a BaseWorkerResult
object
To see other examples of worker implementations, see the ProcessWorker
and KubernetesWorker
implementations.
Workers can be started via the Prefect CLI by providing the --type
option to the prefect worker start
CLI command. To make your worker type available via the CLI, it must be available at import time.
If your worker is in a package, you can add an entry point to your setup file in the following format:
entry_points={\n \"prefect.collections\": [\n \"my_package_name = my_worker_module\",\n ]\n},\n
Prefect will discover this entry point and load your work module in the specified module. The entry point will allow the worker to be available via the CLI.
","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/kubernetes/","title":"Running Flows with Kubernetes","text":"This guide will walk you through running your flows on Kubernetes. Though much of the guide is general to any Kubernetes cluster, there are differences between the managed Kubernetes offerings between cloud providers, especially when it comes to container registries and access management. We'll focus on Amazon Elastic Kubernetes Service (EKS).
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#prerequisites","title":"Prerequisites","text":"Before we begin, there are a few pre-requisites:
Prefect is tested against Kubernetes 1.26.0 and newer minor versions.
Administrator Access
Though not strictly necessary, you may want to ensure you have admin access, both in Prefect Cloud and in your cloud provider. Admin access is only necessary during the initial setup and can be downgraded after.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-cluster","title":"Create a cluster","text":"Let's start by creating a new cluster. If you already have one, skip ahead to the next section.
AWSGCPAzureOne easy way to get set up with a cluster in EKS is with eksctl
. Node pools can be backed by either EC2 instances or FARGATE. Let's choose FARGATE so there's less to manage. The following command takes around 15 minutes and must not be interrupted:
# Replace the cluster name with your own value\neksctl create cluster --fargate --name <CLUSTER-NAME>\n\n# Authenticate to the cluster.\naws eks update-kubeconfig --name <CLUSTER-NAME>\n
You can get a GKE cluster up and running with a few commands using the gcloud
CLI. We'll build a bare-bones cluster that is accessible over the open internet - this should not be used in a production environment. To deploy the cluster, your project must have a VPC network configured.
First, authenticate to GCP by setting the following configuration options.
# Authenticate to gcloud\ngcloud auth login\n\n# Specify the project & zone to deploy the cluster to\n# Replace the project name with your GCP project name\ngcloud config set project <GCP-PROJECT-NAME>\ngcloud config set compute/zone <AVAILABILITY-ZONE>\n
Next, deploy the cluster - this command will take ~15 minutes to complete. Once the cluster has been created, authenticate to the cluster.
# Create cluster\n# Replace the cluster name with your own value\ngcloud container clusters create <CLUSTER-NAME> --num-nodes=1 \\\n--machine-type=n1-standard-2\n\n# Authenticate to the cluster\ngcloud container clusters <CLUSTER-NAME> --region <AVAILABILITY-ZONE>\n
GCP Gotchas
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=Service account \"000000000000-compute@developer.gserviceaccount.com\" is disabled.\n
Organizational Policy
page within IAM.creation failed: Constraint constraints/compute.vmExternalIpAccess violated for project 000000000000. Add instance projects/<GCP-PROJECT-NAME>/zones/us-east1-b/instances/gke-gke-guide-1-default-pool-c369c84d-wcfl to the constraint to use external IP with it.\"\n
You can quickly create an AKS cluster using the Azure CLI, or use the Cloud Shell directly from the Azure portal shell.azure.com.
First, authenticate to Azure if not already done.
az login\n
Next, deploy the cluster - this command will take ~4 minutes to complete. Once the cluster has been created, authenticate to the cluster.
# Create a Resource Group at the desired location, e.g. westus\n az group create --name <RESOURCE-GROUP-NAME> --location <LOCATION>\n\n # Create a kubernetes cluster with default kubernetes version, default SKU load balancer (Standard) and default vm set type (VirtualMachineScaleSets)\n az aks create --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n # Configure kubectl to connect to your Kubernetes cluster\n az aks get-credentials --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n # Verify the connection by listing the cluster nodes\n kubectl get nodes\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-container-registry","title":"Create a container registry","text":"Besides a cluster, the other critical resource we'll need is a container registry. A registry is not strictly required, but in most cases you'll want to use custom images and/or have more control over where images are stored. If you already have a registry, skip ahead to the next section.
AWSGCPAzureLet's create a registry using the AWS CLI and authenticate the docker daemon to said registry:
# Replace the image name with your own value\naws ecr create-repository --repository-name <IMAGE-NAME>\n\n# Login to ECR\n# Replace the region and account ID with your own values\naws ecr get-login-password --region <REGION> | docker login \\\n --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com\n
Let's create a registry using the gcloud CLI and authenticate the docker daemon to said registry:
# Create artifact registry repository to host your custom image\n# Replace the repository name with your own value; it can be the \n# same name as your image\ngcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n\n# Authenticate to artifact registry\ngcloud auth configure-docker us-docker.pkg.dev\n
Let's create a registry using the Azure CLI and authenticate the docker daemon to said registry:
# Name must be a lower-case alphanumeric\n# Tier SKU can easily be updated later, e.g. az acr update --name <REPOSITORY-NAME> --sku Standard\naz acr create --resource-group <RESOURCE-GROUP-NAME> \\\n  --name <REPOSITORY-NAME> \\\n  --sku Basic\n\n# Attach ACR to AKS cluster\n# You need Owner, Account Administrator, or Co-Administrator role on your Azure subscription as per Azure docs\naz aks update --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --attach-acr <REPOSITORY-NAME>\n\n# You can verify AKS can now reach ACR\naz aks check-acr --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --acr <REPOSITORY-NAME>.azurecr.io\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-work-pool","title":"Create a Kubernetes work pool","text":"Work pools allow you to manage deployment infrastructure. We'll configure the default values for our Kubernetes base job template. Note that these values can be overridden by individual deployments.
Let's switch to the Prefect Cloud UI, where we'll create a new Kubernetes work pool (alternatively, you could use the Prefect CLI to create a work pool).
Let's look at a few popular configuration options.
Environment Variables
Add environment variables to set when starting a flow run. So long as you are using a Prefect-maintained image and haven't overwritten the image's entrypoint, you can specify Python packages to install at runtime with {\"EXTRA_PIP_PACKAGES\":\"my_package\"}
. For example {\"EXTRA_PIP_PACKAGES\":\"pandas==1.2.3\"}
will install pandas version 1.2.3. Alternatively, you can specify package installation in a custom Dockerfile, which can allow you to take advantage of image caching. As we'll see below, Prefect can help us create a Dockerfile with our flow code and the packages specified in a requirements.txt
file baked in.
Namespace
Set the Kubernetes namespace to create jobs within, such as prefect
. By default, set to default.
Image
Specify the Docker container image for created jobs. If not set, the latest Prefect 2 image will be used (i.e. prefecthq/prefect:2-latest
). Note that you can override this on each deployment through job_variables
.
Image Pull Policy
Select from the dropdown options to specify when to pull the image. When using the IfNotPresent
policy, make sure to use unique image tags, as otherwise old images could get cached on your nodes.
Finished Job TTL
Number of seconds before finished jobs are automatically cleaned up by Kubernetes' controller. You may want to set to 60 so that completed flow runs are cleaned up after a minute.
Pod Watch Timeout Seconds
Number of seconds for pod creation to complete before timing out. Consider setting to 300, especially if using a serverless type node pool, as these tend to have longer startup times.
Kubernetes Cluster Config
You can configure the Kubernetes cluster to use for job creation by specifying a KubernetesClusterConfig
block. Generally you should leave the cluster config blank as the worker should be provisioned with appropriate access and permissions. Typically this setting is used when a worker is deployed to a cluster that is different from the cluster where flow runs are executed.
Advanced Settings
Want to modify the default base job template to add other fields or delete existing fields?
Select the Advanced tab and edit the JSON representation of the base job template.
For example, to set a CPU request, add the following section under variables:
\"cpu_request\": {\n \"title\": \"CPU Request\",\n \"description\": \"The CPU allocation to request for this pod.\",\n \"default\": \"default\",\n \"type\": \"string\"\n},\n
Next add the following to the first containers
item under job_configuration
:
...\n\"containers\": [\n {\n ...,\n \"resources\": {\n \"requests\": {\n \"cpu\": \"{{ cpu_request }}\"\n }\n }\n }\n],\n...\n
Running deployments with this work pool will now request the specified CPU.
After configuring the work pool settings, move to the next screen.
Give the work pool a name and save.
Our new Kubernetes work pool should now appear in the list of work pools.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-prefect-cloud-api-key","title":"Create a Prefect Cloud API key","text":"While in the Prefect Cloud UI, create a Prefect Cloud API key if you don't already have one. Click on your profile avatar picture, then click your name to go to your profile settings, click API Keys and hit the plus button to create a new API key here. Make sure to store it safely along with your other passwords, ideally via a password manager.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-a-worker-using-helm","title":"Deploy a worker using Helm","text":"With our cluster and work pool created, it's time to deploy a worker, which will set up Kubernetes infrastructure to run our flows. The best way to deploy a worker is using the Prefect Helm Chart.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#add-the-prefect-helm-repository","title":"Add the Prefect Helm repository","text":"Add the Prefect Helm repository to your Helm client:
helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-namespace","title":"Create a namespace","text":"Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:
kubectl create namespace prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-secret-for-the-prefect-api-key","title":"Create a Kubernetes secret for the Prefect API key","text":"kubectl create secret generic prefect-api-key \\\n--namespace=prefect --from-literal=key=your-prefect-cloud-api-key\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#configure-helm-chart-values","title":"Configure Helm chart values","text":"Create a values.yaml
file to customize the Prefect worker configuration. Add the following contents to the file:
worker:\n cloudApiConfig:\n accountId: <target account ID>\n workspaceId: <target workspace ID>\n config:\n workPool: <target work pool name>\n
These settings will ensure that the worker connects to the proper account, workspace, and work pool.
View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-helm-release","title":"Create a Helm release","text":"Let's install the Prefect worker using the Helm chart with your custom values.yaml
file:
helm install prefect-worker prefect/prefect-worker \\\n --namespace=prefect \\\n -f values.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#verify-deployment","title":"Verify deployment","text":"Check the status of your Prefect worker deployment:
kubectl get pods -n prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-flow","title":"Define a flow","text":"Let's start simple with a flow that just logs a message. In a directory named flows
, create a file named hello.py
with the following contents:
from prefect import flow, get_run_logger, tags\n\n@flow\ndef hello(name: str = \"Marvin\"):\n logger = get_run_logger()\n logger.info(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n with tags(\"local\"):\n hello()\n
Run the flow locally with python hello.py
to verify that it works. Note that we use the tags
context manager to tag the flow run as local
. This step is not required, but does add some helpful metadata.
Prefect has two recommended options for creating a deployment with dynamic infrastructure. You can define a deployment in a Python script using the flow.deploy
mechanics or in a prefect.yaml
definition file. The prefect.yaml
file currently allows for more customization in terms of push and pull steps. Kubernetes objects are defined in YAML, so we expect many teams using Kubernetes work pools to create their deployments with YAML as well. To learn about the Python deployment creation method with flow.deploy
refer to the Workers & Work Pools tutorial page.
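For reference, a minimal sketch of the Python method might look like the following; the import path and image name are assumptions to adapt to your project:
# deploy.py - a sketch of the Python deployment creation method\nfrom flows.hello import hello\n\nif __name__ == \"__main__\":\n    hello.deploy(\n        name=\"default\",\n        work_pool_name=\"kubernetes\",\n        image=\"my-registry/hello:latest\",  # hypothetical image name\n    )\n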
The prefect.yaml
file is used by the prefect deploy
command to deploy our flows. As a part of that process it will also build and push our image. Create a new file named prefect.yaml
with the following contents:
# Generic metadata about this project\nname: flows\nprefect-version: 2.13.8\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.4.0\n image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n tag: latest\n dockerfile: auto\n platform: \"linux/amd64\"\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n requires: prefect-docker>=0.4.0\n image_name: \"{{ build-image.image_name }}\"\n tag: \"{{ build-image.tag }}\"\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n directory: /opt/prefect/flows\n\n# the definitions section allows you to define reusable components for your deployments\ndefinitions:\n tags: &common_tags\n - \"eks\"\n work_pool: &common_work_pool\n name: \"kubernetes\"\n job_variables:\n image: \"{{ build-image.image }}\"\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: \"default\"\n tags: *common_tags\n schedule: null\n entrypoint: \"flows/hello.py:hello\"\n work_pool: *common_work_pool\n\n- name: \"arthur\"\n tags: *common_tags\n schedule: null\n entrypoint: \"flows/hello.py:hello\"\n parameters:\n name: \"Arthur\"\n work_pool: *common_work_pool\n
We define two deployments of the hello
flow: default
and arthur
. Note that by specifying dockerfile: auto
, Prefect will automatically create a dockerfile that installs any requirements.txt
and copies over the current directory. You can pass a custom Dockerfile instead with dockerfile: Dockerfile
or dockerfile: path/to/Dockerfile
. Also note that we are specifically building for the linux/amd64
platform. This specification is often necessary when images are built on Macs with M series chips but run on cloud provider instances.
Deployment specific build, push, and pull
The build, push, and pull steps can be overridden for each deployment. This allows for more custom behavior, such as specifying a different image for each deployment.
Let's make sure we define our requirements in a requirements.txt
file:
prefect>=2.13.8\nprefect-docker>=0.4.0\nprefect-kubernetes>=0.3.1\n
The directory should now look something like this:
.\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 flows\n \u251c\u2500\u2500 requirements.txt\n \u2514\u2500\u2500 hello.py\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#tag-images-with-a-git-sha","title":"Tag images with a Git SHA","text":"If your code is stored in a GitHub repository, it's good practice to tag your images with the Git SHA of the code used to build it. This can be done in the prefect.yaml
file with a few minor modifications, and isn't yet an option with the Python deployment creation method. Let's use the run_shell_script
command to grab the SHA and pass it to the tag
parameter of build_docker_image
:
build:\n- prefect.deployments.steps.run_shell_script:\n id: get-commit-hash\n script: git rev-parse --short HEAD\n stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n id: build-image\n requires: prefect-docker>=0.4.0\n image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n tag: \"{{ get-commit-hash.stdout }}\"\n dockerfile: auto\n platform: \"linux/amd64\"\n
Let's also set the SHA as a tag for easy identification in the UI:
definitions:\n tags: &common_tags\n - \"eks\"\n - \"{{ get-commit-hash.stdout }}\"\n work_pool: &common_work_pool\n name: \"kubernetes\"\n job_variables:\n image: \"{{ build-image.image }}\"\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#authenticate-to-prefect","title":"Authenticate to Prefect","text":"Before we deploy the flows to Prefect, we will need to authenticate via the Prefect CLI. We will also need to ensure that all of our flow's dependencies are present at deploy
time.
This example uses a virtual environment to ensure consistency across environments.
# Create a virtualenv & activate it\nvirtualenv prefect-demo\nsource prefect-demo/bin/activate\n\n# Install dependencies of your flow\nprefect-demo/bin/pip install -r requirements.txt\n\n# Authenticate to Prefect & select the appropriate \n# workspace to deploy your flows to\nprefect-demo/bin/prefect cloud login\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-the-flows","title":"Deploy the flows","text":"Now we're ready to deploy our flows which will build our images. The image name determines which registry it will end up in. We have configured our prefect.yaml
file to get the image name from the PREFECT_IMAGE_NAME
environment variable, so let's set that first:
export PREFECT_IMAGE_NAME=<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-NAME>\n
export PREFECT_IMAGE_NAME=us-docker.pkg.dev/<GCP-PROJECT-NAME>/<REPOSITORY-NAME>/<IMAGE-NAME>\n
export PREFECT_IMAGE_NAME=<REPOSITORY-NAME>.azurecr.io/<IMAGE-NAME>\n
To deploy your flows, ensure your Docker daemon is running first. Deploy all the flows with prefect deploy --all
or deploy them individually by name: prefect deploy -n hello/default
or prefect deploy -n hello/arthur
.
Once the deployments are successfully created, we can run them from the UI or the CLI:
prefect deployment run hello/default\nprefect deployment run hello/arthur\n
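The same runs can also be triggered from Python with run_deployment; a minimal sketch:
from prefect.deployments import run_deployment\n\n# timeout=0 returns as soon as the run is submitted,\n# instead of blocking until the flow run finishes\nrun_deployment(name=\"hello/default\", timeout=0)\nrun_deployment(name=\"hello/arthur\", timeout=0)\n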
Congratulations! You just ran two deployments in Kubernetes. Head over to the UI to check their status!
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/overriding-job-variables/","title":"Deeper Dive: Overriding Work Pool Job Variables","text":"As described in the Deploying Flows to Work Pools and Workers guide, there are two ways to deploy flows to work pools: using a prefect.yaml
file or using the .deploy()
method.
In both cases, you can override job variables on a work pool for a given deployment.
While exactly which job variables are available to be overridden depends on the type of work pool you're using at a given time, this guide will explore some common patterns for overriding job variables in both deployment methods.
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#background","title":"Background","text":"First of all, what are \"job variables\"?
Job variables are infrastructure-related values that are configurable on a work pool and that may affect how your flow run executes on your infrastructure.
Let's use env
- the only job variable that is configurable for all work pool types - as an example.
When you create or edit a work pool, you can specify a set of environment variables that will be set in the runtime environment of the flow run.
For example, you might want a certain deployment to have the following environment variables available:
{\n \"EXECUTION_ENV\": \"staging\",\n \"MY_NOT_SO_SECRET_CONFIG\": \"plumbus\",\n}\n
Rather than hardcoding these values into your work pool in the UI and making them available to all deployments associated with that work pool, you can override these values on a per-deployment basis.
Let's look at how to do that.
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables","title":"How to override job variables","text":"Say we have the following repo structure:
\u00bb tree\n.\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements.txt\n\u251c\u2500\u2500 demo_project\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 daily_flow.py\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 demo_flow.py\n
... and we have some demo_flow.py
file like this:
import os\nfrom prefect import flow, task\n\n@task\ndef do_something_important(not_so_secret_value: str) -> None:\n print(f\"Doing something important with {not_so_secret_value}!\")\n\n@flow(log_prints=True)\ndef some_work():\n environment = os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\")\n\n print(f\"Coming to you live from {environment}!\")\n\n not_so_secret_value = os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n\n if not_so_secret_value is None:\n raise ValueError(\"You forgot to set MY_NOT_SO_SECRET_CONFIG!\")\n\n do_something_important(not_so_secret_value)\n
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-a-prefectyaml-file","title":"Using a prefect.yaml
file","text":"In this case, let's also say we have the following deployment definition in a prefect.yaml
file at the root of our repository:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n schedule: null\n
Note
While not the focus of this guide, note that this deployment definition uses a default \"global\" pull
step, because one is not explicitly defined on the deployment. For reference, here's what that would look like at the top of the prefect.yaml
file:
pull:\n- prefect.deployments.steps.git_clone: &clone_repo\n repository: https://github.com/some-user/prefect-monorepo\n branch: main\n
","tags":["deployments","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#hard-coded-job-variables","title":"Hard-coded job variables","text":"To provide the EXECUTION_ENVIRONMENT
and MY_NOT_SO_SECRET_CONFIG
environment variables to this deployment, we can add a job_variables
section to our deployment definition in the prefect.yaml
file:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n job_variables:\n env:\n EXECUTION_ENVIRONMENT: staging\n MY_NOT_SO_SECRET_CONFIG: plumbus\n schedule: null\n
... and then run prefect deploy -n demo-deployment
to deploy the flow with these job variables.
We should then be able to see the job variables in the Configuration
tab of the deployment in the UI:
If you want to use environment variables that are already set in your local environment, you can template these in the prefect.yaml
file using the {{ $ENV_VAR_NAME }}
syntax:
deployments:\n- name: demo-deployment\n entrypoint: demo_project/demo_flow.py:some_work\n work_pool:\n name: local\n job_variables:\n env:\n EXECUTION_ENVIRONMENT: \"{{ $EXECUTION_ENVIRONMENT }}\"\n MY_NOT_SO_SECRET_CONFIG: \"{{ $MY_NOT_SO_SECRET_CONFIG }}\"\n schedule: null\n
Note
This assumes that the machine where prefect deploy
is run has these environment variables set.
export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n
As before, run prefect deploy -n demo-deployment
to deploy the flow with these job variables, and you should see them in the UI under the Configuration
tab.
.deploy()
method","text":"If you're using the .deploy()
method to deploy your flow, the process is similar, but instead of having your prefect.yaml
file define the job variables, you can pass them as a dictionary to the job_variables
argument of the .deploy()
method.
We could add the following block to our demo_project/daily_flow.py
file from the setup section:
if __name__ == \"__main__\":\n flow.from_source(\n source=\"https://github.com/zzstoatzz/prefect-monorepo.git\",\n entrypoint=\"src/demo_project/demo_flow.py:some_work\"\n ).deploy(\n name=\"demo-deployment\",\n work_pool_name=\"local\", # can only .deploy() to a local work pool in prefect>=2.15.1\n job_variables={\n \"env\": {\n \"EXECUTION_ENVIRONMENT\": os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\"),\n \"MY_NOT_SO_SECRET_CONFIG\": os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n }\n }\n )\n
Note
The above example works assuming a couple of things: first, that the machine where this script is run has these environment variables set:
export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n
and second, that demo_project/daily_flow.py
already exists in the repository at the specified path. Running this script with something like:
python demo_project/daily_flow.py\n
... will deploy the flow with the specified job variables, which should then be visible in the UI under the Configuration
tab.
Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in AWS ECS tasks, Azure Container Instances, Google Cloud Run jobs, and Modal.
In this guide you will:
You can automatically provision infrastructure and create your push work pool using the prefect work-pool create
CLI command with the --provision-infra
flag. This approach greatly simplifies the setup process.
Let's explore automatic infrastructure provisioning for push work pools first, and then we'll cover how to manually set up your push work pool.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatic-infrastructure-provisioning","title":"Automatic infrastructure provisioning","text":"With Perfect Cloud you can provision infrastructure for use with an AWS ECS, Google Cloud Run, ACI push work pool. Push work pools in Prefect Cloud simplify the setup and management of the infrastructure necessary to run your flows. However, setting up infrastructure on your cloud provider can still be a time-consuming process. Prefect can dramatically simplify this process by automatically provisioning the necessary infrastructure for you.
We'll use the prefect work-pool create
CLI command with the --provision-infra
flag to automatically provision your serverless cloud resources and set up your Prefect workspace to use a new push pool.
To use automatic infrastructure provisioning, you'll need to have the relevant cloud CLI library installed and to have authenticated with your cloud provider.
AWS ECSAzure Container InstancesGoogle Cloud RunModalInstall the AWS CLI, authenticate with your AWS account, and set a default region.
If you already have the AWS CLI installed, be sure to update to the latest version.
You will need the following permissions in your authenticated AWS account:
IAM Permissions:
Amazon ECS Permissions:
Amazon EC2 Permissions:
Amazon ECR Permissions:
If you want to use AWS managed policies, you can use the following:
Note that the above managed policies grant all the permissions needed, but are broader than strictly necessary.
Docker is also required to build and push images to your registry. You can install Docker here.
Install the Azure CLI and authenticate with your Azure account.
If you already have the Azure CLI installed, be sure to update to the latest version with az upgrade
.
You will also need the following roles in your Azure subscription:
Docker is also required to build and push images to your registry. You can install Docker here.
Install the gcloud CLI and authenticate with your GCP project.
If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update
.
You will also need the following permissions in your GCP project:
Docker is also required to build and push images to your registry. You can install Docker here.
Install modal
by running:
pip install modal\n
Create a Modal API token by running:
modal token new\n
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatically-creating-a-new-push-work-pool-and-provisioning-infrastructure","title":"Automatically creating a new push work pool and provisioning infrastructure","text":"Here's the command to create a new push work pool and configure the necessary infrastructure.
AWS ECSAzure Container InstancesGoogle Cloud RunModalprefect work-pool create --type ecs:push --provision-infra my-ecs-pool\n
Using the --provision-infra
flag will automatically set up your default AWS account to be ready to execute flows via ECS tasks. In your AWS account, this command will create a new IAM user, IAM policy, ECS cluster that uses AWS Fargate, VPC, and ECR repository if they don't already exist. In your Prefect workspace, this command will create an AWSCredentials
block for storing the generated credentials.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-ecs-pool will require: \u2502\n\u2502 \u2502\n\u2502 - Creating an IAM user for managing ECS tasks: prefect-ecs-user \u2502\n\u2502 - Creating and attaching an IAM policy for managing ECS tasks: prefect-ecs-policy \u2502\n\u2502 - Storing generated AWS credentials in a block \u2502\n\u2502 - Creating an ECS cluster for running Prefect flows: prefect-ecs-cluster \u2502\n\u2502 - Creating a VPC with CIDR 172.31.0.0/16 for running ECS tasks: prefect-ecs-vpc \u2502\n\u2502 - Creating an ECR repository for storing Prefect images: prefect-flows \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nProvisioning IAM user\nCreating IAM policy\nGenerating AWS credentials\nCreating AWS credentials block\nProvisioning ECS cluster\nProvisioning VPC\nCreating internet gateway\nSetting up subnets\nSetting up security group\nProvisioning ECR repository\nAuthenticating with ECR\nSetting default Docker build namespace\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-ecs-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new ECR repository and the default Docker build namespace will be set to the URL of the registry.
While the default namespace is set, you will not need to provide the registry URL when building images as part of your deployment process.
To take advantage of this, you can write your deploy scripts like this:
example_deploy_script.pyfrom prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n@flow(log_prints=True) \ndef my_flow(name: str = \"world\"): \n print(f\"Hello {name}! I'm a flow running in a ECS task!\") \n\n\nif __name__ == \"__main__\":\n my_flow.deploy(\n name=\"my-deployment\", \n work_pool_name=\"my-work-pool\",\n image=DeploymentImage( \n name=\"my-repository:latest\",\n platform=\"linux/amd64\",\n ) \n ) \n
This will build an image with the tag <ecr-registry-url>/my-image:latest
and push it to the registry.
Your image name will need to match the name of the repository created with your work pool. You can create new repositories in the ECR console.
prefect work-pool create --type azure-container-instance:push --provision-infra my-aci-pool\n
Using the --provision-infra
flag will automatically set up your default Azure account to be ready to execute flows via Azure Container Instances. In your Azure account, this command will create a resource group, app registration, service account with necessary permission, generate a secret for the app registration, and create an Azure Container Registry, if they don't already exist. In your Prefect workspace, this command will create an AzureContainerInstanceCredentials
block for storing the client secret value from the generated secret.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-aci-work-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in subscription Azure subscription 1 \u2502\n\u2502 \u2502\n\u2502 - Create a resource group in location eastus \u2502\n\u2502 - Create an app registration in Azure AD prefect-aci-push-pool-app \u2502\n\u2502 - Create/use a service principal for app registration \u2502\n\u2502 - Generate a secret for app registration \u2502\n\u2502 - Create an Azure Container Registry with prefix prefect \u2502\n\u2502 - Create an identity prefect-acr-identity to allow access to the created registry \u2502\n\u2502 - Assign Contributor role to service account \u2502\n\u2502 - Create an ACR registry for image hosting \u2502\n\u2502 - Create an identity for Azure Container Instance to allow access to the registry \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create Azure Container Instance credentials block aci-push-pool-credentials \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: \nCreating resource group\nCreating app registration\nGenerating secret for app registration\nCreating ACI credentials block\nACI credentials block 'aci-push-pool-credentials' created in Prefect Cloud\nAssigning Contributor role to service account\nCreating Azure Container Registry\nCreating identity\nProvisioning infrastructure... \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned for 'my-aci-work-pool' work pool!\nCreated work pool 'my-aci-work-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new Azure Container Registry and the default Docker build namespace will be set to the URL of the registry.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the registry.
To take advantage of this functionality, you can write your deploy scripts like this:
example_deploy_script.pyfrom prefect import flow \nfrom prefect.deployments import DeploymentImage \n\n\n@flow(log_prints=True) \ndef my_flow(name: str = \"world\"): \n print(f\"Hello {name}! I'm a flow running on an Azure Container Instance!\") \n\n\nif __name__ == \"__main__\": \n my_flow.deploy( \n name=\"my-deployment\",\n work_pool_name=\"my-work-pool\", \n image=DeploymentImage( \n name=\"my-image:latest\", \n platform=\"linux/amd64\", \n ) \n ) \n
This will build an image with the tag <acr-registry-url>/my-image:latest
and push it to the registry.
prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n
Using the --provision-infra
flag will allow you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials
block for storing the service account key.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in GCP project central-kit-405415 in region us-central1 \u2502\n\u2502 \u2502\n\u2502 - Activate the Cloud Run API for your project \u2502\n\u2502 - Activate the Artifact Registry API for your project \u2502\n\u2502 - Create an Artifact Registry repository named prefect-images \u2502\n\u2502 - Create a service account for managing Cloud Run jobs: prefect-cloud-run \u2502\n\u2502 - Service account will be granted the following roles: \u2502\n\u2502 - Service Account User \u2502\n\u2502 - Cloud Run Developer \u2502\n\u2502 - Create a key for service account prefect-cloud-run \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create GCP credentials block my--pool-push-pool-credentials to store the service account key \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n
Default Docker build namespace
After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.
To take advantage of this functionality, you can write your deploy scripts like this:
example_deploy_script.pyfrom prefect import flow \nfrom prefect.deployments import DeploymentImage \n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\": \n my_flow.deploy( \n name=\"my-deployment\",\n work_pool_name=\"above-ground\",\n image=DeploymentImage(\n name=\"my-image:latest\",\n platform=\"linux/amd64\",\n )\n )\n
This will build an image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest
and push it to the repository.
prefect work-pool create --type modal:push --provision-infra my-modal-pool \n
Using the --provision-infra
flag will trigger the creation of a ModalCredentials
block in your Prefect Cloud workspace. This block will store your Modal API token, which is used to authenticate with Modal's API. By default, the token for your current Modal profile will be used for the new ModalCredentials
block. If Prefect is unable to discover a Modal API token for your current profile, you will be prompted to create a new one.
That's it! You're ready to create and schedule deployments that use your new push work pool. Remember, no worker is needed to run flows with a push work pool.
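For example, a deployment targeting the new Modal push pool might look like this minimal sketch; the repository URL, entrypoint, and names are assumptions:
from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/my-org/my-repo.git\",  # hypothetical repo\n        entrypoint=\"flows/my_flow.py:my_flow\",  # hypothetical entrypoint\n    ).deploy(\n        name=\"my-modal-deployment\",\n        work_pool_name=\"my-modal-pool\",\n    )\n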
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#using-existing-resources-with-automatic-infrastructure-provisioning","title":"Using existing resources with automatic infrastructure provisioning","text":"If you already have the necessary infrastructure set up, Prefect will detect that upon work pool creation and the infrastructure provisioning for that resource will be skipped.
For example, here's how prefect work-pool create my-work-pool --provision-infra
looks when existing Azure resources are detected:
Proceed with infrastructure provisioning? [y/n]: y\nCreating resource group\nResource group 'prefect-aci-push-pool-rg' already exists in location 'eastus'.\nCreating app registration\nApp registration 'prefect-aci-push-pool-app' already exists.\nGenerating secret for app registration\nProvisioning infrastructure\nACI credentials block 'bb-push-pool-credentials' created\nAssigning Contributor role to service account...\nService principal with object ID '4be6fed7-...' already has the 'Contributor' role assigned in \n'/subscriptions/.../'\nCreating Azure Container Instance\nContainer instance 'prefect-aci-push-pool-container' already exists.\nCreating Azure Container Instance credentials block\nProvisioning infrastructure... \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-work-pool'!\n
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#provisioning-infrastructure-for-an-existing-push-work-pool","title":"Provisioning infrastructure for an existing push work pool","text":"If you already have a push work pool set up, but haven't configured the necessary infrastructure, you can use the provision-infra
sub-command to provision the infrastructure for that work pool. For example, you can run the following command if you have a work pool named \"my-work-pool\":
prefect work-pool provision-infra my-work-pool\n
Prefect will create the necessary infrastructure for the my-work-pool
work pool and provide you with a summary of the changes to be made:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-work-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in subscription Azure subscription 1 \u2502\n\u2502 \u2502\n\u2502 - Create a resource group in location eastus \u2502\n\u2502 - Create an app registration in Azure AD prefect-aci-push-pool-app \u2502\n\u2502 - Create/use a service principal for app registration \u2502\n\u2502 - Generate a secret for app registration \u2502\n\u2502 - Assign Contributor role to service account \u2502\n\u2502 - Create Azure Container Instance 'aci-push-pool-container' in resource group prefect-aci-push-pool-rg \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create Azure Container Instance credentials block aci-push-pool-credentials \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\n
This command can speed up your infrastructure setup process.
As with the examples above, you will need to have the related cloud CLI library installed and be authenticated with your cloud provider.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#manual-infrastructure-provisioning","title":"Manual infrastructure provisioning","text":"If you prefer to set up your infrastructure manually, don't include the --provision-infra
flag in the CLI command. In the examples below, we'll create a push work pool via the Prefect Cloud UI.
To push work to ECS, AWS credentials are required.
Create a user and attach the AmazonECS_FullAccess policy.
From that user's page, create credentials and store them somewhere safe for use in the next section.
To push work to Azure, an Azure subscription, resource group and tenant secret are required.
Create Subscription and Resource Group
Create App Registration
Add App Registration to Resource Group
A GCP service account and an API key are required to push work to Cloud Run.
Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.
The service account must have two roles at a minimum: Cloud Run Developer and Service Account User.
Once the service account is created, navigate to its Keys page to add an API key. Create a JSON-type key, download it, and store it somewhere safe for use in the next section.
A Modal API token is required to push work to Modal.
Create a Modal API token by navigating to Settings in the Modal UI. In the API Tokens section of the Settings page, click New Token.
Copy the token ID and token secret and store them somewhere safe for use in the next section.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#work-pool-configuration","title":"Work pool configuration","text":"Our push work pool will store information about what type of infrastructure our flow will run on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-credentials-block","title":"Creating a Credentials block","text":"AWS ECSAzure Container InstancesGoogle Cloud RunModalNavigate to the blocks page, click create new block, and select AWS Credentials for the type.
For use in a push work pool, this block must have the region and cluster name filled out, in addition to access key and access key secret.
Provide any other optional information and create your block.
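If you prefer code to the UI, a minimal sketch of creating an equivalent block with prefect-aws might look like this; all values are placeholders:
from prefect_aws import AwsCredentials\n\nAwsCredentials(\n    aws_access_key_id=\"my-access-key-id\",  # placeholder\n    aws_secret_access_key=\"my-secret\",  # placeholder\n    region_name=\"us-east-1\",\n).save(\"my-aws-credentials\")\n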
Navigate to the blocks page and click the \"+\" at the top to create a new block. Find the Azure Container Instance Credentials block and click \"Add +\".
Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier. Be sure to use the value of the secret, not the secret ID!
Provide any other optional information and click \"Create\".
Navigate to the blocks page, click create new block, and select GCP Credentials for the type.
For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field, as such:
Provide any other optional information and create your block.
Navigate to the blocks page, click create new block, and select Modal Credentials for the type.
For use in a push work pool, this block must have the token ID and token secret stored in the Token ID and Token Secret fields, respectively.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-push-work-pool","title":"Creating a push work pool","text":"Now navigate to the work pools page. Click Create to start configuring your push work pool by selecting a push option in the infrastructure type step.
AWS ECSAzure Container InstancesGoogle Cloud RunModalEach step has several optional fields that are detailed in the work pools documentation. Select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.
Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.
Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.
Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the Modal Credentials field. This will allow Prefect Cloud to securely interact with your Modal account.
Create your pool and you are ready to deploy flows to your push work pool.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#deployment","title":"Deployment","text":"Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml
file, the deployment would contain:
work_pool:\n name: my-push-pool\n
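The same work pool can be targeted when deploying from Python; a minimal sketch, with the flow and image names as assumptions:
from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from my push work pool!\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-push-pool\",\n        image=\"my-registry/my-image:latest\",  # hypothetical image\n    )\n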
Deploying your flow to the my-push-pool
work pool will ensure that runs that are ready for execution are submitted immediately, without the need for a worker to poll for them.
Serverless infrastructure may require a certain image architecture
Note that serverless infrastructure may assume a certain Docker image architecture; for example, Google Cloud Run will fail to run images built with linux/arm64
architecture. If using Prefect to build your image, you can change the image architecture through the platform
keyword (e.g., platform=\"linux/amd64\"
).
With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/serverless-workers/","title":"Run Deployments on Serverless Infrastructure with Prefect Workers","text":"Prefect provides hybrid work pools for workers to run flows on the serverless platforms of major cloud providers. The following options are available:
Push work pools don't require a worker
Options for push work pool versions of AWS ECS, Azure Container Instances, and Google Cloud Run that do not require a worker are available with Prefect Cloud. These push work pool options require connection configuration information to be stored on Prefect Cloud. Read more in the Serverless Push Work Pool Guide.
This is a brief overview of the options to run workflows on serverless infrastructure. For in-depth guides, see the Prefect integration libraries:
prefect-aws
docs and prefect-gcp
docs.Choosing between Google Cloud Run and Google Vertex AI
Google Vertex AI is well-suited for machine learning model training applications in which GPUs or TPUs and high resource levels are desired.
","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/serverless-workers/#steps","title":"Steps","text":"Options for push versions on AWS ECS, Azure Container Instances, and Google Cloud Run work pools that do not require a worker are available with Prefect Cloud. Read more in the Serverless Push Work Pool Guide.
Learn more about workers and work pools in the Prefect concept documentation.
","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/storage-guide/","title":"Where to Store Your Flow Code","text":"When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.
This guide discusses storage options with a focus on deployments created with the interactive CLI experience or a prefect.yaml
file. If you'd like to create your deployments using Python code, see the discussion of flow code storage on the .deploy
tab of Deploying Flows to Work pools and Workers guide.
Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.
To create a deployment with local storage and a Local Subprocess work pool, do the following:
prefect deploy
from the root of the directory containing your flow code.You are then shown the location that your flow code will be fetched from when a flow is run. For example:
Your Prefect workers will attempt to load your flow from: \n/my-path/my-flow-file.py. To see more options for managing your flow's code, run:\n\n $ prefect init\n
When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-2-git-based-storage","title":"Option 2: Git-based storage","text":"Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.
GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#creating-a-deployment-with-git-based-storage","title":"Creating a deployment with git-based storage","text":"Run prefect deploy
from the root directory of the git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:
? Your Prefect workers will need access to this flow's code in order to run it. \nWould you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y): \n? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y): \n? Is main the correct branch to pull your flow code from? [y/n] (y): \n? Is this a private repository? [y/n]: y\n
In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.
If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.
? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: \"123_abc_this_is_my_token\"\n
Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".
Creating access tokens differs for each provider.
GitHubBitbucketGitLabWe recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).
Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.
We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.
You can create a Repository Access Token with Scopes->Repositories->Read.
Bitbucket requires you prepend the token string with x-token-auth:
So the full string looks like x-token-auth:abc_123_this_is_my_token
.
We recommend using HTTPS with Project Access Tokens.
In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.
If you want to configure a Secret block ahead of time, create the block via code or the Prefect UI and reference it in your prefect.yaml
file.
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/my-private-repo.git\n access_token: \"{{ prefect.blocks.secret.my-block-name }}\"\n
Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml
pull step.
pip install -U prefect-github
prefect block register -m prefect_github
.pull:\n - prefect.deployments.steps.git_clone:\n repository: https://github.com/discdiver/my-private-repo.git\n credentials: \"{{ prefect.blocks.github-credentials.my-block-name }}\"\n
pip install -U prefect-bitbucket
prefect block register -m prefect_bitbucket
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://bitbucket.org/org/my-private-repo.git\n credentials: \"{{ prefect.blocks.bitbucket-credentials.my-block-name }}\"\n
pip install -U prefect-gitlab
prefect block register -m prefect_gitlab
pull:\n - prefect.deployments.steps.git_clone:\n repository: https://gitlab.com/org/my-private-repo.git\n credentials: \"{{ prefect.blocks.gitlab-credentials.my-block-name }}\"\n
Push your code
When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is an intentional one to avoid confusion about the git history and push process.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-3-docker-based-storage","title":"Option 3: Docker-based storage","text":"Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:
Push-based serverless cloud-based options (no worker required)
Run prefect init
in the root of your repository and choose docker
for the project name and answer the prompts to create a prefect.yaml
file with a build step that will create a Docker image with the flow code built in. See the Workers and Work Pools page of the tutorial for more info.
prefect deploy
from the root of your repository to create a deployment. CI/CD may not require push or pull steps
You don't need push or pull steps in the prefect.yaml
file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.
You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push
and pull
steps of your prefect.yaml
file.
To create a templated prefect.yaml
file run prefect init
and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml
file.
Choose s3
as the recipe and enter the bucket name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_aws.deployments.steps.push_to_s3:\n id: push_code\n requires: prefect-aws>=0.3.4\n bucket: my-bucket\n folder: my-folder\n credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n id: pull_code\n requires: prefect-aws>=0.3.4\n bucket: '{{ push_code.bucket }}'\n folder: '{{ push_code.folder }}'\n credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private \n
If the bucket requires authentication to access it, you can do the following:
pip install -U prefect-aws
prefect block register -m prefect_aws
Choose azure
as the recipe and enter the container name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_azure.deployments.steps.push_to_azure_blob_storage:\n id: push_code\n requires: prefect-azure>=0.2.8\n container: my-prefect-azure-container\n folder: my-folder\n credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n id: pull_code\n requires: prefect-azure>=0.2.8\n container: '{{ push_code.container }}'\n folder: '{{ push_code.folder }}'\n credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n
If the blob requires authentication to access it, you can do the following:
pip install -U prefect-azure
prefect block register -m prefect_azure
Choose `gcs`` as the recipe and enter the bucket name when prompted.
# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_gcp.deployment.steps.push_to_gcs:\n id: push_code\n requires: prefect-gcp>=0.4.3\n bucket: my-bucket\n folder: my-folder\n credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_gcp.deployment.steps.pull_from_gcs:\n id: pull_code\n requires: prefect-gcp>=0.4.3\n bucket: '{{ push_code.bucket }}'\n folder: '{{ pull_code.folder }}'\n credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n
If the bucket requires authentication to access it, you can do the following:
pip install -U prefect-gcp
prefect block register -m prefect_gcp
Another option for authentication is for the worker to have access to the storage location at runtime through its own ambient credentials, such as a service account or IAM role attached to the worker's infrastructure.
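A minimal sketch of a pull step that omits the credentials field and relies on the worker's runtime identity instead of a stored block; the bucket and folder values are placeholders:

pull:
- prefect_gcp.deployments.steps.pull_from_gcs:
    requires: prefect-gcp>=0.4.3
    bucket: my-bucket
    folder: my-folder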
Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER
:
push:\n  - prefect_gcp.deployments.steps.push_to_gcs:\n      id: push_code\n      requires: prefect-gcp>=0.4.3\n      bucket: my-bucket\n      folder: '{{ $CUSTOM_FOLDER }}'\n
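With this setup, the folder is resolved from the environment when the push step runs. For example, assuming a POSIX shell and the hypothetical value my-folder:

CUSTOM_FOLDER=my-folder prefect deploy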
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#including-and-excluding-files-from-storage","title":"Including and excluding files from storage","text":"By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.
When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.
Git-based storage: add patterns to a .gitignore file.
Docker image storage: add patterns to a .dockerignore file.
Cloud-provider storage: the .prefectignore file serves the same purpose and follows a similar syntax as those files. So an entry of *.pyc will exclude all .pyc files from upload.
In earlier versions of Prefect, storage blocks were the recommended way to store flow code. Storage blocks are still supported, but no longer recommended.
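As an illustration, a .prefectignore with hypothetical entries might look like:

# exclude bytecode, caches, and local-only files from upload
*.pyc
__pycache__/
.env
data/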
As shown above, repositories can be referenced directly through interactive prompts with prefect deploy
or in a prefect.yaml
. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.
You've seen options for where to store your flow code.
We recommend using Docker-based storage or git-based storage for your production deployments.
Check out more guides to reach your goals with Prefect.
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"integrations/","title":"Integrations","text":"Prefect integrations are organized into collections of pre-built tasks, flows, blocks and more that are installable as PyPI packages.
Airbyte
Maintained by Prefect
Alert
Maintained by Khuyen Tran
AWS
Maintained by Prefect
Azure
Maintained by Prefect
Bitbucket
Maintained by Prefect
Census
Maintained by Prefect
Coiled
Maintained by Coiled
CubeJS
Maintained by Alessandro Lollo
Dask
Maintained by Prefect
Databricks
Maintained by Prefect
dbt
Maintained by Prefect
Docker
Maintained by Prefect
Earthdata
Maintained by Giorgio Basile
Email
Maintained by Prefect
Firebolt
Maintained by Prefect
Fivetran
Maintained by Fivetran
Fugue
Maintained by The Fugue Development Team
GCP
Maintained by Prefect
GitHub
Maintained by Prefect
GitLab
Maintained by Prefect
Google Sheets
Maintained by Stefano Cascavilla
Great Expectations
Maintained by Prefect
HashiCorp Vault
Maintained by Pavel Chekin
Hex
Maintained by Prefect
Hightouch
Maintained by Prefect
Jupyter
Maintained by Prefect
Kubernetes
Maintained by Prefect
KV
Maintained by Zanie Blue
MetricFlow
Maintained by Alessandro Lollo
Monday
Maintained by Prefect
MonteCarlo
Maintained by Prefect
OpenAI
Maintained by Prefect
OpenMetadata
Maintained by Prefect
Planetary Computer
Maintained by Giorgio Basile
Ray
Maintained by Prefect
Shell
Maintained by Prefect
Sifflet
Maintained by Sifflet and Alessandro Lollo
Slack
Maintained by Prefect
Snowflake
Maintained by Prefect
Soda Cloud
Maintained by Alessandro Lollo
Soda Core
Maintained by Soda and Alessandro Lollo
Spark on Kubernetes
Maintained by Manoj Babu Katragadda
SQLAlchemy
Maintained by Prefect
Stitch
Maintained by Alessandro Lollo
Transform
Maintained by Alessandro Lollo
Twitter
Maintained by Prefect
","tags":["tasks","flows","blocks","collections","task library","integrations","Airbyte","Alert","AWS","Azure","Bitbucket","Census","Coiled","CubeJS","Dask","Databricks","dbt","Docker","Earthdata","Email","Firebolt","Fivetran","Fugue","GCP","GitHub","GitLab","Google Sheets","Great Expectations","HashiCorp Vault","Hex","Hightouch","Jupyter","Kubernetes","KV","MetricFlow","Monday","MonteCarlo","OpenAI","OpenMetadata","Planetary Computer","Ray","Shell","Sifflet","Slack","Snowflake","Soda Cloud","Soda Core","Spark on Kubernetes","SQLAlchemy","Stitch","Transform","Twitter"],"boost":2},{"location":"integrations/contribute/","title":"Contribute","text":"We welcome contributors! You can help contribute blocks and integrations by following these steps.
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-blocks","title":"Contributing Blocks","text":"Building your own custom block is simple!
To create one:
1. Subclass Block.
2. Add an Attributes and an Example section to the docstring.
3. Set _logo_url to point to a relevant image.
4. Define the pydantic.Fields of the block with a type annotation, a default
, and a short description about the field.For example, this is how the Secret block is implemented:
from pydantic import Field, SecretStr\nfrom prefect.blocks.core import Block\n\nclass Secret(Block):\n \"\"\"\n A block that represents a secret value. The value stored in this block will be obfuscated when\n this block is logged or shown in the UI.\n\n Attributes:\n value: A string value that should be kept secret.\n\n Example:\n ```python\n from prefect.blocks.system import Secret\n secret_block = Secret.load(\"BLOCK_NAME\")\n\n # Access the stored secret\n secret_block.get()\n ```\n \"\"\"\n\n _logo_url = \"https://example.com/logo.png\"\n\n value: SecretStr = Field(\n default=..., description=\"A string value that should be kept secret.\"\n ) # ... indicates it's a required field\n\n def get(self):\n return self.value.get_secret_value()\n
To view in Prefect Cloud or the Prefect server UI, register the block.
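A sketch of registering a block from a file, assuming your block class lives in a module named my_block.py:

prefect block register -f my_block.py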
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-integrations","title":"Contributing Integrations","text":"Anyone can create and share a Prefect Integration and we encourage anyone interested in creating an integration to do so!
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#generate-a-project","title":"Generate a project","text":"To help you get started with your integration, we've created a template that gives the tools you need to create and publish your integration.
Use the Prefect Integration template to get started creating an integration with a bootstrapped project!
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#list-a-project-in-the-integrations-catalog","title":"List a project in the Integrations Catalog","text":"To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog
directory with details about your integration. Please use TEMPLATE.yaml
in that folder as a guide.
If you'd like to help fix an issue or add a feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.
pip install -e \".[dev]\"\n
pre-commit
to perform quality checks prior to commit: pre-commit install\n
git commit
, git push
, and create a pull requestInstall the Integration via pip
.
For example, to use prefect-aws
:
pip install prefect-aws\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#registering-blocks-from-an-integration","title":"Registering Blocks from an Integration","text":"Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:
For example, to register the blocks available in prefect-aws
:
prefect block register -m prefect_aws\n
Updating blocks from an integrations
If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.
Loading a block in code
To use the load
method on a Block, you must already have a block document saved either through code or through the Prefect UI.
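For instance, a minimal sketch of loading a previously saved block document; the block name here is hypothetical:

from prefect_aws import AwsCredentials

creds = AwsCredentials.load("my-credentials-block")  # the block document must already be saved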
Learn more about Blocks here!
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#using-tasks-and-flows-from-an-integration","title":"Using Tasks and Flows from an Integration","text":"Integrations also contain pre-built tasks and flows that can be imported and called within your code.
As an example, to read a secret from AWS Secrets Manager with the read_secret
task:
from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef connect_to_database():\n aws_credentials = AwsCredentials.load(\"MY_BLOCK_NAME\")\n secret_value = read_secret(\n secret_name=\"db_password\",\n aws_credentials=aws_credentials\n )\n\n # Use secret_value to connect to a database\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#customizing-tasks-and-flows-from-an-integration","title":"Customizing Tasks and Flows from an Integration","text":"To customize the settings of a task or flow pre-configured in a collection, use with_options
:
from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\ncustom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n name=\"Run My DBT Cloud Job\",\n retries=2,\n retry_delay_seconds=10\n)\n\n@flow\ndef run_dbt_job_flow():\n run_result = custom_run_dbt_cloud_job(\n dbt_cloud_credentials=DbtCloudCredentials.load(\"my-dbt-cloud-credentials\"),\n job_id=1\n )\n\nrun_dbt_job_flow()\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#recipes-and-tutorials","title":"Recipes and Tutorials","text":"To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.
Recipes are useful when you are looking for tutorials on how to deploy a worker, use event-driven flows, set up unit testing, and more.
The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with KubernetesConfigure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.
Maintained by Prefect
This recipe uses:
Agent on ECS Fargate with AWS CLI
Run a Prefect 2 agent on ECS Fargate using the AWS CLI.
Maintained by Prefect
This recipe uses:
Agent on ECS Fargate with Terraform
Run a Prefect 2 agent on ECS Fargate using Terraform.
Maintained by Prefect
This recipe uses:
Agent on an Azure VM
Set up an Azure VM and run a Prefect agent.
Maintained by Prefect
This recipe uses:
Flow Deployment with GitHub Actions
Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.
Maintained by Prefect
This recipe uses:
Flow Deployment with GitHub Storage and Docker Infrastructure
Create a deployment with GitHub as a storage and Docker Container as an infrastructure
Maintained by Prefect
This recipe uses:
Prefect server on an AKS Cluster
Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.
Maintained by Prefect
This recipe uses:
Serverless Prefect with AWS Chalice
Execute Prefect flows in an AWS Lambda function managed by Chalice.
Maintained by Prefect
This recipe uses:
Serverless Workflows with ECSTask Blocks
Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.
Maintained by Prefect
This recipe uses:
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"
We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.
Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.
We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).
Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.
A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.
We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.
To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"Here\u2019s our guide to creating a recipe:
# Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\n\ngit checkout -b new_recipe_branch_name\n
flows-advanced/
folder. A Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it be a repo or a blogpost? Simply link to it in the Prefect Recipes README!That\u2019s it!
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.
","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:
We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!
Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!
Happy engineering!
","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"This tutorial provides a guided walk-through of Prefect core concepts and instructions on how to use them.
By the end of this tutorial you will have:
These four topics will get most users to their first production deployment.
Advanced users that need more governance and control of their workflow infrastructure can go one step further by:
If you're looking for examples of more advanced operations (like deploying on Kubernetes), check out Prefect's guides.
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"pip install -U prefect
See the install guide for more detailed instructions, if needed.
To get the most out of this tutorial, we recommend using Prefect Cloud. Sign up for a forever free Prefect Cloud account or accept your organization's invite to join their Prefect Cloud account.
prefect cloud login
CLI command to authenticate to Prefect Cloud from your environment.prefect cloud login\n
Choose Log in with a web browser and click the Authorize button in the browser window that opens.
As an alternative to using Prefect Cloud, you can self-host a Prefect server instance. If you choose this option, run prefect server start
to start a local Prefect server instance.
Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. With Prefect, you define workflows as Python code and let it handle the rest.
Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.
Just bring your Python code, sprinkle in a few decorators, and go!
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#first-steps-flows","title":"First steps: Flows","text":"Let's begin by learning how to create your first Prefect flow - click here to get started.
","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"Reminder to connect to Prefect Cloud or a self-hosted Prefect server instance
Some features in this tutorial, such as scheduling, require you to be connected to a Prefect server. If using a self-hosted setup, run prefect server start
to run both the webserver and UI. If using Prefect Cloud, make sure you have successfully authenticated your local environment.
Some of the most common reasons to use an orchestration tool such as Prefect are for scheduling and event-based triggering. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, but this means you have been the one triggering and managing flow runs. You can certainly continue to trigger your workflows in this way and use Prefect as a monitoring layer for other schedulers or systems, but you will miss out on many of the other benefits and features that Prefect offers.
Deploying a flow exposes an API and UI so that you can:
Deploying a flow is the act of specifying where and how it will run. This information is encapsulated and sent to Prefect as a deployment that contains the crucial metadata needed for remote orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.
Attributes of a deployment include (but are not limited to):
Using our get_repo_info
flow from the previous sections, we can easily create a deployment for it by calling a single method on the flow object: flow.serve
.
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.serve(name=\"my-first-deployment\")\n
Running this script will do two things:
Deployments must be defined in static files
Flows can be defined and run interactively, that is, within REPLs or Notebooks. Deployments, on the other hand, require that your flow definition be in a known file (which can be located on a remote filesystem in certain setups, as we'll see in the next section of the tutorial).
Because this deployment has no schedule or triggering automation, you will need to use the UI or API to create runs for it. Let's use the CLI (in a separate terminal window) to create a run for this deployment:
prefect deployment run 'get-repo-info/my-first-deployment'\n
If you are watching either your terminal or your UI, you should see the newly created run execute successfully! Let's take this example further by adding a schedule and additional metadata.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#additional-options","title":"Additional options","text":"The serve
method on flows exposes many options for the deployment. Let's use a few of these options now:
cron
: a keyword that allows us to set a cron string schedule for the deployment; see schedules for more advanced scheduling optionstags
: a keyword that allows us to tag this deployment and its runs for bookkeeping and filtering purposesdescription
: a keyword that allows us to document what this deployment does; by default the description is set from the docstring of the flow function, but we did not document our flow functionversion
: a keyword that allows us to track changes to our deployment; by default a hash of the file containing the flow is used; popular options include semver tags or git commit hashesLet's add these options to our deployment:
if __name__ == \"__main__\":\n get_repo_info.serve(\n name=\"my-first-deployment\",\n cron=\"* * * * *\",\n tags=[\"testing\", \"tutorial\"],\n description=\"Given a GitHub repository, logs repository statistics for that repo.\",\n version=\"tutorial/deployments\",\n )\n
When you rerun this script, you will find an updated deployment in the UI that is actively scheduling work! Stop the script in the CLI using CTRL+C
and your schedule will be automatically paused.
.serve
is a long-running process
For remotely triggered or scheduled runs to be executed, your script with flow.serve
must be actively running.
This method is useful for creating deployments for single flows, but what if we have two or more flows? This situation only requires a few additional method calls and imports to get up and running:
multi_flow_deployment.pyimport time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n \"Fastest flow this side of the Mississippi.\"\n return\n\n\nif __name__ == \"__main__\":\n slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n fast_deploy = fast_flow.to_deployment(name=\"fast\")\n serve(slow_deploy, fast_deploy)\n
A few observations are in order:
flow.to_deployment
interface exposes the exact same options as flow.serve
; this method produces a deployment objectserve(...)
is calledSpend some time experimenting with this setup. A few potential next steps for exploration include:
sleep
Hybrid execution option
Another implication of Prefect's deployment interface is that you can choose to use our hybrid execution model. Whether you use Prefect Cloud or host a Prefect server instance yourself, you can run work flows in the environments best suited to their execution. This model allows you efficient use of your infrastructure resources while maintaining the privacy of your code and data. There is no ingress required. For more information read more about our hybrid model.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next steps","text":"Congratulations! You now have your first working deployment.
Deploying flows through the serve
method is a fast way to start scheduling flows with Prefect. However, if your team has more complex infrastructure requirements or you'd like to have Prefect manage flow execution, you can deploy flows to a work pool.
Learn about work pools and how Prefect Cloud can handle infrastructure configuration for you in the next step of the tutorial.
","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"Prerequisites
This tutorial assumes you have already installed Prefect and connected to Prefect Cloud or a self-hosted server instance. See the prerequisites section of the tutorial for more details.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a flow?","text":"Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow
decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:
The simplest way to get started with Prefect is to annotate a Python function with the\u00a0@flow
\u00a0decorator. The script below fetches statistics about the main Prefect repository. Note that httpx is an HTTP client library and a dependency of Prefect. Let's turn this function into a Prefect flow and run the script:
import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Running this file will result in some interesting output:
12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
Flows can contain arbitrary Python
As we can see above, flow definitions can contain arbitrary Python logic.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion using any provided type hints. Let's make the repository a string parameter with a default value:
repo_info.pyimport httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info(repo_name=\"PrefectHQ/marvin\")\n
We can call our flow with varying values for the repo_name
parameter (including \"bad\" values):
python repo_info.py\n
Try passing repo_name=\"missing-org/missing-repo\"
.
You should see
HTTPStatusError: Client error '404 Not Found' for url '<https://api.github.com/repos/missing-org/missing-repo>'\n
Now navigate to your Prefect dashboard and compare the displays for these two runs.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. If we navigate to our dashboard and explore the runs we created above, we will notice that the repository statistics are not captured in the flow run logs. Let's fix that by adding some logging to our flow:
repo_info.pyimport httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n logger = get_run_logger()\n logger.info(\"%s repository statistics \ud83e\udd13:\", repo_name)\n logger.info(f\"Stars \ud83c\udf20 : %d\", repo[\"stargazers_count\"])\n logger.info(f\"Forks \ud83c\udf74 : %d\", repo[\"forks_count\"])\n
Now the output looks more consistent and, more importantly, our statistics are stored in the Prefect backend and displayed in the UI for this flow run:
12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\n12:47:43.016 | INFO | Flow run 'ludicrous-warthog' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
log_prints=True
We could have achieved the exact same outcome by using Prefect's convenient log_prints
keyword argument in the flow
decorator:
@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n ...\n
Logging vs Artifacts
The example above is for educational purposes. In general, it is better to use Prefect artifacts for storing metrics and output. Logs are best for tracking progress and debugging errors.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"So far our script works, but in the future unexpected errors may occur. For example the GitHub API may be temporarily unavailable or rate limited. Retries help make our flow more resilient. Let's add retry functionality to our example above:
repo_info.pyimport httpx\nfrom prefect import flow\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"As you have seen, adding a flow decorator converts our Python function to a resilient and observable workflow. In the next section, you'll supercharge this flow by using tasks to break down the workflow's complexity and make it more performant and observable - click here to continue.
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"A task is any Python function decorated with a @task
decorator called within a flow. You can think of a flow as a recipe for connecting a known sequence of tasks together. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe, understand and control at a more granular level. When a function becomes a task, it can be executed concurrently and its return value can be cached.
Flows and tasks share some common features:
name
, description
and tags
for organization and bookkeeping.Network calls (such as our GET
requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.
Tasks must be called from flows
All tasks must be called from within a flow. Tasks may not call other tasks directly.
When to use tasks
Not all functions in a flow need be tasks. Use them only when their features are useful.
Let's take our flow from before and move the request into a task:
repo_info.pyimport httpx\nfrom prefect import flow, task\n\n\n@task\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n repo_stats = get_url(url)\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Running the flow in your terminal will result in something like this:
09:55:55.412 | INFO | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n
And you should now see this task run tracked in the UI as well.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"Tasks support the ability to cache their return value. Caching allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.
To enable caching, specify a cache_key_fn
\u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration
timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash
. Let's add caching to our get_url
task:
import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, \n cache_expiration=timedelta(hours=1),\n )\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n
You can test this caching behavior by using a personal repository as your workflow parameter - give it a star, or remove a star and see how the output of this task changes (or doesn't) by running your flow multiple times.
Task results and caching
Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:
repo_info.pyimport httpx\nfrom datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n response = httpx.get(url, params=params)\n response.raise_for_status()\n return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n repo_stats = get_url(f\"https://api.github.com/repos/{repo_name}\")\n issues = get_open_issues(repo_name, repo_stats[\"open_issues_count\"])\n issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n print(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info()\n
Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit
method that changes the execution from sequential to concurrent. In our specific example, we also need to use the result
method because we are unpacking a list of return values:
def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url.submit(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p.result()]\n
The logs show that each task is running concurrently:
12:45:28.241 | INFO | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"Not only can you call tasks within a flow, but you can also call other flows! Child flows are called\u00a0subflows\u00a0and allow you to efficiently manage, track, and version common multi-task logic.
Subflows are a great way to organize your workflows and offer more visibility within the UI.
Let's add a flow
decorator to our get_open_issues
function:
@flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n issues = []\n pages = range(1, -(open_issues_count // -per_page) + 1)\n for page in pages:\n issues.append(\n get_url.submit(\n f\"https://api.github.com/repos/{repo_name}/issues\",\n params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n )\n )\n return [i for p in issues for i in p.result()]\n
Whenever we run the parent flow, a new run will be generated for related functions within that as well. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll see how we can deploy this flow in order to run it on a schedule and/or external infrastructure - click here to learn how to create your first deployment.
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/work-pools/","title":"Work Pools","text":"","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#why-work-pools","title":"Why work pools?","text":"Work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. To transition from persistent infrastructure to dynamic infrastructure, use flow.deploy
instead of flow.serve
.
Choosing Between flow.deploy()
and flow.serve()
Earlier in the tutorial you used serve
to deploy your flows. For many use cases, serve
is sufficient to meet scheduling and orchestration needs. Work pools are optional. If infrastructure needs escalate, work pools can become a handy tool. The best part? You're not locked into one method. You can seamlessly combine approaches as needed.
Deployment definition methods differ slightly for work pools
When you use work-pool-based execution, you define deployments differently. Deployments for workers are configured with deploy
, which requires additional configuration. A deployment created with serve
cannot be used with a work pool.
The primary reason to use work pools is for dynamic infrastructure provisioning and configuration. For example, you might have a workflow that has expensive infrastructure requirements and is run infrequently. In this case, you don't want an idle process running within that infrastructure.
Other advantages to using work pools include:
Prefect provides several types of work pools. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#set-up-a-work-pool","title":"Set up a work pool","text":"Prefect Cloud
This tutorial uses Prefect Cloud to deploy flows to work pools. Managed execution and push work pools are available in Prefect Cloud only. If you are not using Prefect Cloud, please learn about work pools below and then proceed to the next tutorial that uses worker-based work pools.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-prefect-managed-work-pool","title":"Create a Prefect Managed work pool","text":"In your terminal, run the following command to set up a work pool named my-managed-pool
of type prefect:managed
.
prefect work-pool create my-managed-pool --type prefect:managed \n
Let\u2019s confirm that the work pool was successfully created by running the following command.
prefect work-pool ls\n
You should see your new my-managed-pool
in the output list.
Finally, let\u2019s double check that you can see this work pool in the UI.
Navigate to the Work Pools tab and verify that you see my-managed-pool
listed.
Feel free to select Edit from the three-dot menu on right of the work pool card to view the details of your work pool.
Work pools contain configuration that is used to provision infrastructure for flow runs. For example, you can specify additional Python packages or environment variables that should be set for all deployments that use this work pool. Note that individual deployments can override the work pool configuration.
Now that you\u2019ve set up your work pool, we can deploy a flow to this work pool. Let's deploy your tutorial flow to my-managed-pool
.
From our previous steps, we now have:
Let's update our repo_info.py
file to create a deployment in Prefect Cloud.
The updates that we need to make to repo_info.py
are:
flow.serve
to flow.deploy
.flow.deploy
which work pool to deploy to.Here's what the updated repo_info.py
looks like:
import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n url = f\"https://api.github.com/repos/{repo_name}\"\n response = httpx.get(url)\n response.raise_for_status()\n repo = response.json()\n print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n get_repo_info.from_source(\n source=\"https://github.com/discdiver/demos.git\", \n entrypoint=\"repo_info.py:get_repo_info\"\n ).deploy(\n name=\"my-first-deployment\", \n work_pool_name=\"my-managed-pool\", \n )\n
In the from_source
method, we specify the source of our flow code.
In the deploy
method, we specify the name of our deployment and the name of the work pool that we created earlier.
You can store your flow code in any of several types of remote storage. In this example, we use a GitHub repository, but you could use a Docker image, as you'll see in an upcoming section of the tutorial. Alternatively, you could store your flow code in cloud provider storage such as AWS S3, or within a different git-based cloud provider such as GitLab or Bitbucket.
Note
In the example above, we store our code in a GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source
argument of from_source
to point to your repository.
Run the script again and you should see a message in the CLI that your deployment was created with instructions for how to run it.
Successfully created/updated all deployments!\n\n Deployments \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name \u2503 Status \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 get-repo-info/my-first-deployment | applied \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n $ prefect deployment run 'get-repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: https://app.prefect.cloud/account/\nabc/workspace/123/deployments/deployment/xyz\n
Navigate to your Prefect Cloud UI and view your new deployment. Click the Run button to trigger a run of your deployment.
Because this deployment was configured with a Prefect Managed work pool, Prefect Cloud will run your flow on your behalf.
View the logs in the UI.
Now that you've updated your script, you can run it to register your deployment on Prefect Cloud:
python repo_info.py\n
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#schedule-a-deployment-run","title":"Schedule a deployment run","text":"Now everything is set up for us to submit a flow-run to the work pool. Go ahead and run the deployment from the CLI or the UI.
prefect deployment run 'get_repo_info/my-deployment'\n
Prefect Managed work pools are a great way to get started with Prefect. See the Managed Execution guide for more details.
Many users will find that they need more control over the infrastructure that their flows run on. Prefect Cloud's push work pools are a popular option in those cases.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#push-work-pools-with-automatic-infrastructure-provisioning","title":"Push work pools with automatic infrastructure provisioning","text":"Serverless push work pools scale infinitely and provide more configuration options than Prefect Managed work pools.
Prefect provides push work pools for AWS ECS on Fargate, Azure Container Instances, Google Cloud Run, and Modal. To use a push work pool, you will need an account with sufficient permissions on the cloud provider that you want to use. We'll use GCP for this example.
Setting up the cloud provider pieces for infrastructure can be tricky and time consuming. Fortunately, Prefect can automatically provision infrastructure for you and wire it all together to work with your push work pool.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-push-work-pool-with-automatic-infrastructure-provisioning","title":"Create a push work pool with automatic infrastructure provisioning","text":"In your terminal, run the following command to set up a push work pool.
Install the gcloud CLI and authenticate with your GCP project.
If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update
.
You will need the following permissions in your GCP project:
Docker is also required to build and push images to your registry. You can install Docker here.
Run the following command to set up a work pool named my-cloud-run-pool
of type cloud-run:push
.
prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n
Using the --provision-infra
flag allows you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials
block for storing the service account key.
Here's an abbreviated example output from running the command:
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require: \u2502\n\u2502 \u2502\n\u2502 Updates in GCP project central-kit-405415 in region us-central1 \u2502\n\u2502 \u2502\n\u2502 - Activate the Cloud Run API for your project \u2502\n\u2502 - Activate the Artifact Registry API for your project \u2502\n\u2502 - Create an Artifact Registry repository named prefect-images \u2502\n\u2502 - Create a service account for managing Cloud Run jobs: prefect-cloud-run \u2502\n\u2502 - Service account will be granted the following roles: \u2502\n\u2502 - Service Account User \u2502\n\u2502 - Cloud Run Developer \u2502\n\u2502 - Create a key for service account prefect-cloud-run \u2502\n\u2502 \u2502\n\u2502 Updates in Prefect workspace \u2502\n\u2502 \u2502\n\u2502 - Create GCP credentials block my--pool-push-pool-credentials to store the service account key \u2502\n\u2502 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n
After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.
While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.
To take advantage of this functionality, you can write your deploy script like this:
example_deploy_script.py

from prefect import flow
from prefect.deployments import DeploymentImage


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello {name}! I'm a flow running on Cloud Run!")


if __name__ == "__main__":
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-cloud-run-pool",
        image=DeploymentImage(
            name="my-image:latest",
            platform="linux/amd64",
        )
    )
Running this script will build a Docker image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest
and push it to your repository.
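For example, with the region, project, and repository created during provisioning above, the full tag would be us-central1-docker.pkg.dev/central-kit-405415/prefect-images/my-image:latest.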
Tip
Make sure you have Docker running locally before running this script.
Note that you only need to include a DeploymentImage object with the argument platform="linux/amd64" if you're building your image on a machine with an ARM-based processor. Otherwise, you can pass image="my-image:latest" directly to deploy.
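For example, here is a minimal sketch of the simpler form, reusing the flow and work pool from this tutorial and assuming an amd64 build machine:

from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello {name}! I'm a flow running on Cloud Run!")


if __name__ == "__main__":
    # A plain image tag is enough when the build machine's architecture
    # already matches the Cloud Run runtime (linux/amd64).
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="my-cloud-run-pool",
        image="my-image:latest",
    )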
See the Push Work Pool guide for more details and example commands for each cloud provider.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/work-pools/#next-step","title":"Next step","text":"Congratulations! You've learned how to deploy flows to work pools. If these work pool options meet all of your needs, we encourage you to go deeper with the concepts docs or explore our how-to guides to see examples of particular Prefect use cases.
However, if you need more control over your infrastructure, want to run your workflows in Kubernetes, or are running a self-hosted Prefect server instance, continue to the next section of the tutorial. There you'll learn how to use work pools that rely on a worker and how to customize Docker images for container-based infrastructure.
","tags":["work pools","orchestration","flow runs","deployments","schedules","tutorial"],"boost":2},{"location":"tutorial/workers/","title":"Workers","text":"","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#prerequisites","title":"Prerequisites","text":"Docker installed and running on your machine.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#why-workers","title":"Why workers","text":"In the previous section of the tutorial, you learned how work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. You saw how you can transition from persistent infrastructure to dynamic infrastructure by using flow.deploy
instead of flow.serve
.
Work pools that rely on client-side workers take this a step further by enabling you to run flows in your own Docker containers, Kubernetes clusters, and serverless environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.
The architecture of a worker-based work pool deployment can be summarized with the following diagram:
graph TD
    subgraph your_infra["Your Execution Environment"]
        worker["Worker"]
        subgraph flow_run_infra[Flow Run Infra]
            flow_run_a(("Flow Run A"))
        end
        subgraph flow_run_infra_2[Flow Run Infra]
            flow_run_b(("Flow Run B"))
        end
    end

    subgraph api["Prefect API"]
        Deployment --> |assigned to| work_pool
        work_pool(["Work Pool"])
    end

    worker --> |polls| work_pool
    worker --> |creates| flow_run_infra
    worker --> |creates| flow_run_infra_2
Notice above that the worker is in charge of provisioning the flow run infrastructure. In the context of this tutorial, that flow run infrastructure is an ephemeral Docker container to host each flow run. Different worker types create different types of flow run infrastructure.
Now that we've reviewed the concepts of a work pool and worker, let's create them so that you can deploy your tutorial flow and execute it later using the Prefect API.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#set-up-a-work-pool-and-worker","title":"Set up a work pool and worker","text":"For this tutorial you will create a Docker type work pool via the CLI.
Using the Docker work pool type means that all work sent to this work pool will run within a dedicated Docker container using a Docker client available to the worker.
Other work pool types
There are work pool types for serverless computing environments such as AWS ECS, Azure Container Instances, Google Cloud Run, and Vertex AI. Kubernetes is also a popular work pool type.
These options are expanded upon in various How-to Guides.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-a-work-pool","title":"Create a work pool","text":"In your terminal, run the following command to set up a Docker type work pool.
prefect work-pool create --type docker my-docker-pool
Let's confirm that the work pool was successfully created by running the following command in the same terminal.
prefect work-pool ls
You should see your new my-docker-pool
listed in the output.
Finally, let's double-check that you can see this work pool in your Prefect UI.
Navigate to the Work Pools tab and verify that you see my-docker-pool
listed.
When you click into my-docker-pool, you should see a red status icon signifying that this work pool is not ready.
To make the work pool ready, you need to start a worker.
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#start-a-worker","title":"Start a worker","text":"Workers are a lightweight polling process that kick off scheduled flow runs on a specific type of infrastructure (such as Docker). To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect
installed.
Run the following command in this new terminal to start the worker:
prefect worker start --pool my-docker-pool
You should see the worker start. It's now polling the Prefect API to check for any scheduled flow runs it should pick up and then submit for execution. You'll see your new worker listed in the UI under the Workers tab of the Work Pools page with a recent last polled date.
You should also be able to see a Ready
status indicator on your work pool - progress!
You will need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process.
Now that you've set up your work pool and worker, we have what we need to kick off and execute flow runs of flows deployed to this work pool. Let's deploy your tutorial flow to my-docker-pool.
From our previous steps, we now have: a flow, a work pool, and a worker. Now it's time to put it all together. We're going to update our repo_info.py file to build a Docker image and update our deployment so that our worker can execute it.
The updates that you need to make to repo_info.py are: change flow.serve to flow.deploy, tell flow.deploy which work pool to deploy to, and give flow.deploy the name to use for the Docker image that will be built.
looks like:
import httpx
from prefect import flow


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    url = f"https://api.github.com/repos/{repo_name}"
    response = httpx.get(url)
    response.raise_for_status()
    repo = response.json()
    print(f"{repo_name} repository statistics 🤓:")
    print(f"Stars 🌠 : {repo['stargazers_count']}")
    print(f"Forks 🍴 : {repo['forks_count']}")


if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        image="my-first-deployment-image:tutorial",
        push=False
    )
Why the push=False?
For this tutorial, your Docker worker is running on your machine, so you don't need to push the image built by flow.deploy to a registry. When your worker runs on a remote machine, you will need to push the image to a registry that the worker can access.
To do so, remove the push=False argument, include your registry name in the image, and make sure you've authenticated with the Docker CLI first.
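As a sketch of what that might look like (docker.io/my-username is a placeholder registry namespace; substitute your own and run docker login first):

from prefect import flow


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    print(f"Inspecting {repo_name}")  # placeholder flow body


if __name__ == "__main__":
    # With push=False removed, the built image is pushed to the registry
    # named in the tag, so a remote worker can pull it at run time.
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        image="docker.io/my-username/my-first-deployment-image:tutorial",
    )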
Now that you've updated your script, you can run it to deploy your flow to the work pool:
python repo_info.py
Prefect will build a custom Docker image containing your workflow code that the worker can use to dynamically spawn Docker containers whenever this workflow needs to run.
What Dockerfile?
In this example, Prefect generates a Dockerfile for you that will build an image based on one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt file.
If you want to use a custom Dockerfile, you can specify the path to the Dockerfile using the DeploymentImage
class:
import httpx
from prefect import flow
from prefect.deployments import DeploymentImage


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    url = f"https://api.github.com/repos/{repo_name}"
    response = httpx.get(url)
    response.raise_for_status()
    repo = response.json()
    print(f"{repo_name} repository statistics 🤓:")
    print(f"Stars 🌠 : {repo['stargazers_count']}")
    print(f"Forks 🍴 : {repo['forks_count']}")


if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        image=DeploymentImage(
            name="my-first-deployment-image",
            tag="tutorial",
            dockerfile="Dockerfile"
        ),
        push=False
    )
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#modify-the-deployment","title":"Modify the deployment","text":"If you need to make updates to your deployment, you can do so by modifying your script and rerunning it. You'll need to make one update to specify a value for job_variables
to ensure your Docker worker can successfully execute scheduled runs for this flow. See the example below.
The job_variables
section allows you to fine-tune the infrastructure settings for a specific deployment. These values override default values in the specified work pool's base job template.
When testing images locally without pushing them to a registry (to avoid potential errors like docker.errors.NotFound), we recommend including an image_pull_policy job variable set to Never. For production workflows, push images to a remote registry for greater reliability and accessibility.
Here's how you can quickly set the image_pull_policy to Never for this tutorial deployment without affecting the default value set on your work pool:
import httpx
from prefect import flow


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    url = f"https://api.github.com/repos/{repo_name}"
    response = httpx.get(url)
    response.raise_for_status()
    repo = response.json()
    print(f"{repo_name} repository statistics 🤓:")
    print(f"Stars 🌠 : {repo['stargazers_count']}")
    print(f"Forks 🍴 : {repo['forks_count']}")


if __name__ == "__main__":
    get_repo_info.deploy(
        name="my-first-deployment",
        work_pool_name="my-docker-pool",
        job_variables={"image_pull_policy": "Never"},
        image="my-first-deployment-image:tutorial",
        push=False
    )
To register this update to your deployment's parameters with Prefect's API, run:
python repo_info.py
Now everything is set for us to submit a flow run to the work pool:
prefect deployment run 'get_repo_info/my-first-deployment'
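If you'd rather trigger the run from Python than from the CLI, the run_deployment utility does the same thing; a minimal sketch:

from prefect.deployments import run_deployment

# Submits a run of the deployment to the work pool via the Prefect API
# and, by default, waits for the flow run to finish before returning it.
flow_run = run_deployment(name="get_repo_info/my-first-deployment")
print(flow_run.state)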
Did you know?
A Prefect flow can have more than one deployment. This pattern can be useful if you want your flow to run in different execution environments.
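For instance, the tutorial flow could be deployed once to the local Docker work pool and once to the Cloud Run pool from earlier in this tutorial; a sketch, assuming both pools exist in your workspace and your default Docker build namespace still points at the Artifact Registry repository from the push work pool section:

from prefect import flow


@flow(log_prints=True)
def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
    print(f"Inspecting {repo_name}")  # placeholder flow body


if __name__ == "__main__":
    # Local deployment: the image stays on this machine, so no push is needed.
    get_repo_info.deploy(
        name="local-docker-deployment",
        work_pool_name="my-docker-pool",
        image="my-first-deployment-image:tutorial",
        push=False,
    )
    # Remote deployment: the image is pushed so Cloud Run can pull it.
    get_repo_info.deploy(
        name="cloud-run-deployment",
        work_pool_name="my-cloud-run-pool",
        image="my-first-deployment-image:tutorial",
    )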
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#next-steps","title":"Next steps","text":"prefect.yaml
.Happy building!
","tags":["workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2}]} \ No newline at end of file diff --git a/versions/unreleased/sitemap.xml.gz b/versions/unreleased/sitemap.xml.gz index a129cd5dfe..3bcb00c249 100644 Binary files a/versions/unreleased/sitemap.xml.gz and b/versions/unreleased/sitemap.xml.gz differ