diff --git a/previews/PR56/.documenter-siteinfo.json b/previews/PR56/.documenter-siteinfo.json index 5cf7c17c..d2f1712a 100644 --- a/previews/PR56/.documenter-siteinfo.json +++ b/previews/PR56/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.0","generation_timestamp":"2024-10-11T10:38:42","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.0","generation_timestamp":"2024-10-11T19:35:13","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/previews/PR56/assets/compute_cluster.jpg b/previews/PR56/assets/compute_cluster.jpg new file mode 100644 index 00000000..725f63e2 Binary files /dev/null and b/previews/PR56/assets/compute_cluster.jpg differ diff --git a/previews/PR56/concepts/architectures/index.html b/previews/PR56/concepts/architectures/index.html index bb7dbd98..cd968af9 100644 --- a/previews/PR56/concepts/architectures/index.html +++ b/previews/PR56/concepts/architectures/index.html @@ -1,5 +1,5 @@ -Architectures · Chmy.jl

Architectures

Backend Selection & Architecture Initialization

Chmy.jl supports CPUs, as well as CUDA, ROC, and Metal backends for Nvidia, AMD, and Apple M-series GPUs, through a thin wrapper around KernelAbstractions.jl that lets users select the desired backend.

# Default with CPU
+Architectures · Chmy.jl

Architectures

Backend Selection & Architecture Initialization

Chmy.jl supports CPUs, as well as CUDA, ROC, and Metal backends for Nvidia, AMD, and Apple M-series GPUs, through a thin wrapper around KernelAbstractions.jl that lets users select the desired backend.

# Default with CPU
 arch = Arch(CPU())
using CUDA
 
 arch = Arch(CUDABackend())
using AMDGPU
@@ -8,4 +8,4 @@
 
 arch = Arch(MetalBackend())

At the beginning of the program, one may specify the backend and initialize the desired architecture. The initialized arch variable will be required explicitly when creating some objects, such as grids and kernel launchers.

Specifying the device ID and stream priority

On systems with multiple GPUs, passing the keyword argument device_id to the Arch constructor will select the specified device and set it as the current device.

For advanced users, we provide a function activate!(arch; priority) for specifying the priority of the stream owned by the task one is executing. The stream priority is set to :normal by default; :low and :high are also possible options, provided that the target backend implements priority control over streams.
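As a sketch combining both options (the CUDA backend, the device_id value, and the priority choice are arbitrary assumptions for illustration):

```julia
using Chmy, CUDA

# Select the second GPU on a multi-GPU node
arch = Arch(CUDABackend(); device_id=2)

# Request a high-priority stream for the current task,
# provided the backend implements stream priority control
activate!(arch; priority=:high)
```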

Distributed Architecture

Our distributed architecture builds upon the abstraction of a GPU cluster whose nodes share the same GPU architecture. Note that, in general, GPU clusters may be equipped with hardware from different vendors, incorporating different types of GPUs to exploit their unique capabilities for specific tasks.

GPU-Aware MPI Required for Distributed Module on GPU backend

The Distributed module currently only supports GPU-aware MPI when a GPU backend is selected for multi-GPU computations. For the Distributed module to function properly, a GPU-aware MPI library installation must be used; otherwise, a segmentation fault will occur.

To make the Architecture object aware of the MPI topology, users can pass an MPI communicator object and the dimensions of the Cartesian topology to the Arch constructor:

using MPI
 
-arch = Arch(CPU(), MPI.COMM_WORLD, (0, 0, 0))

Passing zeros as the last argument will automatically spread the dimensions to be as close to each other as possible; see the MPI.jl documentation for details.

+arch = Arch(CPU(), MPI.COMM_WORLD, (0, 0, 0))

Passing zeros as the last argument will automatically spread the dimensions to be as close to each other as possible; see the MPI.jl documentation for details. For distributed usage of Chmy.jl, see Distributed.

diff --git a/previews/PR56/concepts/bc/index.html b/previews/PR56/concepts/bc/index.html index bf5b8491..30f365cd 100644 --- a/previews/PR56/concepts/bc/index.html +++ b/previews/PR56/concepts/bc/index.html @@ -1,4 +1,4 @@ -Boundary Conditions · Chmy.jl

Boundary Conditions

Using Chmy.jl, we aim to study partial differential equations (PDEs) arising from physical or engineering problems. Additional initial and/or boundary conditions are necessary for the model problem to be well-posed, ensuring the existence and uniqueness of a stable solution.

We provide a small overview for boundary conditions that one often encounters. In the following, we consider the unknown function $u : \Omega \mapsto \mathbb{R}$ defined on some bounded computational domain $\Omega \subset \mathbb{R}^d$ in a $d$-dimensional space. With the domain boundary denoted by $\partial \Omega$, we have some function $g : \partial \Omega \mapsto \mathbb{R}$ prescribed on the boundary.

| Type | Form | Example |
| --- | --- | --- |
| Dirichlet | $u = g$ on $\partial \Omega$ | In fluid dynamics, the no-slip condition for viscous fluids states that at a solid boundary the fluid has zero velocity relative to the boundary. |
| Neumann | $\partial_{\boldsymbol{n}} u = g$ on $\partial \Omega$, where $\boldsymbol{n}$ is the outer normal vector to $\Omega$ | Specifies the values of the derivative of the solution on the boundary of the domain. An application in thermodynamics is a prescribed heat flux through the boundary. |
| Robin | $u + \alpha \partial_\nu u = g$ on $\partial \Omega$, where $\alpha \in \mathbb{R}$ | Also called impedance boundary conditions, from their application in electromagnetic problems. |

Applying Boundary Conditions with bc!()

In the following, we describe the syntax in Chmy.jl for launching kernels that impose boundary conditions on some field that is well-defined on a grid with backend specified through arch.

Dirichlet and Neumann boundary conditions are referred to as homogeneous if $g = 0$, and as non-homogeneous if $g = v$ for some $v \in \mathbb{R}$.

| | Homogeneous | Non-homogeneous |
| --- | --- | --- |
| Dirichlet on $\partial \Omega$ | bc!(arch, grid, field => Dirichlet()) | bc!(arch, grid, field => Dirichlet(v)) |
| Neumann on $\partial \Omega$ | bc!(arch, grid, field => Neumann()) | bc!(arch, grid, field => Neumann(v)) |

Note that the syntax shown in the table above is a fused expression of both specifying and applying the boundary conditions.

$\partial \Omega$ Refers to the Entire Domain Boundary!

By specifying a single boundary condition for field, we impose the boundary condition on the entire domain boundary by default. See the section "Mixed Boundary Conditions" below for specifying different BCs on different parts of the domain boundary.

Alternatively, one could also define the boundary conditions beforehand using batch(), providing the grid information as well as the field variable. This way, the boundary condition to be prescribed is precomputed.

# pre-compute batch
+Boundary Conditions · Chmy.jl

Boundary Conditions

Using Chmy.jl, we aim to study partial differential equations (PDEs) arising from physical or engineering problems. Additional initial and/or boundary conditions are necessary for the model problem to be well-posed, ensuring the existence and uniqueness of a stable solution.

We provide a small overview for boundary conditions that one often encounters. In the following, we consider the unknown function $u : \Omega \mapsto \mathbb{R}$ defined on some bounded computational domain $\Omega \subset \mathbb{R}^d$ in a $d$-dimensional space. With the domain boundary denoted by $\partial \Omega$, we have some function $g : \partial \Omega \mapsto \mathbb{R}$ prescribed on the boundary.

| Type | Form | Example |
| --- | --- | --- |
| Dirichlet | $u = g$ on $\partial \Omega$ | In fluid dynamics, the no-slip condition for viscous fluids states that at a solid boundary the fluid has zero velocity relative to the boundary. |
| Neumann | $\partial_{\boldsymbol{n}} u = g$ on $\partial \Omega$, where $\boldsymbol{n}$ is the outer normal vector to $\Omega$ | Specifies the values of the derivative of the solution on the boundary of the domain. An application in thermodynamics is a prescribed heat flux through the boundary. |
| Robin | $u + \alpha \partial_\nu u = g$ on $\partial \Omega$, where $\alpha \in \mathbb{R}$ | Also called impedance boundary conditions, from their application in electromagnetic problems. |

Applying Boundary Conditions with bc!()

In the following, we describe the syntax in Chmy.jl for launching kernels that impose boundary conditions on some field that is well-defined on a grid with backend specified through arch.

Dirichlet and Neumann boundary conditions are referred to as homogeneous if $g = 0$, and as non-homogeneous if $g = v$ for some $v \in \mathbb{R}$.

| | Homogeneous | Non-homogeneous |
| --- | --- | --- |
| Dirichlet on $\partial \Omega$ | bc!(arch, grid, field => Dirichlet()) | bc!(arch, grid, field => Dirichlet(v)) |
| Neumann on $\partial \Omega$ | bc!(arch, grid, field => Neumann()) | bc!(arch, grid, field => Neumann(v)) |

Note that the syntax shown in the table above is a fused expression of both specifying and applying the boundary conditions.

$\partial \Omega$ Refers to the Entire Domain Boundary!

By specifying a single boundary condition for field, we impose the boundary condition on the entire domain boundary by default. See the section "Mixed Boundary Conditions" below for specifying different BCs on different parts of the domain boundary.

Alternatively, one could also define the boundary conditions beforehand using batch(), providing the grid information as well as the field variable. This way, the boundary condition to be prescribed is precomputed.

# pre-compute batch
 bt = batch(grid, field => Neumann()) # specify Neumann BC for the variable `field`
-bc!(arch, grid, bt)                  # apply the boundary condition

In the script batcher.jl, we provide an MWE using both fused and precomputed expressions for the BC update.

Specifying BC within a launch

When using launch to specify the execution of a kernel (see the section Kernels for more details), one can pass the specified boundary condition(s) as an optional parameter using batch, provided the grid information of the discretized space. This way, we gain efficiency by making good use of already-cached values.

In the 2D diffusion example introduced in the tutorial "Getting Started with Chmy.jl", we need to update the temperature field C at the k-th iteration using the values of the heat flux q and the physical time step size Δt from the (k-1)-th iteration. When launching the kernel update_C! with launch, we simultaneously launch the kernel for the BC update using:

launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))

Mixed Boundary Conditions

In the code example above, by specifying boundary conditions using syntax such as field => Neumann(), we essentially launch a kernel that imposes the Neumann boundary condition on the entire domain boundary $\partial \Omega$. More often, one may be interested in prescribing different boundary conditions on different parts of $\partial \Omega$.

The following figure showcases a 2D square domain $\Omega$ with different boundary conditions applied on each side:

  • The top boundary (red) is a Dirichlet boundary condition where $u = a$.
  • The bottom boundary (blue) is also a Dirichlet boundary condition where $u = b$.
  • The left and right boundaries (green) are Neumann boundary conditions where $\frac{\partial u}{\partial x} = 0$.

To launch a kernel that satisfies these boundary conditions in Chmy.jl, you can use the following code:

bc!(arch, grid, field => (x = Neumann(), y = (Dirichlet(b), Dirichlet(a))))
+bc!(arch, grid, bt) # apply the boundary condition

In the script batcher.jl, we provide an MWE using both fused and precomputed expressions for the BC update.

Specifying BC within a launch

When using launch to specify the execution of a kernel (see the section Kernels for more details), one can pass the specified boundary condition(s) as an optional parameter using batch, provided the grid information of the discretized space. This way, we gain efficiency by making good use of already-cached values.

In the 2D diffusion example introduced in the tutorial "Getting Started with Chmy.jl", we need to update the temperature field C at the k-th iteration using the values of the heat flux q and the physical time step size Δt from the (k-1)-th iteration. When launching the kernel update_C! with launch, we simultaneously launch the kernel for the BC update using:

launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))

Mixed Boundary Conditions

In the code example above, by specifying boundary conditions using syntax such as field => Neumann(), we essentially launch a kernel that imposes the Neumann boundary condition on the entire domain boundary $\partial \Omega$. More often, one may be interested in prescribing different boundary conditions on different parts of $\partial \Omega$.

The following figure showcases a 2D square domain $\Omega$ with different boundary conditions applied on each side:

  • The top boundary (red) is a Dirichlet boundary condition where $u = a$.
  • The bottom boundary (blue) is also a Dirichlet boundary condition where $u = b$.
  • The left and right boundaries (green) are Neumann boundary conditions where $\frac{\partial u}{\partial x} = 0$.

To launch a kernel that satisfies these boundary conditions in Chmy.jl, you can use the following code:

bc!(arch, grid, field => (x = Neumann(), y = (Dirichlet(b), Dirichlet(a))))
diff --git a/previews/PR56/concepts/distributed/index.html b/previews/PR56/concepts/distributed/index.html new file mode 100644 index 00000000..fd1a262c --- /dev/null +++ b/previews/PR56/concepts/distributed/index.html @@ -0,0 +1,2 @@ + +Distributed · Chmy.jl

Distributed

Task-based parallelism in Chmy.jl builds on the usage of Threads.@spawn, with an additional Worker construct for efficiently managing the lifespan of tasks. Note that task-based parallelism provides a high-level abstraction of program execution not only for shared-memory architectures on a single machine; it can also be extended to hybrid parallelism, consisting of both shared- and distributed-memory parallelism. The Distributed module in Chmy.jl allows users to leverage this hybrid parallelism through the power of abstraction.

We will start with some basic background knowledge for understanding the architecture of modern HPC clusters, the underlying memory model, and the programming paradigm that goes with it. We then introduce how Chmy.jl provides a high-level API that abstracts the low-level details away, followed by a simple example showing how the Distributed module should be used.

HPC Cluster & Distributed Memory

A high-performance computing (HPC) cluster consists of a network of independent computers combined into a system through specialized hardware. We call each computer a node; each node manages its own private memory. Such a system of interconnected nodes, where no node has access to the memory of any other node, features the distributed memory model. The underlying fast interconnect architecture (e.g. InfiniBand) that physically connects the nodes in the network can transfer data from one node to another extremely efficiently through a communication protocol called remote direct memory access (RDMA).

Using InfiniBand, processes on different nodes can communicate with each other by sending messages in a high-throughput, low-latency fashion. The syntax and semantics of how message passing proceeds over such a network are defined by a standard called the Message-Passing Interface (MPI), and there are different libraries that implement the standard, giving users a wide range of choices (MPICH, Open MPI, MVAPICH, etc.).

Message-Passing Interface (MPI) is a General Specification

In general, implementations of the MPI standard can be used on a great variety of computers, not just HPC clusters, as long as these computers are connected by a communication network.
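To make the message-passing model concrete, here is a minimal MPI.jl example, independent of Chmy.jl, in which each process reports its rank; it can be launched with, e.g., mpiexec -n 4 julia hello_mpi.jl (the script name is arbitrary):

```julia
using MPI

MPI.Init()
comm  = MPI.COMM_WORLD            # default communicator spanning all processes
rank  = MPI.Comm_rank(comm)       # this process's ID within the communicator
nproc = MPI.Comm_size(comm)       # total number of processes

println("Hello from rank $rank out of $nproc")
MPI.Finalize()
```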

Hybrid Parallelism

diff --git a/previews/PR56/concepts/fields/index.html b/previews/PR56/concepts/fields/index.html index 8ac94703..4bf1074a 100644 --- a/previews/PR56/concepts/fields/index.html +++ b/previews/PR56/concepts/fields/index.html @@ -1,5 +1,5 @@ -Fields · Chmy.jl

Fields

With a given grid that allows us to define each point uniquely in a high-dimensional space, we abstract the data values to be defined on the grid under the concept AbstractField. Following is the type tree of the abstract field and its derived data types.

Defining a multi-dimensional Field

Consider the following example, where we define a variable grid of type Chmy.UniformGrid, as in the previous section Grids. We can now define physical properties on the grid.

When defining a scalar field Field on the grid, we need to specify the arrangement of the field values. These values can either be stored at the cell centers of each control volume Center() or on the cell vertices/faces Vertex().

# Define geometry, architecture..., a 2D grid
+Fields · Chmy.jl

Fields

With a given grid that allows us to define each point uniquely in a high-dimensional space, we abstract the data values to be defined on the grid under the concept AbstractField. Following is the type tree of the abstract field and its derived data types.

Defining a multi-dimensional Field

Consider the following example, where we define a variable grid of type Chmy.UniformGrid, as in the previous section Grids. We can now define physical properties on the grid.

When defining a scalar field Field on the grid, we need to specify the arrangement of the field values. These values can either be stored at the cell centers of each control volume Center() or on the cell vertices/faces Vertex().

# Define geometry, architecture..., a 2D grid
 grid = UniformGrid(arch; origin=(-lx/2, -ly/2), extent=(lx, ly), dims=(nx, ny))
 
 # Define pressure as a scalar field
@@ -28,4 +28,4 @@
            y=FunctionField(ρgy, grid, vy_node; parameters=η0))

Defining Constant Fields

For completeness, we also provide an abstract type ConstantField, which comprises a generic ValueField type and two special types, ZeroField and OneField, allowing dispatch for special cases. With such a construct, we can easily define field properties and other parameters using constant values in a straightforward and readable manner. Moreover, explicit information about the grid on which the field should be defined can be omitted. For example:

# Defines a field with constant values 1.0
 field = Chmy.ValueField(1.0)

Alternatively, we could also use the OneField type, providing type information about the contents of the field.

# Defines a field with constant value 1.0
 onefield = Chmy.OneField{Float64}()

Notably, these two fields compare equal, as expected.

julia> field == onefield
-true
+true
diff --git a/previews/PR56/concepts/grid_operators/index.html b/previews/PR56/concepts/grid_operators/index.html index d3e2cc88..6c257189 100644 --- a/previews/PR56/concepts/grid_operators/index.html +++ b/previews/PR56/concepts/grid_operators/index.html @@ -1,5 +1,5 @@ -Grid Operators · Chmy.jl

Grid Operators

Chmy.jl currently supports various finite difference operators for fields defined in Cartesian coordinates. The table below summarizes the most common usage of grid operators, with the grid g::StructuredGrid and the index I = @index(Global, Cartesian) defined, and P = Field(backend, grid, location) being some field defined on the grid g.

| Mathematical Formulation | Code |
| --- | --- |
| $\frac{\partial}{\partial x} P$ | ∂x(P, g, I) |
| $\frac{\partial}{\partial y} P$ | ∂y(P, g, I) |
| $\frac{\partial}{\partial z} P$ | ∂z(P, g, I) |
| $\nabla \cdot P$ | divg(P, g, I) |

Computing the Divergence of a Vector Field

To illustrate the usage of grid operators, we compute the divergence of a vector field $V$ using the divg function. We first allocate memory for the required fields.

V  = VectorField(backend, grid)
+Grid Operators · Chmy.jl

Grid Operators

Chmy.jl currently supports various finite difference operators for fields defined in Cartesian coordinates. The table below summarizes the most common usage of grid operators, with the grid g::StructuredGrid and the index I = @index(Global, Cartesian) defined, and P = Field(backend, grid, location) being some field defined on the grid g.

| Mathematical Formulation | Code |
| --- | --- |
| $\frac{\partial}{\partial x} P$ | ∂x(P, g, I) |
| $\frac{\partial}{\partial y} P$ | ∂y(P, g, I) |
| $\frac{\partial}{\partial z} P$ | ∂z(P, g, I) |
| $\nabla \cdot P$ | divg(P, g, I) |

Computing the Divergence of a Vector Field

To illustrate the usage of grid operators, we compute the divergence of a vector field $V$ using the divg function. We first allocate memory for the required fields.

V  = VectorField(backend, grid)
 ∇V = Field(backend, grid, Center())
 # use set! to set up the initial vector field...

The kernel that computes the divergence needs to have the grid information passed in, just as for the other finite difference operators.

@kernel inbounds = true function update_∇!(V, ∇V, g::StructuredGrid, O)
     I = @index(Global, Cartesian)
@@ -39,4 +39,4 @@
     # interpolate from cell centres to cell interfaces
     ρx = lerp(ρ, location(ρx), g, I)
     ρy = lerp(ρ, location(ρy), g, I)
-end
+end
diff --git a/previews/PR56/concepts/grids/index.html b/previews/PR56/concepts/grids/index.html index 7dc2d6d7..41cc4a49 100644 --- a/previews/PR56/concepts/grids/index.html +++ b/previews/PR56/concepts/grids/index.html @@ -1,5 +1,5 @@ -Grids · Chmy.jl

Grids

The choice of numerical grid used depends on the type of equations to be resolved and affects the discretization schemes used. The design of the Chmy.Grids module aims to provide a robust yet flexible user API in customizing the numerical grids used for spatial discretization.

We currently support grids with quadrilateral cells. An N-dimensional numerical grid contains N spatial dimensions, each represented by an axis.

| Grid Properties | Description | Tunable Parameters |
| --- | --- | --- |
| Dimensions | The grid can be N-dimensional by having N axes. | AbstractAxis |
| Distribution of Nodal Points | The grid can be regular (uniform distribution) or non-regular (irregular distribution). | UniformAxis, FunctionAxis |
| Distribution of Variables | The grid can be non-staggered (collocated) or staggered, affecting how variables are positioned within the grid. | Center, Vertex |

Axis

Objects of type AbstractAxis are building blocks of numerical grids. We can either define equidistant axes with UniformAxis, or parameterized axes with FunctionAxis.

Uniform Axis

To define a uniform axis, we need to provide:

  • Origin: The starting point of the axis.
  • Extent: The length of the section of the axis considered.
  • Cell Length: The length of each cell along the axis.

With the information above, an axis can be defined and incorporated into a spatial dimension. The spacing (with alias Δ) and inv_spacing functions allow convenient access to the grid spacing (Δx/Δy/Δz) and its reciprocal, respectively.

Function Axis

As an alternative, one could also define a FunctionAxis object using a function that parameterizes the spacing of the axis, together with the length of the axis.

f = i -> ((i - 1) / 4)^1.5
+Grids · Chmy.jl

Grids

The choice of numerical grid used depends on the type of equations to be resolved and affects the discretization schemes used. The design of the Chmy.Grids module aims to provide a robust yet flexible user API in customizing the numerical grids used for spatial discretization.

We currently support grids with quadrilateral cells. An N-dimensional numerical grid contains N spatial dimensions, each represented by an axis.

| Grid Properties | Description | Tunable Parameters |
| --- | --- | --- |
| Dimensions | The grid can be N-dimensional by having N axes. | AbstractAxis |
| Distribution of Nodal Points | The grid can be regular (uniform distribution) or non-regular (irregular distribution). | UniformAxis, FunctionAxis |
| Distribution of Variables | The grid can be non-staggered (collocated) or staggered, affecting how variables are positioned within the grid. | Center, Vertex |

Axis

Objects of type AbstractAxis are building blocks of numerical grids. We can either define equidistant axes with UniformAxis, or parameterized axes with FunctionAxis.

Uniform Axis

To define a uniform axis, we need to provide:

  • Origin: The starting point of the axis.
  • Extent: The length of the section of the axis considered.
  • Cell Length: The length of each cell along the axis.

With the information above, an axis can be defined and incorporated into a spatial dimension. The spacing (with alias Δ) and inv_spacing functions allow convenient access to the grid spacing (Δx/Δy/Δz) and its reciprocal, respectively.
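As a sketch, a uniform axis starting at -lx/2 with extent lx and nx cells might be constructed as follows (the positional argument order origin, extent, number-of-cells is an assumption for illustration, matching the properties listed above):

```julia
# Uniform axis: origin, extent, number of cells (argument order assumed)
axis = UniformAxis(-lx/2, lx, nx)
```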

Function Axis

As an alternative, one could also define a FunctionAxis object using a function that parameterizes the spacing of the axis, together with the length of the axis.

f = i -> ((i - 1) / 4)^1.5
 length = 4
 parameterized_axis = FunctionAxis(f, length)

Structured Grids

A common mesh structure that is used for the spatial discretization in the finite difference approach is a structured grid (concrete type StructuredGrid or its alias SG).

We provide a function UniformGrid for creating an equidistant StructuredGrid, that essentially boils down to having axes of type UniformAxis in each spatial dimension.

# with architecture as well as numerics lx/y/z and nx/y/z defined
 grid   = UniformGrid(arch;
@@ -11,4 +11,4 @@
 
 julia> @assert connectivity(grid, Dim(2), Side(1)) isa Bounded "Upper boundary is bounded"
 
-julia> @assert connectivity(grid, Dim(2), Side(2)) isa Bounded "Lower boundary is bounded"
+julia> @assert connectivity(grid, Dim(2), Side(2)) isa Bounded "Lower boundary is bounded"
diff --git a/previews/PR56/concepts/kernels/index.html b/previews/PR56/concepts/kernels/index.html index 7a1bc3cf..ee0540a7 100644 --- a/previews/PR56/concepts/kernels/index.html +++ b/previews/PR56/concepts/kernels/index.html @@ -1,5 +1,5 @@ -Kernels · Chmy.jl

Kernels

The KernelAbstractions.jl package provides a macro-based dialect that hides the intricacies of vendor-specific GPU programming. It allows one to write hardware-agnostic kernels that can be instantiated and launched for different device backends without modifying the high-level code nor sacrificing performance.

In the following, we show how to write and launch kernels on various backends. We also explain the concept of a Launcher in Chmy.jl, that complements the default kernel launching, allowing us to hide the latency between the bulk of the computations and boundary conditions or MPI communications.

Writing Kernels

This section highlights some important features of KernelAbstractions.jl that are essential for understanding the high-level abstraction of the kernel concept that is used throughout our package. As this section serves only illustrative purposes, please refer to their documentation for more specific examples.

using KernelAbstractions
+Kernels · Chmy.jl

Kernels

The KernelAbstractions.jl package provides a macro-based dialect that hides the intricacies of vendor-specific GPU programming. It allows one to write hardware-agnostic kernels that can be instantiated and launched for different device backends without modifying the high-level code nor sacrificing performance.

In the following, we show how to write and launch kernels on various backends. We also explain the concept of a Launcher in Chmy.jl, that complements the default kernel launching, allowing us to hide the latency between the bulk of the computations and boundary conditions or MPI communications.

Writing Kernels

This section highlights some important features of KernelAbstractions.jl that are essential for understanding the high-level abstraction of the kernel concept that is used throughout our package. As this section serves only illustrative purposes, please refer to their documentation for more specific examples.

using KernelAbstractions
 
 # Define a kernel that performs element-wise operations on A
 @kernel function mul2!(A)
@@ -45,4 +45,4 @@
 
     # with Neumann boundary conditions and MPI exchange
     launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))
-end
+end
diff --git a/previews/PR56/developer_documentation/running_tests/index.html b/previews/PR56/developer_documentation/running_tests/index.html index 09fafc6c..95e7ffd5 100644 --- a/previews/PR56/developer_documentation/running_tests/index.html +++ b/previews/PR56/developer_documentation/running_tests/index.html @@ -1,5 +1,5 @@ -Running Tests · Chmy.jl

Running Tests

CPU tests

To run the Chmy test suite on the CPU, simply run test from within the package mode or using Pkg:

julia> using Pkg
+Running Tests · Chmy.jl

Running Tests

CPU tests

To run the Chmy test suite on the CPU, simply run test from within the package mode or using Pkg:

julia> using Pkg
 
 julia> Pkg.test("Chmy")

GPU tests

To run the Chmy test suite on the CUDA, ROC, or Metal backend (Nvidia, AMD, or Apple GPUs), run the tests using Pkg with the following test_args:

For CUDA backend (Nvidia GPUs):

julia> using Pkg
 
@@ -7,4 +7,4 @@
 
 julia> Pkg.test("Chmy"; test_args=["--backend=AMDGPU"])

For Metal backend (Apple GPUs):

julia> using Pkg
 
-julia> Pkg.test("Chmy"; test_args=["--backend=Metal"])
+julia> Pkg.test("Chmy"; test_args=["--backend=Metal"])
diff --git a/previews/PR56/developer_documentation/workers/index.html b/previews/PR56/developer_documentation/workers/index.html index cd94775a..4de73cc8 100644 --- a/previews/PR56/developer_documentation/workers/index.html +++ b/previews/PR56/developer_documentation/workers/index.html @@ -1,2 +1,2 @@ -Workers · Chmy.jl

Workers

Task-based parallelism provides a highly abstract view of program execution scheduling, although it may come with a performance overhead related to task creation and destruction. The overhead is currently significant when tasks are used to perform asynchronous operations on GPUs, where TLS context creation and destruction may be on the order of the kernel execution time. Therefore, it may be desirable to have long-running tasks that do not get terminated immediately, but only when all queued subtasks (i.e. work units) are executed.

In Chmy.jl, we introduced the concept Worker for this purpose. A Worker is a special construct to extend the lifespan of a task created by Threads.@spawn. It possesses a Channel of subtasks to be executed on the current thread, where subtasks are submitted at construction time to the worker using put!.

With the help of the Worker, we can specify subtasks that need to be sequentially executed by enqueuing them to one Worker. Any work units that shall run in parallel should be put into separate workers instead.

Currently, we use Workers under the hood to overlap communications with computations. We split the computational domain into an inner part, containing the bulk of the grid points, and a thin outer part. We launch the same kernels processing the inner and outer parts in different Julia tasks. When the outer part completes, we launch the non-blocking MPI communication. Workers are a stateful representation of the long-running computation, needed to avoid the significant overhead of creating a new task-local state each time a communication is performed.

+Workers · Chmy.jl

Workers

Task-based parallelism provides a highly abstract view of program execution scheduling, although it may come with a performance overhead related to task creation and destruction. The overhead is currently significant when tasks are used to perform asynchronous operations on GPUs, where TLS context creation and destruction may be on the order of the kernel execution time. Therefore, it may be desirable to have long-running tasks that do not get terminated immediately, but only when all queued subtasks (i.e. work units) are executed.

In Chmy.jl, we introduced the concept Worker for this purpose. A Worker is a special construct to extend the lifespan of a task created by Threads.@spawn. It possesses a Channel of subtasks to be executed on the current thread, where subtasks are submitted at construction time to the worker using put!.
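The Worker itself is internal to Chmy.jl, but the underlying pattern can be sketched with plain Julia tasks and channels (a simplified illustration using only Base and Threads; MiniWorker, submit!, and shutdown! are hypothetical names, not Chmy.jl API):

```julia
# A long-running task that sequentially executes submitted work units
struct MiniWorker
    ch::Channel{Function}
    task::Task
end

function MiniWorker()
    ch = Channel{Function}(Inf)
    task = Threads.@spawn begin
        for work in ch      # blocks until work arrives; loop ends when channel closes
            work()
        end
    end
    return MiniWorker(ch, task)
end

submit!(w::MiniWorker, f) = put!(w.ch, f)               # enqueue a work unit
shutdown!(w::MiniWorker) = (close(w.ch); wait(w.task))  # drain and terminate

# Work units submitted to the same worker run sequentially;
# independent streams of work should go to separate workers.
w = MiniWorker()
submit!(w, () -> println("work unit 1"))
submit!(w, () -> println("work unit 2"))
shutdown!(w)
```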

With the help of the Worker, we can specify subtasks that need to be sequentially executed by enqueuing them to one Worker. Any work units that shall run in parallel should be put into separate workers instead.

Currently, we use Workers under the hood to hide the communications between computations. We split the computational domain into inner part, containing the bulk of the grid points, and thin outer part. We launch the same kernels processing the inner and outer parts in different Julia tasks. When the outer part completes, we launch the non-blocking MPI communication. Workers are a stateful representation of the long-running computation, needed to avoid significant overhead of creating a new task-local state each time a communication is performed.

diff --git a/previews/PR56/examples/overview/index.html b/previews/PR56/examples/overview/index.html index e10193ec..0e62df9a 100644 --- a/previews/PR56/examples/overview/index.html +++ b/previews/PR56/examples/overview/index.html @@ -1,2 +1,2 @@ -Examples Overview · Chmy.jl

Examples Overview

This page provides an overview of Chmy.jl examples. These selected examples demonstrate how Chmy.jl can be used to solve various numerical problems using architecture-agnostic kernels, both on a single device and in a distributed setting.

Table of Contents

Example                                   Description
Diffusion 2D                              Solving the 2D diffusion equation on a uniform grid.
Diffusion 2D with MPI                     Solving the 2D diffusion equation on a uniform grid and distributed parallelisation using MPI.
Single-Device Performance Optimisation    Revisiting the 2D diffusion problem with focus on performance optimisation techniques on a single-device architecture.
Stokes 2D with MPI                        Solving the 2D Stokes equation with thermal coupling on a uniform grid.
Stokes 3D with MPI                        Solving the 3D Stokes equation with thermal coupling on a uniform grid and distributed parallelisation using MPI.
Diffusion 1D with Metal                   Solving the 1D diffusion equation using the Metal backend and single precision (Float32) on a uniform grid.
2D Grid Visualization                     Visualization of a 2D StructuredGrid.
3D Grid Visualization                     Visualization of a 3D StructuredGrid.
diff --git a/previews/PR56/getting_started/index.html b/previews/PR56/getting_started/index.html index fb1d6b9f..9d2dacfc 100644 --- a/previews/PR56/getting_started/index.html +++ b/previews/PR56/getting_started/index.html @@ -1,5 +1,5 @@ -Getting Started with Chmy.jl · Chmy.jl

Getting Started with Chmy.jl

Chmy.jl is a backend-agnostic toolkit for finite difference computations on multi-dimensional computational staggered grids. In this introductory tutorial, we will showcase the essence of Chmy.jl by solving a simple 2D diffusion problem. The full code of the tutorial material is available under diffusion_2d.jl.

Basic Diffusion

The diffusion equation is a second-order parabolic PDE. Here it is written for a multivariable function $C(x,y,t)$ representing the field being diffused (such as temperature or the concentration of a chemical component in a solution), with derivatives in both the temporal ($\partial t$) and spatial ($\partial x$, $\partial y$) dimensions, where $\chi$ is the diffusion coefficient. In 2D we have the following formulation for the diffusion process:


\[\begin{equation} \frac{\partial C}{\partial t} = \chi \left( \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} \right). \end{equation}\]

Introducing the diffusion flux $q$, we can rewrite equation (1) as a system of two PDEs, consisting of equations (2) and (3).

\[\begin{equation}
\boldsymbol{q} = -\chi \nabla C~,
\end{equation}\]

\[\begin{equation}
\frac{\partial C}{\partial t} = -\nabla \cdot \boldsymbol{q}~.
\end{equation}\]

Colorbar(fig[1, 2], plt)
display(fig)
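For intuition, one explicit finite-difference step of this flux formulation can be sketched in plain Julia on a uniform grid. This is an illustration only; the tutorial itself uses Chmy.jl fields and kernels, and diffusion_step! is a hypothetical helper:

```julia
# Explicit update of the flux form: q = -χ ∇C, then ∂C/∂t = -∇⋅q.
# qx and qy live on x- and y-faces (staggered), C at cell centers.
function diffusion_step!(C, qx, qy, χ, dx, dy, dt)
    nx, ny = size(C)
    for j in 1:ny, i in 1:nx-1
        qx[i, j] = -χ * (C[i+1, j] - C[i, j]) / dx
    end
    for j in 1:ny-1, i in 1:nx
        qy[i, j] = -χ * (C[i, j+1] - C[i, j]) / dy
    end
    for j in 2:ny-1, i in 2:nx-1   # update interior points only
        C[i, j] -= dt * ((qx[i, j] - qx[i-1, j]) / dx +
                         (qy[i, j] - qy[i, j-1]) / dy)
    end
    return C
end

C  = zeros(64, 64); C[32, 32] = 1.0   # point source
qx = zeros(63, 64); qy = zeros(64, 63)
for _ in 1:100
    diffusion_step!(C, qx, qy, 1.0, 1.0, 1.0, 0.2)  # dt ≤ dx²/(4χ) for stability
end
```

The conservative (flux) form keeps the total amount of C constant up to boundary effects, which is a useful sanity check for any implementation.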

diff --git a/previews/PR56/index.html b/previews/PR56/index.html index d25e2578..37a9494c 100644 --- a/previews/PR56/index.html +++ b/previews/PR56/index.html @@ -1,4 +1,4 @@ -Home · Chmy.jl

Chmy.jl

Chmy.jl (pronounced tsh-mee) is a backend-agnostic toolkit for finite difference computations on multi-dimensional computational staggered grids. Chmy.jl features task-based distributed memory parallelisation capabilities.

Installation

To install Chmy.jl, one can simply add it using the Julia package manager:

julia> using Pkg
julia> Pkg.add("Chmy")

After the package is installed, one can load the package by using:

julia> using Chmy
Install from a Specific Branch

For developers and advanced users, one might want to use the implementation of Chmy.jl from a specific branch by specifying its URL. In the following code snippet, we explicitly install the current implementation available on the main branch:

julia> using Pkg; Pkg.add(url="https://github.com/PTsolvers/Chmy.jl#main")

Feature Summary

Chmy.jl provides a comprehensive framework for handling complex computational tasks on structured grids, leveraging both single and multi-device architectures. It seamlessly integrates with Julia's powerful parallel and concurrent programming capabilities, making it suitable for a wide range of scientific and engineering applications.

A general list of the features is:

  • Backend-agnostic capabilities leveraging KernelAbstractions.jl
  • Distributed computing support with MPI.jl
  • Multi-dimensional, parametrisable discrete and continuous fields on structured grids
  • High-level interface for specifying boundary conditions with automatic batching for performance
  • Finite difference and interpolation operators on discrete fields
  • Extensibility: the package is written in pure Julia, so adding new functions, simplification rules, and model transformations has no barrier

Funding

The development of this package is supported by the GPU4GEO PASC project. More information about the GPU4GEO project can be found on the GPU4GEO website.


diff --git a/previews/PR56/lib/modules/index.html b/previews/PR56/lib/modules/index.html index 398761f1..93faf8a0 100644 --- a/previews/PR56/lib/modules/index.html +++ b/previews/PR56/lib/modules/index.html @@ -1,5 +1,5 @@ -Modules · Chmy.jl

Modules

Grids

Chmy.Grids.AbstractAxisType
abstract type AbstractAxis{T}

Abstract type representing an axis in a grid, where the axis is parameterized by the type T of the coordinates.

source
Chmy.Grids.CenterType
struct Center <: Location

The Center struct represents a location at the center along a dimension of a grid cell.

source
Chmy.Grids.UniformGridMethod
UniformGrid(arch; origin, extent, dims, topology=nothing)

Constructs a uniform grid with specified origin, extent, dimensions, and topology.

Arguments

  • arch::Architecture: The associated architecture.
  • origin::NTuple{N,Number}: The origin of the grid.
  • extent::NTuple{N,Number}: The extent of the grid.
  • dims::NTuple{N,Integer}: The dimensions of the grid.
  • topology=nothing: The topology of the grid. If not provided, a default Bounded topology is used.
source
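As an illustration of the signature above, a 2D grid covering the unit square might be constructed as follows (a sketch; assumes Chmy.jl and KernelAbstractions.jl are installed):

```julia
using Chmy, KernelAbstractions

arch = Arch(CPU())
grid = UniformGrid(arch; origin=(0.0, 0.0), extent=(1.0, 1.0), dims=(64, 64))
```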
Chmy.Grids.VertexType
struct Vertex <: Location

The Vertex struct represents a location at the vertex along a dimension of a grid cell.

source
Chmy.Grids.axisMethod
axis(grid, dim::Dim)

Return the axis corresponding to the spatial dimension dim.

source
Chmy.Grids.boundsMethod
bounds(grid, loc, [dim::Dim])

Return the bounds of a structured grid at the specified location(s).

source
Chmy.Grids.connectivityMethod
connectivity(grid, dim::Dim, side::Side)

Return the connectivity of the structured grid grid for the given dimension dim and side side.

source
Chmy.Grids.coordMethod
coord(grid, loc, I...)

Return a tuple of spatial coordinates of a grid point at location loc and indices I.

For vertex locations, the first grid point is at the origin. For center locations, the first grid point is at a half-spacing distance from the origin.

source
Chmy.Grids.extentMethod
extent(grid, loc, [dim::Dim])

Return the extent of a structured grid at the specified location(s).

source
Chmy.Grids.iΔFunction
iΔ

Alias for the inv_spacing method that returns the reciprocal of the spacing between grid points.

source
Chmy.Grids.originMethod
origin(grid, loc, [dim::Dim])

Return the origin of a structured grid at the specified location(s).

source
Chmy.Grids.spacingMethod
spacing(grid, loc, I...)

Return a tuple of grid spacings at location loc and indices I.

source
Chmy.Grids.ΔFunction
Δ

Alias for the spacing method that returns the spacing between grid points.

source

Architectures

Chmy.Architectures.ArchMethod
Arch(backend::Backend; device_id::Integer=1)

Create an architecture object for the specified backend and device.

Arguments

  • backend: The backend to use for computation.
  • device_id=1: The ID of the device to use.
source
Chmy.Architectures.activate!Method
activate!(arch::SingleDeviceArchitecture; priority=:normal)

Activate the given architecture on the specified device and set the priority of the backend. For the priority accepted values are :normal, :low and :high.

source

Fields

Chmy.Fields.AbstractFieldType
abstract type AbstractField{T,N,L} <: AbstractArray{T,N}

Abstract type representing a field with data type T, number of dimensions N, and location L at which the field is defined.

See also: abstract type ConstantField

source
Chmy.Fields.FieldType
struct Field{T,N,L,H,A} <: AbstractField{T,N,L}

Field represents a discrete scalar field with specified type, number of dimensions, location, and halo size.

source
Chmy.Fields.FieldMethod
Field(arch::Architecture, args...; kwargs...)

Create a Field object on the specified architecture.

Arguments:

  • arch::Architecture: The architecture for which to create the Field.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.FieldMethod
Field(backend, grid, loc, type=eltype(grid); halo=1)

Constructs a field on a structured grid at the specified location.

Arguments:

  • backend: The backend to use for memory allocation.
  • grid: The structured grid on which the field is constructed.
  • loc: The location or locations on the grid where the field is constructed.
  • type: The element type of the field. Defaults to the element type of the grid.
  • halo: The halo size for the field. Defaults to 1.
source
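Combining the two constructors documented above, fields might be created as follows (a sketch based on the documented signatures; the staggered-location tuple is an assumption drawn from the "location or locations" wording):

```julia
using Chmy, KernelAbstractions

arch = Arch(CPU())
grid = UniformGrid(arch; origin=(0.0, 0.0), extent=(1.0, 1.0), dims=(64, 64))

C  = Field(arch, grid, Center())               # cell-centered scalar, halo=1
Vx = Field(arch, grid, (Vertex(), Center()))   # staggered: vertex in x, center in y
```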
Chmy.Fields.FunctionFieldType
FunctionField <: AbstractField

Continuous or discrete field with values computed at runtime.

Constructors

  • FunctionField(func, grid, loc; [discrete], [parameters]): Create a new FunctionField object.
source
Chmy.Fields.FunctionFieldMethod
FunctionField(func::F, grid::StructuredGrid{N}, loc; discrete=false, parameters=nothing) where {F,N}

Create a FunctionField on the given grid using the specified function func.

Arguments:

  • func::F: The function used to generate the field values.
  • grid::StructuredGrid{N}: The structured grid defining the computational domain.
  • loc: The nodal location on the grid grid at which the function field is defined.
  • discrete::Bool=false: A flag indicating whether the field should be discrete. Defaults to false.
  • parameters=nothing: Additional parameters to be used by the function. Defaults to nothing.
source
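For example, a Gaussian initial condition can be expressed as a continuous function of the physical coordinates (a sketch based on the signature above):

```julia
using Chmy, KernelAbstractions

arch = Arch(CPU())
grid = UniformGrid(arch; origin=(-1.0, -1.0), extent=(2.0, 2.0), dims=(64, 64))

# Evaluated at runtime from the cell-center coordinates:
init(x, y) = exp(-x^2 - y^2)
C0 = FunctionField(init, grid, (Center(), Center()))
```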
Chmy.Fields.TensorFieldMethod
TensorField(backend::Backend, grid::StructuredGrid{2}, args...; kwargs...)

Create a 2D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, and xy each being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{2}: The 2D structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.TensorFieldMethod
TensorField(backend::Backend, grid::StructuredGrid{3}, args...; kwargs...)

Create a 3D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, zz, xy, xz, and yz each being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{3}: The 3D structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.VectorFieldMethod
VectorField(backend::Backend, grid::StructuredGrid{N}, args...; kwargs...) where {N}

Create a vector field in the form of a NamedTuple on the given grid using the specified backend, with each component being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{N}: The structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.interiorMethod
interior(f::Field; with_halo=false)

Display the field on the interior of the grid on which it is defined. One can optionally display the halo regions as well by passing with_halo=true.

source
Chmy.Fields.set!Method
set!(f::Field, A::AbstractArray)

Set the elements of the Field f using the values from the AbstractArray A.

Arguments:

  • f::Field: The Field object to be modified.
  • A::AbstractArray: The array whose values are to be copied to the Field.
source
Chmy.Fields.set!Method
set!(f::Field, other::AbstractField)

Set the elements of the Field f using the values from another AbstractField other.

Arguments:

  • f::Field: The destination Field object to be modified.
  • other::AbstractField: The source AbstractField whose values are to be copied to f.
source
Chmy.Fields.set!Method
set!(f::Field, val::Number)

Set all elements of the Field f to the specified numeric value val.

Arguments:

  • f::Field: The Field object to be modified.
  • val::Number: The numeric value to set in the Field.
source
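The three set! methods above cover the common initialization patterns (a sketch based on the documented signatures):

```julia
using Chmy, KernelAbstractions

arch = Arch(CPU())
grid = UniformGrid(arch; origin=(0.0, 0.0), extent=(1.0, 1.0), dims=(4, 4))
C = Field(arch, grid, Center())
D = Field(arch, grid, Center())

set!(C, 0.0)          # fill with a constant value
set!(C, rand(4, 4))   # copy from an array of matching interior size
set!(D, C)            # copy from another field
```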

Grid Operators

Chmy.GridOperators.AbstractMaskType
abstract type AbstractMask{T,N}

Abstract type representing the data transformation to be performed on elements in a field of dimension N, where each element is of type T.

source
Chmy.GridOperators.InterpolationRuleType
abstract type InterpolationRule

A type representing an interpolation rule that specifies how the interpolant f should be reconstructed using a data set on a given grid.

source
Chmy.GridOperators.hlerpMethod
hlerp(f, to, grid, I...)

Interpolate a field f to location to using harmonic linear interpolation rule.

rule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0))

source
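The rule itself is easy to inspect in plain Julia: at the midpoint it returns the harmonic rather than the arithmetic mean of the two values (a self-contained illustration of the formula above, not Chmy.jl's hlerp itself):

```julia
# The harmonic interpolation rule from the docstring, next to ordinary
# linear interpolation for comparison.
harm(t, v0, v1) = 1 / (1/v0 + t * (1/v1 - 1/v0))
lerp(t, v0, v1) = v0 + t * (v1 - v0)

harm(0.0, 1.0, 3.0)   # ≈ 1.0 — matches the left value
harm(1.0, 1.0, 3.0)   # ≈ 3.0 — matches the right value
harm(0.5, 1.0, 3.0)   # ≈ 1.5 — harmonic mean of 1 and 3
lerp(0.5, 1.0, 3.0)   # ≈ 2.0 — arithmetic mean
```

Harmonic averaging is a common choice when interpolating coefficients that appear inside flux terms, such as spatially varying material properties.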
Chmy.GridOperators.itpMethod
itp(f, to, r, grid, I...)

Interpolates the field f from its current location to the specified location(s) to using the given interpolation rule r. The indices specify the position within the grid at location(s) to.

source

Boundary Conditions

Chmy.BoundaryConditions.bc!Method
bc!(arch::Architecture, grid::StructuredGrid, batch::BatchSet)

Apply boundary conditions using a batch set batch containing an AbstractBatch per dimension per side of grid.

Arguments

  • arch: The architecture.
  • grid: The grid.
  • batch: The batch set to apply boundary conditions to.
source

Kernel launcher

Chmy.KernelLaunch.LauncherMethod
Launcher(arch, grid; outer_width=nothing)

Constructs a Launcher object configured based on the input parameters.

Arguments:

  • arch: The associated architecture.
  • grid: The grid defining the computational domain.
  • outer_width: Optional parameter specifying outer width.
Warning

The worksize for the last dimension N takes into account only the last outer width W[N]; dimension N-1 uses W[N] and W[N-1], and dimension N-2 uses W[N], W[N-1], and W[N-2].

source
Chmy.KernelLaunch.LauncherMethod
(launcher::Launcher)(arch::Architecture, grid, kernel_and_args::Pair{F,Args}; bc=nothing, async=false) where {F,Args}

Launches a computational kernel using the specified arch, grid, kernel_and_args, and optional boundary conditions (bc).

Arguments:

  • arch::Architecture: The architecture on which to execute the computation.
  • grid: The grid defining the computational domain.
  • kernel_and_args::Pair{F,Args}: A pair consisting of the computational kernel F and its arguments Args.
  • bc=nothing: Optional boundary conditions for the computation.
  • async=false: If true, launches the kernel asynchronously.
Warning
  • arch should be compatible with the Launcher's architecture.
  • If bc is nothing, the kernel is launched without boundary conditions.
  • If async is false (default), the function waits for the computation to complete before returning.
source
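Putting the two Launcher entries together, a launch might look as follows. This is a shape sketch only: update_C! stands for a user-defined KernelAbstractions kernel, and C, q, dt for its arguments; none of these are defined here.

```julia
using Chmy, KernelAbstractions

arch = Arch(CPU())
grid = UniformGrid(arch; origin=(0.0, 0.0), extent=(1.0, 1.0), dims=(64, 64))
launch = Launcher(arch, grid)

# update_C! => (C, q, dt) pairs the kernel with its arguments;
# async=false (the default) waits for completion before returning.
launch(arch, grid, update_C! => (C, q, dt); bc=nothing, async=false)
```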

Distributed

Chmy.Distributed.CartesianTopologyMethod
CartesianTopology(comm, dims)

Create an N-dimensional Cartesian topology using the base MPI communicator comm with dimensions dims. If all entries in dims are nonzero, the product of dims should equal the total number of MPI processes MPI.Comm_size(comm). If any (or all) entries of dims are 0, the dimensions in the corresponding spatial directions will be picked automatically.

source
Chmy.Distributed.StackAllocatorType
mutable struct StackAllocator

Simple stack (a.k.a. bump or arena) allocator. Maintains an internal buffer that grows dynamically if the requested allocation exceeds the current buffer size.

source
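The bump-allocation idea can be sketched in a few lines of plain Julia (a conceptual illustration, not Chmy.jl's StackAllocator; Bump, alloc!, and reset! are hypothetical names):

```julia
# A bump allocator hands out consecutive slices of one buffer; freeing
# is only possible all at once, by resetting the offset.
mutable struct Bump
    buf::Vector{UInt8}
    offset::Int
end
Bump(sz::Integer) = Bump(Vector{UInt8}(undef, sz), 0)

function alloc!(b::Bump, n::Integer)
    if b.offset + n > length(b.buf)
        resize!(b.buf, b.offset + n)   # grow dynamically, like StackAllocator
    end
    r = (b.offset + 1):(b.offset + n)
    b.offset += n
    return view(b.buf, r)
end

reset!(b::Bump) = (b.offset = 0; b)    # "free" everything in O(1)

b  = Bump(8)
a1 = alloc!(b, 8)    # fits in the initial buffer
a2 = alloc!(b, 8)    # exceeds capacity, triggers growth to 16 bytes
```

Because an allocation is just an offset increment, repeated per-step temporary allocations become essentially free once the buffer has reached its working size.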
Base.resize!Method
resize!(sa::StackAllocator, sz::Integer)

Resize the StackAllocator's buffer to capacity of sz bytes. This method will throw an error if any arrays were already allocated using this allocator.

source
Chmy.Architectures.ArchMethod
Architectures.Arch(backend::Backend, comm::MPI.Comm, dims; device_id=nothing)

Create a distributed Architecture using backend backend and comm. For GPU backends, device will be selected automatically based on a process id within a node, unless specified by device_id.

Arguments

  • backend::Backend: The backend to use for the architecture.
  • comm::MPI.Comm: The MPI communicator to use for the architecture.
  • dims: The dimensions of the architecture.

Keyword Arguments

  • device_id: The ID of the device to use. If not provided, the shared rank of the topology plus one is used.
source
Chmy.Architectures.activate!Method
activate!(arch::DistributedArchitecture; kwargs...)

Activate the given DistributedArchitecture by delegating to the child architecture, and pass through any keyword arguments. For example, the priority can be set with accepted values being :normal, :low, and :high.

source
Chmy.Architectures.get_backendMethod
get_backend(arch::DistributedArchitecture)

Get the backend associated with a DistributedArchitecture by delegating to the child architecture.

source
Chmy.Architectures.get_deviceMethod
get_device(arch::DistributedArchitecture)

Get the device associated with a DistributedArchitecture by delegating to the child architecture.

source
Chmy.BoundaryConditions.bc!Method
BoundaryConditions.bc!(side::Side, dim::Dim,
+Modules · Chmy.jl

Modules

Grids

Chmy.Grids.AbstractAxisType
abstract type AbstractAxis{T}

Abstract type representing an axis in a grid, where the axis is parameterized by the type T of the coordinates.

source
Chmy.Grids.CenterType
struct Center <: Location

The Center struct represents a location at the center along a dimension of a grid cell.

source
Chmy.Grids.UniformGridMethod
UniformGrid(arch; origin, extent, dims, topology=nothing)

Constructs a uniform grid with specified origin, extent, dimensions, and topology.

Arguments

  • arch::Architecture: The associated architecture.
  • origin::NTuple{N,Number}: The origin of the grid.
  • extent::NTuple{N,Number}: The extent of the grid.
  • dims::NTuple{N,Integer}: The dimensions of the grid.
  • topology=nothing: The topology of the grid. If not provided, a default Bounded topology is used.
source
Chmy.Grids.VertexType
struct Vertex <: Location

The Vertex struct represents a location at the vertex along a dimension of a grid cell.

source
Chmy.Grids.axisMethod
axis(grid, dim::Dim)

Return the axis corresponding to the spatial dimension dim.

source
Chmy.Grids.boundsMethod
bounds(grid, loc, [dim::Dim])

Return the bounds of a structured grid at the specified location(s).

source
Chmy.Grids.connectivityMethod
connectivity(grid, dim::Dim, side::Side)

Return the connectivity of the structured grid grid for the given dimension dim and side side.

source
Chmy.Grids.coordMethod
coord(grid, loc, I...)

Return a tuple of spatial coordinates of a grid point at location loc and indices I.

For vertex locations, first grid point is at the origin. For center locations, first grid point at half-spacing distance from the origin.

source
Chmy.Grids.extentMethod
extent(grid, loc, [dim::Dim])

Return the extent of a structured grid at the specified location(s).

source
Chmy.Grids.iΔFunction

Alias for the inv_spacing method that returns the reciprocal of the spacing between grid points.

source
Chmy.Grids.originMethod
origin(grid, loc, [dim::Dim])

Return the origin of a structured grid at the specified location(s).

source
Chmy.Grids.spacingMethod
spacing(grid, loc, I...)

Return a tuple of grid spacings at location loc and indices I.

source
Chmy.Grids.ΔFunction
Δ

Alias for the spacing method that returns the spacing between grid points.

source

Architectures

Chmy.Architectures.ArchMethod
Arch(backend::Backend; device_id::Integer=1)

Create an architecture object for the specified backend and device.

Arguments

  • backend: The backend to use for computation.
  • device_id=1: The ID of the device to use.
source
Chmy.Architectures.activate!Method
activate!(arch::SingleDeviceArchitecture; priority=:normal)

Activate the given architecture on the specified device and set the priority of the backend. For the priority accepted values are :normal, :low and :high.

source

Fields

Chmy.Fields.AbstractFieldType
abstract type AbstractField{T,N,L} <: AbstractArray{T,N}

Abstract type representing a field with data type T, number of dimensions N, location L where the field should be defined on.

See also: abstract type ConstantField

source
Chmy.Fields.FieldType
struct Field{T,N,L,H,A} <: AbstractField{T,N,L}

Field represents a discrete scalar field with specified type, number of dimensions, location, and halo size.

source
Chmy.Fields.FieldMethod
Field(arch::Architecture, args...; kwargs...)

Create a Field object on the specified architecture.

Arguments:

  • arch::Architecture: The architecture for which to create the Field.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.FieldMethod
Field(backend, grid, loc, type=eltype(grid); halo=1)

Constructs a field on a structured grid at the specified location.

Arguments:

  • backend: The backend to use for memory allocation.
  • grid: The structured grid on which the field is constructed.
  • loc: The location or locations on the grid where the field is constructed.
  • type: The element type of the field. Defaults to the element type of the grid.
  • halo: The halo size for the field. Defaults to 1.
source
Chmy.Fields.FunctionFieldType
FunctionField <: AbstractField

Continuous or discrete field with values computed at runtime.

Constructors

  • FunctionField(func, grid, loc; [discrete], [parameters]): Create a new FunctionField object.
source
Chmy.Fields.FunctionFieldMethod
FunctionField(func::F, grid::StructuredGrid{N}, loc; discrete=false, parameters=nothing) where {F,N}

Create a FunctionField on the given grid using the specified function func.

Arguments:

  • func::F: The function used to generate the field values.
  • grid::StructuredGrid{N}: The structured grid defining the computational domain.
  • loc: The nodal location on the grid grid where the function field is defined on.
  • discrete::Bool=false: A flag indicating whether the field should be discrete. Defaults to false.
  • parameters=nothing: Additional parameters to be used by the function. Defaults to nothing.
source
Chmy.Fields.TensorFieldMethod
TensorField(backend::Backend, grid::StructuredGrid{2}, args...; kwargs...)

Create a 2D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, and xy each being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{2}: The 2D structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.TensorFieldMethod
TensorField(backend::Backend, grid::StructuredGrid{3}, args...; kwargs...)

Create a 3D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, zz, xy, xz, and yz each being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{3}: The 3D structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.VectorFieldMethod
VectorField(backend::Backend, grid::StructuredGrid{N}, args...; kwargs...) where {N}

Create a vector field in the form of a NamedTuple on the given grid using the specified backend. With each component being a Field.

Arguments:

  • backend::Backend: The backend to be used for computation.
  • grid::StructuredGrid{N}: The structured grid defining the computational domain.
  • args...: Additional positional arguments to pass to the Field constructor.
  • kwargs...: Additional keyword arguments to pass to the Field constructor.
source
Chmy.Fields.interiorMethod
interior(f::Field; with_halo=false)

Displays the field on the interior of the grid on which it is defined on. One could optionally specify to display the halo regions on the grid with with_halo=true.

source
Chmy.Fields.set!Method
set!(f::Field, A::AbstractArray)

Set the elements of the Field f using the values from the AbstractArray A.

Arguments:

  • f::Field: The Field object to be modified.
  • A::AbstractArray: The array whose values are to be copied to the Field.
source
Chmy.Fields.set!Method
set!(f::Field, other::AbstractField)

Set the elements of the Field f using the values from another AbstractField other.

Arguments:

  • f::Field: The destination Field object to be modified.
  • other::AbstractField: The source AbstractField whose values are to be copied to f.
source
Chmy.Fields.set!Method
set!(f::Field, val::Number)

Set all elements of the Field f to the specified numeric value val.

Arguments:

  • f::Field: The Field object to be modified.
  • val::Number: The numeric value to set in the Field.
source

Grid Operators

Chmy.GridOperators.AbstractMaskType
abstract type AbstractMask{T,N}

Abstract type representing the data transformation to be performed on elements in a field of dimension N, where each element is of typeT.

source
Chmy.GridOperators.InterpolationRuleType
abstract type InterpolationRule

A type representing an interpolation rule that specifies how the interpolant f should be reconstructed using a data set on a given grid.

source
Chmy.GridOperators.hlerpMethod
hlerp(f, to, grid, I...)

Interpolate a field f to location to using harmonic linear interpolation rule.

rule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0))

source
Chmy.GridOperators.itpMethod
itp(f, to, r, grid, I...)

Interpolates the field f from its current location to the specified location(s) to using the given interpolation rule r. The indices specify the position within the grid at location(s) to.

source

Boundary Conditions

Chmy.BoundaryConditions.bc!Method
bc!(arch::Architecture, grid::StructuredGrid, batch::BatchSet)

Apply boundary conditions using a batch set batch containing an AbstractBatch per dimension per side of grid.

Arguments

  • arch: The architecture.
  • grid: The grid.
  • batch:: The batch set to apply boundary conditions to.
source

Kernel launcher

Chmy.KernelLaunch.LauncherMethod
Launcher(arch, grid; outer_width=nothing)

Constructs a Launcher object configured based on the input parameters.

Arguments:

  • arch: The associated architecture.
  • grid: The grid defining the computational domain.
  • outer_width: Optional parameter specifying outer width.
Warning

The worksize for the last dimension N takes into account only the last outer width W[N]; dimension N-1 uses W[N] and W[N-1], and dimension N-2 uses W[N], W[N-1], and W[N-2].

source
Chmy.KernelLaunch.LauncherMethod
(launcher::Launcher)(arch::Architecture, grid, kernel_and_args::Pair{F,Args}; bc=nothing, async=false) where {F,Args}

Launches a computational kernel using the specified arch, grid, kernel_and_args, and optional boundary conditions (bc).

Arguments:

  • arch::Architecture: The architecture on which to execute the computation.
  • grid: The grid defining the computational domain.
  • kernel_and_args::Pair{F,Args}: A pair consisting of the computational kernel F and its arguments Args.
  • bc=nothing: Optional boundary conditions for the computation.
  • async=false: If true, launches the kernel asynchronously.
Warning
  • arch should be compatible with the Launcher's architecture.
  • If bc is nothing, the kernel is launched without boundary conditions.
  • If async is false (default), the function waits for the computation to complete before returning.
source
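Putting the two Launcher methods together, a typical pattern looks like the following sketch; the kernels compute_q! and update_C! are borrowed from the diffusion example in the tutorials, and q, C, χ, and Δt are assumed to be defined:

```julia
launch = Launcher(arch, grid)

# launch a kernel without boundary conditions
launch(arch, grid, compute_q! => (q, C, χ, grid))

# launch a kernel, then apply Neumann BC (and wait, since async=false by default)
launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann()))
```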

Distributed

Chmy.Distributed.CartesianTopologyMethod
CartesianTopology(comm, dims)

Create an N-dimensional Cartesian topology using the base MPI communicator comm with dimensions dims. If all entries in dims are nonzero, the product of dims must equal the total number of MPI processes MPI.Comm_size(comm). If any (or all) entries of dims are 0, the dimensions in the corresponding spatial directions are picked automatically.

source
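A minimal sketch of creating a topology, assuming MPI.jl is initialized; passing zeros lets MPI choose the decomposition:

```julia
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
topo = CartesianTopology(comm, (0, 0))  # 2D topology; dims picked automatically
```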
Chmy.Distributed.StackAllocatorType
mutable struct StackAllocator

Simple stack (a.k.a. bump/arena) allocator. Maintains an internal buffer that grows dynamically if the requested allocation exceeds current buffer size.

source
Base.resize!Method
resize!(sa::StackAllocator, sz::Integer)

Resize the StackAllocator's buffer to a capacity of sz bytes. This method throws an error if any arrays have already been allocated using this allocator.

source
Chmy.Architectures.ArchMethod
Architectures.Arch(backend::Backend, comm::MPI.Comm, dims; device_id=nothing)

Create a distributed Architecture using backend backend and communicator comm. For GPU backends, the device is selected automatically based on the process id within a node, unless specified by device_id.

Arguments

  • backend::Backend: The backend to use for the architecture.
  • comm::MPI.Comm: The MPI communicator to use for the architecture.
  • dims: The dimensions of the architecture.

Keyword Arguments

  • device_id: The ID of the device to use. If not provided, the shared rank of the topology plus one is used.
source
Chmy.Architectures.activate!Method
activate!(arch::DistributedArchitecture; kwargs...)

Activate the given DistributedArchitecture by delegating to the child architecture, and pass through any keyword arguments. For example, the priority can be set with accepted values being :normal, :low, and :high.

source
Chmy.Architectures.get_backendMethod
get_backend(arch::DistributedArchitecture)

Get the backend associated with a DistributedArchitecture by delegating to the child architecture.

source
Chmy.Architectures.get_deviceMethod
get_device(arch::DistributedArchitecture)

Get the device associated with a DistributedArchitecture by delegating to the child architecture.

source
Chmy.BoundaryConditions.bc!Method
BoundaryConditions.bc!(side::Side, dim::Dim,
                             arch::DistributedArchitecture,
                             grid::StructuredGrid,
                             batch::ExchangeBatch; kwargs...)

Apply boundary conditions on a distributed grid with halo exchange performed internally.

Arguments

  • side: The side of the grid where the halo exchange is performed.
  • dim: The dimension along which the halo exchange is performed.
  • arch: The distributed architecture used for communication.
  • grid: The structured grid on which the halo exchange is performed.
  • batch: The batch set to apply boundary conditions to.
source
Chmy.Distributed.allocateFunction
allocate(sa::StackAllocator, T::DataType, dims, [align=sizeof(T)])

Allocate a buffer of type T with dimensions dims using a stack allocator. The align parameter specifies the alignment of the buffer elements.

Arguments

  • sa::StackAllocator: The stack allocator object.
  • T::DataType: The data type of the requested allocation.
  • dims: The dimensions of the requested allocation.
  • align::Integer: The alignment of the allocated buffer in bytes.
Warning

Arrays allocated with StackAllocator are not managed by the Julia runtime. The user is responsible for ensuring correct lifetimes, i.e., that the reference to the allocator outlives all arrays allocated with it.

source
Chmy.Distributed.exchange_halo!Method
exchange_halo!(side::Side, dim::Dim, arch, grid, fields...; async=false)

Perform halo exchange communication between neighboring processes in a distributed architecture.

Arguments

  • side: The side of the grid where the halo exchange is performed.
  • dim: The dimension along which the halo exchange is performed.
  • arch: The distributed architecture used for communication.
  • grid: The structured grid on which the halo exchange is performed.
  • fields...: The fields to be exchanged.

Optional Arguments

  • async=false: Whether to perform the halo exchange asynchronously.
source
Chmy.Distributed.exchange_halo!Method
exchange_halo!(arch, grid, fields...)

Perform halo exchange for the given architecture, grid, and fields.

Arguments

  • arch: The distributed architecture to perform halo exchange on.
  • grid: The structured grid on which halo exchange is performed.
  • fields: The fields on which halo exchange is performed.
source
Chmy.Distributed.gather!Method
gather!(arch, dst, src::Field; kwargs...)

Gather the interior of a field src into a global array dst on the CPU.

source
Chmy.Distributed.gather!Method
gather!(dst, src, comm::MPI.Comm; root=0)

Gather the local array src into a global array dst. The size of the global array size(dst) must equal the product of the size of a local array size(src) and the dimensions of the Cartesian communicator comm. The array is gathered on the process with id root (root=0 by default). Note that the memory for the global array needs to be allocated only on the process with id root; on other processes dst can be set to nothing.

source
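A hedged sketch of the collective gather! call for a 2D decomposition; nx and ny denote the local array size and dims the Cartesian communicator dimensions (all assumed defined):

```julia
rank = MPI.Comm_rank(comm)
# allocate the global array only on the root process
dst = rank == 0 ? zeros(nx * dims[1], ny * dims[2]) : nothing
gather!(dst, src, comm)  # afterwards, rank 0 holds the assembled global array
```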
Chmy.Distributed.has_neighborMethod
has_neighbor(topo, dim, side)

Returns true if there is a neighbor process in spatial direction dim on the side side, or false otherwise.

source
Chmy.Distributed.neighborMethod
neighbor(topo, dim, side)

Returns the id of a neighbor process in spatial direction dim on the side side, if this neighbor exists, or MPI.PROC_NULL otherwise.

source
Chmy.Distributed.neighborsMethod
neighbors(topo)

Neighbors of the current process.

Returns a tuple of the ranks of the two immediate neighbors in each spatial direction, or MPI.PROC_NULL if there is no neighbor on the corresponding side.

source
Chmy.Distributed.reset!Method
reset!(sa::StackAllocator)

Reset the stack allocator by resetting the pointer. Doesn't free the internal memory buffer.

source
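The allocator functions above combine into a simple lifecycle. A sketch under the assumption that a StackAllocator can be constructed for the current backend (the exact constructor signature is not shown in this reference):

```julia
sa = StackAllocator(backend)           # assumed constructor
a  = allocate(sa, Float64, (nx, ny))   # bump-allocate from the internal buffer
b  = allocate(sa, Float64, (nx,), 64)  # request 64-byte alignment
# ... use a and b; keep `sa` alive for as long as they are in use ...
reset!(sa)  # rewind the pointer; the internal buffer is kept for reuse
```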

Workers

Chmy.Workers.WorkerType
Worker

A worker that performs tasks asynchronously.

Constructor

Worker{T}(; [setup], [teardown]) where {T}

Constructs a new Worker object.

Arguments

  • setup: A function to be executed before the worker starts processing tasks. (optional)
  • teardown: A function to be executed after the worker finishes processing tasks. (optional)
source
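A heavily hedged sketch of the Worker lifecycle; the type parameter T and the put!-based submission API are assumptions based on the Workers developer documentation, and the work-unit functions are hypothetical:

```julia
w = Worker{Nothing}(; setup    = () -> @info("worker started"),
                      teardown = () -> @info("worker stopped"))

# work units enqueued to the same worker run sequentially;
# independent work belongs in separate workers
put!(w, () -> process_inner_points!())  # hypothetical work unit
put!(w, () -> start_halo_exchange!())   # hypothetical work unit
```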
var documenterSearchIndex = {"docs": [{"location":"developer_documentation/workers/#Workers","page":"Workers","title":"Workers","text":"","category":"section"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"Task-based parallelism provides a highly abstract view of program execution scheduling, although it may come with a performance overhead related to task creation and destruction. The overhead is currently significant when tasks are used to perform asynchronous operations on GPUs, where TLS context creation and destruction may be on the order of kernel execution time. Therefore, it may be desirable to have long-running tasks that do not get terminated immediately, but only when all queued subtasks (i.e. work units) are executed.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"In Chmy.jl, we introduced the concept of a Worker for this purpose. A Worker is a special construct to extend the lifespan of a task created by Threads.@spawn. It possesses a Channel of subtasks to be executed on the current thread, where subtasks are submitted at construction time to the worker using put!.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"With the help of the Worker, we can specify subtasks that need to be sequentially executed by enqueuing them to one Worker. 
Any work units that shall run in parallel should be put into separate workers instead.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"Currently, we use Workers under the hood to hide the communications between computations. We split the computational domain into inner part, containing the bulk of the grid points, and thin outer part. We launch the same kernels processing the inner and outer parts in different Julia tasks. When the outer part completes, we launch the non-blocking MPI communication. Workers are a stateful representation of the long-running computation, needed to avoid significant overhead of creating a new task-local state each time a communication is performed.","category":"page"},{"location":"concepts/bc/#Boundary-Conditions","page":"Boundary Conditions","title":"Boundary Conditions","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Using Chmy.jl, we aim to study partial differential equations (PDEs) arising from physical or engineering problems. Additional initial and/or boundary conditions are necessary for the model problem to be well-posed, ensuring the existence and uniqueness of a stable solution.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"We provide a small overview for boundary conditions that one often encounters. In the following, we consider the unknown function u Omega mapsto mathbbR defined on some bounded computational domain Omega subset mathbbR^d in a d-dimensional space. 
With the domain boundary denoted by partial Omega, we have some function g partial Omega mapsto mathbbR prescribed on the boundary.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Type Form Example\nDirichlet u = g on partial Omega In fluid dynamics, the no-slip condition for viscous fluids states that at a solid boundary the fluid has zero velocity relative to the boundary.\nNeumann partial_boldsymboln u = g on partial Omega, where boldsymboln is the outer normal vector to Omega It specifies the values in which the derivative of a solution is applied within the boundary of the domain. An application in thermodynamics is a prescribed heat flux through the boundary\nRobin u + alpha partial_nu u = g on partial Omega, where alpha in mathbbR. Also called impedance boundary conditions from their application in electromagnetic problems","category":"page"},{"location":"concepts/bc/#Applying-Boundary-Conditions-with-bc!()","page":"Boundary Conditions","title":"Applying Boundary Conditions with bc!()","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the following, we describe the syntax in Chmy.jl for launching kernels that impose boundary conditions on some field that is well-defined on a grid with backend specified through arch.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"For Dirichlet and Neumann boundary conditions, they are referred to as homogeneous if g = 0, otherwise they are non-homogeneous if g = v holds, for some vin mathbbR.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":" Homogeneous Non-homogeneous\nDirichlet on partial Omega bc!(arch, grid, field => Dirichlet()) bc!(arch, grid, field => Dirichlet(v))\nNeumann on partial Omega bc!(arch, grid, field => Neumann()) bc!(arch, grid, field 
=> Neumann(v))","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Note that the syntax shown in the table above is a fused expression of both specifying and applying the boundary conditions.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"warning: $\\partial \\Omega$ Refers to the Entire Domain Boundary!\nBy specifying field to a single boundary condition, we impose the boundary condition on the entire domain boundary by default. See the section "Mixed Boundary Conditions" below for specifying different BC on different parts of the domain boundary.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Alternatively, one could also define the boundary conditions beforehand using batch(), given the grid information as well as the field variable. This way the boundary condition to be prescribed is precomputed.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"# pre-compute batch\nbt = batch(grid, field => Neumann()) # specify Neumann BC for the variable `field`\nbc!(arch, grid, bt) # apply the boundary condition","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the script batcher.jl, we provide a MWE using both fused and precomputed expressions for the BC update.","category":"page"},{"location":"concepts/bc/#Specifying-BC-within-a-launch","page":"Boundary Conditions","title":"Specifying BC within a launch","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"When using launch to specify the execution of a kernel (see section Kernels for more), one can pass the specified boundary condition(s) as an optional parameter using batch, provided the grid information of the discretized space. 
This way we can gain efficiency from making good use of already cached values.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the 2D diffusion example as introduced in the tutorial \"Getting Started with Chmy.jl\", we need to update the temperature field C at the k-th iteration using the values of heat flux q and physical time step size Δt from the (k-1)-th iteration. When launching the kernel update_C! with launch, we simultaneously launch the kernel for the BC update using:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))","category":"page"},{"location":"concepts/bc/#Mixed-Boundary-Conditions","page":"Boundary Conditions","title":"Mixed Boundary Conditions","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the code example above, by specifying boundary conditions using syntax such as field => Neumann(), we essentially launch a kernel that imposes the Neumann boundary condition on the entire domain boundary partial Omega. 
More often, one may be interested in prescribing different boundary conditions on different parts of partial Omega.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"The following figure showcases a 2D square domain Omega with different boundary conditions applied on each side:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"The top boundary (red) is a Dirichlet boundary condition where u = a.\nThe bottom boundary (blue) is also a Dirichlet boundary condition where u = b.\nThe left and right boundaries (green) are Neumann boundary conditions where fracpartial upartial y = 0.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"To launch a kernel that satisfies these boundary conditions in Chmy.jl, you can use the following code:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"bc!(arch, grid, field => (x = Neumann(), y = (Dirichlet(b), Dirichlet(a))))","category":"page"},{"location":"concepts/kernels/#Kernels","page":"Kernels","title":"Kernels","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The KernelAbstractions.jl package provides a macro-based dialect that hides the intricacies of vendor-specific GPU programming. It allows one to write hardware-agnostic kernels that can be instantiated and launched for different device backends without modifying the high-level code nor sacrificing performance.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the following, we show how to write and launch kernels on various backends. 
We also explain the concept of a Launcher in Chmy.jl, that complements the default kernel launching, allowing us to hide the latency between the bulk of the computations and boundary conditions or MPI communications.","category":"page"},{"location":"concepts/kernels/#Writing-Kernels","page":"Kernels","title":"Writing Kernels","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"This section highlights some important features of KernelAbstractions.jl that are essential for understanding the high-level abstraction of the kernel concept that is used throughout our package. As it barely serves for illustrative purposes, for more specific examples, please refer to their documentation.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"using KernelAbstractions\n\n# Define a kernel that performs element-wise operations on A\n@kernel function mul2!(A)\n # use @index macro to obtain the global Cartesian index of the current work item.\n I = @index(Global, Cartesian)\n A[I] *= 2\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The kernel mul2! being defined using the @kernel macro, we can launch it on the desired backend to perform the element-wise operations on host.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define array and work group size\nA = ones(1024, 1024)\nbackend = get_backend(A) # CPU\n\n# Launch kernel and explicitly synchronize\nkernel = mul2!(backend)\nkernel(A, ndrange=size(A))\nKernelAbstractions.synchronize(backend)\n\n# Result assertion\n@assert(all(A .== 2.0) == true)","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"To launch the kernel on GPU devices, one could simply define A as CuArray, ROCArray or oneArray as detailed in the section \"launching kernel on the backend\". 
More fine-grained memory access is available using the @index macro as described here.","category":"page"},{"location":"concepts/kernels/#Thread-Indexing","page":"Kernels","title":"Thread Indexing","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"Thread indexing is essential for memory usage on GPU devices; however, it can quickly become cumbersome to figure out the thread index, especially when working with multi-dimensional grids of multi-dimensional blocks of threads. The performance of kernels can also depend significantly on access patterns.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the example above, we saw the usage of I = @index(Global, Cartesian), which retrieves the global index of threads for the two-dimensional array A. Such powerful macros are provided by KernelAbstractions.jl for conveniently retrieving the desired index of threads.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The following table is non-exhaustive and provides a reference of commonly used terminology. 
Here, KernelAbstractions.@index is used for index retrieval, and KernelAbstractions.@groupsize is used for obtaining the dimensions of blocks of threads.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"KernelAbstractions CPU CUDA AMDGPU\n@index(Local, Linear) mod(i, g) threadIdx().x workitemIdx().x\n@index(Local, Cartesian)[2] threadIdx().y workitemIdx().y\n@index(Local, Cartesian)[3] threadIdx().z workitemIdx().z\n@index(Group, Linear) i ÷ g blockIdx().x workgroupIdx().x\n@index(Group, Cartesian)[2] blockIdx().y workgroupIdx().y\n@groupsize()[3] blockDim().z workgroupDim().z\n@index(Global, Linear) i global index computation needed global index computation needed\n@index(Global, Cartesian)[2] global index computation needed global index computation needed\n@index(Global, NTuple) global index computation needed global index computation needed","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The @index(Global, NTuple) returns a NTuple object, allowing more fine-grained memory control over the allocated arrays.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel function memcpy!(a, b)\n i, j = @index(Global, NTuple)\n @inbounds a[i, j] = b[i, j]\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"A tuple can be splatted with ... Julia operator when used to avoid explicitly using i, j indices.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel function splatting_memcpy!(a, b)\n I = @index(Global, NTuple)\n @inbounds a[I...] 
= b[I...]\nend","category":"page"},{"location":"concepts/kernels/#Kernel-Launcher","page":"Kernels","title":"Kernel Launcher","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In Chmy.jl, the KernelLaunch module provides handy utilities for performing different grid operations on selected data entries of Fields involved at each kernel launch, taking the underlying grid geometry into account.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the following, we define a kernel launcher associated with a UniformGrid object, supporting the CUDA backend.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define backend and geometry\narch = Arch(CUDABackend())\ngrid = UniformGrid(arch; origin=(-1, -1), extent=(2, 2), dims=(126, 126))\n\n# Define launcher\nlaunch = Launcher(arch, grid)","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"We also have two kernel functions compute_q! and update_C! 
defined, which shall update the fields q and C using grid operators (see section Grid Operators) ∂x, ∂y, divg that are anchored on some grid g accordingly.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n q.x[I] = -χ * ∂x(C, g, I)\n q.y[I] = -χ * ∂y(C, g, I)\nend\n\n@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n C[I] -= Δt * divg(q, g, I)\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"To spawn the kernel, we invoke the launcher using the launch function to perform the field update at each physical timestep, and specify desired boundary conditions for involved fields in the kernel.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define physics, numerics, geometry ...\nfor it in 1:nt\n # without boundary conditions\n launch(arch, grid, compute_q! => (q, C, χ, grid))\n\n # with Neumann boundary conditions and MPI exchange\n launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))\nend","category":"page"},{"location":"concepts/grid_operators/#Grid-Operators","page":"Grid Operators","title":"Grid Operators","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Chmy.jl currently supports various finite difference operators for fields defined in Cartesian coordinates. 
The table below summarizes the most common usage of grid operators, with the grid g::StructuredGrid and index I = @index(Global, Cartesian) defined, where P = Field(backend, grid, location) is some field defined on the grid g.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Mathematical Formulation Code\nfracpartialpartial x P ∂x(P, g, I)\nfracpartialpartial y P ∂y(P, g, I)\nfracpartialpartial z P ∂z(P, g, I)\nnabla P divg(P, g, I)","category":"page"},{"location":"concepts/grid_operators/#Computing-the-Divergence-of-a-Vector-Field","page":"Grid Operators","title":"Computing the Divergence of a Vector Field","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"To illustrate the usage of grid operators, we compute the divergence of a vector field V using the divg function. We first allocate memory for the required fields.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"V = VectorField(backend, grid)\n∇V = Field(backend, grid, Center())\n# use set! 
to set up the initial vector field...","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel that computes the divergence needs to have the grid information passed as for other finite difference operators.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel inbounds = true function update_∇!(V, ∇V, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n ∇V[I] = divg(V, g, I)\nend","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel can then be launched when required as we detailed in section Kernels.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"launch(arch, grid, update_∇! => (V, ∇V, grid))","category":"page"},{"location":"concepts/grid_operators/#Masking","page":"Grid Operators","title":"Masking","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Masking allows selectively applying operations only where needed, allowing more flexible control over the grid operators and improving performance. Thus, by providing masked grid operators, we enable more flexible control over the domain on which the grid operators should be applied.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"In the following example, we first define a mask ω on the 2D StructuredGrid. 
Then we specify that the center area of all Vx, Vy nodes (accessible through ω.vc, ω.cv) on the staggered grid should not be masked.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define the mask\nω = FieldMask2D(arch, grid) # with backend and grid geometry defined\n\n# define the initial inclusion\nr = 2.0\ninit_inclusion = (x,y) -> ifelse(x^2 + y^2 < r^2, 1.0, 0.0)\n\n# mask all entries other than the initial inclusion\nset!(ω.vc, grid, init_inclusion)\nset!(ω.cv, grid, init_inclusion)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"We can then pass the mask to other grid operators when applying them within the kernel. When computing masked derivatives, a mask that is a subtype of AbstractMask is premultiplied at the corresponding grid location for each operand:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel function update_strain_rate!(ε̇, V, ω::AbstractMask, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n # with masks ω\n ε̇.xx[I] = ∂x(V.x, ω, g, I)\n ε̇.yy[I] = ∂y(V.y, ω, g, I)\n ε̇.xy[I] = 0.5 * (∂y(V.x, ω, g, I) + ∂x(V.y, ω, g, I))\nend","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel can be launched as follows, with some launcher defined using launch = Launcher(arch, grid):","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define fields\nε̇ = TensorField(backend, grid)\nV = VectorField(backend, grid)\n\n# launch kernel\nlaunch(arch, grid, update_strain_rate! 
=> (ε̇, V, ω, grid))","category":"page"},{"location":"concepts/grid_operators/#Interpolation","page":"Grid Operators","title":"Interpolation","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Chmy.jl provides an interface itp which interpolates the field f from its location to the specified location to using the given interpolation rule r. The indices specify the position within the grid at location to:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"itp(f, to, r, grid, I...)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Currently implemented interpolation rules are:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Linear() which implements rule(t, v0, v1) = v0 + t * (v1 - v0);\nHarmonicLinear() which implements rule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0)).","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Both rules are exposed as convenience wrapper functions lerp and hlerp, using Linear() and HarmonicLinear() rules, respectively:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"lerp(f, to, grid, I...) # implements itp(f, to, Linear(), grid, I...)\nhlerp(f, to, grid, I...) # implements itp(f, to, HarmonicLinear(), grid, I...)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"In the following example, we use the linear interpolation wrapper lerp when interpolating nodal values of the density field ρ, defined on cell centres, i.e. 
having the location (Center(), Center()), to ρx and ρy, defined on cell interfaces in the x- and y-direction, respectively.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define density ρ on cell centres\nρ = Field(backend, grid, Center())\nρ0 = 3.0; set!(ρ, ρ0)\n\n# allocate memory for density on cell interfaces\nρx = Field(backend, grid, (Vertex(), Center()))\nρy = Field(backend, grid, (Center(), Vertex()))","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel interpolate_ρ! performs the actual interpolation and requires the grid information to be passed as g.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel function interpolate_ρ!(ρ, ρx, ρy, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n # interpolate from cell centres to cell interfaces\n ρx[I] = lerp(ρ, location(ρx), g, I)\n ρy[I] = lerp(ρ, location(ρy), g, I)\nend","category":"page"},{"location":"examples/overview/#Examples-Overview","page":"Examples Overview","title":"Examples Overview","text":"","category":"section"},{"location":"examples/overview/","page":"Examples Overview","title":"Examples Overview","text":"This page provides an overview of Chmy.jl examples. 
These selected examples demonstrate how Chmy.jl can be used to solve various numerical problems using architecture-agnostic kernels, both on a single device and in a distributed way.","category":"page"},{"location":"examples/overview/#Table-of-Contents","page":"Examples Overview","title":"Table of Contents","text":"","category":"section"},{"location":"examples/overview/","page":"Examples Overview","title":"Examples Overview","text":"Example Description\nDiffusion 2D Solving the 2D diffusion equation on a uniform grid.\nDiffusion 2D with MPI Solving the 2D diffusion equation on a uniform grid and distributed parallelisation using MPI.\nSingle-Device Performance Optimisation Revisiting the 2D diffusion problem with a focus on performance optimisation techniques on a single-device architecture.\nStokes 2D with MPI Solving the 2D Stokes equation with thermal coupling on a uniform grid and distributed parallelisation using MPI.\nStokes 3D with MPI Solving the 3D Stokes equation with thermal coupling on a uniform grid and distributed parallelisation using MPI.\nDiffusion 1D with Metal Solving the 1D diffusion equation using the Metal backend and single precision (Float32) on a uniform grid.\n2D Grid Visualization Visualization of a 2D StructuredGrid.\n3D Grid Visualization Visualization of a 3D StructuredGrid.","category":"page"},{"location":"concepts/architectures/#Architectures","page":"Architectures","title":"Architectures","text":"","category":"section"},{"location":"concepts/architectures/#Backend-Selection-and-Architecture-Initialization","page":"Architectures","title":"Backend Selection & Architecture Initialization","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Chmy.jl supports CPUs, as well as CUDA, ROC and Metal backends for Nvidia, AMD and Apple M-series GPUs, through a thin wrapper around KernelAbstractions.jl that lets users select desirable 
backends.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"# Default with CPU\narch = Arch(CPU())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using CUDA\n\narch = Arch(CUDABackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using AMDGPU\n\narch = Arch(ROCBackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using Metal\n\narch = Arch(MetalBackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"At the beginning of the program, one may specify the backend and initialize the architecture they wish to use. The initialized arch variable will be required explicitly when creating some objects such as grids and kernel launchers.","category":"page"},{"location":"concepts/architectures/#Specifying-the-device-ID-and-stream-priority","page":"Architectures","title":"Specifying the device ID and stream priority","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"On systems with multiple GPUs, passing the keyword argument device_id to the Arch constructor will select the specified device and set it as the current device.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"For advanced users, we provide a function activate!(arch; priority) for specifying the priority of the stream owned by the currently executing task. 
The stream priority is set to :normal by default; :low and :high are also possible options, provided that the target backend implements priority control over streams.","category":"page"},{"location":"concepts/architectures/#Distributed-Architecture","page":"Architectures","title":"Distributed Architecture","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Our distributed architecture builds upon the abstraction of GPU clusters whose nodes share the same GPU architecture. Note that in general, GPU clusters may be equipped with hardware from different vendors, incorporating different types of GPUs to exploit their unique capabilities for specific tasks.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"warning: GPU-Aware MPI Required for Distributed Module on GPU backend\nThe Distributed module currently only supports GPU-aware MPI when a GPU backend is selected for multi-GPU computations. For the Distributed module to function properly, a GPU-aware MPI library installation must be used. 
Otherwise, a segmentation fault will occur.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"To make the Architecture object aware of the MPI topology, the user can pass an MPI communicator object and the dimensions of the Cartesian topology to the Arch constructor:","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using MPI\n\narch = Arch(CPU(), MPI.COMM_WORLD, (0, 0, 0))","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Passing zeros as the last argument will automatically spread the dimensions to be as close as possible to each other; see the MPI.jl documentation for details.","category":"page"},{"location":"lib/modules/#Modules","page":"Modules","title":"Modules","text":"","category":"section"},{"location":"lib/modules/#Grids","page":"Modules","title":"Grids","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Grids]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Grids.AbstractAxis","page":"Modules","title":"Chmy.Grids.AbstractAxis","text":"abstract type AbstractAxis{T}\n\nAbstract type representing an axis in a grid, where the axis is parameterized by the type T of the coordinates.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Center","page":"Modules","title":"Chmy.Grids.Center","text":"struct Center <: Location\n\nThe Center struct represents a location at the center along a dimension of a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Connectivity","page":"Modules","title":"Chmy.Grids.Connectivity","text":"abstract type Connectivity\n\nAbstract type representing the connectivity of grid elements.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Location","page":"Modules","title":"Chmy.Grids.Location","text":"abstract 
type Location\n\nAbstract type representing a location in a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.StructuredGrid","page":"Modules","title":"Chmy.Grids.StructuredGrid","text":"StructuredGrid\n\nRepresents a structured grid with orthogonal axes.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.UniformGrid-Union{Tuple{Chmy.Architectures.Architecture}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.UniformGrid","text":"UniformGrid(arch; origin, extent, dims, topology=nothing)\n\nConstructs a uniform grid with specified origin, extent, dimensions, and topology.\n\nArguments\n\narch::Architecture: The associated architecture.\norigin::NTuple{N,Number}: The origin of the grid.\nextent::NTuple{N,Number}: The extent of the grid.\ndims::NTuple{N,Integer}: The dimensions of the grid.\ntopology=nothing: The topology of the grid. If not provided, a default Bounded topology is used.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.Vertex","page":"Modules","title":"Chmy.Grids.Vertex","text":"struct Vertex <: Location\n\nThe Vertex struct represents a location at the vertex along a dimension of a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{1}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{1})\n\nReturns the names of the axes for a 1-dimensional structured grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{2}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{2})\n\nReturns the names of the axes for a 2-dimensional structured grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{3}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{3})\n\nReturns the names of the axes for a 3-dimensional structured 
grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axis-Union{Tuple{dim}, Tuple{Chmy.Grids.StructuredGrid, Dim{dim}}} where dim","page":"Modules","title":"Chmy.Grids.axis","text":"axis(grid, dim::Dim)\n\nReturn the axis corresponding to the spatial dimension dim.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.bounds-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.bounds","text":"bounds(grid, loc, [dim::Dim])\n\nReturn the bounds of a structured grid at the specified location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.connectivity-Union{Tuple{S}, Tuple{D}, Tuple{C}, Tuple{T}, Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N, T, C}, Dim{D}, Side{S}}} where {N, T, C, D, S}","page":"Modules","title":"Chmy.Grids.connectivity","text":"connectivity(grid, dim::Dim, side::Side)\n\nReturn the connectivity of the structured grid grid for the given dimension dim and side side.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.coord-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.coord","text":"coord(grid, loc, I...)\n\nReturn a tuple of spatial coordinates of a grid point at location loc and indices I.\n\nFor vertex locations, first grid point is at the origin. 
For center locations, the first grid point is at half-spacing distance from the origin.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.extent-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.extent","text":"extent(grid, loc, [dim::Dim])\n\nReturn the extent of a structured grid at the specified location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_spacing-Union{Tuple{Chmy.Grids.UniformGrid{N}}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.inv_spacing","text":"inv_spacing(grid::UniformGrid)\n\nReturn a tuple of inverse grid spacing for a uniform grid grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_spacing-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.inv_spacing","text":"inv_spacing(grid, loc, I...)\n\nReturn a tuple of inverse grid spacings at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_volume-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, NTuple{N, Chmy.Grids.Location}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.inv_volume","text":"inv_volume(grid, loc, I...)\n\nReturn the inverse of the control volume at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.iΔ","page":"Modules","title":"Chmy.Grids.iΔ","text":"iΔ\n\nAlias for the inv_spacing method that returns the reciprocal of the spacing between grid points.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Chmy.Grids.origin-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.origin","text":"origin(grid, loc, [dim::Dim])\n\nReturn the origin of a structured grid at the 
specified location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.spacing-Union{Tuple{Chmy.Grids.UniformGrid{N}}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.spacing","text":"spacing(grid::UniformGrid)\n\nReturn a tuple of grid spacing for a uniform grid grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.spacing-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.spacing","text":"spacing(grid, loc, I...)\n\nReturn a tuple of grid spacings at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.volume-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, NTuple{N, Chmy.Grids.Location}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.volume","text":"volume(grid, loc, I...)\n\nReturn the control volume at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.Δ","page":"Modules","title":"Chmy.Grids.Δ","text":"Δ\n\nAlias for the spacing method that returns the spacing between grid points.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Architectures","page":"Modules","title":"Architectures","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Architectures]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Architectures.Architecture","page":"Modules","title":"Chmy.Architectures.Architecture","text":"Architecture\n\nAbstract type representing an architecture.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Architectures.SingleDeviceArchitecture","page":"Modules","title":"Chmy.Architectures.SingleDeviceArchitecture","text":"SingleDeviceArchitecture <: Architecture\n\nA struct representing an architecture that operates on a single CPU or GPU 
device.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Architectures.SingleDeviceArchitecture-Tuple{Chmy.Architectures.Architecture}","page":"Modules","title":"Chmy.Architectures.SingleDeviceArchitecture","text":"SingleDeviceArchitecture(arch::Architecture)\n\nCreate a SingleDeviceArchitecture object retrieving backend and device from arch.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.Arch-Tuple{KernelAbstractions.Backend}","page":"Modules","title":"Chmy.Architectures.Arch","text":"Arch(backend::Backend; device_id::Integer=1)\n\nCreate an architecture object for the specified backend and device.\n\nArguments\n\nbackend: The backend to use for computation.\ndevice_id=1: The ID of the device to use.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.activate!-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.activate!","text":"activate!(arch::SingleDeviceArchitecture; priority=:normal)\n\nActivate the given architecture on the specified device and set the priority of the backend. 
Accepted values for the priority are :normal, :low and :high.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_backend-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.get_backend","text":"get_backend(arch::SingleDeviceArchitecture)\n\nGet the backend associated with a SingleDeviceArchitecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_device-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.get_device","text":"get_device(arch::SingleDeviceArchitecture)\n\nGet the device associated with a SingleDeviceArchitecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Fields","page":"Modules","title":"Fields","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Fields]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Fields.AbstractField","page":"Modules","title":"Chmy.Fields.AbstractField","text":"abstract type AbstractField{T,N,L} <: AbstractArray{T,N}\n\nAbstract type representing a field with data type T, number of dimensions N, and location L on which the field is defined.\n\nSee also: abstract type ConstantField\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ConstantField","page":"Modules","title":"Chmy.Fields.ConstantField","text":"Scalar field with a constant value\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.Field","page":"Modules","title":"Chmy.Fields.Field","text":"struct Field{T,N,L,H,A} <: AbstractField{T,N,L}\n\nField represents a discrete scalar field with specified type, number of dimensions, location, and halo size.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.Field-Tuple{Chmy.Architectures.Architecture, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.Field","text":"Field(arch::Architecture, args...; 
kwargs...)\n\nCreate a Field object on the specified architecture.\n\nArguments:\n\narch::Architecture: The architecture for which to create the Field.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.Field-Union{Tuple{N}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}, Any}} where N","page":"Modules","title":"Chmy.Fields.Field","text":"Field(backend, grid, loc, type=eltype(grid); halo=1)\n\nConstructs a field on a structured grid at the specified location.\n\nArguments:\n\nbackend: The backend to use for memory allocation.\ngrid: The structured grid on which the field is constructed.\nloc: The location or locations on the grid where the field is constructed.\ntype: The element type of the field. Defaults to the element type of the grid.\nhalo: The halo size for the field. 
Defaults to 1.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.FunctionField","page":"Modules","title":"Chmy.Fields.FunctionField","text":"FunctionField <: AbstractField\n\nContinuous or discrete field with values computed at runtime.\n\nConstructors\n\nFunctionField(func, grid, loc; [discrete], [parameters]): Create a new FunctionField object.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.FunctionField-Union{Tuple{N}, Tuple{F}, Tuple{F, Chmy.Grids.StructuredGrid{N}, Any}} where {F, N}","page":"Modules","title":"Chmy.Fields.FunctionField","text":"FunctionField(func::F, grid::StructuredGrid{N}, loc; discrete=false, parameters=nothing) where {F,N}\n\nCreate a FunctionField on the given grid using the specified function func.\n\nArguments:\n\nfunc::F: The function used to generate the field values.\ngrid::StructuredGrid{N}: The structured grid defining the computational domain.\nloc: The nodal location on the grid grid where the function field is defined on.\ndiscrete::Bool=false: A flag indicating whether the field should be discrete. Defaults to false.\nparameters=nothing: Additional parameters to be used by the function. 
Defaults to nothing.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.OneField","page":"Modules","title":"Chmy.Fields.OneField","text":"Constant field with values equal to one(T)\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ValueField","page":"Modules","title":"Chmy.Fields.ValueField","text":"Field with a constant value\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ZeroField","page":"Modules","title":"Chmy.Fields.ZeroField","text":"Constant field with values equal to zero(T)\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.TensorField-Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{2}, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.TensorField","text":"TensorField(backend::Backend, grid::StructuredGrid{2}, args...; kwargs...)\n\nCreate a 2D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, and xy each being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{2}: The 2D structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.TensorField-Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{3}, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.TensorField","text":"TensorField(backend::Backend, grid::StructuredGrid{3}, args...; kwargs...)\n\nCreate a 3D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, zz, xy, xz, and yz each being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{3}: The 3D structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field 
constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.VectorField-Union{Tuple{N}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Vararg{Any}}} where N","page":"Modules","title":"Chmy.Fields.VectorField","text":"VectorField(backend::Backend, grid::StructuredGrid{N}, args...; kwargs...) where {N}\n\nCreate a vector field in the form of a NamedTuple on the given grid using the specified backend, with each component being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{N}: The structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.interior-Tuple{Chmy.Fields.Field}","page":"Modules","title":"Chmy.Fields.interior","text":"interior(f::Field; with_halo=false)\n\nDisplays the field on the interior of the grid on which it is defined. 
Optionally, the halo regions of the grid can also be displayed with with_halo=true.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, AbstractArray}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, A::AbstractArray)\n\nSet the elements of the Field f using the values from the AbstractArray A.\n\nArguments:\n\nf::Field: The Field object to be modified.\nA::AbstractArray: The array whose values are to be copied to the Field.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, Chmy.Fields.AbstractField}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, other::AbstractField)\n\nSet the elements of the Field f using the values from another AbstractField other.\n\nArguments:\n\nf::Field: The destination Field object to be modified.\nother::AbstractField: The source AbstractField whose values are to be copied to f.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, Number}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, val::Number)\n\nSet all elements of the Field f to the specified numeric value val.\n\nArguments:\n\nf::Field: The Field object to be modified.\nval::Number: The numeric value to set in the Field.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Grid-Operators","page":"Modules","title":"Grid Operators","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.GridOperators]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.GridOperators.AbstractMask","page":"Modules","title":"Chmy.GridOperators.AbstractMask","text":"abstract type AbstractMask{T,N}\n\nAbstract type representing the data transformation to be performed on elements in a field of dimension N, where each element is of 
type T.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.GridOperators.InterpolationRule","page":"Modules","title":"Chmy.GridOperators.InterpolationRule","text":"abstract type InterpolationRule\n\nA type representing an interpolation rule that specifies how the interpolant f should be reconstructed using a data set on a given grid.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.GridOperators.hlerp-Union{Tuple{N}, Tuple{Any, Any, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.hlerp","text":"hlerp(f, to, grid, I...)\n\nInterpolate a field f to location to using the harmonic linear interpolation rule.\n\nrule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0))\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.itp-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, NTuple{N, Chmy.Grids.Location}, Chmy.GridOperators.InterpolationRule, Chmy.Grids.StructuredGrid{N}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.itp","text":"itp(f, to, r, grid, I...)\n\nInterpolates the field f from its current location to the specified location(s) to using the given interpolation rule r. 
The indices specify the position within the grid at location(s) to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftx","text":"leftx(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftx","text":"leftx(f, I)\n\n\"left side\" of a field ([1:end-1]) in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lefty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lefty","text":"lefty(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lefty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lefty","text":"lefty(f, I)\n\n\"left side\" of a field ([1:end-1]) in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftz","text":"leftz(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in z direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftz","text":"leftz(f, I)\n\n\"left side\" of a field ([1:end-1]) in z 
direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lerp-Union{Tuple{N}, Tuple{Any, Any, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lerp","text":"lerp(f, to, grid, I...)\n\nLinearly interpolate values of a field f to location to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightx","text":"rightx(f, ω, I)\n\n\"right side\" of a field ([2:end]) in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightx","text":"rightx(f, I)\n\n\"right side\" of a field ([2:end]) in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.righty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.righty","text":"righty(f, ω, I)\n\n\"right side\" of a field ([2:end]) in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.righty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.righty","text":"righty(f, I)\n\n\"right side\" of a field ([2:end]) in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightz","text":"rightz(f, ω, I)\n\n\"right side\" of a field ([2:end]) in z direction, masked with 
ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightz","text":"rightz(f, I)\n\n\"right side\" of a field ([2:end]) in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δx","text":"δx(f, ω, I)\n\nFinite difference in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δx","text":"δx(f, I)\n\nFinite difference in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δy-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δy","text":"δy(f, ω, I)\n\nFinite difference in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δy-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δy","text":"δy(f, I)\n\nFinite difference in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δz","text":"δz(f, ω, I)\n\nFinite difference in z direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δz","text":"δz(f, 
I)\n\nFinite difference in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂x-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂x","text":"∂x(f, grid, I)\n\nDirectional partial derivative in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂x-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂x","text":"∂x(f, ω, grid, I)\n\nDirectional partial derivative in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂y-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂y","text":"∂y(f, grid, I)\n\nDirectional partial derivative in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂y-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂y","text":"∂y(f, ω, grid, I)\n\nDirectional partial derivative in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂z-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂z","text":"∂z(f, grid, I)\n\nDirectional partial derivative in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂z-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂z","text":"∂z(f, ω, grid, I)\n\nDirectional partial derivative in z direction, masked with 
ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Boundary-Conditions","page":"Modules","title":"Boundary Conditions","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.BoundaryConditions]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.BoundaryConditions.AbstractBatch","page":"Modules","title":"Chmy.BoundaryConditions.AbstractBatch","text":"AbstractBatch\n\nAbstract type representing a batch of boundary conditions.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.BoundaryFunction","page":"Modules","title":"Chmy.BoundaryConditions.BoundaryFunction","text":"abstract type BoundaryFunction{F}\n\nAbstract type for boundary condition functions with function type F.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.Dirichlet","page":"Modules","title":"Chmy.BoundaryConditions.Dirichlet","text":"Dirichlet(value=nothing)\n\nCreate a Dirichlet object representing the Dirichlet boundary condition with the specified value.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.EmptyBatch","page":"Modules","title":"Chmy.BoundaryConditions.EmptyBatch","text":"EmptyBatch <: AbstractBatch\n\nEmptyBatch represents no boundary conditions.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.ExchangeBatch","page":"Modules","title":"Chmy.BoundaryConditions.ExchangeBatch","text":"ExchangeBatch <: AbstractBatch\n\nExchangeBatch represents a batch used for MPI communication.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FieldBatch","page":"Modules","title":"Chmy.BoundaryConditions.FieldBatch","text":"FieldBatch <: AbstractBatch\n\nFieldBatch is a batch of boundary conditions, where each field has one boundary 
condition.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FieldBoundaryCondition","page":"Modules","title":"Chmy.BoundaryConditions.FieldBoundaryCondition","text":"FieldBoundaryCondition\n\nAbstract supertype for all boundary conditions that are specified per-field.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FirstOrderBC","page":"Modules","title":"Chmy.BoundaryConditions.FirstOrderBC","text":"struct FirstOrderBC{T,Kind} <: FieldBoundaryCondition\n\nA struct representing a boundary condition of first-order accuracy.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.Neumann","page":"Modules","title":"Chmy.BoundaryConditions.Neumann","text":"Neumann(value=nothing)\n\nCreate a Neumann object representing the Neumann boundary condition with the specified value.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.bc!-Union{Tuple{N}, Tuple{Chmy.Architectures.Architecture, Chmy.Grids.StructuredGrid{N}, NTuple{N, Tuple{Chmy.BoundaryConditions.AbstractBatch, Chmy.BoundaryConditions.AbstractBatch}}}} where N","page":"Modules","title":"Chmy.BoundaryConditions.bc!","text":"bc!(arch::Architecture, grid::StructuredGrid, batch::BatchSet)\n\nApply boundary conditions using a batch set batch containing an AbstractBatch per dimension per side of grid.\n\nArguments\n\narch: The architecture.\ngrid: The grid.\nbatch: The batch set to apply boundary conditions to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Kernel-launcher","page":"Modules","title":"Kernel launcher","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.KernelLaunch]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"struct Launcher{Worksize,OuterWidth,Workers}\n\nA struct representing a launcher 
for asynchronous kernel execution.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher-Tuple{Any, Any}","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"Launcher(arch, grid; outer_width=nothing)\n\nConstructs a Launcher object configured based on the input parameters.\n\nArguments:\n\narch: The associated architecture.\ngrid: The grid defining the computational domain.\nouter_width: Optional parameter specifying the outer width.\n\nwarning: Warning\nThe worksize for the last dimension N takes into account only the last outer width W[N]; dimension N-1 uses W[N] and W[N-1], and dimension N-2 uses W[N], W[N-1], and W[N-2].\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher-Union{Tuple{Args}, Tuple{F}, Tuple{Chmy.Architectures.Architecture, Any, Pair{F, Args}}} where {F, Args}","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"(launcher::Launcher)(arch::Architecture, grid, kernel_and_args::Pair{F,Args}; bc=nothing, async=false) where {F,Args}\n\nLaunches a computational kernel using the specified arch, grid, kernel_and_args, and optional boundary conditions (bc).\n\nArguments:\n\narch::Architecture: The architecture on which to execute the computation.\ngrid: The grid defining the computational domain.\nkernel_and_args::Pair{F,Args}: A pair consisting of the computational kernel F and its arguments Args.\nbc=nothing: Optional boundary conditions for the computation.\nasync=false: If true, launches the kernel asynchronously.\n\nwarning: Warning\narch should be compatible with the Launcher's architecture.\nIf bc is nothing, the kernel is launched without boundary conditions.\nIf async is false (default), the function waits for the computation to complete before returning.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Distributed","page":"Modules","title":"Distributed","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = 
[Chmy.Distributed]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Distributed.CartesianTopology","page":"Modules","title":"Chmy.Distributed.CartesianTopology","text":"CartesianTopology\n\nRepresents N-dimensional Cartesian topology of distributed processes.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.CartesianTopology-Union{Tuple{N}, Tuple{MPI.Comm, NTuple{N, Int64}}} where N","page":"Modules","title":"Chmy.Distributed.CartesianTopology","text":"CartesianTopology(comm, dims)\n\nCreate an N-dimensional Cartesian topology using base MPI communicator comm with dimensions dims. If all entries in dims are not equal to 0, the product of dims should be equal to the total number of MPI processes MPI.Comm_size(comm). If any (or all) entries of dims are 0, the dimensions in the corresponding spatial directions will be picked automatically.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.DistributedArchitecture","page":"Modules","title":"Chmy.Distributed.DistributedArchitecture","text":"DistributedArchitecture <: Architecture\n\nA struct representing a distributed architecture.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.StackAllocator","page":"Modules","title":"Chmy.Distributed.StackAllocator","text":"mutable struct StackAllocator\n\nSimple stack (a.k.a. bump/arena) allocator. 
Maintains an internal buffer that grows dynamically if the requested allocation exceeds the current buffer size.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.StackAllocator-Tuple{KernelAbstractions.Backend}","page":"Modules","title":"Chmy.Distributed.StackAllocator","text":"StackAllocator(backend::Backend)\n\nCreate a stack allocator using the specified backend to store allocations.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Base.resize!-Tuple{Chmy.Distributed.StackAllocator, Integer}","page":"Modules","title":"Base.resize!","text":"resize!(sa::StackAllocator, sz::Integer)\n\nResize the StackAllocator's buffer to a capacity of sz bytes. This method will throw an error if any arrays were already allocated using this allocator.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.Arch-Tuple{KernelAbstractions.Backend, MPI.Comm, Any}","page":"Modules","title":"Chmy.Architectures.Arch","text":"Architectures.Arch(backend::Backend, comm::MPI.Comm, dims; device_id=nothing)\n\nCreate a distributed Architecture using backend backend and comm. For GPU backends, the device will be selected automatically based on the process id within a node, unless specified by device_id.\n\nArguments\n\nbackend::Backend: The backend to use for the architecture.\ncomm::MPI.Comm: The MPI communicator to use for the architecture.\ndims: The dimensions of the architecture.\n\nKeyword Arguments\n\ndevice_id: The ID of the device to use. If not provided, the shared rank of the topology plus one is used.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.activate!-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.activate!","text":"activate!(arch::DistributedArchitecture; kwargs...)\n\nActivate the given DistributedArchitecture by delegating to the child architecture, and pass through any keyword arguments. 
For example, the priority can be set with accepted values being :normal, :low, and :high.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_backend-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.get_backend","text":"get_backend(arch::DistributedArchitecture)\n\nGet the backend associated with a DistributedArchitecture by delegating to the child architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_device-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.get_device","text":"get_device(arch::DistributedArchitecture)\n\nGet the device associated with a DistributedArchitecture by delegating to the child architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.BoundaryConditions.bc!-Tuple{Side, Dim, Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid, Chmy.BoundaryConditions.ExchangeBatch}","page":"Modules","title":"Chmy.BoundaryConditions.bc!","text":"BoundaryConditions.bc!(side::Side, dim::Dim,\n arch::DistributedArchitecture,\n grid::StructuredGrid,\n batch::ExchangeBatch; kwargs...)\n\nApply boundary conditions on a distributed grid with halo exchange performed internally.\n\nArguments\n\nside: The side of the grid where the halo exchange is performed.\ndim: The dimension along which the halo exchange is performed.\narch: The distributed architecture used for communication.\ngrid: The structured grid on which the halo exchange is performed.\nbatch: The batch set to apply boundary conditions to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.allocate","page":"Modules","title":"Chmy.Distributed.allocate","text":"allocate(sa::StackAllocator, T::DataType, dims, [align=sizeof(T)])\n\nAllocate a buffer of type T with dimensions dims using a stack allocator. 
The align parameter specifies the alignment of the buffer elements.\n\nArguments\n\nsa::StackAllocator: The stack allocator object.\nT::DataType: The data type of the requested allocation.\ndims: The dimensions of the requested allocation.\nalign::Integer: The alignment of the allocated buffer in bytes.\n\nwarning: Warning\nArrays allocated with StackAllocator are not managed by the Julia runtime. The user is responsible for ensuring correct lifetimes, i.e., that the reference to the allocator outlives all arrays allocated using this allocator.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Chmy.Distributed.cart_comm-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.cart_comm","text":"cart_comm(topo)\n\nMPI Cartesian communicator for the topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.cart_coords-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.cart_coords","text":"cart_coords(topo)\n\nCoordinates of the current process within a Cartesian topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.dims-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.dims","text":"dims(topo)\n\nDimensions of the topology as NTuple.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.exchange_halo!-Union{Tuple{K}, Tuple{D}, Tuple{S}, Tuple{Side{S}, Dim{D}, Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid, Vararg{Chmy.Fields.Field, K}}} where {S, D, K}","page":"Modules","title":"Chmy.Distributed.exchange_halo!","text":"exchange_halo!(side::Side, dim::Dim, arch, grid, fields...; async=false)\n\nPerform halo exchange communication between neighboring processes in a distributed architecture.\n\nArguments\n\nside: The side of the grid where the halo exchange is performed.\ndim: The dimension along which the halo exchange is performed.\narch: The distributed architecture 
used for communication.\ngrid: The structured grid on which the halo exchange is performed.\nfields...: The fields to be exchanged.\n\nOptional Arguments\n\nasync=false: Whether to perform the halo exchange asynchronously.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.exchange_halo!-Union{Tuple{N}, Tuple{Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid{N}, Vararg{Chmy.Fields.Field}}} where N","page":"Modules","title":"Chmy.Distributed.exchange_halo!","text":"exchange_halo!(arch, grid, fields...)\n\nPerform halo exchange for the given architecture, grid, and fields.\n\nArguments\n\narch: The distributed architecture to perform halo exchange on.\ngrid: The structured grid on which halo exchange is performed.\nfields: The fields on which halo exchange is performed.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.gather!-Tuple{Chmy.Distributed.DistributedArchitecture, Any, Chmy.Fields.Field}","page":"Modules","title":"Chmy.Distributed.gather!","text":"gather!(arch, dst, src::Field; kwargs...)\n\nGather the interior of a field src into a global array dst on the CPU.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.gather!-Union{Tuple{N}, Tuple{T}, Tuple{Union{Nothing, AbstractArray{T, N}}, AbstractArray{T, N}, MPI.Comm}} where {T, N}","page":"Modules","title":"Chmy.Distributed.gather!","text":"gather!(dst, src, comm::MPI.Comm; root=0)\n\nGather local array src into a global array dst. Size of the global array size(dst) should be equal to the product of the size of a local array size(src) and the dimensions of a Cartesian communicator comm. The array will be gathered on the process with id root (root=0 by default). 
Note that the memory for a global array should be allocated only on the process with id root; on other processes, dst can be set to nothing.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.global_rank-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.global_rank","text":"global_rank(topo)\n\nGlobal id of a process in a Cartesian topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.global_size-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.global_size","text":"global_size(topo)\n\nTotal number of processes within the topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.has_neighbor-Tuple{Chmy.Distributed.CartesianTopology, Any, Any}","page":"Modules","title":"Chmy.Distributed.has_neighbor","text":"has_neighbor(topo, dim, side)\n\nReturns true if there is a neighbor process in spatial direction dim on the side side, or false otherwise.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.nallocs-Tuple{Chmy.Distributed.StackAllocator}","page":"Modules","title":"Chmy.Distributed.nallocs","text":"nallocs(sa::StackAllocator)\n\nGet the number of allocations made by the given StackAllocator.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.neighbor-Tuple{Chmy.Distributed.CartesianTopology, Any, Any}","page":"Modules","title":"Chmy.Distributed.neighbor","text":"neighbor(topo, dim, side)\n\nReturns the id of a neighbor process in spatial direction dim on the side side, if this neighbor exists, or MPI.PROC_NULL otherwise.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.neighbors-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.neighbors","text":"neighbors(topo)\n\nNeighbors of the current process.\n\nReturns a tuple of ranks of the two immediate neighbors in each spatial direction, or MPI.PROC_NULL if 
there is no neighbor on a corresponding side.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.node_name-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.node_name","text":"node_name(topo)\n\nName of a node according to MPI.Get_processor_name().\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.node_size-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.node_size","text":"node_size(topo)\n\nNumber of processes sharing the same node.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.reset!-Tuple{Chmy.Distributed.StackAllocator}","page":"Modules","title":"Chmy.Distributed.reset!","text":"reset!(sa::StackAllocator)\n\nReset the stack allocator by resetting the pointer. Doesn't free the internal memory buffer.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.shared_comm-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.shared_comm","text":"shared_comm(topo)\n\nMPI communicator for the processes sharing the same node.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.shared_rank-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.shared_rank","text":"shared_rank(topo)\n\nLocal id of a process within a single node. 
Can be used to set the GPU device.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.topology-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Distributed.topology","text":"topology(arch::DistributedArchitecture)\n\nGet the virtual MPI topology of a distributed architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Workers","page":"Modules","title":"Workers","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Workers]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Workers.Worker","page":"Modules","title":"Chmy.Workers.Worker","text":"Worker\n\nA worker that performs tasks asynchronously.\n\nConstructor\n\nWorker{T}(; [setup], [teardown]) where {T}\n\nConstructs a new Worker object.\n\nArguments\n\nsetup: A function to be executed before the worker starts processing tasks. (optional)\nteardown: A function to be executed after the worker finishes processing tasks. (optional)\n\n\n\n\n\n","category":"type"},{"location":"getting_started/#Getting-Started-with-Chmy.jl","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Chmy.jl is a backend-agnostic toolkit for finite difference computations on multi-dimensional staggered grids. In this introductory tutorial, we will showcase the essence of Chmy.jl by solving a simple 2D diffusion problem. 
The full code of the tutorial material is available under diffusion_2d.jl.","category":"page"},{"location":"getting_started/#Basic-Diffusion","page":"Getting Started with Chmy.jl","title":"Basic Diffusion","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"The diffusion equation is a second-order parabolic PDE for a multivariable function C(xyt), which represents the field being diffused (such as the temperature or the concentration of a chemical component in a solution). It contains derivatives with respect to both time t and the spatial dimensions, with chi denoting the diffusion coefficient. In 2D, we have the following formulation for the diffusion process:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial t = chi left( fracpartial^2 Cpartial x^2 + fracpartial^2 Cpartial y^2 right)\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Introducing the diffusion flux q, we can rewrite equation (1) as a system of two PDEs, consisting of equations (2) and (3).","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nboldsymbolq = -chi nabla C\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial t = - nabla cdot boldsymbolq\nendequation","category":"page"},{"location":"getting_started/#Boundary-Conditions","page":"Getting Started with Chmy.jl","title":"Boundary Conditions","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Generally, partial 
differential equations (PDEs) require initial or boundary conditions to ensure a unique and stable solution. For the field C, a Neumann boundary condition is given by:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial boldsymboln = g(x y t)\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"where fracpartial Cpartial boldsymboln is the derivative of C normal to the boundary, and g(x y t) is a given function. In this tutorial example, we consider a homogeneous Neumann boundary condition, g(x y t) = 0, which implies that there is no flux across the boundary.","category":"page"},{"location":"getting_started/#Using-Chmy.jl-for-Backend-Portable-Implementation","page":"Getting Started with Chmy.jl","title":"Using Chmy.jl for Backend Portable Implementation","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"As the first step, we need to load the main module and any necessary submodules of Chmy.jl. 
Moreover, we use KernelAbstractions.jl for writing backend-agnostic kernels that are compatible with Chmy.jl.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"using Chmy, Chmy.Architectures, Chmy.Grids, Chmy.Fields, Chmy.BoundaryConditions, Chmy.GridOperators, Chmy.KernelLaunch\nusing KernelAbstractions # for backend-agnostic kernels\nusing Printf, CairoMakie # for I/O and plotting\n# using CUDA\n# using AMDGPU\n# using Metal","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"In this introductory tutorial, we will use the CPU backend for simplicity:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"backend = CPU()\narch = Arch(backend)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"If a different backend is desired, one needs to load the relevant package accordingly. For example, if Nvidia, AMD, or Apple GPUs are available, one can uncomment using CUDA, using AMDGPU or using Metal and make sure to use arch = Arch(CUDABackend()), arch = Arch(ROCBackend()) or arch = Arch(MetalBackend()), respectively, when selecting the architecture. For further information about executing on a single-device or multi-device architecture, see the documentation section for Architectures.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"warning: Metal backend\nThe Metal backend restricts the floating point arithmetic precision of computations to Float32 or lower. 
In Chmy, this can be achieved by initialising the grid object using Float32 (f0) elements in the origin and extent tuples.","category":"page"},{"location":"getting_started/#Writing-and-Launch-Compute-Kernels","page":"Getting Started with Chmy.jl","title":"Writing & Launching Compute Kernels","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We want to solve the system of equations (2) & (3) numerically. We will use the explicit forward Euler method for temporal discretization and finite differences for spatial discretization. Accordingly, the kernels performing the arithmetic operations for each time step can be defined as follows:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)\n    I = @index(Global, Cartesian)\n    I = I + O\n    q.x[I] = -χ * ∂x(C, g, I)\n    q.y[I] = -χ * ∂y(C, g, I)\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n    I = @index(Global, Cartesian)\n    I = I + O\n    C[I] -= Δt * divg(q, g, I)\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"note: Non-Cartesian indices\nBesides using Cartesian indices, more standard indexing using NTuple works as well. For example, update_C! 
will become:@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n ix, iy = @index(Global, NTuple)\n ix, iy = (ix, iy) .+ Tuple(O)\n C[ix, iy] -= Δt * divg(q, g, ix, iy)\nendwhere the dimensions could be abstracted by splatting the returned index (I...):@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n I = @index(Global, NTuple)\n I = I .+ Tuple(O)\n C[I...] -= Δt * divg(q, g, I...)\nend","category":"page"},{"location":"getting_started/#Model-Setup","page":"Getting Started with Chmy.jl","title":"Model Setup","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"The diffusion model that we solve uses the following setup:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# geometry\ngrid = UniformGrid(arch; origin=(-1, -1), extent=(2, 2), dims=(126, 126))\nlaunch = Launcher(arch, grid)\n\n# physics\nχ = 1.0\n\n# numerics\nΔt = minimum(spacing(grid))^2 / χ / ndims(grid) / 2.1","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"In the 2D problem, only three physical fields evolve with time: the field C and the components of the diffusion flux q in the x- and y-dimensions. We define these fields at different locations on the staggered grid (see Grids for more).","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# allocate fields\nC = Field(backend, grid, Center())\nq = VectorField(backend, grid)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We randomly initialise the entries of the C field to complete the initial model setup. 
One can refer to the section Fields for setting up more complex initial conditions.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# initial conditions\nset!(C, grid, (_, _) -> rand())\nbc!(arch, grid, C => Neumann(); exchange=C)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"You should get a result like the one in the following plot.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"fig = Figure()\nax = Axis(fig[1, 1];\n aspect = DataAspect(),\n xlabel = \"x\", ylabel = \"y\",\n title = \"it = 0\")\nplt = heatmap!(ax, centers(grid)..., interior(C) |> Array;\n colormap = :turbo)\nColorbar(fig[1, 2], plt)\ndisplay(fig)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"
\n \n
","category":"page"},{"location":"getting_started/#Solving-Time-dependent-Problem","page":"Getting Started with Chmy.jl","title":"Solving Time-dependent Problem","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We are solving a time-dependent problem, so we explicitly advance our solution within a time loop, specifying the number of iterations (or time steps) we wish to perform. The work within the time loop is the variable update performed by the compute kernels compute_q! and update_C!, accompanied by imposing the Neumann boundary condition on the C field.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# action\nnt = 100\nfor it in 1:nt\n @printf(\"it = %d/%d \\n\", it, nt)\n launch(arch, grid, compute_q! => (q, C, χ, grid))\n launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"After running the simulation, you should see something like the following; here the final result for the field C at it = 100 is plotted:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"fig = Figure()\nax = Axis(fig[1, 1];\n aspect = DataAspect(),\n xlabel = \"x\", ylabel = \"y\",\n title = \"it = 100\")\nplt = heatmap!(ax, centers(grid)..., interior(C) |> Array;\n colormap = :turbo)\nColorbar(fig[1, 2], plt)\ndisplay(fig)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"
\n \n
","category":"page"},{"location":"concepts/fields/#Fields","page":"Fields","title":"Fields","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"With a given grid that allows us to define each point uniquely in a high-dimensional space, we abstract the data values to be defined on the grid under the concept AbstractField. Following is the type tree of the abstract field and its derived data types.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/#Defining-a-multi-dimensional-Field","page":"Fields","title":"Defining a multi-dimensional Field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Consider the following example, where we define a variable grid of type Chmy.UniformGrid, as in the previous section Grids. We can now define physical properties on the grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"When defining a scalar field Field on the grid, we need to specify the arrangement of the field values. 
These values can either be stored at the cell centers of each control volume Center() or on the cell vertices/faces Vertex().","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define geometry, architecture..., a 2D grid\ngrid = UniformGrid(arch; origin=(-lx/2, -ly/2), extent=(lx, ly), dims=(nx, ny))\n\n# Define pressure as a scalar field\nPr = Field(backend, grid, Center())","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"With the methods VectorField and TensorField, we can construct 2-dimensional and 3-dimensional fields, with predefined locations for each field dimension on a staggered grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define velocity as a vector field on the 2D grid\nV = VectorField(backend, grid)\n\n# Define stress as a tensor field on the 2D grid\nτ = TensorField(backend, grid)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Use the function location to get the location of the field as a tuple. Vector and tensor fields are currently defined as NamedTuples (likely to change in the future), so one could query the locations of individual components, e.g. location(V.x) or location(τ.xy).","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"tip: Acquiring Locations on the Grid Cell\nOne could use a convenient getter for obtaining the locations of variables on the staggered grid. 
For example, one can use Chmy.location(Pr) for the scalar-valued pressure field and Chmy.location(τ.xx) for a tensor field.","category":"page"},{"location":"concepts/fields/#Initialising-Field","page":"Fields","title":"Initialising Field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Chmy.jl provides functionality to set the values of the fields as a function of spatial coordinates:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"C = Field(backend, grid, Center())\n\n# Set initial values of the field randomly\nset!(C, grid, (_, _) -> rand())\n\n# Set initial values to 2D Gaussian\nset!(C, grid, (x, y) -> exp(-x^2 - y^2))","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A slightly more complex usage involves passing extra parameters for setting up the initial conditions.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define a temperature field with values on cell centers\nT = Field(backend, grid, Center())\n\n# Function for setting up the initial conditions on T\ninit_incl(x, y, x0, y0, r, in, out) = ifelse((x - x0)^2 + (y - y0)^2 < r^2, in, out)\n\n# Set up the initial conditions with parameters specified\nset!(T, grid, init_incl; parameters=(x0=0.0, y0=0.0, r=0.1lx, in=T0, out=Ta))","category":"page"},{"location":"concepts/fields/#Defining-a-parameterized-FunctionField","page":"Fields","title":"Defining a parameterized FunctionField","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A field could also be represented in a parameterized way, with a function that associates a single number to every point in the 
space.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"An object of the concrete type FunctionField can be initialized with its constructor. The constructor takes in","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A function func\nA grid\nA location tuple loc for specifying the distribution of variables","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Optionally, one can also use the boolean variable discrete to indicate if the function field is typed Discrete or Continuous. Any additional parameters to be used in the function func can be passed to the optional parameter parameters.","category":"page"},{"location":"concepts/fields/#Example:-Creation-of-a-parameterized-function-field","page":"Fields","title":"Example: Creation of a parameterized function field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"In the following, we create a two-dimensional gravity variable comprising two parameterized FunctionField objects on a predefined uniform grid grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"1. Define Functions that Parameterize the Field","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"In this step, we specify how the gravity field should be parameterized in the x- and y-directions, with η as the additional parameter used in the parameterization.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# forcing terms\nρgx(x, y, η) = -0.5 * η * π^2 * sin(0.5π * x) * cos(0.5π * y)\nρgy(x, y, η) = 0.5 * η * π^2 * cos(0.5π * x) * sin(0.5π * y)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"2. 
Define Locations for Variable Positioning","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"We specify the locations on the fully-staggered grid as introduced in the Location on a Grid Cell section of the concept Grids.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"vx_node = (Vertex(), Center())\nvy_node = (Center(), Vertex())","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"3. Define the 2D Gravity Field","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"By specifying the locations on which the parameterized field should be calculated, as well as concretizing the value η = η0 by passing it as the optional parameter parameters to the constructor, we can define the 2D gravity field:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"η0 = 1.0\ngravity = (x=FunctionField(ρgx, grid, vx_node; parameters=η0),\n y=FunctionField(ρgy, grid, vy_node; parameters=η0))","category":"page"},{"location":"concepts/fields/#Defining-Constant-Fields","page":"Fields","title":"Defining Constant Fields","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"For completeness, we also provide an abstract type ConstantField, which comprises a generic ValueField type and two special types, ZeroField and OneField, allowing dispatch for special cases. With such a construct, we can easily define field properties and other parameters using constant values in a straightforward and readable manner. Moreover, explicit information about the grid on which the field should be defined can be omitted. 
For example:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Defines a field with constant value 1.0\nfield = Chmy.ValueField(1.0)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Alternatively, we could also use the OneField type, providing type information about the contents of the field.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Defines a field with constant value 1.0\nonefield = Chmy.OneField{Float64}()","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Notably, these two fields compare equal, as expected.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"julia> field == onefield\ntrue","category":"page"},{"location":"developer_documentation/running_tests/#Running-Tests","page":"Running Tests","title":"Running Tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/#CPU-tests","page":"Running Tests","title":"CPU tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"To run the Chmy test suite on the CPU, simply run test from within the package mode or using Pkg:","category":"page"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\")","category":"page"},{"location":"developer_documentation/running_tests/#GPU-tests","page":"Running Tests","title":"GPU tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"To run the Chmy test suite on the CUDA, ROC or Metal backend (Nvidia, AMD or Apple GPUs), respectively, run the tests using Pkg, adding the following 
test_args:","category":"page"},{"location":"developer_documentation/running_tests/#For-CUDA-backend-(Nvidia-GPUs):","page":"Running Tests","title":"For CUDA backend (Nvidia GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=CUDA\"])","category":"page"},{"location":"developer_documentation/running_tests/#For-ROC-backend-(AMD-GPUs):","page":"Running Tests","title":"For ROC backend (AMD GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=AMDGPU\"])","category":"page"},{"location":"developer_documentation/running_tests/#For-Metal-backend-(Apple-GPUs):","page":"Running Tests","title":"For Metal backend (Apple GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=Metal\"])","category":"page"},{"location":"#Chmy.jl","page":"Home","title":"Chmy.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Chmy.jl (pronounced tsh-mee) is a backend-agnostic toolkit for finite difference computations on multi-dimensional computational staggered grids. 
Chmy.jl features task-based distributed memory parallelisation capabilities.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To install Chmy.jl, one can simply add it using the Julia package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Pkg\n\njulia> Pkg.add(\"Chmy\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"After the package is installed, one can load the package by using:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Chmy","category":"page"},{"location":"","page":"Home","title":"Home","text":"info: Install from a Specific Branch\nFor developers and advanced users, one might want to use the implementation of Chmy.jl from a specific branch by specifying the URL. In the following code snippet, we do this by explicitly requesting the implementation available on the main branch:julia> using Pkg; Pkg.add(url=\"https://github.com/PTsolvers/Chmy.jl\", rev=\"main\")","category":"page"},{"location":"#Feature-Summary","page":"Home","title":"Feature Summary","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Chmy.jl provides a comprehensive framework for handling complex computational tasks on structured grids, leveraging both single and multi-device architectures. 
It seamlessly integrates with Julia's powerful parallel and concurrent programming capabilities, making it suitable for a wide range of scientific and engineering applications.","category":"page"},{"location":"","page":"Home","title":"Home","text":"A general list of the features is:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Backend-agnostic capabilities leveraging KernelAbstractions.jl\nDistributed computing support with MPI.jl\nMulti-dimensional, parametrisable discrete and continuous fields on structured grids\nHigh-level interface for specifying boundary conditions with automatic batching for performance\nFinite difference and interpolation operators on discrete fields\nExtensibility: the package is written in pure Julia, so adding new functions, simplification rules, and model transformations has no barrier","category":"page"},{"location":"#Funding","page":"Home","title":"Funding","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The development of this package is supported by the GPU4GEO PASC project. More information about the GPU4GEO project can be found on the GPU4GEO website.","category":"page"},{"location":"concepts/grids/#Grids","page":"Grids","title":"Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"The choice of numerical grid depends on the type of equations to be solved and affects the discretization schemes used. The design of the Chmy.Grids module aims to provide a robust yet flexible user API for customizing the numerical grids used for spatial discretization.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We currently support grids with quadrilateral cells. 
An N-dimensional numerical grid contains N spatial dimensions, each represented by an axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Grid Properties Description Tunable Parameters\nDimensions The grid can be N-dimensional by having N axes. AbstractAxis\nDistribution of Nodal Points The grid can be regular (uniform distribution) or non-regular (irregular distribution). UniformAxis, FunctionAxis\nDistribution of Variables The grid can be non-staggered (collocated) or staggered, affecting how variables are positioned within the grid. Center, Vertex","category":"page"},{"location":"concepts/grids/#Axis","page":"Grids","title":"Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Objects of type AbstractAxis are building blocks of numerical grids. We can either define equidistant axes with UniformAxis, or parameterized axes with FunctionAxis.","category":"page"},{"location":"concepts/grids/#Uniform-Axis","page":"Grids","title":"Uniform Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"To define a uniform axis, we need to provide:","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Origin: The starting point of the axis.\nExtent: The length of the section of the axis considered.\nCell Length: The length of each cell along the axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"With the information above, an axis can be defined and incorporated into a spatial dimension. 
The spacing (with alias Δ) and inv_spacing (with alias iΔ) functions allow convenient access to the grid spacing (Δx/Δy/Δz) and its reciprocal, respectively.","category":"page"},{"location":"concepts/grids/#Function-Axis","page":"Grids","title":"Function Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"As an alternative, one could also define a FunctionAxis object using a function that parameterizes the spacing of the axis, together with the length of the axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"f = i -> ((i - 1) / 4)^1.5\nlength = 4\nparameterized_axis = FunctionAxis(f, length)","category":"page"},{"location":"concepts/grids/#Structured-Grids","page":"Grids","title":"Structured Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"A common mesh structure that is used for the spatial discretization in the finite difference approach is a structured grid (concrete type StructuredGrid or its alias SG).","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We provide a function UniformGrid for creating an equidistant StructuredGrid, that essentially boils down to having axes of type UniformAxis in each spatial dimension.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"# with architecture as well as numerics lx/y/z and nx/y/z defined\ngrid = UniformGrid(arch;\n origin=(-lx/2, -ly/2, -lz/2),\n extent=(lx, ly, lz),\n dims=(nx, ny, nz))","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"warning: Metal backend\nIf using the Metal backend, ensure to use Float32 (f0) element types in the origin and extent tuples when initialising the grid.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"info: Interactive Grid Visualization\ngrids_2d.jl: Visualization of a 2D 
StructuredGrid\ngrids_3d.jl: Visualization of a 3D StructuredGrid","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/#Location-on-a-Grid-Cell","page":"Grids","title":"Location on a Grid Cell","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"In order to allow full control over the distribution of different variables on the grid, we provide a high-level abstraction of the property location on a grid cell with the abstract type Location. More concretely, a property location along a spatial dimension can be either of concrete type Center or Vertex on a structured grid.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We illustrate how to specify the location within a grid cell on a fully staggered uniform grid. The following 2D example also has ghost nodes illustrated that are located immediately outside the domain boundary.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"In the following example, we zoom into a specific cell on a fully-staggered grid. 
By specifying for both the x- and y-dimensions whether a node is located at the Center (C) or Vertex (V) along the respective axis, we arrive at 4 categories of nodes on a 2D quadrilateral cell, which we refer to as \"basic\", \"pressure\", \"Vx\" and \"Vy\" nodes, following common practice.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"If all variables are defined on basic nodes, specified by (V,V) locations, we have the simplest non-staggered collocated grid.","category":"page"},{"location":"concepts/grids/#Dimensions-of-Fields-on-Structured-Grids","page":"Grids","title":"Dimensions of Fields on Structured Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"With a structured grid consisting of nx = N cells horizontally and ny = M cells vertically, we have the following dimensions for fields associated with the grid.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Node Type Field Dimension Location\nCell vertex (N + 1) times (M + 1) (V, V)\nX interface (N + 1) times M (V, C)\nY interface N times (M + 1) (C, V)\nCell Center N times M (C, C)","category":"page"},{"location":"concepts/grids/#Connectivity-of-a-StructuredGrid","page":"Grids","title":"Connectivity of a StructuredGrid","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Using the method connectivity(::SG{N,T,C}, ::Dim{D}, ::Side{S}), one can obtain the connectivity underlying a structured grid. If no special grid topology is provided, a default Bounded grid topology is used for the UniformGrid. 
Therefore, on a default UniformGrid, the following assertions hold:","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"julia> @assert connectivity(grid, Dim(1), Side(1)) isa Bounded \"Left boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(1), Side(2)) isa Bounded \"Right boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(2), Side(1)) isa Bounded \"Upper boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(2), Side(2)) isa Bounded \"Lower boundary is bounded\"","category":"page"}] +[{"location":"developer_documentation/workers/#Workers","page":"Workers","title":"Workers","text":"","category":"section"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"Task-based parallelism provides a highly abstract view for the program execution scheduling, although it may come with a performance overhead related to task creation and destruction. The overhead is currently significant when tasks are used to perform asynchronous operations on GPUs, where TLS context creation and destruction may be in the order of kernel execution time. Therefore, it may be desirable to have long-running tasks that do not get terminated immediately, but only when all queued subtasks (i.e. work units) are executed.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"In Chmy.jl, we introduced the concept Worker for this purpose. A Worker is a special construct to extend the lifespan of a task created by Threads.@spawn. It possesses a Channel of subtasks to be executed on the current thread, where subtasks are submitted at construction time to the worker using put!.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"With the help of the Worker, we can specify subtasks that need to be sequentially executed by enqueuing them to one Worker. 
Any work units that shall run in parallel should be put into separate workers instead.","category":"page"},{"location":"developer_documentation/workers/","page":"Workers","title":"Workers","text":"Currently, we use Workers under the hood to overlap communications with computations. We split the computational domain into an inner part, containing the bulk of the grid points, and a thin outer part. We launch the same kernels processing the inner and outer parts in different Julia tasks. When the outer part completes, we launch the non-blocking MPI communication. Workers are a stateful representation of the long-running computation, needed to avoid the significant overhead of creating a new task-local state each time a communication is performed.","category":"page"},{"location":"concepts/bc/#Boundary-Conditions","page":"Boundary Conditions","title":"Boundary Conditions","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Using Chmy.jl, we aim to study partial differential equations (PDEs) arising from physical or engineering problems. Additional initial and/or boundary conditions are necessary for the model problem to be well-posed, ensuring the existence and uniqueness of a stable solution.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"We provide a brief overview of boundary conditions that one often encounters. In the following, we consider the unknown function u Omega mapsto mathbbR defined on some bounded computational domain Omega subset mathbbR^d in a d-dimensional space. 
With the domain boundary denoted by partial Omega, we have some function g partial Omega mapsto mathbbR prescribed on the boundary.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Type Form Example\nDirichlet u = g on partial Omega In fluid dynamics, the no-slip condition for viscous fluids states that at a solid boundary the fluid has zero velocity relative to the boundary.\nNeumann partial_boldsymboln u = g on partial Omega, where boldsymboln is the outer normal vector to Omega It specifies the values that the derivative of a solution takes on the boundary of the domain. An application in thermodynamics is a prescribed heat flux through the boundary.\nRobin u + alpha partial_nu u = g on partial Omega, where alpha in mathbbR. Also called impedance boundary conditions from their application in electromagnetic problems","category":"page"},{"location":"concepts/bc/#Applying-Boundary-Conditions-with-bc!()","page":"Boundary Conditions","title":"Applying Boundary Conditions with bc!()","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the following, we describe the syntax in Chmy.jl for launching kernels that impose boundary conditions on some field that is well-defined on a grid with backend specified through arch.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Dirichlet and Neumann boundary conditions are referred to as homogeneous if g = 0, and non-homogeneous if g = v holds for some v in mathbbR.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":" Homogeneous Non-homogeneous\nDirichlet on partial Omega bc!(arch, grid, field => Dirichlet()) bc!(arch, grid, field => Dirichlet(v))\nNeumann on partial Omega bc!(arch, grid, field => Neumann()) bc!(arch, grid, field 
=> Neumann(v))","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Note that the syntax shown in the table above is a fused expression of both specifying and applying the boundary conditions.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"warning: $\\partial \\Omega$ Refers to the Entire Domain Boundary!\nBy specifying field to a single boundary condition, we impose the boundary condition on the entire domain boundary by default. See the section \"Mixed Boundary Conditions\" below for specifying different BC on different parts of the domain boundary.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"Alternatively, one could also define the boundary conditions beforehand using batch(), provided the grid information as well as the field variable. This way the boundary condition to be prescribed is precomputed.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"# pre-compute batch\nbt = batch(grid, field => Neumann()) # specify Neumann BC for the variable `field`\nbc!(arch, grid, bt) # apply the boundary condition","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the script batcher.jl, we provide a MWE using both fused and precomputed expressions for BC update.","category":"page"},{"location":"concepts/bc/#Specifying-BC-within-a-launch","page":"Boundary Conditions","title":"Specifying BC within a launch","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"When using launch to specify the execution of a kernel (see section Kernels), one can pass the specified boundary condition(s) as an optional parameter using batch, provided the grid information of the discretized space. 
This way we can gain efficiency from making good use of already cached values.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the 2D diffusion example as introduced in the tutorial \"Getting Started with Chmy.jl\", we need to update the temperature field C at the k-th iteration using the values of the heat flux q and the physical time step size Δt from the (k-1)-th iteration. When launching the kernel update_C! with launch, we simultaneously launch the kernel for the BC update using:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))","category":"page"},{"location":"concepts/bc/#Mixed-Boundary-Conditions","page":"Boundary Conditions","title":"Mixed Boundary Conditions","text":"","category":"section"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"In the code example above, by specifying boundary conditions using syntax such as field => Neumann(), we essentially launch a kernel that imposes the Neumann boundary condition on the entire domain boundary partial Omega. 
More often, one may be interested in prescribing different boundary conditions on different parts of partial Omega.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"The following figure showcases a 2D square domain Omega with different boundary conditions applied on each side:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"The top boundary (red) is a Dirichlet boundary condition where u = a.\nThe bottom boundary (blue) is also a Dirichlet boundary condition where u = b.\nThe left and right boundaries (green) are Neumann boundary conditions where fracpartial upartial x = 0.","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"To launch a kernel that satisfies these boundary conditions in Chmy.jl, you can use the following code:","category":"page"},{"location":"concepts/bc/","page":"Boundary Conditions","title":"Boundary Conditions","text":"bc!(arch, grid, field => (x = Neumann(), y = (Dirichlet(b), Dirichlet(a))))","category":"page"},{"location":"concepts/kernels/#Kernels","page":"Kernels","title":"Kernels","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The KernelAbstractions.jl package provides a macro-based dialect that hides the intricacies of vendor-specific GPU programming. It allows one to write hardware-agnostic kernels that can be instantiated and launched for different device backends without modifying the high-level code or sacrificing performance.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the following, we show how to write and launch kernels on various backends. 
We also explain the concept of a Launcher in Chmy.jl, which complements the default kernel launching, allowing us to hide the latency between the bulk of the computations and boundary conditions or MPI communications.","category":"page"},{"location":"concepts/kernels/#Writing-Kernels","page":"Kernels","title":"Writing Kernels","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"This section highlights some important features of KernelAbstractions.jl that are essential for understanding the high-level abstraction of the kernel concept that is used throughout our package. As this section mainly serves illustrative purposes, please refer to their documentation for more specific examples.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"using KernelAbstractions\n\n# Define a kernel that performs element-wise operations on A\n@kernel function mul2!(A)\n # use @index macro to obtain the global Cartesian index of the current work item.\n I = @index(Global, Cartesian)\n A[I] *= 2\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"With the kernel mul2! defined using the @kernel macro, we can launch it on the desired backend to perform the element-wise operations on the host.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define array\nA = ones(1024, 1024)\nbackend = get_backend(A) # CPU\n\n# Launch kernel and explicitly synchronize\nkernel = mul2!(backend)\nkernel(A, ndrange=size(A))\nKernelAbstractions.synchronize(backend)\n\n# Result assertion\n@assert all(A .== 2.0)","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"To launch the kernel on GPU devices, one could simply define A as CuArray, ROCArray or oneArray as detailed in the section \"launching kernel on the backend\". 
More fine-grained memory access is available using the @index macro as described here.","category":"page"},{"location":"concepts/kernels/#Thread-Indexing","page":"Kernels","title":"Thread Indexing","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"Thread indexing is essential for memory usage on GPU devices; however, it can quickly become cumbersome to figure out the thread index, especially when working with multi-dimensional grids of multi-dimensional blocks of threads. The performance of kernels can also depend significantly on access patterns.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the example above, we saw the usage of I = @index(Global, Cartesian), which retrieves the global index of threads for the two-dimensional array A. Such powerful macros are provided by KernelAbstractions.jl for conveniently retrieving the desired index of threads.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The following table is non-exhaustive and provides a reference of commonly used terminology. 
Here, KernelAbstractions.@index is used for index retrieval, and KernelAbstractions.@groupsize is used for obtaining the dimensions of blocks of threads.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"KernelAbstractions CPU CUDA AMDGPU\n@index(Local, Linear) mod(i, g) threadIdx().x workitemIdx().x\n@index(Local, Cartesian)[2] threadIdx().y workitemIdx().y\n@index(Local, Cartesian)[3] threadIdx().z workitemIdx().z\n@index(Group, Linear) i ÷ g blockIdx().x workgroupIdx().x\n@index(Group, Cartesian)[2] blockIdx().y workgroupIdx().y\n@groupsize()[3] blockDim().z workgroupDim().z\n@index(Global, Linear) i global index computation needed global index computation needed\n@index(Global, Cartesian)[2] global index computation needed global index computation needed\n@index(Global, NTuple) global index computation needed global index computation needed","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"The @index(Global, NTuple) returns a NTuple object, allowing more fine-grained memory control over the allocated arrays.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel function memcpy!(a, b)\n i, j = @index(Global, NTuple)\n @inbounds a[i, j] = b[i, j]\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"A tuple can be splatted with ... Julia operator when used to avoid explicitly using i, j indices.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel function splatting_memcpy!(a, b)\n I = @index(Global, NTuple)\n @inbounds a[I...] 
= b[I...]\nend","category":"page"},{"location":"concepts/kernels/#Kernel-Launcher","page":"Kernels","title":"Kernel Launcher","text":"","category":"section"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In Chmy.jl, the KernelLaunch module is designed to provide handy utilities for performing different grid operations on selected data entries of Fields that are involved at each kernel launch, taking the underlying grid geometry into account.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"In the following, we define a kernel launcher associated with a UniformGrid object, using the CUDA backend.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define backend and geometry\narch = Arch(CUDABackend())\ngrid = UniformGrid(arch; origin=(-1, -1), extent=(2, 2), dims=(126, 126))\n\n# Define launcher\nlaunch = Launcher(arch, grid)","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"We also have two kernel functions compute_q! and update_C! 
defined, which shall update the fields q and C using grid operators (see section Grid Operators) ∂x, ∂y, divg that are anchored on some grid g accordingly.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n q.x[I] = -χ * ∂x(C, g, I)\n q.y[I] = -χ * ∂y(C, g, I)\nend\n\n@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n C[I] -= Δt * divg(q, g, I)\nend","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"To spawn the kernel, we invoke the launcher using the launch function to perform the field update at each physical timestep, and specify desired boundary conditions for involved fields in the kernel.","category":"page"},{"location":"concepts/kernels/","page":"Kernels","title":"Kernels","text":"# Define physics, numerics, geometry ...\nfor it in 1:nt\n # without boundary conditions\n launch(arch, grid, compute_q! => (q, C, χ, grid))\n\n # with Neumann boundary conditions and MPI exchange\n launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))\nend","category":"page"},{"location":"concepts/grid_operators/#Grid-Operators","page":"Grid Operators","title":"Grid Operators","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Chmy.jl currently supports various finite difference operators for fields defined in Cartesian coordinates. 
The table below summarizes the most common usage of grid operators, with the grid g::StructuredGrid and index I = @index(Global, Cartesian) defined, where P = Field(backend, grid, location) is some field defined on the grid g.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Mathematical Formulation Code\nfracpartialpartial x P ∂x(P, g, I)\nfracpartialpartial y P ∂y(P, g, I)\nfracpartialpartial z P ∂z(P, g, I)\nnabla cdot P divg(P, g, I)","category":"page"},{"location":"concepts/grid_operators/#Computing-the-Divergence-of-a-Vector-Field","page":"Grid Operators","title":"Computing the Divergence of a Vector Field","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"To illustrate the usage of grid operators, we compute the divergence of a vector field V using the divg function. We first allocate memory for the required fields.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"V = VectorField(backend, grid)\n∇V = Field(backend, grid, Center())\n# use set! 
to set up the initial vector field...","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"As with other finite difference operators, the kernel that computes the divergence needs the grid information passed to it.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel inbounds = true function update_∇!(V, ∇V, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n ∇V[I] = divg(V, g, I)\nend","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel can then be launched when required, as detailed in section Kernels.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"launch(arch, grid, update_∇! => (V, ∇V, grid))","category":"page"},{"location":"concepts/grid_operators/#Masking","page":"Grid Operators","title":"Masking","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Masking allows selectively applying operations only where needed, improving performance. By providing masked grid operators, we enable more flexible control over the domain on which the grid operators are applied.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"In the following example, we first define a mask ω on the 2D StructuredGrid. 
We then specify that the center area of all Vx, Vy nodes (accessible through ω.vc, ω.cv) on the staggered grid should not be masked.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define the mask\nω = FieldMask2D(arch, grid) # with backend and grid geometry defined\n\n# define the initial inclusion\nr = 2.0\ninit_inclusion = (x,y) -> ifelse(x^2 + y^2 < r^2, 1.0, 0.0)\n\n# mask all entries other than the initial inclusion\nset!(ω.vc, grid, init_inclusion)\nset!(ω.cv, grid, init_inclusion)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"We can then pass the mask to other grid operators when applying them within the kernel. When computing masked derivatives, a mask that is a subtype of AbstractMask is premultiplied at the corresponding grid location for each operand:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel function update_strain_rate!(ε̇, V, ω::AbstractMask, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n # with masks ω\n ε̇.xx[I] = ∂x(V.x, ω, g, I)\n ε̇.yy[I] = ∂y(V.y, ω, g, I)\n ε̇.xy[I] = 0.5 * (∂y(V.x, ω, g, I) + ∂x(V.y, ω, g, I))\nend","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel can be launched as follows, with some launcher defined using launch = Launcher(arch, grid):","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define fields\nε̇ = TensorField(backend, grid)\nV = VectorField(backend, grid)\n\n# launch kernel\nlaunch(arch, grid, update_strain_rate! 
=> (ε̇, V, ω, grid))","category":"page"},{"location":"concepts/grid_operators/#Interpolation","page":"Grid Operators","title":"Interpolation","text":"","category":"section"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Chmy.jl provides an interface itp which interpolates the field f from its location to the specified location to using the given interpolation rule r. The indices specify the position within the grid at location to:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"itp(f, to, r, grid, I...)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Currently implemented interpolation rules are:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Linear() which implements rule(t, v0, v1) = v0 + t * (v1 - v0);\nHarmonicLinear() which implements rule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0)).","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"Both rules are exposed as convenience wrapper functions lerp and hlerp, using Linear() and HarmonicLinear() rules, respectively:","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"lerp(f, to, grid, I...) # implements itp(f, to, Linear(), grid, I...)\nhlerp(f, to, grid, I...) # implements itp(f, to, HarmonicLinear(), grid, I...)","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"In the following example, we use the linear interpolation wrapper lerp when interpolating nodal values of the density field ρ, defined on cell centres, i.e. 
having the location (Center(), Center()) to ρx and ρy, defined on cell interfaces in the x- and y-direction, respectively.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"# define density ρ on cell centres\nρ = Field(backend, grid, Center())\nρ0 = 3.0; set!(ρ, ρ0)\n\n# allocate memory for density on cell interfaces\nρx = Field(backend, grid, (Vertex(), Center()))\nρy = Field(backend, grid, (Center(), Vertex()))","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"The kernel interpolate_ρ! performs the actual interpolation and requires the grid information passed by g.","category":"page"},{"location":"concepts/grid_operators/","page":"Grid Operators","title":"Grid Operators","text":"@kernel function interpolate_ρ!(ρ, ρx, ρy, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n # interpolate from cell centres to cell interfaces\n ρx[I] = lerp(ρ, location(ρx), g, I)\n ρy[I] = lerp(ρ, location(ρy), g, I)\nend","category":"page"},{"location":"examples/overview/#Examples-Overview","page":"Examples Overview","title":"Examples Overview","text":"","category":"section"},{"location":"examples/overview/","page":"Examples Overview","title":"Examples Overview","text":"This page provides an overview of Chmy.jl examples. 
These selected examples demonstrate how Chmy.jl can be used to solve various numerical problems using architecture-agnostic kernels both on a single-device and in a distributed way.","category":"page"},{"location":"examples/overview/#Table-of-Contents","page":"Examples Overview","title":"Table of Contents","text":"","category":"section"},{"location":"examples/overview/","page":"Examples Overview","title":"Examples Overview","text":"Example Description\nDiffusion 2D Solving the 2D diffusion equation on a uniform grid.\nDiffusion 2D with MPI Solving the 2D diffusion equation on a uniform grid and distributed parallelisation using MPI.\nSingle-Device Performance Optimisation Revisiting the 2D diffusion problem with focus on performance optimisation techniques on a single-device architecture.\nStokes 2D with MPI Solving the 2D Stokes equation with thermal coupling on a uniform grid.\nStokes 3D with MPI Solving the 3D Stokes equation with thermal coupling on a uniform grid and distributed parallelisation using MPI.\nDiffusion 1D with Metal Solving the 1D diffusion equation using the Metal backend and single precision (Float32) on a uniform grid.\n2D Grid Visualization Visualization of a 2D StructuredGrid.\n3D Grid Visualization Visualization of a 3D StructuredGrid.","category":"page"},{"location":"concepts/architectures/#Architectures","page":"Architectures","title":"Architectures","text":"","category":"section"},{"location":"concepts/architectures/#Backend-Selection-and-Architecture-Initialization","page":"Architectures","title":"Backend Selection & Architecture Initialization","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Chmy.jl supports CPUs, as well as CUDA, ROC and Metal backends for Nvidia, AMD and Apple M-series GPUs through a thin wrapper around the KernelAbstractions.jl for users to select desirable 
backends.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"# Default with CPU\narch = Arch(CPU())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using CUDA\n\narch = Arch(CUDABackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using AMDGPU\n\narch = Arch(ROCBackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using Metal\n\narch = Arch(MetalBackend())","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"At the beginning of the program, one may specify the backend and initialize the architecture they desire to use. The initialized arch variable will be required explicitly at creation of some objects such as grids and kernel launchers.","category":"page"},{"location":"concepts/architectures/#Specifying-the-device-ID-and-stream-priority","page":"Architectures","title":"Specifying the device ID and stream priority","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"On systems with multiple GPUs, passing the keyword argument device_id to the Arch constructor will select the specified device and set it as the current device.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"For advanced users, we provide a function activate!(arch; priority) for specifying the stream priority owned by the task one is executing. 
The stream priority is set to :normal by default; :low and :high are also possible options, provided that the target backend implements priority control over streams.","category":"page"},{"location":"concepts/architectures/#Distributed-Architecture","page":"Architectures","title":"Distributed Architecture","text":"","category":"section"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Our distributed architecture builds upon the abstraction of having GPU clusters whose devices share the same GPU architecture. Note that in general, GPU clusters may be equipped with hardware from different vendors, incorporating different types of GPUs to exploit their unique capabilities for specific tasks.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"warning: GPU-Aware MPI Required for Distributed Module on GPU backend\nThe Distributed module currently only supports GPU-aware MPI when a GPU backend is selected for multi-GPU computations. For the Distributed module to function properly, a GPU-aware MPI library installation must be used. Otherwise, a segmentation fault will occur.","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"To make the Architecture object aware of the MPI topology, users can pass an MPI communicator object and the dimensions of the Cartesian topology to the Arch constructor:","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"using MPI\n\narch = Arch(CPU(), MPI.COMM_WORLD, (0, 0, 0))","category":"page"},{"location":"concepts/architectures/","page":"Architectures","title":"Architectures","text":"Passing zeros as the last argument will automatically spread the dimensions to be as close as possible to each other; see the MPI.jl documentation for details. 
For distributed usage of Chmy.jl see Distributed","category":"page"},{"location":"lib/modules/#Modules","page":"Modules","title":"Modules","text":"","category":"section"},{"location":"lib/modules/#Grids","page":"Modules","title":"Grids","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Grids]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Grids.AbstractAxis","page":"Modules","title":"Chmy.Grids.AbstractAxis","text":"abstract type AbstractAxis{T}\n\nAbstract type representing an axis in a grid, where the axis is parameterized by the type T of the coordinates.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Center","page":"Modules","title":"Chmy.Grids.Center","text":"struct Center <: Location\n\nThe Center struct represents a location at the center along a dimension of a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Connectivity","page":"Modules","title":"Chmy.Grids.Connectivity","text":"abstract type Connectivity\n\nAbstract type representing the connectivity of grid elements.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.Location","page":"Modules","title":"Chmy.Grids.Location","text":"abstract type Location\n\nAbstract type representing a location in a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.StructuredGrid","page":"Modules","title":"Chmy.Grids.StructuredGrid","text":"StructuredGrid\n\nRepresents a structured grid with orthogonal axes.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.UniformGrid-Union{Tuple{Chmy.Architectures.Architecture}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.UniformGrid","text":"UniformGrid(arch; origin, extent, dims, topology=nothing)\n\nConstructs a uniform grid with specified origin, extent, dimensions, and topology.\n\nArguments\n\narch::Architecture: The associated 
architecture.\norigin::NTuple{N,Number}: The origin of the grid.\nextent::NTuple{N,Number}: The extent of the grid.\ndims::NTuple{N,Integer}: The dimensions of the grid.\ntopology=nothing: The topology of the grid. If not provided, a default Bounded topology is used.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.Vertex","page":"Modules","title":"Chmy.Grids.Vertex","text":"struct Vertex <: Location\n\nThe Vertex struct represents a location at the vertex along a dimension of a grid cell.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{1}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{1})\n\nReturns the names of the axes for a 1-dimensional structured grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{2}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{2})\n\nReturns the names of the axes for a 2-dimensional structured grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axes_names-Tuple{Chmy.Grids.StructuredGrid{3}}","page":"Modules","title":"Chmy.Grids.axes_names","text":"axes_names(::SG{3})\n\nReturns the names of the axes for a 3-dimensional structured grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.axis-Union{Tuple{dim}, Tuple{Chmy.Grids.StructuredGrid, Dim{dim}}} where dim","page":"Modules","title":"Chmy.Grids.axis","text":"axis(grid, dim::Dim)\n\nReturn the axis corresponding to the spatial dimension dim.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.bounds-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.bounds","text":"bounds(grid, loc, [dim::Dim])\n\nReturn the bounds of a structured grid at the specified 
location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.connectivity-Union{Tuple{S}, Tuple{D}, Tuple{C}, Tuple{T}, Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N, T, C}, Dim{D}, Side{S}}} where {N, T, C, D, S}","page":"Modules","title":"Chmy.Grids.connectivity","text":"connectivity(grid, dim::Dim, side::Side)\n\nReturn the connectivity of the structured grid grid for the given dimension dim and side side.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.coord-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.coord","text":"coord(grid, loc, I...)\n\nReturn a tuple of spatial coordinates of a grid point at location loc and indices I.\n\nFor vertex locations, first grid point is at the origin. For center locations, first grid point at half-spacing distance from the origin.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.extent-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.extent","text":"extent(grid, loc, [dim::Dim])\n\nReturn the extent of a structured grid at the specified location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_spacing-Union{Tuple{Chmy.Grids.UniformGrid{N}}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.inv_spacing","text":"inv_spacing(grid::UniformGrid)\n\nReturn a tuple of inverse grid spacing for a uniform grid grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_spacing-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.inv_spacing","text":"inv_spacing(grid, loc, I...)\n\nReturn a tuple of inverse grid spacings at location loc and indices 
I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.inv_volume-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, NTuple{N, Chmy.Grids.Location}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.inv_volume","text":"inv_volume(grid, loc, I...)\n\nReturn the inverse of control volume at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.iΔ","page":"Modules","title":"Chmy.Grids.iΔ","text":"iΔ\n\nAlias for the inv_spacing method that returns the reciprocal of the spacing between grid points.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Chmy.Grids.origin-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}} where N","page":"Modules","title":"Chmy.Grids.origin","text":"origin(grid, loc, [dim::Dim])\n\nReturn the origin of a structured grid at the specified location(s).\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.spacing-Union{Tuple{Chmy.Grids.UniformGrid{N}}, Tuple{N}} where N","page":"Modules","title":"Chmy.Grids.spacing","text":"spacing(grid::UniformGrid)\n\nReturn a tuple of grid spacing for a uniform grid grid.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.spacing-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, Chmy.Grids.Location, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.spacing","text":"spacing(grid, loc, I...)\n\nReturn a tuple of grid spacings at location loc and indices I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.volume-Union{Tuple{N}, Tuple{Chmy.Grids.StructuredGrid{N}, NTuple{N, Chmy.Grids.Location}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.Grids.volume","text":"volume(grid, loc, I...)\n\nReturn the control volume at location loc and indices 
I.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Grids.Δ","page":"Modules","title":"Chmy.Grids.Δ","text":"Δ\n\nAlias for the spacing method that returns the spacing between grid points.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Architectures","page":"Modules","title":"Architectures","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Architectures]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Architectures.Architecture","page":"Modules","title":"Chmy.Architectures.Architecture","text":"Architecture\n\nAbstract type representing an architecture.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Architectures.SingleDeviceArchitecture","page":"Modules","title":"Chmy.Architectures.SingleDeviceArchitecture","text":"SingleDeviceArchitecture <: Architecture\n\nA struct representing an architecture that operates on a single CPU or GPU device.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Architectures.SingleDeviceArchitecture-Tuple{Chmy.Architectures.Architecture}","page":"Modules","title":"Chmy.Architectures.SingleDeviceArchitecture","text":"SingleDeviceArchitecture(arch::Architecture)\n\nCreate a SingleDeviceArchitecture object retrieving backend and device from arch.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.Arch-Tuple{KernelAbstractions.Backend}","page":"Modules","title":"Chmy.Architectures.Arch","text":"Arch(backend::Backend; device_id::Integer=1)\n\nCreate an architecture object for the specified backend and device.\n\nArguments\n\nbackend: The backend to use for computation.\ndevice_id=1: The ID of the device to use.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.activate!-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.activate!","text":"activate!(arch::SingleDeviceArchitecture; 
priority=:normal)\n\nActivate the given architecture on the specified device and set the priority of the backend. Accepted values for priority are :normal, :low, and :high.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_backend-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.get_backend","text":"get_backend(arch::SingleDeviceArchitecture)\n\nGet the backend associated with a SingleDeviceArchitecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_device-Tuple{Chmy.Architectures.SingleDeviceArchitecture}","page":"Modules","title":"Chmy.Architectures.get_device","text":"get_device(arch::SingleDeviceArchitecture)\n\nGet the device associated with a SingleDeviceArchitecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Fields","page":"Modules","title":"Fields","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Fields]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Fields.AbstractField","page":"Modules","title":"Chmy.Fields.AbstractField","text":"abstract type AbstractField{T,N,L} <: AbstractArray{T,N}\n\nAbstract type representing a field with data type T, number of dimensions N, and location L on which the field is defined.\n\nSee also: abstract type ConstantField\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ConstantField","page":"Modules","title":"Chmy.Fields.ConstantField","text":"Scalar field with a constant value\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.Field","page":"Modules","title":"Chmy.Fields.Field","text":"struct Field{T,N,L,H,A} <: AbstractField{T,N,L}\n\nField represents a discrete scalar field with specified type, number of dimensions, location, and halo 
size.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.Field-Tuple{Chmy.Architectures.Architecture, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.Field","text":"Field(arch::Architecture, args...; kwargs...)\n\nCreate a Field object on the specified architecture.\n\nArguments:\n\narch::Architecture: The architecture for which to create the Field.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.Field-Union{Tuple{N}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Union{NTuple{N, Chmy.Grids.Location}, Chmy.Grids.Location}, Any}} where N","page":"Modules","title":"Chmy.Fields.Field","text":"Field(backend, grid, loc, type=eltype(grid); halo=1)\n\nConstructs a field on a structured grid at the specified location.\n\nArguments:\n\nbackend: The backend to use for memory allocation.\ngrid: The structured grid on which the field is constructed.\nloc: The location or locations on the grid where the field is constructed.\ntype: The element type of the field. Defaults to the element type of the grid.\nhalo: The halo size for the field. 
Defaults to 1.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.FunctionField","page":"Modules","title":"Chmy.Fields.FunctionField","text":"FunctionField <: AbstractField\n\nContinuous or discrete field with values computed at runtime.\n\nConstructors\n\nFunctionField(func, grid, loc; [discrete], [parameters]): Create a new FunctionField object.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.FunctionField-Union{Tuple{N}, Tuple{F}, Tuple{F, Chmy.Grids.StructuredGrid{N}, Any}} where {F, N}","page":"Modules","title":"Chmy.Fields.FunctionField","text":"FunctionField(func::F, grid::StructuredGrid{N}, loc; discrete=false, parameters=nothing) where {F,N}\n\nCreate a FunctionField on the given grid using the specified function func.\n\nArguments:\n\nfunc::F: The function used to generate the field values.\ngrid::StructuredGrid{N}: The structured grid defining the computational domain.\nloc: The nodal location on the grid grid where the function field is defined on.\ndiscrete::Bool=false: A flag indicating whether the field should be discrete. Defaults to false.\nparameters=nothing: Additional parameters to be used by the function. 
Defaults to nothing.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.OneField","page":"Modules","title":"Chmy.Fields.OneField","text":"Constant field with values equal to one(T)\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ValueField","page":"Modules","title":"Chmy.Fields.ValueField","text":"Field with a constant value\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.ZeroField","page":"Modules","title":"Chmy.Fields.ZeroField","text":"Constant field with values equal to zero(T)\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Fields.TensorField-Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{2}, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.TensorField","text":"TensorField(backend::Backend, grid::StructuredGrid{2}, args...; kwargs...)\n\nCreate a 2D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, and xy each being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{2}: The 2D structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.TensorField-Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{3}, Vararg{Any}}","page":"Modules","title":"Chmy.Fields.TensorField","text":"TensorField(backend::Backend, grid::StructuredGrid{3}, args...; kwargs...)\n\nCreate a 3D tensor field in the form of a named tuple on the given grid using the specified backend, with components xx, yy, zz, xy, xz, and yz each being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{3}: The 3D structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field 
constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.VectorField-Union{Tuple{N}, Tuple{KernelAbstractions.Backend, Chmy.Grids.StructuredGrid{N}, Vararg{Any}}} where N","page":"Modules","title":"Chmy.Fields.VectorField","text":"VectorField(backend::Backend, grid::StructuredGrid{N}, args...; kwargs...) where {N}\n\nCreate a vector field in the form of a NamedTuple on the given grid using the specified backend, with each component being a Field.\n\nArguments:\n\nbackend::Backend: The backend to be used for computation.\ngrid::StructuredGrid{N}: The structured grid defining the computational domain.\nargs...: Additional positional arguments to pass to the Field constructor.\nkwargs...: Additional keyword arguments to pass to the Field constructor.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.interior-Tuple{Chmy.Fields.Field}","page":"Modules","title":"Chmy.Fields.interior","text":"interior(f::Field; with_halo=false)\n\nDisplays the field on the interior of the grid on which it is defined. 
One could optionally specify to display the halo regions on the grid with with_halo=true.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, AbstractArray}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, A::AbstractArray)\n\nSet the elements of the Field f using the values from the AbstractArray A.\n\nArguments:\n\nf::Field: The Field object to be modified.\nA::AbstractArray: The array whose values are to be copied to the Field.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, Chmy.Fields.AbstractField}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, other::AbstractField)\n\nSet the elements of the Field f using the values from another AbstractField other.\n\nArguments:\n\nf::Field: The destination Field object to be modified.\nother::AbstractField: The source AbstractField whose values are to be copied to f.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Fields.set!-Tuple{Chmy.Fields.Field, Number}","page":"Modules","title":"Chmy.Fields.set!","text":"set!(f::Field, val::Number)\n\nSet all elements of the Field f to the specified numeric value val.\n\nArguments:\n\nf::Field: The Field object to be modified.\nval::Number: The numeric value to set in the Field.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Grid-Operators","page":"Modules","title":"Grid Operators","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.GridOperators]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.GridOperators.AbstractMask","page":"Modules","title":"Chmy.GridOperators.AbstractMask","text":"abstract type AbstractMask{T,N}\n\nAbstract type representing the data transformation to be performed on elements in a field of dimension N, where each element is of 
type T.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.GridOperators.InterpolationRule","page":"Modules","title":"Chmy.GridOperators.InterpolationRule","text":"abstract type InterpolationRule\n\nA type representing an interpolation rule that specifies how the interpolant f should be reconstructed using a data set on a given grid.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.GridOperators.hlerp-Union{Tuple{N}, Tuple{Any, Any, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.hlerp","text":"hlerp(f, to, grid, I...)\n\nInterpolate a field f to location to using the harmonic linear interpolation rule.\n\nrule(t, v0, v1) = 1/(1/v0 + t * (1/v1 - 1/v0))\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.itp-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, NTuple{N, Chmy.Grids.Location}, Chmy.GridOperators.InterpolationRule, Chmy.Grids.StructuredGrid{N}, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.itp","text":"itp(f, to, r, grid, I...)\n\nInterpolates the field f from its current location to the specified location(s) to using the given interpolation rule r. 
The indices specify the position within the grid at location(s) to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftx","text":"leftx(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftx","text":"leftx(f, I)\n\n\"left side\" of a field ([1:end-1]) in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lefty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lefty","text":"lefty(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lefty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lefty","text":"lefty(f, I)\n\n\"left side\" of a field ([1:end-1]) in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftz","text":"leftz(f, ω, I)\n\n\"left side\" of a field ([1:end-1]) in z direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.leftz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.leftz","text":"leftz(f, I)\n\n\"left side\" of a field ([1:end-1]) in z 
direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.lerp-Union{Tuple{N}, Tuple{Any, Any, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.lerp","text":"lerp(f, to, grid, I...)\n\nLinearly interpolate values of a field f to location to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightx","text":"rightx(f, ω, I)\n\n\"right side\" of a field ([2:end]) in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightx","text":"rightx(f, I)\n\n\"right side\" of a field ([2:end]) in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.righty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.righty","text":"righty(f, ω, I)\n\n\"right side\" of a field ([2:end]) in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.righty-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.righty","text":"righty(f, I)\n\n\"right side\" of a field ([2:end]) in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightz","text":"rightz(f, ω, I)\n\n\"right side\" of a field ([2:end]) in z direction, masked with 
ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.rightz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.rightz","text":"rightz(f, I)\n\n\"right side\" of a field ([2:end]) in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δx","text":"δx(f, ω, I)\n\nFinite difference in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δx-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δx","text":"δx(f, I)\n\nFinite difference in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δy-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δy","text":"δy(f, ω, I)\n\nFinite difference in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δy-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δy","text":"δy(f, I)\n\nFinite difference in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δz","text":"δz(f, ω, I)\n\nFinite difference in z direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.δz-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.δz","text":"δz(f, 
I)\n\nFinite difference in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂x-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂x","text":"∂x(f, grid, I)\n\nDirectional partial derivative in x direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂x-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂x","text":"∂x(f, ω, grid, I)\n\nDirectional partial derivative in x direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂y-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂y","text":"∂y(f, grid, I)\n\nDirectional partial derivative in y direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂y-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂y","text":"∂y(f, ω, grid, I)\n\nDirectional partial derivative in y direction, masked with ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂z-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂z","text":"∂z(f, grid, I)\n\nDirectional partial derivative in z direction.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.GridOperators.∂z-Union{Tuple{N}, Tuple{Chmy.Fields.AbstractField, Chmy.GridOperators.AbstractMask, Any, Vararg{Integer, N}}} where N","page":"Modules","title":"Chmy.GridOperators.∂z","text":"∂z(f, ω, grid, I)\n\nDirectional partial derivative in z direction, masked with 
ω.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Boundary-Conditions","page":"Modules","title":"Boundary Conditions","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.BoundaryConditions]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.BoundaryConditions.AbstractBatch","page":"Modules","title":"Chmy.BoundaryConditions.AbstractBatch","text":"AbstractBatch\n\nAbstract type representing a batch of boundary conditions.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.BoundaryFunction","page":"Modules","title":"Chmy.BoundaryConditions.BoundaryFunction","text":"abstract type BoundaryFunction{F}\n\nAbstract type for boundary condition functions with function type F.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.Dirichlet","page":"Modules","title":"Chmy.BoundaryConditions.Dirichlet","text":"Dirichlet(value=nothing)\n\nCreate a Dirichlet object representing the Dirichlet boundary condition with the specified value.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.EmptyBatch","page":"Modules","title":"Chmy.BoundaryConditions.EmptyBatch","text":"EmptyBatch <: AbstractBatch\n\nEmptyBatch represents no boundary conditions.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.ExchangeBatch","page":"Modules","title":"Chmy.BoundaryConditions.ExchangeBatch","text":"ExchangeBatch <: AbstractBatch\n\nExchangeBatch represents a batch used for MPI communication.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FieldBatch","page":"Modules","title":"Chmy.BoundaryConditions.FieldBatch","text":"FieldBatch <: AbstractBatch\n\nFieldBatch is a batch of boundary conditions, where each field has one boundary 
condition.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FieldBoundaryCondition","page":"Modules","title":"Chmy.BoundaryConditions.FieldBoundaryCondition","text":"FieldBoundaryCondition\n\nAbstract supertype for all boundary conditions that are specified per-field.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.FirstOrderBC","page":"Modules","title":"Chmy.BoundaryConditions.FirstOrderBC","text":"struct FirstOrderBC{T,Kind} <: FieldBoundaryCondition\n\nA struct representing a boundary condition of first-order accuracy.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.Neumann","page":"Modules","title":"Chmy.BoundaryConditions.Neumann","text":"Neumann(value=nothing)\n\nCreate a Neumann object representing the Neumann boundary condition with the specified value.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.BoundaryConditions.bc!-Union{Tuple{N}, Tuple{Chmy.Architectures.Architecture, Chmy.Grids.StructuredGrid{N}, NTuple{N, Tuple{Chmy.BoundaryConditions.AbstractBatch, Chmy.BoundaryConditions.AbstractBatch}}}} where N","page":"Modules","title":"Chmy.BoundaryConditions.bc!","text":"bc!(arch::Architecture, grid::StructuredGrid, batch::BatchSet)\n\nApply boundary conditions using a batch set batch containing an AbstractBatch per dimension per side of grid.\n\nArguments\n\narch: The architecture.\ngrid: The grid.\nbatch: The batch set to apply boundary conditions to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Kernel-launcher","page":"Modules","title":"Kernel launcher","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.KernelLaunch]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"struct Launcher{Worksize,OuterWidth,Workers}\n\nA struct representing a launcher 
for asynchronous kernel execution.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher-Tuple{Any, Any}","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"Launcher(arch, grid; outer_width=nothing)\n\nConstructs a Launcher object configured based on the input parameters.\n\nArguments:\n\narch: The associated architecture.\ngrid: The grid defining the computational domain.\nouter_width: Optional parameter specifying outer width.\n\nwarning: Warning\nworksize for the last dimension N takes into account only last outer width W[N], N-1 uses W[N] and W[N-1], N-2 uses W[N], W[N-1], and W[N-2].\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.KernelLaunch.Launcher-Union{Tuple{Args}, Tuple{F}, Tuple{Chmy.Architectures.Architecture, Any, Pair{F, Args}}} where {F, Args}","page":"Modules","title":"Chmy.KernelLaunch.Launcher","text":"(launcher::Launcher)(arch::Architecture, grid, kernel_and_args::Pair{F,Args}; bc=nothing, async=false) where {F,Args}\n\nLaunches a computational kernel using the specified arch, grid, kernel_and_args, and optional boundary conditions (bc).\n\nArguments:\n\narch::Architecture: The architecture on which to execute the computation.\ngrid: The grid defining the computational domain.\nkernel_and_args::Pair{F,Args}: A pair consisting of the computational kernel F and its arguments Args.\nbc=nothing: Optional boundary conditions for the computation.\nasync=false: If true, launches the kernel asynchronously.\n\nwarning: Warning\narch should be compatible with the Launcher's architecture.\nIf bc is nothing, the kernel is launched without boundary conditions.\nIf async is false (default), the function waits for the computation to complete before returning.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Distributed","page":"Modules","title":"Distributed","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = 
[Chmy.Distributed]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Distributed.CartesianTopology","page":"Modules","title":"Chmy.Distributed.CartesianTopology","text":"CartesianTopology\n\nRepresents N-dimensional Cartesian topology of distributed processes.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.CartesianTopology-Union{Tuple{N}, Tuple{MPI.Comm, NTuple{N, Int64}}} where N","page":"Modules","title":"Chmy.Distributed.CartesianTopology","text":"CartesianTopology(comm, dims)\n\nCreate an N-dimensional Cartesian topology using base MPI communicator comm with dimensions dims. If all entries in dims are not equal to 0, the product of dims should be equal to the total number of MPI processes MPI.Comm_size(comm). If any (or all) entries of dims are 0, the dimensions in the corresponding spatial directions will be picked automatically.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.DistributedArchitecture","page":"Modules","title":"Chmy.Distributed.DistributedArchitecture","text":"DistributedArchitecture <: Architecture\n\nA struct representing a distributed architecture.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.StackAllocator","page":"Modules","title":"Chmy.Distributed.StackAllocator","text":"mutable struct StackAllocator\n\nSimple stack (a.k.a. bump/arena) allocator. 
Maintains an internal buffer that grows dynamically if the requested allocation exceeds current buffer size.\n\n\n\n\n\n","category":"type"},{"location":"lib/modules/#Chmy.Distributed.StackAllocator-Tuple{KernelAbstractions.Backend}","page":"Modules","title":"Chmy.Distributed.StackAllocator","text":"StackAllocator(backend::Backend)\n\nCreate a stack allocator using the specified backend to store allocations.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Base.resize!-Tuple{Chmy.Distributed.StackAllocator, Integer}","page":"Modules","title":"Base.resize!","text":"resize!(sa::StackAllocator, sz::Integer)\n\nResize the StackAllocator's buffer to capacity of sz bytes. This method will throw an error if any arrays were already allocated using this allocator.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.Arch-Tuple{KernelAbstractions.Backend, MPI.Comm, Any}","page":"Modules","title":"Chmy.Architectures.Arch","text":"Architectures.Arch(backend::Backend, comm::MPI.Comm, dims; device_id=nothing)\n\nCreate a distributed Architecture using backend backend and comm. For GPU backends, device will be selected automatically based on a process id within a node, unless specified by device_id.\n\nArguments\n\nbackend::Backend: The backend to use for the architecture.\ncomm::MPI.Comm: The MPI communicator to use for the architecture.\ndims: The dimensions of the architecture.\n\nKeyword Arguments\n\ndevice_id: The ID of the device to use. If not provided, the shared rank of the topology plus one is used.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.activate!-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.activate!","text":"activate!(arch::DistributedArchitecture; kwargs...)\n\nActivate the given DistributedArchitecture by delegating to the child architecture, and pass through any keyword arguments. 
For example, the priority can be set with accepted values being :normal, :low, and :high.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_backend-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.get_backend","text":"get_backend(arch::DistributedArchitecture)\n\nGet the backend associated with a DistributedArchitecture by delegating to the child architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Architectures.get_device-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Architectures.get_device","text":"get_device(arch::DistributedArchitecture)\n\nGet the device associated with a DistributedArchitecture by delegating to the child architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.BoundaryConditions.bc!-Tuple{Side, Dim, Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid, Chmy.BoundaryConditions.ExchangeBatch}","page":"Modules","title":"Chmy.BoundaryConditions.bc!","text":"BoundaryConditions.bc!(side::Side, dim::Dim,\n arch::DistributedArchitecture,\n grid::StructuredGrid,\n batch::ExchangeBatch; kwargs...)\n\nApply boundary conditions on a distributed grid with halo exchange performed internally.\n\nArguments\n\nside: The side of the grid where the halo exchange is performed.\ndim: The dimension along which the halo exchange is performed.\narch: The distributed architecture used for communication.\ngrid: The structured grid on which the halo exchange is performed.\nbatch: The batch set to apply boundary conditions to.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.allocate","page":"Modules","title":"Chmy.Distributed.allocate","text":"allocate(sa::StackAllocator, T::DataType, dims, [align=sizeof(T)])\n\nAllocate a buffer of type T with dimensions dims using a stack allocator. 
The align parameter specifies the alignment of the buffer elements.\n\nArguments\n\nsa::StackAllocator: The stack allocator object.\nT::DataType: The data type of the requested allocation.\ndims: The dimensions of the requested allocation.\nalign::Integer: The alignment of the allocated buffer in bytes.\n\nwarning: Warning\nArrays allocated with StackAllocator are not managed by Julia runtime. User is responsible for ensuring correct lifetimes, i.e., that the reference to allocator outlives all arrays allocated using this allocator.\n\n\n\n\n\n","category":"function"},{"location":"lib/modules/#Chmy.Distributed.cart_comm-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.cart_comm","text":"cart_comm(topo)\n\nMPI Cartesian communicator for the topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.cart_coords-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.cart_coords","text":"cart_coords(topo)\n\nCoordinates of a current process within a Cartesian topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.dims-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.dims","text":"dims(topo)\n\nDimensions of the topology as NTuple.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.exchange_halo!-Union{Tuple{K}, Tuple{D}, Tuple{S}, Tuple{Side{S}, Dim{D}, Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid, Vararg{Chmy.Fields.Field, K}}} where {S, D, K}","page":"Modules","title":"Chmy.Distributed.exchange_halo!","text":"exchange_halo!(side::Side, dim::Dim, arch, grid, fields...; async=false)\n\nPerform halo exchange communication between neighboring processes in a distributed architecture.\n\nArguments\n\nside: The side of the grid where the halo exchange is performed.\ndim: The dimension along which the halo exchange is performed.\narch: The distributed architecture 
used for communication.\ngrid: The structured grid on which the halo exchange is performed.\nfields...: The fields to be exchanged.\n\nOptional Arguments\n\nasync=false: Whether to perform the halo exchange asynchronously.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.exchange_halo!-Union{Tuple{N}, Tuple{Chmy.Distributed.DistributedArchitecture, Chmy.Grids.StructuredGrid{N}, Vararg{Chmy.Fields.Field}}} where N","page":"Modules","title":"Chmy.Distributed.exchange_halo!","text":"exchange_halo!(arch, grid, fields...)\n\nPerform halo exchange for the given architecture, grid, and fields.\n\nArguments\n\narch: The distributed architecture to perform halo exchange on.\ngrid: The structured grid on which halo exchange is performed.\nfields: The fields on which halo exchange is performed.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.gather!-Tuple{Chmy.Distributed.DistributedArchitecture, Any, Chmy.Fields.Field}","page":"Modules","title":"Chmy.Distributed.gather!","text":"gather!(arch, dst, src::Field; kwargs...)\n\nGather the interior of a field src into a global array dst on the CPU.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.gather!-Union{Tuple{N}, Tuple{T}, Tuple{Union{Nothing, AbstractArray{T, N}}, AbstractArray{T, N}, MPI.Comm}} where {T, N}","page":"Modules","title":"Chmy.Distributed.gather!","text":"gather!(dst, src, comm::MPI.Comm; root=0)\n\nGather local array src into a global array dst. Size of the global array size(dst) should be equal to the product of the size of a local array size(src) and the dimensions of a Cartesian communicator comm. The array will be gathered on the process with id root (root=0 by default). 
Note that the memory for a global array should be allocated only on the process with id root; on other processes, dst can be set to nothing.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.global_rank-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.global_rank","text":"global_rank(topo)\n\nGlobal id of a process in a Cartesian topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.global_size-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.global_size","text":"global_size(topo)\n\nTotal number of processes within the topology.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.has_neighbor-Tuple{Chmy.Distributed.CartesianTopology, Any, Any}","page":"Modules","title":"Chmy.Distributed.has_neighbor","text":"has_neighbor(topo, dim, side)\n\nReturns true if there is a neighbor process in spatial direction dim on the side side, or false otherwise.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.nallocs-Tuple{Chmy.Distributed.StackAllocator}","page":"Modules","title":"Chmy.Distributed.nallocs","text":"nallocs(sa::StackAllocator)\n\nGet the number of allocations made by the given StackAllocator.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.neighbor-Tuple{Chmy.Distributed.CartesianTopology, Any, Any}","page":"Modules","title":"Chmy.Distributed.neighbor","text":"neighbor(topo, dim, side)\n\nReturns the id of a neighbor process in spatial direction dim on the side side, if this neighbor exists, or MPI.PROC_NULL otherwise.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.neighbors-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.neighbors","text":"neighbors(topo)\n\nNeighbors of the current process.\n\nReturns a tuple of ranks of the two immediate neighbors in each spatial direction, or MPI.PROC_NULL if 
there is no neighbor on a corresponding side.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.node_name-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.node_name","text":"node_name(topo)\n\nName of a node according to MPI.Get_processor_name().\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.node_size-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.node_size","text":"node_size(topo)\n\nNumber of processes sharing the same node.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.reset!-Tuple{Chmy.Distributed.StackAllocator}","page":"Modules","title":"Chmy.Distributed.reset!","text":"reset!(sa::StackAllocator)\n\nReset the stack allocator by resetting the pointer. Doesn't free the internal memory buffer.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.shared_comm-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.shared_comm","text":"shared_comm(topo)\n\nMPI communicator for the processes sharing the same node.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.shared_rank-Tuple{Chmy.Distributed.CartesianTopology}","page":"Modules","title":"Chmy.Distributed.shared_rank","text":"shared_rank(topo)\n\nLocal id of a process within a single node. 
Can be used to set the GPU device.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Chmy.Distributed.topology-Tuple{Chmy.Distributed.DistributedArchitecture}","page":"Modules","title":"Chmy.Distributed.topology","text":"topology(arch::DistributedArchitecture)\n\nGet the virtual MPI topology of a distributed architecture.\n\n\n\n\n\n","category":"method"},{"location":"lib/modules/#Workers","page":"Modules","title":"Workers","text":"","category":"section"},{"location":"lib/modules/","page":"Modules","title":"Modules","text":"Modules = [Chmy.Workers]\nOrder = [:type, :function]","category":"page"},{"location":"lib/modules/#Chmy.Workers.Worker","page":"Modules","title":"Chmy.Workers.Worker","text":"Worker\n\nA worker that performs tasks asynchronously.\n\nConstructor\n\nWorker{T}(; [setup], [teardown]) where {T}\n\nConstructs a new Worker object.\n\nArguments\n\nsetup: A function to be executed before the worker starts processing tasks. (optional)\nteardown: A function to be executed after the worker finishes processing tasks. (optional)\n\n\n\n\n\n","category":"type"},{"location":"concepts/distributed/#Distributed","page":"Distributed","title":"Distributed","text":"","category":"section"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"Task-based parallelism in Chmy.jl builds on Threads.@spawn, with an additional Worker construct for efficiently managing the lifespan of tasks. Note that task-based parallelism provides a high-level abstraction of program execution not only for shared-memory architectures on a single machine, but can also be extended to hybrid parallelism, consisting of both shared- and distributed-memory parallelism. 
The Distributed module in Chmy.jl allows users to leverage hybrid parallelism through the power of abstraction.","category":"page"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"We will start with some basic background knowledge for understanding the architecture of modern HPC clusters, the underlying memory model, and the programming paradigm that complies with it. We then introduce the high-level API that Chmy.jl provides to abstract the low-level details away, followed by a simple example showing how the Distributed module should be used.","category":"page"},{"location":"concepts/distributed/#HPC-Cluster-and-Distributed-Memory","page":"Distributed","title":"HPC Cluster & Distributed Memory","text":"","category":"section"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"A high-performance computing (HPC) cluster consists of a network of independent computers combined into a system through specialized hardware. We call each computer a node, and each node manages its own private memory. Such a system of interconnected nodes, in which no node has access to the memory of any other node, features the distributed memory model. The underlying fast interconnect architecture (InfiniBand) that physically connects the nodes in the network can transfer data from one node to another in an extremely efficient manner through a communication protocol called remote direct memory access (RDMA).","category":"page"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"","category":"page"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"Using InfiniBand, processes across different nodes can communicate with each other by sending messages in a high-throughput, low-latency fashion. 
The syntax and semantics of how message passing should proceed through such a network are defined by a standard called the Message-Passing Interface (MPI), and there are different libraries that implement the standard, resulting in a wide range of choices (MPICH, Open MPI, MVAPICH, etc.) for users. ","category":"page"},{"location":"concepts/distributed/","page":"Distributed","title":"Distributed","text":"info: Message-Passing Interface (MPI) is a General Specification\nIn general, implementations based on the MPI standard can be used on a great variety of computers, not just HPC clusters, as long as these computers are connected by a communication network.","category":"page"},{"location":"concepts/distributed/#Hybrid-Parallelism","page":"Distributed","title":"Hybrid Parallelism","text":"","category":"section"},{"location":"getting_started/#Getting-Started-with-Chmy.jl","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Chmy.jl is a backend-agnostic toolkit for finite difference computations on multi-dimensional computational staggered grids. In this introductory tutorial, we will showcase the essence of Chmy.jl by solving a simple 2D diffusion problem. 
The full code of the tutorial material is available under diffusion_2d.jl.","category":"page"},{"location":"getting_started/#Basic-Diffusion","page":"Getting Started with Chmy.jl","title":"Basic Diffusion","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"The diffusion equation is a second order parabolic PDE, here for a multivariable function C(xyt) that represents the field being diffused (such as the temperature or the concentration of a chemical component in a solution) showing derivatives in both temporal partial t and spatial partial x dimensions, where chi is the diffusion coefficient. In 2D we have the following formulation for the diffusion process:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial t = chi left( fracpartial^2 Cpartial x^2 + fracpartial^2 Cpartial y^2 right)\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Introducing the diffusion flux q, we can rewrite equation (1) as a system of two PDEs, consisting of equations (2) and (3).","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nboldsymbolq = -chi nabla C\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial t = - nabla cdot boldsymbolq\nendequation","category":"page"},{"location":"getting_started/#Boundary-Conditions","page":"Getting Started with Chmy.jl","title":"Boundary Conditions","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"Generally, partial 
differential equations (PDEs) require initial or boundary conditions to ensure a unique and stable solution. For the field C, a Neumann boundary condition is given by:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"beginequation\nfracpartial Cpartial boldsymboln = g(x y t)\nendequation","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"where fracpartial Cpartial boldsymboln is the derivative of C normal to the boundary, and g(x y t) is a given function. In this tutorial example, we consider a homogeneous Neumann boundary condition, g(x y t) = 0, which implies that there is no flux across the boundary.","category":"page"},{"location":"getting_started/#Using-Chmy.jl-for-Backend-Portable-Implementation","page":"Getting Started with Chmy.jl","title":"Using Chmy.jl for Backend Portable Implementation","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"As the first step, we need to load the main module and any necessary submodules of Chmy.jl. 
Moreover, we use KernelAbstractions.jl for writing backend-agnostic kernels that are compatible with Chmy.jl.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"using Chmy, Chmy.Architectures, Chmy.Grids, Chmy.Fields, Chmy.BoundaryConditions, Chmy.GridOperators, Chmy.KernelLaunch\nusing KernelAbstractions # for backend-agnostic kernels\nusing Printf, CairoMakie # for I/O and plotting\n# using CUDA\n# using AMDGPU\n# using Metal","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"In this introductory tutorial, we will use the CPU backend for simplicity:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"backend = CPU()\narch = Arch(backend)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"If a different backend is desired, one needs to load the relevant package accordingly. For example, if Nvidia, AMD or Apple GPUs are available, one can uncomment using CUDA, using AMDGPU or using Metal and make sure to use arch = Arch(CUDABackend()), arch = Arch(ROCBackend()) or arch = Arch(MetalBackend()), respectively, when selecting the architecture. For further information about executing on a single-device or multi-device architecture, see the documentation section for Architectures.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"warning: Metal backend\nThe Metal backend restricts the floating point arithmetic precision of computations to Float32 or lower. 
In Chmy, this can be achieved by initialising the grid object using Float32 (f0) elements in the origin and extent tuples.","category":"page"},{"location":"getting_started/#Writing-and-Launch-Compute-Kernels","page":"Getting Started with Chmy.jl","title":"Writing & Launch Compute Kernels","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We want to solve the system of equations (2) & (3) numerically. We will use the explicit forward Euler method for temporal discretization and finite-differences for spatial discretization. Accordingly, the kernels for performing the arithmetic operations for each time step can be defined as follows:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n q.x[I] = -χ * ∂x(C, g, I)\n q.y[I] = -χ * ∂y(C, g, I)\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n I = @index(Global, Cartesian)\n I = I + O\n C[I] -= Δt * divg(q, g, I)\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"note: Non-Cartesian indices\nBesides using Cartesian indices, more standard indexing works as well, using NTuple. For example, update_C! 
will become:@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n ix, iy = @index(Global, NTuple)\n (ix, iy) = (ix, iy) + O\n C[ix, iy] -= Δt * divg(q, g, ix, iy)\nendwhere the dimensions could be abstracted by splatting the returned index (I...):@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)\n I = @index(Global, NTuple)\n I = I + O\n C[I...] -= Δt * divg(q, g, I...)\nend","category":"page"},{"location":"getting_started/#Model-Setup","page":"Getting Started with Chmy.jl","title":"Model Setup","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"The diffusion model that we solve comprises the following model setup:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# geometry\ngrid = UniformGrid(arch; origin=(-1, -1), extent=(2, 2), dims=(126, 126))\nlaunch = Launcher(arch, grid)\n\n# physics\nχ = 1.0\n\n# numerics\nΔt = minimum(spacing(grid))^2 / χ / ndims(grid) / 2.1","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"In the 2D problem, only three physical fields evolve with time: the field C and the components of the diffusion flux q in the x- and y-dimension. We define these fields at different locations on the staggered grid (see Grids for more).","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# allocate fields\nC = Field(backend, grid, Center())\nq = VectorField(backend, grid)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We randomly initialize the entries of the C field to finish the initial model setup. 
One can refer to the section Fields for setting up more complex initial conditions.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# initial conditions\nset!(C, grid, (_, _) -> rand())\nbc!(arch, grid, C => Neumann(); exchange=C)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"You should get a result like in the following plot.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"fig = Figure()\nax = Axis(fig[1, 1];\n aspect = DataAspect(),\n xlabel = \"x\", ylabel = \"y\",\n title = \"it = 0\")\nplt = heatmap!(ax, centers(grid)..., interior(C) |> Array;\n colormap = :turbo)\nColorbar(fig[1, 2], plt)\ndisplay(fig)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"
\n \n
","category":"page"},{"location":"getting_started/#Solving-Time-dependent-Problem","page":"Getting Started with Chmy.jl","title":"Solving Time-dependent Problem","text":"","category":"section"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"We are solving a time-dependent problem, so we explicitly advance our solution within a time loop, specifying the number of iterations (or time steps) we desire to perform. The action that takes place within the time loop is the variable update performed by the compute kernels compute_q! and update_C!, accompanied by imposing the Neumann boundary condition on the C field.","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"# action\nnt = 100\nfor it in 1:nt\n @printf(\"it = %d/%d \\n\", it, nt)\n launch(arch, grid, compute_q! => (q, C, χ, grid))\n launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))\nend","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"After running the simulation, you should see something like this; here the final result at it = 100 for the field C is plotted:","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"fig = Figure()\nax = Axis(fig[1, 1];\n aspect = DataAspect(),\n xlabel = \"x\", ylabel = \"y\",\n title = \"it = 100\")\nplt = heatmap!(ax, centers(grid)..., interior(C) |> Array;\n colormap = :turbo)\nColorbar(fig[1, 2], plt)\ndisplay(fig)","category":"page"},{"location":"getting_started/","page":"Getting Started with Chmy.jl","title":"Getting Started with Chmy.jl","text":"
\n \n
","category":"page"},{"location":"concepts/fields/#Fields","page":"Fields","title":"Fields","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"With a given grid that allows us to define each point uniquely in a high-dimensional space, we abstract the data values to be defined on the grid under the concept AbstractField. Following is the type tree of the abstract field and its derived data types.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/#Defining-a-multi-dimensional-Field","page":"Fields","title":"Defining a multi-dimensional Field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Consider the following example, where we define a variable grid of type Chmy.UniformGrid, as in the previous section Grids. We can now define physical properties on the grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"When defining a scalar field Field on the grid, we need to specify the arrangement of the field values. 
These values can either be stored at the cell centers of each control volume Center() or on the cell vertices/faces Vertex().","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define geometry, architecture..., a 2D grid\ngrid = UniformGrid(arch; origin=(-lx/2, -ly/2), extent=(lx, ly), dims=(nx, ny))\n\n# Define pressure as a scalar field\nPr = Field(backend, grid, Center())","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"With the methods VectorField and TensorField, we can construct 2-dimensional and 3-dimensional fields, with predefined locations for each field dimension on a staggered grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define velocity as a vector field on the 2D grid\nV = VectorField(backend, grid)\n\n# Define stress as a tensor field on the 2D grid\nτ = TensorField(backend, grid)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Use the function location to get the location of the field as a tuple. Vector and tensor fields are currently defined as NamedTuples (likely to change in the future), so one could query the locations of individual components, e.g. location(V.x) or location(τ.xy).","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"tip: Acquiring Locations on the Grid Cell\nOne can use a convenient getter for obtaining the location of a variable on the staggered grid. 
Examples are Chmy.location(Pr) for the scalar-valued pressure field and Chmy.location(τ.xx) for a component of a tensor field.","category":"page"},{"location":"concepts/fields/#Initialising-Field","page":"Fields","title":"Initialising Field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Chmy.jl provides functionality to set the values of the fields as a function of spatial coordinates:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"C = Field(backend, grid, Center())\n\n# Set initial values of the field randomly\nset!(C, grid, (_, _) -> rand())\n\n# Set initial values to 2D Gaussian\nset!(C, grid, (x, y) -> exp(-x^2 - y^2))","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A slightly more complex usage involves passing extra parameters to be used for the initial conditions setup.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Define a temperature field with values on cell centers\nT = Field(backend, grid, Center())\n\n# Function for setting up the initial conditions on T\ninit_incl(x, y, x0, y0, r, in, out) = ifelse((x - x0)^2 + (y - y0)^2 < r^2, in, out)\n\n# Set up the initial conditions with parameters specified\nset!(T, grid, init_incl; parameters=(x0=0.0, y0=0.0, r=0.1lx, in=T0, out=Ta))","category":"page"},{"location":"concepts/fields/#Defining-a-parameterized-FunctionField","page":"Fields","title":"Defining a parameterized FunctionField","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A field could also be represented in a parameterized way, having a function that associates a single number with every point in the 
space.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"An object of the concrete type FunctionField can be initialized with its constructor. The constructor takes in","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"A function func\nA grid\nA location tuple loc for specifying the distribution of variables","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Optionally, one can also use the boolean variable discrete to indicate if the function field is typed Discrete or Continuous. Any additional parameters to be used in the function func can be passed to the optional parameter parameters.","category":"page"},{"location":"concepts/fields/#Example:-Creation-of-a-parameterized-function-field","page":"Fields","title":"Example: Creation of a parameterized function field","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"In the following, we create a two-dimensional gravity variable that comprises two parameterized FunctionField objects on a predefined uniform grid grid.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"1. Define Functions that Parameterize the Field","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"In this step, we specify how the gravity field should be parameterized in the x- and y-direction, with η as the additional parameter used in the parameterization.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# forcing terms\nρgx(x, y, η) = -0.5 * η * π^2 * sin(0.5π * x) * cos(0.5π * y)\nρgy(x, y, η) = 0.5 * η * π^2 * cos(0.5π * x) * sin(0.5π * y)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"2. 
Define Locations for Variable Positioning","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"We specify the location on the fully-staggered grid as introduced in the Location on a Grid Cell section of the concept Grids.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"vx_node = (Vertex(), Center())\nvy_node = (Center(), Vertex())","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"3. Define the 2D Gravity Field","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"By specifying the locations on which the parameterized field should be calculated, as well as concretizing the value η = η0 by passing it as the optional parameter parameters to the constructor, we can define the 2D gravity field:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"η0 = 1.0\ngravity = (x=FunctionField(ρgx, grid, vx_node; parameters=η0),\n y=FunctionField(ρgy, grid, vy_node; parameters=η0))","category":"page"},{"location":"concepts/fields/#Defining-Constant-Fields","page":"Fields","title":"Defining Constant Fields","text":"","category":"section"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"For completeness, we also provide an abstract type ConstantField, which comprises a generic ValueField type and two special types, ZeroField and OneField, allowing dispatch for special cases. With such a construct, we can easily define fields for properties and other parameters using constant values in a straightforward and readable manner. Moreover, explicit information about the grid on which the field should be defined can be omitted. 
For example:","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Defines a field with constant value 1.0\nfield = Chmy.ValueField(1.0)","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Alternatively, we could also use the OneField type, providing type information about the contents of the field.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"# Defines a field with constant value 1.0\nonefield = Chmy.OneField{Float64}()","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"Notably, these two fields compare equal, as expected.","category":"page"},{"location":"concepts/fields/","page":"Fields","title":"Fields","text":"julia> field == onefield\ntrue","category":"page"},{"location":"developer_documentation/running_tests/#Running-Tests","page":"Running Tests","title":"Running Tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/#CPU-tests","page":"Running Tests","title":"CPU tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"To run the Chmy test suite on the CPU, simply run test from within the package mode or using Pkg:","category":"page"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\")","category":"page"},{"location":"developer_documentation/running_tests/#GPU-tests","page":"Running Tests","title":"GPU tests","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"To run the Chmy test suite on the CUDA, ROC or Metal backend (Nvidia, AMD or Apple GPUs), respectively, run the tests using Pkg, adding the following 
test_args:","category":"page"},{"location":"developer_documentation/running_tests/#For-CUDA-backend-(Nvidia-GPUs):","page":"Running Tests","title":"For CUDA backend (Nvidia GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=CUDA\"])","category":"page"},{"location":"developer_documentation/running_tests/#For-ROC-backend-(AMD-GPUs):","page":"Running Tests","title":"For ROC backend (AMD GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=AMDGPU\"])","category":"page"},{"location":"developer_documentation/running_tests/#For-Metal-backend-(Apple-GPUs):","page":"Running Tests","title":"For Metal backend (Apple GPUs):","text":"","category":"section"},{"location":"developer_documentation/running_tests/","page":"Running Tests","title":"Running Tests","text":"julia> using Pkg\n\njulia> Pkg.test(\"Chmy\"; test_args=[\"--backend=Metal\"])","category":"page"},{"location":"#Chmy.jl","page":"Home","title":"Chmy.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Chmy.jl (pronounced tsh-mee) is a backend-agnostic toolkit for finite difference computations on multi-dimensional computational staggered grids. 
Chmy.jl features task-based distributed memory parallelisation capabilities.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To install Chmy.jl, simply add it using the Julia package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Pkg\n\njulia> Pkg.add(\"Chmy\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"After the package is installed, one can load it with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Chmy","category":"page"},{"location":"","page":"Home","title":"Home","text":"info: Install from a Specific Branch\nFor developers and advanced users, it may be useful to install Chmy.jl from a specific branch by specifying the URL and revision. In the following code snippet, we explicitly request the current implementation available on the main branch:julia> using Pkg; Pkg.add(url=\"https://github.com/PTsolvers/Chmy.jl\", rev=\"main\")","category":"page"},{"location":"#Feature-Summary","page":"Home","title":"Feature Summary","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Chmy.jl provides a comprehensive framework for handling complex computational tasks on structured grids, leveraging both single and multi-device architectures. 
It seamlessly integrates with Julia's powerful parallel and concurrent programming capabilities, making it suitable for a wide range of scientific and engineering applications.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The main features are:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Backend-agnostic capabilities leveraging KernelAbstractions.jl\nDistributed computing support with MPI.jl\nMulti-dimensional, parametrisable discrete and continuous fields on structured grids\nHigh-level interface for specifying boundary conditions with automatic batching for performance\nFinite difference and interpolation operators on discrete fields\nExtensibility: the package is written in pure Julia, so adding new functions, simplification rules, and model transformations has no barrier","category":"page"},{"location":"#Funding","page":"Home","title":"Funding","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The development of this package is supported by the GPU4GEO PASC project. More information about the GPU4GEO project can be found on the GPU4GEO website.","category":"page"},{"location":"concepts/grids/#Grids","page":"Grids","title":"Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"The choice of numerical grid depends on the type of equations to be solved and affects the discretization schemes used. The Chmy.Grids module aims to provide a robust yet flexible user API for customizing the numerical grids used for spatial discretization.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We currently support grids with quadrilateral cells. 
An N-dimensional numerical grid contains N spatial dimensions, each represented by an axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Grid Properties Description Tunable Parameters\nDimensions The grid can be N-dimensional by having N axes. AbstractAxis\nDistribution of Nodal Points The grid can be regular (uniform distribution) or non-regular (irregular distribution). UniformAxis, FunctionAxis\nDistribution of Variables The grid can be non-staggered (collocated) or staggered, affecting how variables are positioned within the grid. Center, Vertex","category":"page"},{"location":"concepts/grids/#Axis","page":"Grids","title":"Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Objects of type AbstractAxis are the building blocks of numerical grids. We can define either equidistant axes with UniformAxis or parameterized axes with FunctionAxis.","category":"page"},{"location":"concepts/grids/#Uniform-Axis","page":"Grids","title":"Uniform Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"To define a uniform axis, we need to provide:","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Origin: The starting point of the axis.\nExtent: The length of the axis section considered.\nCell Length: The length of each cell along the axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"With the information above, an axis can be defined and incorporated into a spatial dimension. 
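For example, a uniform axis spanning the interval from 0 to 1 discretised into 10 cells could be constructed as follows (a minimal sketch; the exact UniformAxis constructor signature should be checked against the API reference):","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"# hypothetical sketch: origin 0.0, extent 1.0, 10 cells\nx_axis = UniformAxis(0.0, 1.0, 10)","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"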
The spacing (with alias Δ) and inv_spacing (with alias iΔ) functions allow convenient access to the grid spacing (Δx/Δy/Δz) and its reciprocal, respectively.","category":"page"},{"location":"concepts/grids/#Function-Axis","page":"Grids","title":"Function Axis","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"As an alternative, one could also define a FunctionAxis object using a function that parameterizes the spacing of the axis, together with the length of the axis.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"f = i -> ((i - 1) / 4)^1.5\nn = 4  # axis length; named n to avoid shadowing Base.length\nparameterized_axis = FunctionAxis(f, n)","category":"page"},{"location":"concepts/grids/#Structured-Grids","page":"Grids","title":"Structured Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"A common mesh structure used for spatial discretization in the finite difference approach is the structured grid (concrete type StructuredGrid or its alias SG).","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We provide a function UniformGrid for creating an equidistant StructuredGrid, which essentially boils down to having axes of type UniformAxis in each spatial dimension.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"# with architecture as well as numerics lx/y/z and nx/y/z defined\ngrid = UniformGrid(arch;\n origin=(-lx/2, -ly/2, -lz/2),\n extent=(lx, ly, lz),\n dims=(nx, ny, nz))","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"warning: Metal backend\nIf using the Metal backend, make sure to use Float32 (f0) element types in the origin and extent tuples when initialising the grid.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"info: Interactive Grid Visualization\ngrids_2d.jl: Visualization of a 2D 
StructuredGrid\ngrids_3d.jl: Visualization of a 3D StructuredGrid","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/#Location-on-a-Grid-Cell","page":"Grids","title":"Location on a Grid Cell","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"In order to allow full control over the distribution of different variables on the grid, we provide a high-level abstraction of the property location on a grid cell with the abstract type Location. More concretely, a property location along a spatial dimension can be either of concrete type Center or Vertex on a structured grid.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"We illustrate how to specify the location within a grid cell on a fully staggered uniform grid. The following 2D example also illustrates ghost nodes, which are located immediately outside the domain boundary.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"In the following example, we zoom into a specific cell on a fully-staggered grid. 
By specifying for both the x- and y-dimensions whether the node is located at the Center (C) or Vertex (V) along the respective axis, we arrive at 4 categories of nodes on a 2D quadrilateral cell, which we refer to as \"basic\", \"pressure\", \"Vx\" and \"Vy\" nodes, following common practice.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"If all variables are defined on basic nodes, specified by (V,V) locations, we have the simplest non-staggered collocated grid.","category":"page"},{"location":"concepts/grids/#Dimensions-of-Fields-on-Structured-Grids","page":"Grids","title":"Dimensions of Fields on Structured Grids","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"For a structured grid consisting of nx = N cells horizontally and ny = M cells vertically, fields associated with the grid have the following dimensions.","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Node Type Field Dimension Location\nCell vertex (N + 1) times (M + 1) (V, V)\nX interface (N + 1) times M (V, C)\nY interface N times (M + 1) (C, V)\nCell Center N times M (C, C)","category":"page"},{"location":"concepts/grids/#Connectivity-of-a-StructuredGrid","page":"Grids","title":"Connectivity of a StructuredGrid","text":"","category":"section"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"Using the method connectivity(::SG{N,T,C}, ::Dim{D}, ::Side{S}), one can obtain the connectivity underlying a structured grid. If no special grid topology is provided, a default Bounded grid topology is used for the UniformGrid. 
Therefore, on a default UniformGrid, the following assertions hold:","category":"page"},{"location":"concepts/grids/","page":"Grids","title":"Grids","text":"julia> @assert connectivity(grid, Dim(1), Side(1)) isa Bounded \"Left boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(1), Side(2)) isa Bounded \"Right boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(2), Side(1)) isa Bounded \"Bottom boundary is bounded\"\n\njulia> @assert connectivity(grid, Dim(2), Side(2)) isa Bounded \"Top boundary is bounded\"","category":"page"}] }