From f03fc022fb181ab290381bd58797316926e19d49 Mon Sep 17 00:00:00 2001
From: Carlo Lucibello
Date: Sun, 16 Jul 2023 23:51:04 +0200
Subject: [PATCH 1/3] docs for gpu_backend!

---
 docs/src/gpu.md             | 2 +-
 docs/src/models/functors.md | 1 +
 src/functor.jl              | 7 +++++++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/src/gpu.md b/docs/src/gpu.md
index a2acdc32ac..b37dbec21a 100644
--- a/docs/src/gpu.md
+++ b/docs/src/gpu.md
@@ -46,7 +46,7 @@ Flux relies on [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl
 
 There are two ways you can specify it:
 
-- From the REPL/code in your project, call `Flux.gpu_backend!("AMD")` and restart (if needed) Julia session for the changes to take effect.
+- From the REPL/code in your project, call [`Flux.gpu_backend!`](@ref)`("AMD")` and restart (if needed) the Julia session for the changes to take effect.
 - In `LocalPreferences.toml` file in you project directory specify:
   ```toml
   [Flux]
diff --git a/docs/src/models/functors.md b/docs/src/models/functors.md
index ab0883c95e..908f18d09b 100644
--- a/docs/src/models/functors.md
+++ b/docs/src/models/functors.md
@@ -21,6 +21,7 @@ Functors.fmapstructure
 Flux provides some convenience functions based on `fmap`. Some ([`f16`](@ref Flux.f16), [`f32`](@ref Flux.f32), [`f64`](@ref Flux.f64)) change the precision of all arrays in a model. Others are used for moving a model to of from GPU memory:
 
 ```@docs
+Flux.gpu_backend!
 cpu
 gpu(::Any)
 gpu(::Flux.DataLoader)
diff --git a/src/functor.jl b/src/functor.jl
index 7e4d552753..173c4db9c2 100644
--- a/src/functor.jl
+++ b/src/functor.jl
@@ -190,6 +190,13 @@ _isleaf(x) = _isbitsarray(x) || Functors.isleaf(x)
 const GPU_BACKENDS = ("CUDA", "AMD", "Metal")
 const GPU_BACKEND = @load_preference("gpu_backend", "CUDA")
 
+"""
+    gpu_backend!(backend::String)
+
+Sets the gpu backend in `LocalPreferences.toml`.
+Possible `backend` values are "CUDA", "AMD", and "Metal".
+The selected backend affects the data movement through [`Flux.gpu`](@ref).
+"""
 function gpu_backend!(backend::String)
     if backend == GPU_BACKEND
         @info """

From 7462d4a8ef80d8c5415889760fd799bde703c445 Mon Sep 17 00:00:00 2001
From: Carlo Lucibello
Date: Sun, 16 Jul 2023 23:54:32 +0200
Subject: [PATCH 2/3] update

---
 src/functor.jl | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/functor.jl b/src/functor.jl
index 173c4db9c2..d4d310a508 100644
--- a/src/functor.jl
+++ b/src/functor.jl
@@ -195,7 +195,8 @@ const GPU_BACKEND = @load_preference("gpu_backend", "CUDA")
 
 Sets the gpu backend in `LocalPreferences.toml`.
 Possible `backend` values are "CUDA", "AMD", and "Metal".
-The selected backend affects the data movement through [`Flux.gpu`](@ref).
+The selected backend affects the data movement in [`Flux.gpu`](@ref).
+The current backend is stored in `Flux.GPU_BACKEND`.
 """
 function gpu_backend!(backend::String)
     if backend == GPU_BACKEND

From 7f6e9d30804f0fa2c3b695a08315ec2246ca7315 Mon Sep 17 00:00:00 2001
From: Carlo Lucibello
Date: Mon, 17 Jul 2023 00:03:36 +0200
Subject: [PATCH 3/3] gpu docstring

---
 src/functor.jl | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/functor.jl b/src/functor.jl
index d4d310a508..c3531a187c 100644
--- a/src/functor.jl
+++ b/src/functor.jl
@@ -225,8 +225,10 @@ end
 
 Copies `m` to the current GPU device (using current GPU backend), if one is available.
 If no GPU is available, it does nothing (but prints a warning the first time).
 
-On arrays, this calls CUDA's `cu`, which also changes arrays
-with Float64 elements to Float32 while copying them to the device (same for AMDGPU).
+When the backend is set to "CUDA", calling this on arrays invokes `CUDA.cu`,
+which also changes arrays with Float64 elements to Float32 while copying them to the device.
+Similar conversions happen for the "AMDGPU" and "Metal" backends.
+
 To act on arrays within a struct, the struct type must be marked with [`@functor`](@ref).
 Use [`cpu`](@ref) to copy back to ordinary `Array`s.
@@ -235,6 +237,8 @@ See also [`f32`](@ref) and [`f16`](@ref) to change element type only.
 See the [CUDA.jl docs](https://juliagpu.github.io/CUDA.jl/stable/usage/multigpu/)
 to help identify the current device.
 
+See [`Flux.gpu_backend!`](@ref) for setting the backend.
+
 # Example
 ```julia-repl
 julia> m = Dense(rand(2, 3))  # constructed with Float64 weight matrix
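
The pieces documented across the three patches fit together as below. This is a hypothetical session sketch, not part of the patch series; it assumes a machine with AMDGPU.jl installed and a working AMD GPU.

```julia
using Flux

# Persist the backend preference (one of "CUDA", "AMD", "Metal")
# to LocalPreferences.toml, as documented by gpu_backend!.
Flux.gpu_backend!("AMD")

# A Julia restart is required before the new preference takes effect;
# the active choice is then visible as Flux.GPU_BACKEND.

m = Dense(rand(2, 3))  # constructed with a Float64 weight matrix
m_gpu = gpu(m)         # moves m to the selected backend, converting Float64 to Float32
```

On a machine with no GPU, the `gpu` docstring above says the call is a no-op apart from a one-time warning, so the same sketch degrades gracefully on CPU-only setups.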