Cleanup for v0.14 release #2283

Merged (11 commits) on Jul 12, 2023
14 changes: 13 additions & 1 deletion docs/src/gpu.md
@@ -1,11 +1,23 @@
# GPU Support

NVIDIA GPU support should work out of the box on systems with CUDA and CUDNN installed. For more details see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) readme.
Flux doesn't force a specific GPU backend, or the corresponding package dependencies, on its users.
Thanks to the [package extension mechanism](
https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)),
Flux conditionally loads GPU-specific code once a GPU package is made available (e.g. through `using CUDA`).
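
For instance, one can verify that the extension code was actually loaded. The following is a sketch assuming Julia ≥ 1.9; the extension module name `FluxCUDAExt` used here is an assumption, not something guaranteed by the docs above.

```julia
using Flux          # no GPU-specific code is loaded yet
using CUDA, cuDNN   # triggers Flux's CUDA package extension

# Check whether the extension module was loaded (extension name assumed to be FluxCUDAExt):
ext = Base.get_extension(Flux, :FluxCUDAExt)
println(ext === nothing ? "CUDA extension not loaded" : "CUDA extension active")
```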

NVIDIA GPU support requires the packages `CUDA.jl` and `cuDNN.jl` to be installed in the environment. In the Julia REPL, type `] add CUDA, cuDNN` to install them. For more details see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) readme.

AMD GPU support is available since Julia 1.9 on systems with ROCm and MIOpen installed. For more details refer to the [AMDGPU.jl](https://github.com/JuliaGPU/AMDGPU.jl) repository.

Metal GPU acceleration is available on Apple Silicon hardware. For more details refer to the [Metal.jl](https://github.com/JuliaGPU/Metal.jl) repository. Metal support in Flux is experimental and many features are not yet available.

To trigger GPU support in Flux, you need to call `using CUDA`, `using AMDGPU` or `using Metal`
in your code. Note that for CUDA, explicitly loading `cuDNN` is not required, but the package has to be installed in the environment.
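
A minimal sketch of this workflow, assuming `CUDA.jl` and `cuDNN.jl` are already installed in the environment:

```julia
using Flux
using CUDA    # loads Flux's CUDA-specific code; cuDNN only needs to be installed

model = Dense(2 => 3)         # a toy layer
gpu_model = model |> gpu      # parameters move to the GPU when one is functional
x = gpu(rand(Float32, 2, 8))
y = gpu_model(x)              # runs on the GPU; `gpu` is a no-op when no GPU is available
```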


!!! compat "Flux ≤ 0.13"
    Old versions of Flux automatically installed CUDA.jl to provide GPU support. Starting from Flux v0.14, CUDA.jl is no longer a dependency and has to be installed manually.

## Checking GPU Availability

By default, Flux will run checks on your system to see whether it can support GPU functionality. You can check if Flux identified a valid GPU setup by typing the following:
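For example, a minimal check with the CUDA backend might look like this (a sketch; the exact output depends on your hardware and driver setup):

```julia
using CUDA

CUDA.functional()   # true when a working NVIDIA GPU was found
```

A similar check is available for the other backends, e.g. `AMDGPU.functional()` after `using AMDGPU`.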
3 changes: 2 additions & 1 deletion docs/src/index.md
@@ -8,7 +8,8 @@ Flux is a library for machine learning. It comes "batteries-included" with many

### Installation

Download [Julia 1.9](https://julialang.org/downloads/) or later, preferably the current stable release. You can add Flux using Julia's package manager, by typing `] add Flux` in the Julia prompt. This will automatically install several other packages, including [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) for Nvidia GPU support.
Download [Julia 1.9](https://julialang.org/downloads/) or later, preferably the current stable release. You can add Flux using Julia's package manager, by typing `] add Flux` in the Julia prompt.
For NVIDIA GPU support, you will also need to install the `CUDA` and `cuDNN` packages. For AMD GPU support, install the `AMDGPU` package. For acceleration on Apple Silicon, install the `Metal` package.
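
For example, the same installation can be done with Pkg directly (a sketch; add only the GPU packages that match your hardware):

```julia
using Pkg

Pkg.add("Flux")                 # the core library
Pkg.add(["CUDA", "cuDNN"])      # NVIDIA GPU support
# Pkg.add("AMDGPU")             # AMD GPU support
# Pkg.add("Metal")              # Apple Silicon acceleration
```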

### Learning Flux

20 changes: 2 additions & 18 deletions src/deprecations.jl
@@ -1,19 +1,3 @@
# v0.12 deprecations

function ones(dims...)
Base.depwarn("Flux.ones(size...) is deprecated, please use Flux.ones32(size...) or Base.ones(Float32, size...)", :ones, force=true)
Base.ones(Float32, dims...)
end
ones(T::Type, dims...) = Base.ones(T, dims...)

function zeros(dims...)
Base.depwarn("Flux.zeros(size...) is deprecated, please use Flux.zeros32(size...) or Base.zeros(Float32, size...)", :zeros, force=true)
Base.zeros(Float32, dims...)
end
zeros(T::Type, dims...) = Base.zeros(T, dims...)

ones32(::Type, dims...) = throw(ArgumentError("Flux.ones32 is always Float32, use Base.ones to specify the element type"))
zeros32(::Type, dims...) = throw(ArgumentError("Flux.zeros32 is always Float32, use Base.zeros to specify the element type"))

# v0.13 deprecations

@@ -59,7 +43,7 @@ function loadparams!(m, xs)
end

# Channel notation: Changed to match Conv, but very softly deprecated!
# Perhaps change to @deprecate for v0.14, but there is no plan to remove these.
# Perhaps change to @deprecate for v0.15, but there is no plan to remove these.
Dense(in::Integer, out::Integer, σ = identity; kw...) =
Dense(in => out, σ; kw...)
Bilinear(in1::Integer, in2::Integer, out::Integer, σ = identity; kw...) =
@@ -217,7 +201,7 @@ ChainRulesCore.@non_differentiable _greek_ascii_depwarn(::Any...)

# v0.14 deprecations

# Enable these when 0.14 is released, and delete const ClipGrad = Optimise.ClipValue etc:
# Enable these when 0.15 is released, and delete const ClipGrad = Optimise.ClipValue etc:
# Base.@deprecate_binding Optimiser OptimiserChain
# Base.@deprecate_binding ClipValue ClipGrad

6 changes: 1 addition & 5 deletions src/utils.jl
@@ -47,11 +47,7 @@ rng_from_array(::AbstractArray) = default_rng_value()

@non_differentiable rng_from_array(::Any)

if VERSION >= v"1.7"
default_rng_value() = Random.default_rng()
else
default_rng_value() = Random.GLOBAL_RNG
end
default_rng_value() = Random.GLOBAL_RNG

"""
default_rng_value()
4 changes: 1 addition & 3 deletions test/layers/conv.jl
@@ -299,7 +299,5 @@ end

c3 = ConvTranspose((3,), 2=>4, relu)
@test c3(x) isa Array{Float32, 3}
if VERSION >= v"1.8"
@test (@inferred c3(x); true) # fails on 1.6
end
@test (@inferred c3(x); true)
end
12 changes: 2 additions & 10 deletions test/layers/normalisation.jl
@@ -69,11 +69,7 @@ evalwgrad(f, x...) = pullback(f, x...)[1]

# CPU RNGs map onto CPU ok
if isempty(rng_kwargs)
if VERSION >= v"1.7"
@test cpu(m).rng isa Random.TaskLocalRNG
else
@test cpu(m).rng isa Random._GLOBAL_RNG
end
@test cpu(m).rng isa Random.TaskLocalRNG
else
@test cpu(m).rng === only(values(rng_kwargs))
end
@@ -118,11 +114,7 @@ end

# CPU RNGs map onto CPU ok
if isempty(rng_kwargs)
if VERSION >= v"1.7"
@test cpu(m).rng isa Random.TaskLocalRNG
else
@test cpu(m).rng isa Random._GLOBAL_RNG
end
@test cpu(m).rng isa Random.TaskLocalRNG
else
@test cpu(m).rng === only(values(rng_kwargs))
end
15 changes: 4 additions & 11 deletions test/layers/recurrent.jl
@@ -58,17 +58,10 @@ end

@test primal[1] ≈ e

if VERSION < v"1.7"
@test ∇Wi ≈ grads[:Wi]
@test ∇Wh ≈ grads[:Wh]
@test ∇b ≈ grads[:b]
@test_broken ∇state0 ≈ grads[:state0]
else
@test_broken ∇Wi ≈ grads[:Wi]
@test_broken ∇Wh ≈ grads[:Wh]
@test_broken ∇b ≈ grads[:b]
@test_broken ∇state0 ≈ grads[:state0]
end
@test_broken ∇Wi ≈ grads[:Wi]
@test_broken ∇Wh ≈ grads[:Wh]
@test_broken ∇b ≈ grads[:b]
@test_broken ∇state0 ≈ grads[:state0]
end

# Ref FluxML/Flux.jl#1209 1D input