Applying gates to a product `CuMPS` fails when setting the `maxdim` optional argument. Here is an example based on the gate evolution demo (replacing X by Y gates and setting `maxdim` in `apply()`):
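(The snippet itself did not survive this page; below is an illustrative reconstruction that assumes the standard ITensors `apply` interface and ITensorsGPU's `cu` conversion. The site type, gate names, and sizes are placeholders, not the original demo code.)

```julia
using ITensors, ITensorsGPU

N = 10
s = siteinds("Qubit", N)

# Real-valued product state, moved to the GPU.
psi = cu(productMPS(s, "0"))

# Y gates are complex-valued (unlike the demo's X gates); the entangling
# gates are what make `maxdim` actually truncate bonds.
gates = ITensor[]
for n in 1:(N - 1)
    push!(gates, op("Y", s, n))
    push!(gates, op("CNOT", s, n, n + 1))
end

psi = apply(gates, psi; maxdim=8, cutoff=1e-10)
```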
This returns the following error:

As I understand, the error results from a method of `_contract!` that expects two tensors of the same element type, while it is given a `Float64` and a `ComplexF64` tensor instead.
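The mismatch itself is unsurprising: Pauli Y has purely imaginary entries, so the gate tensors come out complex while the initial product state is real. A quick REPL illustration (not output from the issue):

```julia
julia> [0 -im; im 0]          # Pauli Y: purely imaginary entries
2×2 Matrix{Complex{Int64}}:
 0+0im  0-1im
 0+1im  0+0im

julia> promote_type(Float64, ComplexF64)   # the common type a mixed contraction must target
ComplexF64
```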
I tried the following fix to the problematic method (allowing the tensors to have different element types):

```diff
--- a/src/tensor/cudense.jl
+++ b/src/tensor/cudense.jl
@@ -235,11 +235,11 @@ function _gemm_contract!(CT::DenseTensor{El,NC},
   return C
 end
 
-function _contract!(CT::CuDenseTensor{El,NC},
-                    AT::CuDenseTensor{El,NA},
-                    BT::CuDenseTensor{El,NB},
+function _contract!(CT::CuDenseTensor{El1,NC},
+                    AT::CuDenseTensor{El2,NA},
+                    BT::CuDenseTensor{El3,NB},
                     props::ContractionProperties,
-                    α::Number=one(El),β::Number=zero(El)) where {El,NC,NA,NB}
+                    α::Number=one(El2),β::Number=zero(El3)) where {El1,El2,El3,NC,NA,NB}
   if ndims(CT) > 12 || ndims(BT) > 12 || ndims(AT) > 12
     return _gemm_contract!(CT, AT, BT, props, α, β)
   end
```
This does suppress the error; however, after trying out the resulting code on larger and more complicated problem instances, the GPU code now clearly runs much slower than the CPU one. What have I done wrong?
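An illustrative way to quantify the slowdown (hypothetical code, not measurements from this issue), with `CUDA.@sync` so the GPU timing waits for kernel completion:

```julia
using BenchmarkTools, CUDA

# Hypothetical timing comparison: the same gates applied to a CPU state and a
# GPU state built as in the example above. CUDA.@sync ensures the kernels
# finish before the timer stops, so the GPU number is meaningful.
psi_cpu = productMPS(s, "0")
psi_gpu = cu(productMPS(s, "0"))
@btime apply($gates, $psi_cpu; maxdim=8, cutoff=1e-10);
@btime CUDA.@sync apply($gates, $psi_gpu; maxdim=8, cutoff=1e-10);
```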
Now, the `ITensorsGPU.jl` module may not have been designed for manipulating tensors with bond truncation, so perhaps I'm mistaken all along and there is no simple solution!
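For completeness, one possible way to avoid the mixed-type code path altogether, rather than relaxing the method signature, would be to build the state with a complex element type from the start, so that every contraction sees matching element types. A hypothetical sketch, assuming `productMPS` accepts a leading element type:

```julia
# Hypothetical alternative (untested): start from a complex-valued state so
# the original same-eltype `_contract!` method handles every contraction.
psi = cu(productMPS(ComplexF64, s, "0"))
```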