Made sample type agnostic and GPU compatible #93

Open

tipfom wants to merge 1 commit into main

Conversation

@tipfom commented Nov 13, 2024

Fixed the issue with `sample` mentioned in #92.

Closes #92.

```diff
@@ -655,6 +655,8 @@ function sample(rng::AbstractRNG, m::MPS)
     error("sample: MPS is not normalized, norm=$(norm(m[1]))")
   end
 
+  ElT = promote_itensor_eltype(m)
```
A Member left a review comment on this line:

I've been preferring `scalartype(m)` for this these days; we will probably remove `promote_itensor_eltype`.
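For context, here is a minimal sketch (hypothetical helper name, not the actual patch) of the type-agnostic pattern being discussed, where the element type used for sampling is derived from the MPS itself rather than hard-coded as Float64:

```julia
using ITensors, ITensorMPS

# Hypothetical helper, not part of this PR: choose a real element type for
# sampling probabilities from the MPS storage itself, so Float32, ComplexF64,
# and GPU-backed tensors all keep their precision.
function sample_probability_eltype(m::MPS)
    ElT = scalartype(m)                 # preferred accessor per the review comment;
                                        # qualify as ITensors.scalartype if not exported
    # ElT = promote_itensor_eltype(m)   # equivalent accessor used in this PR's diff
    return real(ElT)                    # probabilities are real even for complex MPS
end
```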

@mtfishman changed the title from "Made sample type agnostic and CUDA compatible" to "Made sample type agnostic and GPU compatible" on Nov 13, 2024.
@mtfishman (Member) commented:

Thanks for the quick PR! This is subtle to test right now; we would have to set up the tests in this repository to run on GPUs. I guess we'll address that in future PRs.

@kmp5VT commented Nov 13, 2024

@mtfishman I can look around other code in ITensorMPS to see if there is more implicit Float64 typing and open another PR if there is. Thank you @tipfom for your PR!
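As an illustration, a hypothetical example (not code from ITensorMPS) of the kind of implicit Float64 typing worth searching for, alongside a type-agnostic rewrite:

```julia
using ITensors, ITensorMPS
using LinearAlgebra: norm

# Hypothetical example of implicit Float64 typing: the 0.0 literal forces the
# accumulator to Float64 even when the MPS tensors store Float32 or GPU data.
function total_norm_sq_implicit(m::MPS)
    total = 0.0
    for j in 1:length(m)
        total += norm(m[j])^2
    end
    return total
end

# Type-agnostic variant: seed the accumulator from the MPS element type
# (promote_itensor_eltype is the accessor used in this PR's diff).
function total_norm_sq_generic(m::MPS)
    total = zero(real(promote_itensor_eltype(m)))
    for j in 1:length(m)
        total += norm(m[j])^2
    end
    return total
end
```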

@tipfom (Author) commented Nov 13, 2024

Let me know if you want some help with that, @kmp5VT. I'd be willing to spend some time on it.

@mtfishman (Member) commented:

Thanks @kmp5VT. Also, it would be great to start switching some tests over to running on GPU, looping over device backends the same way we do in the NDTensors/ITensors tests. For now they could just test using JLArrays, but ultimately we should also set things up with Jenkins to run the tests on GPUs.
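A rough sketch of what that device loop could look like, assuming `Adapt.adapt` can move an ITensor's storage to `JLArray` the way the NDTensors/ITensors GPU extensions do for `CuArray` (the helper and test names here are illustrative, not the repository's actual test code):

```julia
using Test
using Adapt: adapt
using ITensors, ITensorMPS
using JLArrays: JLArray   # CPU-backed stand-in for a GPU array type

# Assumed helper: adapt each tensor's storage to the target array type.
to_device(m::MPS, arraytype) = MPS([adapt(arraytype, m[j]) for j in 1:length(m)])

devices = ("Array" => identity, "JLArray" => (m -> to_device(m, JLArray)))

@testset "sample with $label storage" for (label, dev) in devices
    s = siteinds("S=1/2", 6)
    m = dev(random_mps(ComplexF32, s; linkdims=4))  # random_mps returns a normalized MPS
    orthogonalize!(m, 1)   # sample expects the orthogonality center at site 1
    config = sample(m)
    @test length(config) == length(s)
    @test all(x -> x in (1, 2), config)   # S=1/2 outcomes are 1 or 2
end
```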

Development

Successfully merging this pull request may close these issues.

[BUG] sample in mps.jl broken when using CUDA
3 participants