
Error: bool value of Tensor with more than one value is ambiguous #16

Open · sarvesh710 opened this issue Apr 16, 2019 · 5 comments

sarvesh710 commented Apr 16, 2019

File "/home/sarvesh23/pytorch_RVAE/utils/functional.py", line 6, in f_and
return x and y
RuntimeError: bool value of Tensor with more than one value is ambiguous

I am running train_word_embeddings.py. Any hint as to what I am doing wrong?

@sarvesh710 sarvesh710 changed the title bool value of Tensor with more than one value is ambiguous Error: bool value of Tensor with more than one value is ambiguous Apr 16, 2019

davislf2 commented May 2, 2019

Same problem here. Does anyone know why?


SHIVITG commented Oct 17, 2019

preprocessed data was found and loaded
Traceback (most recent call last):
File "train_word_embeddings.py", line 50, in
out = neg_loss( input, target, args.num_sample).mean()
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/ec2-user/SageMaker/pytorch_RVAE/selfModules/neg.py", line 38, in forward
assert parameters_allocation_check(self),
File "/home/ec2-user/SageMaker/pytorch_RVAE/utils/functional.py", line 15, in parameters_allocation_check
return fold(f_and, parameters, True) or not fold(f_or, parameters, False)
File "/home/ec2-user/SageMaker/pytorch_RVAE/utils/functional.py", line 2, in fold
return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
File "/home/ec2-user/SageMaker/pytorch_RVAE/utils/functional.py", line 2, in fold
return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
File "/home/ec2-user/SageMaker/pytorch_RVAE/utils/functional.py", line 6, in f_and
return x and y
RuntimeError: bool value of Tensor with more than one value is ambiguous
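
For what it's worth, the failing check reproduces in isolation outside the project: PyTorch >= 0.4 refuses to truth-test a tensor with more than one element, and Python's `and` calls bool() on its left operand, so f_and fails exactly like this:

```python
import torch

x = torch.ones(3)
y = torch.ones(3)
x and y  # RuntimeError: bool value of Tensor with more than one value is ambiguous
```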

@leehaoyuan

According to the note, if you delete the following lines in selfModules/neg.py, it will work just fine:

assert parameters_allocation_check(self), \
"""
Invalid CUDA options. out_embed and in_embed parameters both should be stored in the same memory
got out_embed.is_cuda = {}, in_embed.is_cuda = {}
""".format(self.out_embed.weight.is_cuda, self.in_embed.weight.is_cuda)


kay312 commented Sep 16, 2020

> According to the note, if you delete the following lines in selfModules/neg.py, it will work just fine:
>
> assert parameters_allocation_check(self), \
> """
> Invalid CUDA options. out_embed and in_embed parameters both should be stored in the same memory
> got out_embed.is_cuda = {}, in_embed.is_cuda = {}
> """.format(self.out_embed.weight.is_cuda, self.in_embed.weight.is_cuda)

Yeah, the code performs the parameters_allocation_check. I deleted those lines and it worked, but I don't know whether it influences the output or not.

@gohjiayi

> According to the note, if you delete the following lines in selfModules/neg.py, it will work just fine:
>
> assert parameters_allocation_check(self), \
> """
> Invalid CUDA options. out_embed and in_embed parameters both should be stored in the same memory
> got out_embed.is_cuda = {}, in_embed.is_cuda = {}
> """.format(self.out_embed.weight.is_cuda, self.in_embed.weight.is_cuda)

Currently working on Python 3.6.9 and facing the same issue. After removing the parameters_allocation_check code (quoted above), I ran into two additional errors; here is how I solved them. (P.S. Line numbers might differ.)

ValueError: 'Object arrays cannot be loaded when allow_pickle=False'
In batch_loader.py, line 221: add the argument allow_pickle=True to np.load, as instructed by the StackOverflow post here.

[self.word_tensor, self.character_tensor] = [np.array([np.load(target, allow_pickle=True) for target in input_type])
                                            for input_type in tensor_files]
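
For context, NumPy 1.16.3 changed np.load's default to allow_pickle=False as a security hardening, so object arrays written by the preprocessing step can no longer be loaded without the flag. A self-contained demonstration of the behavior:

```python
import os
import tempfile

import numpy as np

# Object arrays (arrays holding arbitrary Python objects) need pickling
# to round-trip through .npy files.
path = os.path.join(tempfile.mkdtemp(), 'demo.npy')
np.save(path, np.array([{'word': 1}], dtype=object))

try:
    np.load(path)  # NumPy >= 1.16.3 raises without allow_pickle=True
except ValueError as e:
    print(e)  # Object arrays cannot be loaded when allow_pickle=False

print(np.load(path, allow_pickle=True))  # [{'word': 1}]
```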

IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
In train_word_embeddings.py, line 56: remove the index [0] from out.cpu().data.numpy()[0].

if iteration % 500 == 0:
    out = out.cpu().data.numpy()
    print('iteration = {}, loss = {}'.format(iteration, out))
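
The underlying change here: since PyTorch 0.4, reductions such as .mean() return a 0-dimensional tensor, so its NumPy view is a 0-d array that cannot be indexed with [0]. If you specifically want a Python float for the log line, .item() is the usual alternative to dropping the index:

```python
import torch

loss = torch.tensor([1.0, 2.0, 3.0]).mean()  # 0-dim tensor on PyTorch >= 0.4
print(loss.numpy().ndim)  # 0, so loss.numpy()[0] raises IndexError
print(loss.item())        # 2.0 as a plain Python float
```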

This allowed me to run the code and build a custom word embedding successfully. I am still studying the impact it has on the word embeddings, so please use at your own discretion. Hope this helps the others!
