bugfix in PyCudaHandler merge and split operations #113

Open
wants to merge 1 commit into master
Conversation

Osambezy (Contributor)

The gpudata attribute lookup is already done by pycuda inside the kernel call, so there is no need to do it outside the call in the handler.

Before this change, using a merge layer with pycuda 2016.1 produced the following errors in the forward and backward passes:

  File "test.py", line 214, in main
    trainer.train(network, getter_tr, valid_getter=getter_va)
  File "brainstorm/training/trainer.py", line 99, in train
    self.stepper.run()
  File "brainstorm/training/steppers.py", line 103, in run
    self.net.forward_pass(training_pass=True)
  File "brainstorm/structure/network.py", line 430, in forward_pass
    layer.forward_pass(self.buffer[layer_name], training_pass)
  File "brainstorm/layers/merge_layer.py", line 52, in forward_pass
    buffers.outputs.default)
  File "brainstorm/handlers/pycuda_handler.py", line 323, in merge_tt
    block=block, grid=grid)
  File "/lib/python2.7/site-packages/pycuda/driver.py", line 383, in function_call
    handlers, arg_buf = _build_arg_buf(args)
  File "/lib/python2.7/site-packages/pycuda/driver.py", line 158, in _build_arg_buf
    raise TypeError("invalid type on parameter #%d (0-based)" % i)
TypeError: invalid type on parameter #0 (0-based)


  File "test.py", line 214, in main
    trainer.train(network, getter_tr, valid_getter=getter_va)
  File "brainstorm/training/trainer.py", line 99, in train
    self.stepper.run()
  File "brainstorm/training/steppers.py", line 104, in run
    self.net.backward_pass()
  File "brainstorm/structure/network.py", line 444, in backward_pass
    layer.backward_pass(self.buffer[layer_name])
  File "brainstorm/layers/merge_layer.py", line 59, in backward_pass
    buffers.input_deltas.inputs_2)
  File "brainstorm/handlers/pycuda_handler.py", line 364, in split_add_tt
    block=block, grid=grid)
  File "/lib/python2.7/site-packages/pycuda/driver.py", line 383, in function_call
    handlers, arg_buf = _build_arg_buf(args)
  File "/lib/python2.7/site-packages/pycuda/driver.py", line 158, in _build_arg_buf
    raise TypeError("invalid type on parameter #%d (0-based)" % i)
TypeError: invalid type on parameter #0 (0-based)
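
For illustration, here is a minimal, self-contained sketch of the calling-convention change. The copy kernel, its name, and the launch configuration are placeholders invented for this sketch, not brainstorm's actual merge_tt/split_add_tt code; the only point is where the gpudata lookup happens.

  # Minimal sketch; assumes pycuda and a CUDA device are available.
  import numpy as np
  import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
  import pycuda.gpuarray as gpuarray
  from pycuda.compiler import SourceModule

  mod = SourceModule("""
  __global__ void copy_kernel(const float *src, float *dst, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) dst[i] = src[i];
  }
  """)
  copy_kernel = mod.get_function("copy_kernel")

  src = gpuarray.to_gpu(np.arange(16, dtype=np.float32))
  dst = gpuarray.empty_like(src)
  n = np.int32(src.size)

  # Old style: unwrap the device pointers by hand before the call.  This
  # explicit .gpudata access is what the patch removes; with pycuda 2016.1
  # the old calls in merge_tt/split_add_tt raised the TypeError shown above.
  # copy_kernel(src.gpudata, dst.gpudata, n, block=(16, 1, 1), grid=(1, 1))

  # Fixed style: pass the GPUArray objects themselves; pycuda resolves
  # .gpudata internally while building the kernel argument buffer.
  copy_kernel(src, dst, n, block=(16, 1, 1), grid=(1, 1))

  assert np.allclose(dst.get(), src.get())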
