
Transformer model requires more parameters than supported on TPU #638

Open
BradLarson opened this issue Jul 16, 2020 · 2 comments

Comments

@BradLarson (Contributor)

Wojtek Czarnowski has pointed out that, in specific cases, the Transformer model (or components used within it) can trigger a compilation error in X10 on TPU:

2020-07-16 22:51:03.077357: F tensorflow/compiler/xla/xla_client/xla_util.cc:90] Invalid argument: From /job:tpu_worker/replica:0/task:0:
Computation requires more parameters (333) than supported (limit 237).
	 [[{{node XRTCompile}}]]
Current stack trace:
	frame #17: 0x00007f6da8c0ceb2 $__lldb_expr102`partial apply for closure #1 in update(model:using:for:) at <Cell 14>:12:9
	frame #23: 0x00007f6da8c0c268 $__lldb_expr102`update(model=<unavailable>, optimizer=<unavailable>, batch=<unavailable>) at <Cell 14>:4:18
	frame #24: 0x00007f6d5000a483 $__lldb_expr132`closure #1 in  at <Cell 19>:20:31
	frame #25: 0x00007f6da48245b7 libjupyterInstalledPackages.so`time(repeating=1, f=0x00007f6d50009230 $__lldb_expr132`closure #1 () -> () in __lldb_expr_131 at <Cell 19>:4) at timing.swift:15:9 [opt]
	frame #26: 0x00007f6d5000914b $__lldb_expr132`main at <Cell 19>:4:1

He provided a reproducer notebook that can be opened and run in Colab. With a GPU-backed instance the notebook succeeds, but running it on a TPU-backed instance triggers the crash above.
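For intuition on why a Transformer hits this limit: when X10 traces a training step, each live trainable tensor (and any optimizer state) is typically passed to the compiled XLA computation as a separate parameter, so the parameter count grows with the number of tensors, not their sizes. The sketch below is a rough, hypothetical tally — the per-layer tensor breakdown and the Adam-state multiplier are illustrative assumptions, not the exact model from the reproducer notebook.

```python
# Hedged sketch: counting the parameter *tensors* a Transformer training
# step might hand to a compiled XLA computation. All layer counts below
# are illustrative assumptions, not the notebook's actual model.

def transformer_param_tensors(layers: int) -> int:
    # Per encoder layer (assumed): 4 attention projections (weight + bias
    # each) + 2 feed-forward matrices (weight + bias each) + 2 layer norms
    # (scale + offset each) = 16 tensors.
    per_layer = 4 * 2 + 2 * 2 + 2 * 2
    # Token embeddings, positional embeddings, and a final projection
    # (weight + bias), assumed to total 4 extra tensors.
    extras = 4
    return layers * per_layer + extras

# A 12-layer model alone stays under the reported limit of 237 parameters,
# but an optimizer like Adam carries two moment slots per weight tensor,
# roughly tripling what the compiled step receives.
model_tensors = transformer_param_tensors(12)
with_adam_state = model_tensors * 3  # weights + first and second moments
print(model_tensors, with_adam_state)
```

Under these assumptions the bare model contributes 196 tensors, but adding optimizer state pushes the compiled computation well past the 237-parameter limit reported in the crash — consistent with the error appearing during the training update rather than plain inference.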

@texasmichelle (Member)

I ran into this while running WordSeg on TPU as well:

Attempting to fetch value instead of handling error Invalid argument: Computation requires more parameters (5370) than supported (limit 4035).

@brettkoonce (Contributor)

Starting training...
2020-10-05 23:42:13.702416: F tensorflow/compiler/xla/xla_client/xla_util.cc:90] Invalid argument: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
  (0) Invalid argument: Computation requires more parameters (548) than supported (limit 236).
         [[{{node XRTCompile}}]]
  (1) Invalid argument: Computation requires more parameters (548) than supported (limit 236).
         [[{{node XRTCompile}}]]
         [[XRTCompile_G20]]
0 successful operations.
0 derived errors ignored.
Aborted (core dumped)

I can reproduce this by using TensorFlow 2.3.1 as the base for the TPU. Moving to a nightly build makes the crash go away:

[Screenshot: Screen Shot 2020-10-05 at 6.54.01 PM]

I would suggest that bumping the base image Colab is using should fix this.

See also pytorch/xla#1963.
