Consider the following, adapted from 01-vector-add.py to use NumPy instead of Torch. On GPU, Triton depends on Torch for a number of reasons that would be hard to replace (e.g. interfacing with CUDA from Python), but on CPU, Torch is a relatively heavy dependency just for creating tensors, and NumPy is strictly smaller (since Torch itself depends on NumPy).
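A minimal sketch of such an adaptation (the kernel body follows the tutorial; the autotune configs, sizes, and the assumption that the CPU backend accepts NumPy arrays as kernel arguments are illustrative rather than the exact script):

import numpy as np
import triton
import triton.language as tl


# Autotuning over block sizes; constructing the Autotuner at decoration time
# is what triggers driver initialization (and hence the torch import) in the
# traceback below.
@triton.autotune(
    configs=[triton.Config({'BLOCK_SIZE': 1024}), triton.Config({'BLOCK_SIZE': 4096})],
    key=['n_elements'],
)
@triton.jit
def add_kernel(x_ptr, y_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(output_ptr + offsets, x + y, mask=mask)


def add(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # NumPy arrays instead of Torch tensors as kernel arguments (assumes the
    # CPU backend can take them directly).
    output = np.empty_like(x)
    n_elements = output.size
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, output, n_elements)
    return output


x = np.random.rand(98432).astype(np.float32)
y = np.random.rand(98432).astype(np.float32)
print(np.max(np.abs(add(x, y) - (x + y))))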
Currently this fails when Torch is not installed, with the following traceback:
Traceback (most recent call last):
File "...", line 19, in <module>
@triton.autotune(
^^^^^^^^^^^^^^^^
File ".../triton-cpu/python/triton/runtime/autotuner.py", line 361, in decorator
return Autotuner(fn, fn.arg_names, configs, key, reset_to_zero, restore_value, pre_hook=pre_hook,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../triton-cpu/python/triton/runtime/autotuner.py", line 127, in __init__
self.do_bench = driver.active.get_benchmarker()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../triton-cpu/python/triton/runtime/driver.py", line 33, in __getattr__
self._initialize_obj()
File ".../triton-cpu/python/triton/runtime/driver.py", line 30, in _initialize_obj
self._obj = self._init_fn()
^^^^^^^^^^^^^^^
File ".../triton-cpu/python/triton/runtime/driver.py", line 13, in _create_driver
actives = [x.driver for x in backends.values() if x.driver.is_active()]
^^^^^^^^^^^^^^^^^^^^
File ".../triton-cpu/python/triton/backends/amd/driver.py", line 495, in is_active
import torch
ModuleNotFoundError: No module named 'torch'
All tutorials use Torch as a reference for both functionality and performance. We want to compare Triton's performance with native Torch performance, not NumPy's. So it's not just about making tensors; it's about providing performance reference numbers. Also, it's preferable to be able to run any tutorial on any device.
Sorry about the confusion. This issue just uses the tutorial as an illustration of the runtime dependency on Torch, and the associated PR was not suggesting changing the tutorials but removing this dependency on Torch. I agree that this was not obvious from reading the issue alone.
I see. In that case, it would be better to open an issue for each particular case where a dependency on Torch seems unreasonable. Please note that any related changes outside of the CPU backend (third_party/cpu) should go through the upstream repo.
Environment details
triton-cpu: daa7eb0