Getting a big whopper of an error trying to apply the optimizations shown in the directions for CogVideoX
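For reference, the "optimizations shown in the directions" amount to roughly the following (a minimal sketch assuming the torchao int8 weight-only quantization + `torch.compile` recipe from the diffusers CogVideoX docs; the model id and compile options here are illustrative, not the exact snippet from the directions):

```python
import torch
from diffusers import CogVideoXPipeline
from torchao.quantization import quantize_, int8_weight_only

# Load the pipeline in bf16 (model id assumed; substitute whichever checkpoint you use)
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# Quantize the transformer's linear weights to int8 with torchao
quantize_(pipe.transformer, int8_weight_only())

# Compile the transformer; with torch 2.3.1 + torchao 0.5 this is where
# dynamo tracing fails with the FakeTensor 'layout_type' error below
pipe.transformer = torch.compile(pipe.transformer, fullgraph=True)
```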
```
---------------------------------------------------------------------------
TorchRuntimeError                         Traceback (most recent call last)
Cell In[2], line 7
      4 num_frames = 49 #@param {type:"integer"}
      5 prompt = "A golden retriever riding a motorcycle" #@param {type:"string", multiline: True, placeholder: "A golden retriever riding a motorcycle"}
----> 7 video_output = result_model["pipeline"](
      8     prompt=prompt,
      9     guidance_scale=guidance_scale,
     10     num_inference_steps=num_inference_steps,
     11     num_frames=num_frames
     12 ).frames
     14 result_video_output = {"video": video_output}

File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115     return func(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/diffusers/pipelines/cogvideo/pipeline_cogvideox.py:687, in CogVideoXPipeline.__call__(...)
--> 687     noise_pred = self.transformer(
    688         hidden_states=latent_model_input,
    689         encoder_hidden_states=prompt_embeds,
    690         timestep=timestep,
    691         image_rotary_emb=image_rotary_emb,
    692         return_dict=False,
    693     )[0]

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
-> 1541     return forward_call(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:451, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
--> 451     return fn(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
-> 1541     return forward_call(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:921, in catch_errors_wrapper.<locals>.catch_errors(frame, cache_entry, frame_state)
--> 921     return callback(frame, cache_entry, hooks, frame_state, skip=1)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:400, in convert_frame_assert.<locals>._convert_frame_assert(frame, cache_entry, hooks, frame_state, skip)
--> 400     return _compile(
    401         frame.f_code, frame.f_globals, frame.f_locals, frame.f_builtins,
    405         compiler_fn, one_graph, export, export_constraints, hooks, cache_size,
    411         frame, frame_state=frame_state, compile_id=compile_id, skip=skip + 1,
    415     )

File /usr/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
---> 79     return func(*args, **kwds)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:676, in _compile(...)
--> 676     guarded_code = compile_inner(code, one_graph, hooks, transform)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:262, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
--> 262     r = func(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:535, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
--> 535     out_code = transform_code_object(code, transform)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py:1036, in transform_code_object(code, transformations, safe)
-> 1036     transformations(instructions, code_options)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:165, in preserve_global_state.<locals>._fn(*args, **kwargs)
--> 165     return fn(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:500, in _compile.<locals>.transform(instructions, code_options)
--> 500     tracer.run()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2149, in InstructionTranslator.run(self)
-> 2149     super().run()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:810, in InstructionTranslatorBase.run(self)
--> 810     and self.step()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:773, in InstructionTranslatorBase.step(self)
--> 773     getattr(self, inst.opname)(inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:489, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
--> 489     return inner_fn(self, inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1219, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
-> 1219     self.call_function(fn, args, {})

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:674, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
--> 674     self.push(fn.call_function(self, args, kwargs))

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:336, in NNModuleVariable.call_function(self, tx, args, kwargs)
--> 336     return tx.inline_user_function_return(
    337         variables.UserFunctionVariable(fn, source=fn_source), args, kwargs,
    340     )

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:680, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
--> 680     return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2285, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
-> 2285     return cls.inline_call_(parent, func, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2399, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
-> 2399     tracer.run()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:810, in InstructionTranslatorBase.run(self)
--> 810     and self.step()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:773, in InstructionTranslatorBase.step(self)
--> 773     getattr(self, inst.opname)(inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:489, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
--> 489     return inner_fn(self, inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1260, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
-> 1260     self.call_function(fn, argsvars.items, kwargsvars)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:674, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
--> 674     self.push(fn.call_function(self, args, kwargs))

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:335, in UserMethodVariable.call_function(self, tx, args, kwargs)
--> 335     return super().call_function(tx, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:289, in UserFunctionVariable.call_function(self, tx, args, kwargs)
--> 289     return super().call_function(tx, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
---> 90     return tx.inline_user_function_return(
     91         self, list(self.self_args()) + list(args), kwargs
     92     )

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:680, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
--> 680     return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2285, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
-> 2285     return cls.inline_call_(parent, func, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2399, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
-> 2399     tracer.run()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:810, in InstructionTranslatorBase.run(self)
--> 810     and self.step()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:773, in InstructionTranslatorBase.step(self)
--> 773     getattr(self, inst.opname)(inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:489, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
--> 489     return inner_fn(self, inst)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1219, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
-> 1219     self.call_function(fn, args, {})

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:674, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
--> 674     self.push(fn.call_function(self, args, kwargs))

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:309, in NNModuleVariable.call_function(self, tx, args, kwargs)
--> 309     return wrap_fx_proxy(
    310         tx=tx,
    311         proxy=tx.output.create_proxy(
    312             "call_module", self.module_key, *proxy_args_kwargs(args, kwargs),
    315         ),
    316     )

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py:1330, in wrap_fx_proxy(tx, proxy, example_value, subclass_type, **options)
-> 1330     return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py:1415, in wrap_fx_proxy_cls(target_cls, tx, proxy, example_value, subclass_type, **options)
-> 1415     example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1714, in get_fake_value(node, tx, allow_non_graph_fake)
-> 1714     raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1656, in get_fake_value(node, tx, allow_non_graph_fake)
-> 1656     ret_val = wrap_fake_exception(
    1657         lambda: run_node(tx.output, node, args, kwargs, nnmodule)
    1658     )

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1190, in wrap_fake_exception(fn)
-> 1190     return fn()

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1657, in get_fake_value.<locals>.<lambda>()
-> 1657     lambda: run_node(tx.output, node, args, kwargs, nnmodule)

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1782, in run_node(tracer, node, args, kwargs, nnmodule)
-> 1782     raise RuntimeError(make_error_message(e)).with_traceback(
    1783         e.__traceback__
    1784     ) from e

File ~/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py:1769, in run_node(tracer, node, args, kwargs, nnmodule)
-> 1769     return nnmodule(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
-> 1532     return self._call_impl(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
-> 1541     return forward_call(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/torch/nn/modules/linear.py:116, in Linear.forward(self, input)
--> 116     return F.linear(input, self.weight, self.bias)

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/utils.py:54, in _dispatch__torch_function__(cls, func, types, args, kwargs)
---> 54     return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/utils.py:37, in _implements.<locals>.decorator.<locals>.wrapper(f, types, args, kwargs)
---> 37     return func(f, types, args, kwargs)

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py:1218, in _(func, types, args, kwargs)
-> 1218     return weight_tensor._quantized_linear_op(input_tensor, weight_tensor, bias)

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py:193, in AffineQuantizedTensor._quantized_linear_op(input_tensor, weight_tensor, bias)
--> 193     if dispatch_condition(input_tensor, weight_tensor, bias):

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py:1059, in _linear_fp_act_int8_weight_check(input_tensor, weight_tensor, bias)
-> 1059     isinstance(weight_tensor.layout_type, PlainLayoutType)

File ~/.local/lib/python3.10/site-packages/torchao/dtypes/affine_quantized_tensor.py:359, in AffineQuantizedTensor.layout_type(self)
--> 359     return self.layout_tensor.layout_type

TorchRuntimeError: Failed running call_module L__self___time_embedding_linear_1(*(FakeTensor(..., device='cuda:0', size=(2, 1920), dtype=torch.bfloat16),), **{}): 'FakeTensor' object has no attribute 'layout_type'
```
The packages I have installed (with no conflicts according to pip) are:
```
accelerate==0.33.0 aiohappyeyeballs==2.4.0 aiohttp==3.10.5 aiosignal==1.3.1 anyio==4.4.0 argon2-cffi==23.1.0 argon2-cffi-bindings==21.2.0 arrow==1.3.0 asttokens==2.4.1 async-lru==2.0.4 async-timeout==4.0.3 attrs==23.2.0 Automat==22.10.0 Babel==2.10.3 bcrypt==3.2.2 beautifulsoup4==4.12.3 bleach==6.1.0 blinker==1.7.0 certifi==2023.11.17 cffi==1.17.0 chardet==5.2.0 click==8.1.6 cloud-init==24.1.3 colorama==0.4.6 comm==0.2.2 command-not-found==0.3 configobj==5.0.8 constantly==23.10.4 cryptography==41.0.7 dbus-python==1.3.2 debugpy==1.8.5 decorator==5.1.1 defusedxml==0.7.1 diffusers==0.30.2 distro==1.9.0 distro-info==1.7+build1 einops==0.8.0 exceptiongroup==1.2.2 executing==2.1.0 fastjsonschema==2.20.0 filelock==3.15.4 fqdn==1.5.1 frozenlist==1.4.1 fsspec==2024.6.1 h11==0.14.0 httpcore==1.0.5 httplib2==0.20.4 httpx==0.27.2 huggingface-hub==0.24.6 hyperlink==21.0.0 idna==3.6 importlib_metadata==8.4.0 incremental==22.10.0 ipykernel==6.29.5 ipython==8.27.0 ipywidgets==8.1.5 isoduration==20.11.0 jedi==0.19.1 Jinja2==3.1.4 json5==0.9.25 jsonpatch==1.32 jsonpointer==2.0 jsonschema==4.23.0 jsonschema-specifications==2023.12.1 jupyter==1.1.1 jupyter-console==6.6.3 jupyter-events==0.10.0 jupyter-lsp==2.2.5 jupyter_client==8.6.2 jupyter_core==5.7.2 jupyter_server==2.14.2 jupyter_server_terminals==0.5.3 jupyterlab==4.2.5 jupyterlab_pygments==0.3.0 jupyterlab_server==2.27.3 jupyterlab_widgets==3.0.13 kornia==0.7.3 kornia_rs==0.1.5 launchpadlib==1.11.0 lazr.restfulclient==0.14.6 lazr.uri==1.0.6 llvmlite==0.43.0 markdown-it-py==3.0.0 MarkupSafe==2.1.5 matplotlib-inline==0.1.7 mdurl==0.1.2 mistune==3.0.2 mpmath==1.3.0 multidict==6.0.5 nbclient==0.10.0 nbconvert==7.16.4 nbformat==5.10.4 nest-asyncio==1.6.0 netifaces==0.11.0 networkx==3.3 notebook==7.2.2 notebook_shim==0.2.4 numba==0.60.0 numpy==1.26.4 nvidia-cublas-cu12==12.1.3.1 nvidia-cuda-cupti-cu12==12.1.105 nvidia-cuda-nvrtc-cu12==12.1.105 nvidia-cuda-runtime-cu12==12.1.105 nvidia-cudnn-cu12==8.9.2.26 nvidia-cufft-cu12==11.0.2.54 nvidia-curand-cu12==10.3.2.106 nvidia-cusolver-cu12==11.4.5.107 nvidia-cusparse-cu12==12.1.0.106 nvidia-nccl-cu12==2.20.5 nvidia-nvjitlink-cu12==12.6.68 nvidia-nvtx-cu12==12.1.105 oauthlib==3.2.2 opencv-python-headless==4.10.0.84 overrides==7.7.0 packaging==24.1 pandocfilters==1.5.1 parso==0.8.4 pexpect==4.9.0 pillow==10.4.0 platformdirs==4.2.2 prometheus_client==0.20.0 prompt_toolkit==3.0.47 psutil==6.0.0 ptyprocess==0.7.0 pure_eval==0.2.3 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.22 pycurl==7.45.3 Pygments==2.17.2 PyGObject==3.48.2 PyHamcrest==2.1.0 PyJWT==2.7.0 pyOpenSSL==23.2.0 pyparsing==3.1.1 pyrsistent==0.20.0 pyserial==3.5 python-apt==2.7.7+ubuntu1 python-dateutil==2.9.0.post0 python-json-logger==2.0.7 pytz==2024.1 PyYAML==6.0.1 pyzmq==26.2.0 referencing==0.35.1 regex==2024.7.24 requests==2.31.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 rich==13.7.1 rpds-py==0.20.0 safetensors==0.4.4 scipy==1.14.1 Send2Trash==1.8.3 sentencepiece==0.2.0 service-identity==24.1.0 six==1.16.0 sniffio==1.3.1 soundfile==0.12.1 soupsieve==2.6 spandrel==0.3.4 stack-data==0.6.3 sympy==1.13.2 systemd-python==235 terminado==0.18.1 tinycss2==1.3.0 tokenizers==0.19.1 tomli==2.0.1 torch==2.3.1+cu121 torchao==0.5.0.dev20240902+cu121 torchaudio==2.3.1+cu121 torchsde==0.2.6 torchvision==0.18.1+cu121 tornado==6.4.1 tqdm==4.66.5 traitlets==5.14.3 trampoline==0.1.2 transformers==4.44.2 triton==2.3.1 Twisted==24.3.0 types-python-dateutil==2.9.0.20240821 typing_extensions==4.12.2 ubuntu-pro-client==8001 unattended-upgrades==0.1 uri-template==1.3.0 urllib3==2.0.7 wadllib==1.3.6 wcwidth==0.2.13 webcolors==24.8.0 webencodings==0.5.1 websocket-client==1.8.0 widgetsnbextension==4.0.13 xformers==0.0.27 yarl==1.9.7 zipp==3.20.1 zope.interface==6.1
```
Try installing a more recent version of PyTorch instead, either 2.4 or the nightlies.
I had another error with 2.4.0; I forget what at the moment, I just shut down for the night.
I'll dig in tomorrow and see if I can figure it out. I think installing 2.4.0 was pulling in NumPy 2+ and breaking things that were compiled against 1.x.
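If the NumPy 2 upgrade is the culprit, one way to sidestep it is to pin NumPy back to 1.x when upgrading torch (the exact version pins and index URL here are assumptions; adjust to your CUDA build):

```shell
# Upgrade to torch 2.4 from the cu121 wheel index, then force NumPy back to 1.x
# so extensions compiled against NumPy 1.x keep working
pip install "torch==2.4.0" --index-url https://download.pytorch.org/whl/cu121
pip install "numpy<2"
```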
Is there any way to solve this issue in PyTorch 2.3?
For 2.4 or earlier, you can try the `unwrap_tensor_subclass` workaround, https://github.com/pytorch/ao/tree/main/torchao/quantization#workaround-with-unwrap_tensor_subclass-for-export-aoti-and-torchcompile-pytorch-24-and-before-only, to make torchao compatible with torch.compile.
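Concretely, the workaround slots in between quantization and compilation, something like this (a sketch assuming `torchao.utils.unwrap_tensor_subclass` and a `pipe` set up as in the original snippet; untested here):

```python
import torch
from torchao.quantization import quantize_, int8_weight_only
from torchao.utils import unwrap_tensor_subclass

# Quantize as before (assumes `pipe` is the CogVideoXPipeline from earlier)
quantize_(pipe.transformer, int8_weight_only())

# Replace the tensor-subclass weights with plain tensors + parametrizations,
# so dynamo can trace the module on PyTorch 2.4 and earlier
unwrap_tensor_subclass(pipe.transformer)

pipe.transformer = torch.compile(pipe.transformer, fullgraph=True)
```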