
convert error at llama example #447

Open
dlgktjr opened this issue Jan 6, 2025 · 2 comments

dlgktjr commented Jan 6, 2025

Description of the bug:

ai_edge_torch.__version__: '0.3.0.dev20250105'
torch.__version__: '2.5.1+cu124'
Python version: 3.11.9
Llama model: https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct

I ran the command:
ai-edge-torch/ai_edge_torch/generative/examples/llama$ python convert_to_tflite.py

The only change I made is the checkpoint path:

_CHECKPOINT_PATH = flags.DEFINE_string(
    'checkpoint_path',
-    os.path.join(pathlib.Path.home(), 'Downloads/llm_data/llama'),
+    os.path.join(pathlib.Path.home(), 'Downloads/llama'),
    'The path to the model checkpoint, or directory holding the checkpoint.',
)
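
(As an aside, since checkpoint_path is defined with flags.DEFINE_string, the path could also be overridden on the command line without editing the file, assuming the script uses standard absl flag parsing:

ai-edge-torch/ai_edge_torch/generative/examples/llama$ python convert_to_tflite.py --checkpoint_path="$HOME/Downloads/llama"
)

The conversion then fails with the traceback below (truncated at the top):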
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/export/exported_program.py", line 114, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/export/_trace.py", line 1880, in _export
    export_artifact = export_func(  # type: ignore[operator]
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/export/_trace.py", line 1224, in _strict_export
    return _strict_export_lower_to_aten_ir(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/export/_trace.py", line 1333, in _strict_export_lower_to_aten_ir
    aten_export_artifact = lower_to_aten_callback(
                           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/export/_trace.py", line 637, in _export_to_aten_ir
    gm, graph_signature = transform(aot_export_module)(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1246, in aot_export_module
    fx_g, metadata, in_spec, out_spec = _aot_export_function(
                                        ^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1480, in _aot_export_function
    fx_g, meta = create_aot_dispatcher_function(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 623, in _create_aot_dispatcher_function
    fw_metadata = run_functionalized_fw_and_collect_metadata(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 173, in inner
    flat_f_outs = f(*flat_f_args)
                  ^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 182, in flat_fn
    tree_out = fn(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 859, in functional_call
    out = PropagateUnbackedSymInts(mod).run(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/fx/interpreter.py", line 146, in run
    self.env[node] = self.run_node(node)
                     ^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 5498, in run_node
    result = super().run_node(n)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/fx/interpreter.py", line 275, in call_function
    return target(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Cannot set version_counter for inference tensor
While executing %getitem_8 : [num_users=1] = call_function[target=operator.getitem](args = (%int_2, 0), kwargs = {})
Original traceback:
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/utilities/converter.py", line 39, in forward
    return self.module(*export_args, **full_kwargs)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/utilities/model_builder.py", line 121, in forward
    return self._forward_with_embeds(
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/utilities/model_builder.py", line 147, in _forward_with_embeds
    x, kv_entry = block(x, rope, mask, input_pos, kv_entry)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/layers/attention.py", line 95, in forward
    atten_func_out = self.atten_func(x_norm, rope, mask, input_pos, kv_cache)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/layers/attention.py", line 218, in forward
    kv_cache = kv_utils.update(kv_cache, input_pos, k, v)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/layers/kv_cache.py", line 164, in update
    return update_kv_cache(cache, input_pos, k_slice, v_slice)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/layers/kv_cache.py", line 197, in _update_kv_impl
    k_slice_indices = _get_slice_indices(input_pos)
  File "/home/haseok/miniconda3/lib/python3.11/site-packages/ai_edge_torch/generative/layers/kv_cache.py", line 184, in _get_slice_indices
    positions = positions.int()[0].reshape([])
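
My guess (not from the ai_edge_torch sources) is that an inference tensor ends up inside export tracing, which cannot update its version counter. A minimal sketch of that failure mode and the usual workaround:

import torch

with torch.inference_mode():
    pos = torch.tensor([0])  # created as an "inference tensor"

# Operations recorded outside inference mode that need to bump the
# tensor's version counter can raise:
#   RuntimeError: Cannot set version_counter for inference tensor
# The usual workaround is to clone the tensor outside inference mode,
# which yields a normal tensor:
pos = pos.clone()
idx = pos.int()[0].reshape([])  # mirrors _get_slice_indices; now fine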

Has anyone else run into this problem?

Actual vs expected behavior:

No response

Any other information you'd like to share?

Additionally, verify.py also fails, even though I didn't change verify.py at all. Is that failure related to the error above? If not, can you tell me why the verification fails?

I0106 15:52:25.160858 135564329379648 verifier.py:303] Verifying the reauthored model with input IDs: [1, 2, 3, 4]
I0106 15:52:25.161123 135564329379648 verifier.py:206] Forwarding the original model...
I0106 15:52:29.689072 135564329379648 verifier.py:209] logits_original: tensor([ 9.2015,  9.3586, 14.1001,  ..., -1.4927, -1.4921, -1.4925],
       grad_fn=<SliceBackward0>)
I0106 15:52:29.690362 135564329379648 verifier.py:211] Forwarding the reauthored model...
I0106 15:52:33.384325 135564329379648 verifier.py:214] logits_reauthored: tensor([ 9.1995,  9.3627, 14.1132,  ..., -1.4958, -1.4952, -1.4955])
E0106 15:52:33.385745 135564329379648 verifier.py:309] *** FAILED *** verify with input IDs: [1, 2, 3, 4]
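
For what it's worth, the two sets of logits are close but not identical (9.2015 vs 9.1995, 14.1001 vs 14.1132, ...), so the failure looks like a tolerance check. A minimal sketch of that kind of comparison (my illustration, not the verifier's actual code):

import torch

logits_original = torch.tensor([9.2015, 9.3586, 14.1001])
logits_reauthored = torch.tensor([9.1995, 9.3627, 14.1132])

# The values differ by roughly 1e-2, so a tight tolerance rejects them:
print(torch.allclose(logits_original, logits_reauthored, atol=1e-4))  # False
print(torch.allclose(logits_original, logits_reauthored, atol=5e-2))  # True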
dlgktjr added the type:bug label on Jan 6, 2025
haozha111 (Contributor) commented:

@talumbau any ideas on it?

dlgktjr (Author) commented Jan 8, 2025

I tried the same thing at 5a93316, and it works.
