
SetInput error #7583

Open
jpeng2012 opened this issue Jan 10, 2025 · 7 comments
Labels
module: examples (Issues related to demos under the examples directory), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@jpeng2012

jpeng2012 commented Jan 10, 2025

🐛 Describe the bug

I am testing mobilenet_v3_small. My model generation code is below.

import torch
import executorch.exir as exir

from torch.export import export, ExportedProgram
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

m = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1).eval()

example_args = (torch.randn(1, 3, 224, 224),)

aten_dialect: ExportedProgram = export(m, example_args)
edge_program: exir.EdgeProgramManager = exir.to_edge(aten_dialect)

executorch_program: exir.ExecutorchProgramManager = edge_program.to_executorch(
    exir.ExecutorchBackendConfig(
        passes=[],  # User-defined passes
    )
)

with open("mobilenet_v3_small.pte", "wb") as file:
    file.write(executorch_program.buffer)

In my C++ code, in the setup function, I use:

executorch_model_ = std::make_unique<Module>(configs.model_path_.c_str(), Module::LoadMode::File);

In the setInput function:

for (int i = 0; i < 1; i++) {
  auto input_tensor0 = executorch::extension::from_blob(input[i]->data, atenShape, data_type);

  auto input_tensor = clone_tensor_ptr(input_tensor0);
  std::cout << "tensor data type0 " << static_cast<int>(input_tensor->scalar_type()) << std::endl;
  std::cout << "tensor data dim0 " << input_tensor->dim() << std::endl;
  this->inputs_.emplace_back(input_tensor);
}

In the execute function:

const auto result = executorch_model_->forward(this->inputs_);

I got the following error:

E 00:00:00.271241 executorch:tensor_impl.cpp:98] Attempted to resize a static tensor
E 00:00:00.271286 executorch:method.cpp:808] Error setting input 0: 0x10
F 00:00:00.271312 executorch:result.h:165] In function CheckOk(), assert failed: hasValue_

It looks like it's trying to resize the input tensor of the forward method, which has TensorShapeDynamism::STATIC. I couldn't figure out where it is configured as TensorShapeDynamism::STATIC.

Versions

I am using tag 0.4 with the Android NDK.

@dvorjackz
Contributor

What is the shape of your input_tensor?

@jpeng2012
Author

1, 3, 224, 224

@dvorjackz
Contributor

Okay, just remember that since you haven't declared any dynamic shapes on export, your inputs to forward() need to have that exact shape. Other than that, try building from main to get some useful logging that @GregoryComer added recently - https://github.com/pytorch/executorch/blob/main/runtime/core/portable_type/tensor_impl.cpp#L115
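
For reference, here is a minimal C++ sketch (not part of this thread) of querying the exact input shape the exported method expects at runtime, assuming the Module and MethodMeta APIs from extension/module; the helper name input_shape_matches is hypothetical:

#include <executorch/extension/module/module.h>

#include <cstdint>
#include <vector>

using executorch::extension::Module;

// Returns true if `shape` matches the shape recorded in the .pte for input 0
// of "forward". With no dynamic shapes declared at export, this is the only
// shape the method will accept.
bool input_shape_matches(Module& module, const std::vector<int32_t>& shape) {
  auto meta = module.method_meta("forward");
  if (!meta.ok()) {
    return false;
  }
  auto tensor_info = meta->input_tensor_meta(0);
  if (!tensor_info.ok()) {
    return false;
  }
  const auto expected = tensor_info->sizes();  // e.g. [1, 3, 224, 224]
  if (expected.size() != shape.size()) {
    return false;
  }
  for (size_t i = 0; i < expected.size(); ++i) {
    if (expected[i] != shape[i]) {
      return false;
    }
  }
  return true;
}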

@lucylq
Contributor

lucylq commented Jan 10, 2025

Hi @jpeng2012,

Could you provide some more information on how you created the tensors?

I'm able to run your exported file with the executor_runner; you could also try that along with @dvorjackz's suggestions.

lucylq added the triaged and module: examples labels on Jan 10, 2025
@jpeng2012
Author

jpeng2012 commented Jan 10, 2025

The tensor is created like this.

for (int i = 0; i < 1; i++) {
  auto input_tensor0 = executorch::extension::from_blob(input[i]->data, atenShape, data_type);

  auto input_tensor = clone_tensor_ptr(input_tensor0);
  std::cout << "tensor data type0 " << static_cast<int>(input_tensor->scalar_type()) << std::endl;
  std::cout << "tensor data dim0 " << input_tensor->dim() << std::endl;
  this->inputs_.emplace_back(input_tensor);
}

I added some logs in Module::execute

runtime::Result<std::vector<runtime::EValue>> Module::execute(
    const std::string& method_name,
    const std::vector<runtime::EValue>& input_values) {

  const auto& t_src  = input_values[0].toTensor();
  std::cout<< " tensor data sizes2 " << t_src.sizes()[0] << " " << t_src.sizes()[1] << " "<< t_src.sizes()[2] << " "<< t_src.sizes()[3] << " "<< std::endl;

  
  ET_CHECK_OK_OR_RETURN_ERROR(load_method(method_name));
  auto& method = methods_.at(method_name).method;
  auto& inputs = methods_.at(method_name).inputs;

  const auto& t_src2  = input_values[0].toTensor();
  std::cout<< "tensor data sizes3 " << t_src2.sizes()[0] << " " << t_src2.sizes()[1] << " "<< t_src2.sizes()[2] << " "<< t_src2.sizes()[3] << " "<< std::endl;

The log output is as follows.

tensor data sizes 1 3 224 224 
tensor data sizes2 1 3 224 224 
tensor data sizes3 16 3 3 3 

Somehow the data shape changed after

ET_CHECK_OK_OR_RETURN_ERROR(load_method(method_name));
  
Digging in a little further, the shape change is caused by this line:
method_holder.method = ET_UNWRAP_UNIQUE(program_->load_method(
          method_name.c_str(),
          method_holder.memory_manager.get(),
          event_tracer!=nullptr ? event_tracer : this->event_tracer()));

@jpeng2012
Author

I added load_method right after the module is created, and then the issue is gone. Is this expected, or is it a bug in the code?

executorch_model_ = std::make_unique<Module>(configs.model_path_.c_str());
auto result = executorch_model_->load_method("forward");

@lucylq
Contributor

lucylq commented Jan 11, 2025

Glad to hear you were able to resolve the issue.

Using the Module API, you should be able to do something like this example: #7598, without explicitly calling executorch_model_->load_method("forward");
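
For anyone who finds this later, here is a minimal sketch of that pattern, assuming the extension/module and extension/tensor headers; the model path and input buffer are placeholders, not the exact code from #7598. forward() loads the program and the "forward" method lazily on first use, so no explicit load_method("forward") call is needed:

#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

#include <vector>

using executorch::extension::Module;
using executorch::extension::from_blob;

int main() {
  // Placeholder path; LoadMode::File matches the original post.
  Module module("mobilenet_v3_small.pte", Module::LoadMode::File);

  // The input must match the static shape baked in at export: [1, 3, 224, 224].
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  auto tensor = from_blob(input.data(), {1, 3, 224, 224});

  // forward() lazily loads the program and method, then runs inference.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const auto output = result->at(0).toTensor();
    // ... consume output ...
  }
  return 0;
}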
