
Error with C++ API "Internal: Unsupported data type in custom op handler" #137

Closed

lilanxiao opened this issue Jun 9, 2020 · 3 comments

Labels: build (issues with the build system), libedgetpu

Comments

lilanxiao commented Jun 9, 2020

Hi,

I get the error "Internal: Unsupported data type in custom op handler" when I try to build an Edge TPU interpreter using the C++ API. I am using the model from the official model zoo here. My code compiles without any issue.

I think it might be a compatibility issue between the Edge TPU runtime and the TensorFlow Lite C++ API. Here are my observations:

  1. The same model and hardware work well with the Python API.
  2. C++ code running the tflite model on the CPU works well.
  3. The error only occurs when everything comes together.
  • hardware: Ubuntu 18 desktop with a USB Accelerator
  • Edge TPU runtime: version 14.0
  • libtensorflow-lite: built from source (cloned on 2020.06.08)
  • code to build the Edge TPU interpreter:
std::unique_ptr<tflite::Interpreter> BuildEdgeTpuInterpreter(
    const tflite::FlatBufferModel& model,
    edgetpu::EdgeTpuContext* edgetpu_context) {
    // Register the Edge TPU custom op alongside the builtin ops.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
    std::unique_ptr<tflite::Interpreter> interpreter;
    if (tflite::InterpreterBuilder(model, resolver)(&interpreter) != kTfLiteOk) {
        std::cerr << "Failed to build interpreter." << std::endl;
        return nullptr;  // Don't touch the null interpreter below.
    }
    // Bind given context with interpreter.
    interpreter->SetExternalContext(kTfLiteEdgeTpuContext, edgetpu_context);
    interpreter->SetNumThreads(1);
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        std::cerr << "Failed to allocate tensors." << std::endl;
        return nullptr;
    }
    return interpreter;
}

The error is triggered by interpreter->AllocateTensors(), since I see the message "Failed to allocate tensors."
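For completeness, a minimal caller for BuildEdgeTpuInterpreter might look like the sketch below. It assumes the standard edgetpu.h API (EdgeTpuManager::GetSingleton()->OpenDevice()) and a placeholder model path, so treat the exact file name as hypothetical:

```cpp
#include <iostream>
#include <memory>

#include "edgetpu.h"
#include "tensorflow/lite/model.h"

int main() {
    // Load the Edge-TPU-compiled model (the path is a placeholder).
    auto model = tflite::FlatBufferModel::BuildFromFile("model_edgetpu.tflite");
    if (!model) {
        std::cerr << "Failed to load model." << std::endl;
        return 1;
    }
    // Open the first available Edge TPU device (e.g. the USB Accelerator).
    std::shared_ptr<edgetpu::EdgeTpuContext> context =
        edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
    if (!context) {
        std::cerr << "Failed to open Edge TPU device." << std::endl;
        return 1;
    }
    auto interpreter = BuildEdgeTpuInterpreter(*model, context.get());
    if (!interpreter) return 1;
    // ... fill input tensors, call interpreter->Invoke(), read outputs ...
    return 0;
}
```

This requires the Coral USB Accelerator and the libedgetpu/tflite libraries to actually run, so it is a structural sketch rather than a verified repro.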

Namburger added the build (issues with the build system) and libedgetpu labels on Jun 9, 2020
@Namburger

@lilanxiao hi again :)

A couple of things:

A couple of FYIs:

@lilanxiao (Author)

@Namburger nice to see you again!

Thank you for the information, and nice repo! I've tried it on my PC and it works like a charm!

Just one more small thing: the CMakeLists.txt has cmake_minimum_required(VERSION 3.11). That version might be too high for Ubuntu users, because on Ubuntu 18 the newest version available from apt is 3.10.2. I changed the minimum required version to 3.10 and the build still works, so lowering the requirement seems harmless.
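Concretely, the change is just the first line of the CMakeLists.txt (a sketch; the rest of the file stays as-is):

```cmake
# Lowered from 3.11 so the stock cmake 3.10.2 on Ubuntu 18 can configure the project.
cmake_minimum_required(VERSION 3.10)
```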

Thank you again!

@Namburger

@lilanxiao sounds good :)
I think I set that requirement solely because that was the version I had at the time, and I haven't tested it on older versions, haha.
