
Why does the API return different results than the Preview functionality or testing the pytorch model itself using a python application? #1039

Open
1 task done
florinmititica opened this issue Mar 1, 2025 · 4 comments
Labels
detect (Object Detection issues, PRs) · HUB (Ultralytics HUB issues) · question (Further information is requested)

Comments

@florinmititica

Search before asking

Question

I trained a model and tested it using the Preview functionality in the Ultralytics HUB. I was happy with the results, but when I tested the API with the same picture I received no results. After more testing I managed to get some results on a few images, but in most cases the API returns nothing, while the Preview functionality in the HUB and a Python application using the PyTorch model both produce accurate detections.

(Three screenshots attached.)

Additional

No response

@florinmititica florinmititica added the question Further information is requested label Mar 1, 2025
@UltralyticsAssistant UltralyticsAssistant added the detect (Object Detection issues, PRs) and HUB (Ultralytics HUB issues) labels Mar 1, 2025
@UltralyticsAssistant
Member

👋 Hello @florinmititica, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, could you please provide a minimum reproducible example (MRE), including:

  1. The image(s) used, along with steps to recreate the issue on HUB and using the API.
  2. The specific API request details (e.g., request parameters, payload, etc.).
  3. Any relevant logs, error messages, or returned results.

If this is a ❓ Question, additional information about your model, dataset, environment, and how you're comparing the results would greatly facilitate troubleshooting 🕵️‍♂️.

We try to respond to all issues as promptly as possible. Thank you for your patience! An Ultralytics engineer will review and assist you shortly 😊.

@pderrenger
Member

@florinmititica thanks for your question! Differences between HUB Preview, local PyTorch inference, and API results typically stem from three main factors:

  1. Inference Parameters:
    The HUB Preview uses default parameters (imgsz=640, conf=0.25, iou=0.45), but these can differ from your API request. Verify your API call matches these settings using the Inference API arguments table.

  2. Image Preprocessing:
    The API automatically preprocesses images (resize, normalization), which might differ from your local PyTorch pipeline. Ensure the preprocessing steps are consistent:

    • Identical image sizes (use imgsz parameter)
    • Same letterbox/stretch resizing methods
    • Identical normalization (RGB vs BGR, 0-255 vs 0-1 ranges)
  3. Model Version:
    Confirm you're using the exact same model version in all environments. Check the model ID in your API call matches the one shown in HUB Preview (https://hub.ultralytics.com/models/MODEL_ID).
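To sanity-check the resizing step locally, here is a minimal letterbox sketch. It uses pure NumPy with nearest-neighbor sampling purely for illustration — Ultralytics' actual LetterBox implementation differs in details (interpolation, stride alignment), so treat this only as a way to reason about aspect-ratio-preserving resize plus padding:

```python
import numpy as np


def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Scale the longest side to `size`, pad the rest, preserving aspect ratio."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # Nearest-neighbor resize via index sampling (keeps the sketch dependency-free).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a padded square canvas.
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas


out = letterbox(np.zeros((480, 640, 3), dtype=np.uint8))
print(out.shape)  # (640, 640, 3): 480x640 content centered with gray bars top/bottom
```

Stretch-resizing instead (ignoring aspect ratio) shifts box positions and can noticeably change confidences, which is one way Preview and API results can diverge.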

Troubleshooting Steps:

# Compare API vs local results with identical parameters
import requests

from ultralytics import YOLO

# Local inference
model = YOLO("yolov8n.pt")  # Replace with your model
local_results = model.predict("image.jpg", imgsz=640, conf=0.25, iou=0.45)

# API request (ensure parameters match)
with open("image.jpg", "rb") as f:  # context manager closes the file handle
    response = requests.post(
        "https://predict.ultralytics.com",
        headers={"x-api-key": "API_KEY"},
        files={"file": f},
        data={"model": "MODEL_ID", "imgsz": 640, "conf": 0.25, "iou": 0.45},
    )
print("API:", response.json())
print("Local:", local_results[0].tojson())
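Once you have both outputs, a small helper makes any gap easy to quantify. The `summarize` function and the detection-dict shape below are hypothetical — adapt the keys to whatever your parsed API and local results actually contain:

```python
def summarize(detections):
    """Count detections per class and track the highest confidence seen."""
    counts, best = {}, 0.0
    for d in detections:
        counts[d["class"]] = counts.get(d["class"], 0) + 1
        best = max(best, d["confidence"])
    return counts, best


# Toy data standing in for parsed API/local results.
api_dets = []  # the API returned nothing
local_dets = [
    {"class": "car", "confidence": 0.91},
    {"class": "car", "confidence": 0.55},
]
print("API:", summarize(api_dets))      # ({}, 0.0)
print("Local:", summarize(local_dets))  # ({'car': 2}, 0.91)
```

If the local side reports a highest confidence only slightly above 0.25, the empty API response may simply be the confidence threshold cutting in at a marginally different score.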

If discrepancies persist after verifying these factors, please share:

  1. The exact model ID used
  2. Sample image demonstrating the difference
  3. Full API request code (with sensitive info redacted)

We'll investigate further! For more details, see the Inference API documentation.

@florinmititica
Author

Hey @pderrenger, thank you for getting back to me and sharing all these details. I'm actually already using the same parameters, and I'm not preprocessing the image at all. Take a look at another test I ran using your troubleshooting code:

(Screenshot attached.)

@pderrenger
Member

👋 Thanks for sharing the comparison! From your screenshot, it looks like the API is returning empty results while local PyTorch inference works. Let's dig deeper with these steps:

  1. Model Version Check
    Could you confirm you're using the exact same model version in both cases? Sometimes model updates in HUB can cause discrepancies. Verify the commit hash in HUB matches your local model's version (visible in model metadata).

  2. Image Channels Test
    Let's rule out RGB/BGR differences. Try converting your image to RGB explicitly before sending to API:

    from io import BytesIO

    from PIL import Image

    img = Image.open("image.jpg").convert("RGB")
    buf = BytesIO()
    img.save(buf, format="JPEG")  # re-encode; img.tobytes() would send raw pixels, not a JPEG
    files = {"file": ("image.jpg", buf.getvalue(), "image/jpeg")}
  3. Preprocessing Validation
    Add this debug line to compare API vs local preprocessing:

    print("API input shape:", response.json()["images"][0]["shape"])
    print("Local input shape:", local_results[0].orig_shape)

If the shapes differ significantly, there might be a letterbox/stretch resizing mismatch. For critical cases, you can force identical preprocessing by adding data={"rescale": True} to your API request to disable auto-rescaling.
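If both sides do return boxes but they disagree, a plain IoU check can quantify how far apart the detections are. This is a generic sketch over `(x1, y1, x2, y2)` pixel coordinates, not an Ultralytics API:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143: partial overlap
```

An IoU near 1.0 with differing confidences points at threshold/parameter mismatch; consistently low IoU points at a preprocessing (resize/letterbox) mismatch.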

If this still doesn't resolve it, please share:

  1. The exact model ID from HUB URL (hub.ultralytics.com/models/[MODEL_ID])
  2. One sample image that consistently shows this discrepancy
  3. Full error logs (if any) from API response headers

We'll escalate this to our engineering team for deeper investigation. You can also create a new issue with these details for public tracking. 🛠️
