Update RVC2 and RVC4 benchmark scripts to work with the dai Benchmark Nodes #64

Merged 31 commits from fix/update-benchmarks-scripts-with-daiv3 into main on Jan 31, 2025

Commits
c424740
Update benchmark script for RVC2 using daiv3
ptoupas Jan 8, 2025
6cb5a2c
Add dai based benchmark execution for RVC4 device
ptoupas Jan 9, 2025
c4b1a5d
Ignore latency measurements on dai based benchmark reports
ptoupas Jan 9, 2025
e97a453
Update is_hubai_available to work with hubAI API calls
ptoupas Jan 10, 2025
7295d96
Update is_hubai_available to work with various teams from HubAI
ptoupas Jan 10, 2025
82a7044
Remove removeprefix to work with python version 3.8 [skip ci]
ptoupas Jan 10, 2025
a34b9ed
Fix test_modifier test error with EfficientVIT model and change the A…
ptoupas Jan 10, 2025
44a097b
Update .pre-commit-config.yaml
ptoupas Jan 10, 2025
7d4d223
Fix model path and HubAI model slug parsing [ci skip]
ptoupas Jan 13, 2025
57b8982
Add HUBAI_API_KEY to getModelFromZoo calls [ci skip]
ptoupas Jan 13, 2025
d6e5da1
Update Benchmarking Section of README file [ci skip]
ptoupas Jan 13, 2025
4d3bc5b
Update .pre-commit-config.yaml [ci skip]
ptoupas Jan 13, 2025
e8bc974
Fix dlc parsing on Benchmark __init__
ptoupas Jan 14, 2025
e2a7ed7
Update the way modify_onnx optimisation runs are conducted in the ONN…
ptoupas Jan 14, 2025
cd2b088
Fix SNPE benchmark on RVC4 and added support for benchmark over model…
ptoupas Jan 14, 2025
addc5f1
Updated ONNX version (#56)
kozlov721 Jan 15, 2025
f0149cd
Update the RVC4 benchmark to take into account the data type for each…
ptoupas Jan 16, 2025
2753987
Merge remote-tracking branch 'origin' into fix/update-benchmarks-scri…
ptoupas Jan 16, 2025
8dfdb84
Update .pre-commit-config.yaml [ci skip]
ptoupas Jan 16, 2025
b58782c
Fix issue when extracting the model from NNArchive in snpe benchmark …
ptoupas Jan 27, 2025
9b2a602
Add bool tensor type during evaluation of onnx models on ONNXModifier…
ptoupas Jan 27, 2025
e081181
Add a try except block on onnx optimisation and validation.
ptoupas Jan 27, 2025
9cd7158
Merge remote-tracking branch 'origin' into fix/update-benchmarks-scri…
ptoupas Jan 28, 2025
565ae6e
add disable_onnx_optimisation flag on the example defaults.yaml file
ptoupas Jan 28, 2025
d37ec5e
Update dai requirement to version 3.0.0a12 [ci skip]
ptoupas Jan 29, 2025
0548541
Add botocore requirement
ptoupas Jan 29, 2025
c2f91f2
Remove the extra-index-url from the requirements-bench.txt file
ptoupas Jan 29, 2025
8fd09f1
Update the README file regarding the depthai v3 installation.
ptoupas Jan 29, 2025
d409c6c
Update .pre-commit-config.yaml [ci skip]
ptoupas Jan 29, 2025
6f3e950
Update README.md [ci skip]
ptoupas Jan 29, 2025
1d08ae7
Merge branch 'main' into fix/update-benchmarks-scripts-with-daiv3
ptoupas Jan 31, 2025
2 changes: 1 addition & 1 deletion .github/workflows/modelconverter_test.yaml
@@ -51,7 +51,7 @@ jobs:
          cache: pip

      - name: Install dependencies
-       run: pip install -e .[dev]
+       run: pip install -e .[dev] --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-release-local/

      - name: Authenticate to Google Cloud
        id: google-auth
2 changes: 1 addition & 1 deletion .github/workflows/unittests.yaml
@@ -25,7 +25,7 @@ jobs:
          cache: pip

      - name: Install package
-       run: python -m pip install -e .[dev]
+       run: python -m pip install -e .[dev] --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-release-local/

      - name: Run Unit Tests
        env:
10 changes: 10 additions & 0 deletions README.md
@@ -52,6 +52,13 @@ pip install modelconv

Run `modelconverter --help` to see the available commands and options.

> [!NOTE]
> To use the [benchmarking feature](#benchmarking), the `depthai v3` package must be installed. While `depthai v3` is not yet released on PyPI, you can install it with the following command:
>
> ```bash
> pip install -r requirements-bench.txt --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-release-local/
> ```

## Configuration

There are two main ways to configure the conversion process:
@@ -437,3 +444,6 @@ modelconverter benchmark rvc3 --model-path <path_to_model.xml>

The command prints a table with the benchmark results to the console and
optionally saves the results to a `.csv` file.

> [!NOTE]
> For **RVC2** and **RVC4**: the `--model-path` can be a path to a local `.blob` file, an NN Archive file (`.tar.xz`), or a model slug from [Luxonis HubAI](https://hub.luxonis.com/ai). To access models from different teams in Luxonis HubAI, remember to update the `HUBAI_API_KEY` environment variable accordingly.
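For illustration, the three accepted `--model-path` forms might look like this; the paths and the model slug are hypothetical examples, with the slug following the `[team_name/]model_name:variant` format:

```bash
# Local compiled model
modelconverter benchmark rvc2 --model-path path/to/model.blob

# NN Archive
modelconverter benchmark rvc4 --model-path path/to/model.tar.xz

# HubAI model slug; set HUBAI_API_KEY to access models from other teams
export HUBAI_API_KEY=<your_api_key>
modelconverter benchmark rvc4 --model-path luxonis/yolov6-nano:r2-coco-512x288
```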
21 changes: 20 additions & 1 deletion modelconverter/__main__.py
@@ -175,10 +175,12 @@ def benchmark(

**RVC2**

-   - `--repetitions`: The number of repetitions to perform. Default: `1`
+   - `--repetitions`: The number of repetitions to perform. Default: `10`

- `--num-threads`: The number of threads to use for inference. Default: `2`

- `--num-messages`: The number of messages to measure for each report. Default: `50`

---

**RVC3**
@@ -191,8 +193,18 @@

- `--profile`: The SNPE profile to use for inference. Default: `"default"`

- `--runtime`: The SNPE runtime to use for inference (dsp or cpu). Default: `"dsp"`

- `--num-images`: The number of images to use for inference. Default: `1000`

- `--dai-benchmark`: Whether to run the benchmark using DAI v3. If `False`, the SNPE tools are used. Default: `True`

- `--repetitions`: The number of repetitions to perform (dai-benchmark only). Default: `10`

- `--num-threads`: The number of threads to use for inference (dai-benchmark only). Default: `1`

- `--num-messages`: The number of messages to measure for each report (dai-benchmark only). Default: `50`

---
"""

@@ -203,6 +215,13 @@
            key = key[2:].replace("-", "_")
        else:
            raise typer.BadParameter(f"Unknown argument: {key}")
        if key == "dai_benchmark":
            value = value.capitalize()
            if value not in ["True", "False"]:
                raise typer.BadParameter(
                    "dai_benchmark must be either True or False"
                )
            value = value == "True"
        kwargs[key] = value
    Benchmark = get_benchmark(target)
    benchmark = Benchmark(str(model_path))
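As a rough usage sketch of the options documented above, an RVC4 run with the DAI-based benchmark might be invoked like this (model path and option values are placeholders):

```bash
# --dai-benchmark accepts true/false in any casing; the value is
# capitalized before parsing, as in the snippet above.
modelconverter benchmark rvc4 \
    --model-path path/to/model.tar.xz \
    --dai-benchmark true \
    --repetitions 10 \
    --num-threads 1 \
    --num-messages 50
```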
58 changes: 46 additions & 12 deletions modelconverter/packages/base_benchmark.py
@@ -1,3 +1,4 @@
import re
from abc import ABC, abstractmethod
from collections import namedtuple
from logging import getLogger
@@ -7,7 +8,7 @@
import pandas as pd
from typing_extensions import TypeAlias

-from modelconverter.utils import resolve_path
+from modelconverter.utils import is_hubai_available, resolve_path

logger = getLogger(__name__)

@@ -23,14 +24,36 @@


class Benchmark(ABC):
    VALID_EXTENSIONS = (".tar.xz", ".blob", ".dlc")
    HUB_MODEL_PATTERN = re.compile(r"^(?:([^/]+)/)?([^:]+):(.+)$")

    def __init__(
        self,
        model_path: str,
        dataset_path: Optional[Path] = None,
    ):
-       self.model_path = resolve_path(model_path, Path.cwd())
        if any(model_path.endswith(ext) for ext in self.VALID_EXTENSIONS):
            self.model_path = resolve_path(model_path, Path.cwd())
            self.model_name = self.model_path.stem
        else:
            hub_match = self.HUB_MODEL_PATTERN.match(model_path)
            if not hub_match:
                raise ValueError(
                    "Invalid 'model-path' format. Expected either:\n"
                    "- Model file path: path/to/model.blob, path/to/model.dlc or path/to/model.tar.xz\n"
                    "- HubAI model slug: [team_name/]model_name:variant"
                )
            team_name, model_name, model_variant = hub_match.groups()
            if is_hubai_available(model_name, model_variant):
                self.model_path = model_path
                self.model_name = model_name
            else:
                raise ValueError(
                    f"Model {team_name + '/' if team_name else ''}{model_name}:{model_variant} not found in HubAI."
                )

        self.dataset_path = dataset_path
-       self.model_name = self.model_path.stem

        self.header = [
            *self.default_configuration.keys(),
            "fps",
@@ -64,7 +87,13 @@ def print_results(
            title=f"Benchmark Results for [yellow]{self.model_name}",
            box=box.ROUNDED,
        )
-       for field in self.header:

        updated_header = [
            *results[0][0].keys(),
            "fps",
            "latency (ms)",
        ]
        for field in updated_header:
            table.add_column(f"[cyan]{field}")
        for configuration, result in results:
            fps_color = (
@@ -74,17 +103,22 @@
                if result.fps < 5
                else "green"
            )
-           latency_color = (
-               "yellow"
-               if 50 < result.latency < 100
-               else "red"
-               if result.latency > 100
-               else "green"
-           )
            if isinstance(result.latency, str):
                latency_color = "orange3"
            else:
                latency_color = (
                    "yellow"
                    if 50 < result.latency < 100
                    else "red"
                    if result.latency > 100
                    else "green"
                )
            table.add_row(
                *map(lambda x: f"[magenta]{x}", configuration.values()),
                f"[{fps_color}]{result.fps:.2f}",
-               f"[{latency_color}]{result.latency:.5f}",
                f"[{latency_color}]{result.latency}"
                if isinstance(result.latency, str)
                else f"[{latency_color}]{result.latency:.5f}",
            )
        console = Console()
        console.print(table)
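As a quick standalone sanity check of the `HUB_MODEL_PATTERN` introduced in `Benchmark.__init__` above, a minimal sketch (the slugs are made-up examples):

```python
import re

# Same pattern as in base_benchmark.py: an optional "team_name/" prefix,
# then "model_name:variant".
HUB_MODEL_PATTERN = re.compile(r"^(?:([^/]+)/)?([^:]+):(.+)$")

for slug in (
    "luxonis/yolov6-nano:r2-coco-512x288",  # team/model:variant
    "yolov6-nano:r2-coco-512x288",          # model:variant (team omitted)
    "path/to/model.onnx",                   # no ":variant" -> no match
):
    m = HUB_MODEL_PATTERN.match(slug)
    print(slug, "->", m.groups() if m else None)
# luxonis/yolov6-nano:r2-coco-512x288 -> ('luxonis', 'yolov6-nano', 'r2-coco-512x288')
# yolov6-nano:r2-coco-512x288 -> (None, 'yolov6-nano', 'r2-coco-512x288')
# path/to/model.onnx -> None
```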