diff --git a/objects.inv b/objects.inv
index 4daa132..9dee362 100644
Binary files a/objects.inv and b/objects.inv differ
diff --git a/search/search_index.json b/search/search_index.json
index 3631fea..3c0ee84 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Focoos AI \ud83d\udd25","text":"Focoos AI provides an advanced development platform designed to empower developers and businesses with efficient, customizable computer vision solutions. Whether you're working with data from cloud infrastructures or deploying on edge devices, Focoos AI enables you to select, fine-tune, and deploy state-of-the-art models optimized for your unique needs.
"},{"location":"#what-we-offer","title":"What We Offer \ud83c\udfaf","text":""},{"location":"#ai-ready-models-platform-for-computer-vision-applications","title":"AI-Ready Models Platform for Computer Vision Applications \ud83e\udd16","text":"Focoos AI offers a versatile platform for developing computer vision solutions. Our platform includes a suite of services to support the end-to-end development process:
- Ready-to-use models: Choose from a variety of pre-trained models, optimized for different data, applications, and hardware.
- Customization: Tailor models to your specific needs by selecting relevant classes and fine-tuning them on your own dataset.
- Testing and Validation: Verify model accuracy and efficiency using your own data samples, ensuring the model meets your requirements before deployment.
"},{"location":"#key-features","title":"Key Features \ud83d\udd11","text":" -
Select Ready-to-use Models \ud83e\udde9 Get started quickly by selecting one of our efficient, pre-trained models that best suits your data and application needs.
-
Personalize Your Model \u2728 Customize the selected model for higher accuracy through fine-tuning. Adapt the model to your specific use case by training it on your own dataset and selecting useful classes.
-
Test and Validate \ud83e\uddea Upload your data sample to test the model\u2019s accuracy and efficiency. Iterate the process to ensure the model performs to your expectations.
-
Cloud Deployment \u2601\ufe0f Deploy the model on your preferred cloud infrastructure, whether it's your own private cloud or a public cloud service. Your data stays private, as it remains within your servers.
-
Edge Deployment \ud83d\udda5\ufe0f Deploy the model on edge devices. Download the Focoos Engine to run the model locally, without sending any data over the network, ensuring full privacy.
"},{"location":"#why-choose-focoos-ai","title":"Why Choose Focoos AI? \ud83e\udd29","text":"Using Focoos AI helps you save both time and money while delivering high-performance AI models:
- 80% Faster Development \u23f3: Save significant development time compared to traditional methods.
- +5% Model Accuracy \ud83c\udfaf: Deploy some of the most accurate models on the market, as demonstrated by our scientific benchmarks.
- Up to 20x Faster Models \u26a1: Run real-time data analysis with some of the fastest models available today.
"},{"location":"#pre-trained-models-with-minimum-training-data","title":"Pre-Trained Models with Minimum Training Data \ud83d\udcca","text":"Our pre-trained models reduce the need for large datasets, making it easier to deploy computer vision solutions. Here's how Focoos AI helps you minimize your resources:
- 80% Less Training Data \ud83d\udcc9: Leverage pre-trained models that are ready to tackle a variety of use cases.
- 50% Lower Infrastructure Costs \ud83d\udca1: Use less expensive hardware and reduce energy consumption.
- 75% Reduction in CO2 Emissions \ud83c\udf31: Deploy energy-efficient models that help you reduce your carbon footprint.
"},{"location":"#proven-efficiency-and-accuracy","title":"Proven Efficiency and Accuracy \ud83d\udd0d","text":"Focoos AI models outperform other solutions in terms of both accuracy and efficiency. Our technical report highlights how our models lead in academic benchmarks across multiple domains. Contact us to learn more about the scientific benchmarks that set Focoos AI apart.
"},{"location":"#pricing-model","title":"Pricing Model \ud83d\udcb5","text":"We offer a flexible pricing model based on your deployment preferences:
- Public Cloud \ud83c\udf10: Pay for model usage when deployed on public cloud providers.
- Private Infrastructure \ud83c\udfe2: Pay for usage when deploying on your own infrastructure.
Contact us for a tailored quote based on your specific use case.
By choosing Focoos AI, you can save time, reduce costs, and achieve superior model performance, all while ensuring the privacy and efficiency of your deployments. Ready to get started? Reach out to us today to explore how Focoos AI can power your computer vision projects. \ud83d\ude80
"},{"location":"datasets/","title":"Datasets","text":"With the Focoos SDK, you can leverage a diverse collection of foundational datasets specifically tailored for computer vision tasks. These datasets, spanning tasks such as segmentation, detection, and instance segmentation, provide a strong foundation for building and optimizing models across a variety of domains.
Datasets:
Name Task Description Layout Aeroscapes semseg A drone dataset to recognize many classes! supervisely Blister instseg A dataset to find blisters roboflow_coco Boxes detection Finding different boxes on the conveyor belt roboflow_coco Cable detection A dataset for detecting damages in cables (from Roboflow 100) - https://universe.roboflow.com/roboflow-100/cable-damage/dataset/2# roboflow_coco Circuit dataset detection A dataset with electronic circuits roboflow_coco Concrete instseg A dataset to find defects in concrete roboflow_coco Crack Segmentation instseg A dataset for segmenting cracks in buildings with 4k images. roboflow_coco Football-detection detection Football-detection by Roboflow roboflow_coco Peanuts detection Finding molded or non-molded peanuts roboflow_coco Strawberries instseg Finding defects on strawberries roboflow_coco aquarium detection aquarium roboflow_coco bottles detection bottles roboflow_coco chess_pieces detection A chess detection dataset by Roboflow https://universe.roboflow.com/roboflow-100/chess-pieces-mjzgj roboflow_coco coco_2017_det detection COCO Detection catalog halo detection Halo FPS by Roboflow roboflow_coco lettuce detection A dataset to find lettuce roboflow_coco safety detection From Roboflow Universe: https://universe.roboflow.com/roboflow-100/construction-safety-gsnvb roboflow_coco screw detection Screw by Roboflow roboflow_coco"},{"location":"models/","title":"Focoos Foundational Models","text":"With the Focoos SDK, you can take advantage of a collection of foundational models that are optimized for a range of computer vision tasks. These pre-trained models, covering detection and semantic segmentation across various domains, provide an excellent starting point for your specific use case. Whether you need to fine-tune for custom requirements or adapt them to your application, these models offer a solid foundation to accelerate your development process.
Models:
Model Name Task Metrics Domain focoos_object365 Detection - Common Objects, 365 classes focoos_rtdetr Detection - Common Objects, 80 classes focoos_cts_medium Semantic Segmentation - Autonomous driving, 30 classes focoos_cts_large Semantic Segmentation - Autonomous driving, 30 classes focoos_ade_nano Semantic Segmentation - Common Scenes, 150 classes focoos_ade_small Semantic Segmentation - Common Scenes, 150 classes focoos_ade_medium Semantic Segmentation - Common Scenes, 150 classes focoos_ade_large Semantic Segmentation - Common Scenes, 150 classes focoos_aeroscapes Semantic Segmentation - Drone Aerial Scenes, 11 classes focoos_isaid_nano Semantic Segmentation - Satellite Imagery, 15 classes focoos_isaid_medium Semantic Segmentation - Satellite Imagery, 15 classes"},{"location":"api/focoos/","title":"focoos","text":"Focoos Module
This module provides a Python interface for interacting with Focoos APIs, allowing users to manage machine learning models and datasets in the Focoos ecosystem. The module supports operations such as retrieving model metadata, downloading models, and listing shared datasets.
Classes:
Name Description Focoos
Main class to interface with Focoos APIs.
Raises:
Type Description ValueError
Raised for invalid API responses or missing parameters.
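For orientation before the per-member reference below, here is a minimal usage sketch. It assumes the import path implied by the "Source code in focoos/focoos.py" notes and that the placeholder key is replaced with a real one:

from focoos.focoos import Focoos

# Authenticate; a missing API key raises ValueError.
client = Focoos(api_key="<YOUR_API_KEY>")

# List every model visible to this account.
for preview in client.list_models():
    print(preview)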
"},{"location":"api/focoos/#focoos.focoos.Focoos","title":"Focoos
","text":"Main class to interface with Focoos APIs.
This class provides methods to interact with Focoos-hosted models and datasets. It supports functionalities such as listing models, retrieving model metadata, downloading models, and creating new models.
Attributes:
Name Type Description api_key
str
The API key for authentication.
http_client
HttpClient
HTTP client for making API requests.
user_info
dict
Information about the currently authenticated user.
cache_dir
str
Local directory for caching downloaded models.
Source code in focoos/focoos.py
class Focoos:\n \"\"\"\n Main class to interface with Focoos APIs.\n\n This class provides methods to interact with Focoos-hosted models and datasets.\n It supports functionalities such as listing models, retrieving model metadata,\n downloading models, and creating new models.\n\n Attributes:\n api_key (str): The API key for authentication.\n http_client (HttpClient): HTTP client for making API requests.\n user_info (dict): Information about the currently authenticated user.\n cache_dir (str): Local directory for caching downloaded models.\n \"\"\"\n\n def __init__(\n self,\n api_key: Optional[str] = None,\n host_url: Optional[str] = None,\n ):\n \"\"\"\n Initializes the Focoos API client.\n\n This client provides authenticated access to the Focoos API, enabling various operations\n through the configured HTTP client. It retrieves user information upon initialization and\n logs the environment details.\n\n Args:\n api_key (Optional[str]): API key for authentication. Defaults to the `focoos_api_key`\n specified in the FOCOOS_CONFIG.\n host_url (Optional[str]): Base URL for the Focoos API. Defaults to the `default_host_url`\n specified in the FOCOOS_CONFIG.\n\n Raises:\n ValueError: If the API key is not provided, or if the host URL is not specified in the\n arguments or the configuration.\n\n Attributes:\n api_key (str): The API key used for authentication.\n http_client (HttpClient): An HTTP client instance configured with the API key and host URL.\n user_info (dict): Information about the authenticated user retrieved from the API.\n cache_dir (str): Path to the cache directory used by the client.\n\n Logs:\n - Error if the API key or host URL is missing.\n - Info about the authenticated user and environment upon successful initialization.\n \"\"\"\n self.api_key = api_key or FOCOOS_CONFIG.focoos_api_key\n if not self.api_key:\n logger.error(\"API key is required \ud83e\udd16\")\n raise ValueError(\"API key is required \ud83e\udd16\")\n\n host_url = host_url or FOCOOS_CONFIG.default_host_url\n\n self.http_client = HttpClient(self.api_key, host_url)\n self.user_info = self._get_user_info()\n self.cache_dir = os.path.join(os.path.expanduser(\"~\"), \".cache\", \"focoos\")\n logger.info(\n f\"Currently logged as: {self.user_info['email']} environment: {host_url}\"\n )\n\n def _get_user_info(self):\n \"\"\"\n Retrieves information about the authenticated user.\n\n Returns:\n dict: Information about the user (e.g., email).\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"user/\")\n if res.status_code != 200:\n logger.error(f\"Failed to get user info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get user info: {res.status_code} {res.text}\")\n return res.json()\n\n def get_model_info(self, model_name: str) -> ModelMetadata:\n \"\"\"\n Retrieves metadata for a specific model.\n\n Args:\n model_name (str): Name of the model.\n\n Returns:\n ModelMetadata: Metadata of the specified model.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{model_name}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n return ModelMetadata.from_json(res.json())\n\n def list_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists all available models.\n\n Returns:\n list[ModelPreview]: List of model previews.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res 
= self.http_client.get(\"models/\")\n if res.status_code != 200:\n logger.error(f\"Failed to list models: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list models: {res.status_code} {res.text}\")\n return [ModelPreview.from_json(r) for r in res.json()]\n\n def list_focoos_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists models specific to Focoos.\n\n Returns:\n list[ModelPreview]: List of Focoos models.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/focoos-models\")\n if res.status_code != 200:\n logger.error(f\"Failed to list focoos models: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to list focoos models: {res.status_code} {res.text}\"\n )\n return [ModelPreview.from_json(r) for r in res.json()]\n\n def get_local_model(\n self,\n model_ref: str,\n runtime_type: Optional[RuntimeTypes] = None,\n ) -> LocalModel:\n \"\"\"\n Retrieves a local model for the specified reference.\n\n Downloads the model if it does not already exist in the local cache.\n\n Args:\n model_ref (str): Reference identifier for the model.\n runtime_type (Optional[RuntimeTypes]): Runtime type for the model. Defaults to the\n `runtime_type` specified in FOCOOS_CONFIG.\n\n Returns:\n LocalModel: An instance of the local model.\n\n Raises:\n ValueError: If the runtime type is not specified.\n\n Notes:\n The model is cached in the directory specified by `self.cache_dir`.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n model_dir = os.path.join(self.cache_dir, model_ref)\n if not os.path.exists(os.path.join(model_dir, \"model.onnx\")):\n self._download_model(model_ref)\n return LocalModel(model_dir, runtime_type)\n\n def get_remote_model(self, model_ref: str) -> RemoteModel:\n \"\"\"\n Retrieves a remote model instance.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n RemoteModel: The remote model instance.\n \"\"\"\n return RemoteModel(model_ref, self.http_client)\n\n def new_model(\n self, name: str, focoos_model: str, description: str\n ) -> Optional[RemoteModel]:\n \"\"\"\n Creates a new model in the Focoos system.\n\n Args:\n name (str): Name of the new model.\n focoos_model (str): Reference to the base Focoos model.\n description (str): Description of the new model.\n\n Returns:\n Optional[RemoteModel]: The created model instance, or None if creation fails.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.post(\n \"models/\",\n data={\n \"name\": name,\n \"focoos_model\": focoos_model,\n \"description\": description,\n },\n )\n if res.status_code in [200, 201]:\n return RemoteModel(res.json()[\"ref\"], self.http_client)\n if res.status_code == 409:\n logger.warning(f\"Model already exists: {name}\")\n return self.get_model_by_name(name, remote=True)\n logger.warning(f\"Failed to create new model: {res.status_code} {res.text}\")\n return None\n\n def list_shared_datasets(self) -> list[DatasetMetadata]:\n \"\"\"\n Lists datasets shared with the user.\n\n Returns:\n list[DatasetMetadata]: List of shared datasets.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"datasets/shared\")\n if res.status_code != 200:\n logger.error(f\"Failed to list datasets: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list datasets: {res.status_code} {res.text}\")\n return [DatasetMetadata.from_json(dataset) for dataset in res.json()]\n\n def _download_model(self, model_ref: str) -> str:\n \"\"\"\n Downloads 
a model from the Focoos API.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n str: Path to the downloaded model.\n\n Raises:\n ValueError: If the API request fails or the download fails.\n \"\"\"\n model_dir = os.path.join(self.cache_dir, model_ref)\n model_path = os.path.join(model_dir, \"model.onnx\")\n metadata_path = os.path.join(model_dir, \"focoos_metadata.json\")\n if os.path.exists(model_path) and os.path.exists(metadata_path):\n logger.info(\"\ud83d\udce5 Model already downloaded\")\n return model_path\n\n ## download model metadata\n res = self.http_client.get(f\"models/{model_ref}/download?format=onnx\")\n if res.status_code != 200:\n logger.error(f\"Failed to download model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to download model: {res.status_code} {res.text}\")\n\n download_data = res.json()\n metadata = ModelMetadata.from_json(download_data[\"model_metadata\"])\n download_uri = download_data[\"download_uri\"]\n\n ## download model from Focoos Cloud\n logger.debug(f\"Model URI: {download_uri}\")\n logger.info(\"\ud83d\udce5 Downloading model from Focoos Cloud.. \")\n response = self.http_client.get_external_url(download_uri, stream=True)\n if response.status_code != 200:\n logger.error(\n f\"Failed to download model: {response.status_code} {response.text}\"\n )\n raise ValueError(\n f\"Failed to download model: {response.status_code} {response.text}\"\n )\n total_size = int(response.headers.get(\"content-length\", 0))\n logger.info(f\"\ud83d\udce5 Size: {total_size / (1024**2):.2f} MB\")\n\n if not os.path.exists(model_dir):\n os.makedirs(model_dir)\n\n with open(metadata_path, \"w\") as f:\n f.write(metadata.model_dump_json())\n logger.debug(f\"Dumped metadata to {metadata_path}\")\n\n with (\n open(model_path, \"wb\") as f,\n tqdm(\n desc=str(model_path).split(\"/\")[-1],\n total=total_size,\n unit=\"B\",\n unit_scale=True,\n unit_divisor=1024,\n ) as bar,\n ):\n for chunk in response.iter_content(chunk_size=8192):\n f.write(chunk)\n bar.update(len(chunk))\n logger.info(f\"\ud83d\udce5 File downloaded: {model_path}\")\n return model_path\n\n def get_dataset_by_name(self, name: str) -> Optional[DatasetMetadata]:\n \"\"\"\n Retrieves a dataset by its name.\n\n Args:\n name (str): Name of the dataset.\n\n Returns:\n Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.\n \"\"\"\n datasets = self.list_shared_datasets()\n name_lower = name.lower()\n for dataset in datasets:\n if name_lower == dataset.name.lower():\n return dataset\n\n def get_model_by_name(\n self, name: str, remote: bool = True\n ) -> Optional[Union[RemoteModel, LocalModel]]:\n \"\"\"\n Retrieves a model by its name.\n\n Args:\n name (str): Name of the model.\n remote (bool): If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.\n\n Returns:\n Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.\n \"\"\"\n models = self.list_models()\n name_lower = name.lower()\n for model in models:\n if name_lower == model.name.lower():\n if remote:\n return self.get_remote_model(model.ref)\n else:\n return self.get_local_model(model.ref)\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.__init__","title":"__init__(api_key=None, host_url=None)
","text":"Initializes the Focoos API client.
This client provides authenticated access to the Focoos API, enabling various operations through the configured HTTP client. It retrieves user information upon initialization and logs the environment details.
Parameters:
Name Type Description Default api_key
Optional[str]
API key for authentication. Defaults to the focoos_api_key specified in the FOCOOS_CONFIG.
None
host_url
Optional[str]
Base URL for the Focoos API. Defaults to the default_host_url specified in the FOCOOS_CONFIG.
None
Raises:
Type Description ValueError
If the API key is not provided, or if the host URL is not specified in the arguments or the configuration.
Attributes:
Name Type Description api_key
str
The API key used for authentication.
http_client
HttpClient
An HTTP client instance configured with the API key and host URL.
user_info
dict
Information about the authenticated user retrieved from the API.
cache_dir
str
Path to the cache directory used by the client.
Logs - Error if the API key or host URL is missing.
- Info about the authenticated user and environment upon successful initialization.
Source code in focoos/focoos.py
def __init__(\n self,\n api_key: Optional[str] = None,\n host_url: Optional[str] = None,\n):\n \"\"\"\n Initializes the Focoos API client.\n\n This client provides authenticated access to the Focoos API, enabling various operations\n through the configured HTTP client. It retrieves user information upon initialization and\n logs the environment details.\n\n Args:\n api_key (Optional[str]): API key for authentication. Defaults to the `focoos_api_key`\n specified in the FOCOOS_CONFIG.\n host_url (Optional[str]): Base URL for the Focoos API. Defaults to the `default_host_url`\n specified in the FOCOOS_CONFIG.\n\n Raises:\n ValueError: If the API key is not provided, or if the host URL is not specified in the\n arguments or the configuration.\n\n Attributes:\n api_key (str): The API key used for authentication.\n http_client (HttpClient): An HTTP client instance configured with the API key and host URL.\n user_info (dict): Information about the authenticated user retrieved from the API.\n cache_dir (str): Path to the cache directory used by the client.\n\n Logs:\n - Error if the API key or host URL is missing.\n - Info about the authenticated user and environment upon successful initialization.\n \"\"\"\n self.api_key = api_key or FOCOOS_CONFIG.focoos_api_key\n if not self.api_key:\n logger.error(\"API key is required \ud83e\udd16\")\n raise ValueError(\"API key is required \ud83e\udd16\")\n\n host_url = host_url or FOCOOS_CONFIG.default_host_url\n\n self.http_client = HttpClient(self.api_key, host_url)\n self.user_info = self._get_user_info()\n self.cache_dir = os.path.join(os.path.expanduser(\"~\"), \".cache\", \"focoos\")\n logger.info(\n f\"Currently logged as: {self.user_info['email']} environment: {host_url}\"\n )\n
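As a hedged sketch of the two initialization styles (both argument values below are placeholders; the no-argument form relies entirely on FOCOOS_CONFIG):

from focoos.focoos import Focoos

# Explicit credentials and host.
client = Focoos(api_key="<YOUR_API_KEY>", host_url="<HOST_URL>")

# Or fall back to FOCOOS_CONFIG.focoos_api_key and default_host_url.
client = Focoos()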
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_dataset_by_name","title":"get_dataset_by_name(name)
","text":"Retrieves a dataset by its name.
Parameters:
Name Type Description Default name
str
Name of the dataset.
required Returns:
Type Description Optional[DatasetMetadata]
Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.
Source code in focoos/focoos.py
def get_dataset_by_name(self, name: str) -> Optional[DatasetMetadata]:\n \"\"\"\n Retrieves a dataset by its name.\n\n Args:\n name (str): Name of the dataset.\n\n Returns:\n Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.\n \"\"\"\n datasets = self.list_shared_datasets()\n name_lower = name.lower()\n for dataset in datasets:\n if name_lower == dataset.name.lower():\n return dataset\n
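A short usage sketch; "Aeroscapes" comes from the datasets table above, and the lookup is case-insensitive because the method lowercases both names before comparing:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
dataset = client.get_dataset_by_name("aeroscapes")  # matches "Aeroscapes" as well
if dataset is None:
    print("No shared dataset with that name")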
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_local_model","title":"get_local_model(model_ref, runtime_type=None)
","text":"Retrieves a local model for the specified reference.
Downloads the model if it does not already exist in the local cache.
Parameters:
Name Type Description Default model_ref
str
Reference identifier for the model.
required runtime_type
Optional[RuntimeTypes]
Runtime type for the model. Defaults to the runtime_type specified in FOCOOS_CONFIG.
None
Returns:
Name Type Description LocalModel
LocalModel
An instance of the local model.
Raises:
Type Description ValueError
If the runtime type is not specified.
Notes The model is cached in the directory specified by self.cache_dir.
Source code in focoos/focoos.py
def get_local_model(\n self,\n model_ref: str,\n runtime_type: Optional[RuntimeTypes] = None,\n) -> LocalModel:\n \"\"\"\n Retrieves a local model for the specified reference.\n\n Downloads the model if it does not already exist in the local cache.\n\n Args:\n model_ref (str): Reference identifier for the model.\n runtime_type (Optional[RuntimeTypes]): Runtime type for the model. Defaults to the\n `runtime_type` specified in FOCOOS_CONFIG.\n\n Returns:\n LocalModel: An instance of the local model.\n\n Raises:\n ValueError: If the runtime type is not specified.\n\n Notes:\n The model is cached in the directory specified by `self.cache_dir`.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n model_dir = os.path.join(self.cache_dir, model_ref)\n if not os.path.exists(os.path.join(model_dir, \"model.onnx\")):\n self._download_model(model_ref)\n return LocalModel(model_dir, runtime_type)\n
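A minimal sketch with a placeholder reference; the first call downloads model.onnx into ~/.cache/focoos/<model_ref>, and later calls reuse the cached copy:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
# runtime_type is omitted, so FOCOOS_CONFIG.runtime_type is used.
model = client.get_local_model("<MODEL_REF>")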
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_model_by_name","title":"get_model_by_name(name, remote=True)
","text":"Retrieves a model by its name.
Parameters:
Name Type Description Default name
str
Name of the model.
required remote
bool
If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.
True
Returns:
Type Description Optional[Union[RemoteModel, LocalModel]]
Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.
Source code in focoos/focoos.py
def get_model_by_name(\n self, name: str, remote: bool = True\n) -> Optional[Union[RemoteModel, LocalModel]]:\n \"\"\"\n Retrieves a model by its name.\n\n Args:\n name (str): Name of the model.\n remote (bool): If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.\n\n Returns:\n Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.\n \"\"\"\n models = self.list_models()\n name_lower = name.lower()\n for model in models:\n if name_lower == model.name.lower():\n if remote:\n return self.get_remote_model(model.ref)\n else:\n return self.get_local_model(model.ref)\n
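For illustration ("my-model" is a hypothetical model name):

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
remote = client.get_model_by_name("my-model", remote=True)   # RemoteModel, or None if not found
local = client.get_model_by_name("my-model", remote=False)   # LocalModel, downloading it if needed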
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_model_info","title":"get_model_info(model_name)
","text":"Retrieves metadata for a specific model.
Parameters:
Name Type Description Default model_name
str
Name of the model.
required Returns:
Name Type Description ModelMetadata
ModelMetadata
Metadata of the specified model.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def get_model_info(self, model_name: str) -> ModelMetadata:\n \"\"\"\n Retrieves metadata for a specific model.\n\n Args:\n model_name (str): Name of the model.\n\n Returns:\n ModelMetadata: Metadata of the specified model.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{model_name}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n return ModelMetadata.from_json(res.json())\n
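For example, using a name from the foundational models table (any non-200 response raises ValueError):

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
metadata = client.get_model_info("focoos_rtdetr")
print(metadata)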
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_remote_model","title":"get_remote_model(model_ref)
","text":"Retrieves a remote model instance.
Parameters:
Name Type Description Default model_ref
str
Reference name of the model.
required Returns:
Name Type Description RemoteModel
RemoteModel
The remote model instance.
Source code in focoos/focoos.py
def get_remote_model(self, model_ref: str) -> RemoteModel:\n \"\"\"\n Retrieves a remote model instance.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n RemoteModel: The remote model instance.\n \"\"\"\n return RemoteModel(model_ref, self.http_client)\n
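A sketch with a placeholder reference; note that constructing the RemoteModel immediately fetches its metadata over HTTP:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
model = client.get_remote_model("<MODEL_REF>")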
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_focoos_models","title":"list_focoos_models()
","text":"Lists models specific to Focoos.
Returns:
Type Description list[ModelPreview]
list[ModelPreview]: List of Focoos models.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_focoos_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists models specific to Focoos.\n\n Returns:\n list[ModelPreview]: List of Focoos models.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/focoos-models\")\n if res.status_code != 200:\n logger.error(f\"Failed to list focoos models: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to list focoos models: {res.status_code} {res.text}\"\n )\n return [ModelPreview.from_json(r) for r in res.json()]\n
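Usage mirrors list_models:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
for preview in client.list_focoos_models():
    print(preview)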
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_models","title":"list_models()
","text":"Lists all available models.
Returns:
Type Description list[ModelPreview]
list[ModelPreview]: List of model previews.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists all available models.\n\n Returns:\n list[ModelPreview]: List of model previews.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/\")\n if res.status_code != 200:\n logger.error(f\"Failed to list models: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list models: {res.status_code} {res.text}\")\n return [ModelPreview.from_json(r) for r in res.json()]\n
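A small sketch that collects reference IDs; ModelPreview exposes ref and name, as get_model_by_name relies on below:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
refs = [m.ref for m in client.list_models()]
print(refs)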
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_shared_datasets","title":"list_shared_datasets()
","text":"Lists datasets shared with the user.
Returns:
Type Description list[DatasetMetadata]
list[DatasetMetadata]: List of shared datasets.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_shared_datasets(self) -> list[DatasetMetadata]:\n \"\"\"\n Lists datasets shared with the user.\n\n Returns:\n list[DatasetMetadata]: List of shared datasets.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"datasets/shared\")\n if res.status_code != 200:\n logger.error(f\"Failed to list datasets: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list datasets: {res.status_code} {res.text}\")\n return [DatasetMetadata.from_json(dataset) for dataset in res.json()]\n
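For example (DatasetMetadata exposes at least a name, which get_dataset_by_name relies on):

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
for ds in client.list_shared_datasets():
    print(ds.name)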
"},{"location":"api/focoos/#focoos.focoos.Focoos.new_model","title":"new_model(name, focoos_model, description)
","text":"Creates a new model in the Focoos system.
Parameters:
Name Type Description Default name
str
Name of the new model.
required focoos_model
str
Reference to the base Focoos model.
required description
str
Description of the new model.
required Returns:
Type Description Optional[RemoteModel]
Optional[RemoteModel]: The created model instance, or None if creation fails.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def new_model(\n self, name: str, focoos_model: str, description: str\n) -> Optional[RemoteModel]:\n \"\"\"\n Creates a new model in the Focoos system.\n\n Args:\n name (str): Name of the new model.\n focoos_model (str): Reference to the base Focoos model.\n description (str): Description of the new model.\n\n Returns:\n Optional[RemoteModel]: The created model instance, or None if creation fails.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.post(\n \"models/\",\n data={\n \"name\": name,\n \"focoos_model\": focoos_model,\n \"description\": description,\n },\n )\n if res.status_code in [200, 201]:\n return RemoteModel(res.json()[\"ref\"], self.http_client)\n if res.status_code == 409:\n logger.warning(f\"Model already exists: {name}\")\n return self.get_model_by_name(name, remote=True)\n logger.warning(f\"Failed to create new model: {res.status_code} {res.text}\")\n return None\n
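A hedged sketch; the model name and description are placeholders, while the base model comes from the foundational models table. Note from the source above that, despite the Raises entry, a failed creation returns None rather than raising:

from focoos.focoos import Focoos

client = Focoos(api_key="<YOUR_API_KEY>")
model = client.new_model(
    name="my-detector",
    focoos_model="focoos_rtdetr",
    description="Example detector",
)
# A 409 conflict returns the existing model; other failures return None.
if model is None:
    raise RuntimeError("Model creation failed")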
"},{"location":"api/local_model/","title":"local model","text":"LocalModel Module
This module provides the LocalModel class that allows loading, inference, and benchmark testing of models in a local environment. It supports detection and segmentation tasks, and utilizes ONNXRuntime for model execution.
Classes:
Name Description LocalModel
A class for managing and interacting with local models.
Functions:
Name Description __init__
Initializes the LocalModel instance, loading the model, metadata, and setting up the runtime.
_read_metadata
Reads the model metadata from a JSON file.
_annotate
Annotates the input image with detection or segmentation results.
infer
Runs inference on an input image, with optional annotation.
benchmark
Benchmarks the model's inference performance over a specified number of iterations and input size.
"},{"location":"api/local_model/#focoos.local_model.LocalModel","title":"LocalModel
","text":"Source code in focoos/local_model.py
class LocalModel:\n def __init__(\n self,\n model_dir: Union[str, Path],\n runtime_type: Optional[RuntimeTypes] = None,\n ):\n \"\"\"\n Initialize a LocalModel instance.\n\n This class sets up a local model for inference by initializing the runtime environment,\n loading metadata, and preparing annotation utilities.\n\n Args:\n model_dir (Union[str, Path]): The path to the directory containing the model files.\n runtime_type (Optional[RuntimeTypes]): Specifies the runtime type to use for inference.\n Defaults to the value of `FOCOOS_CONFIG.runtime_type` if not provided.\n\n Raises:\n ValueError: If no runtime type is provided and `FOCOOS_CONFIG.runtime_type` is not set.\n FileNotFoundError: If the specified model directory does not exist.\n\n Attributes:\n model_dir (Union[str, Path]): Path to the model directory.\n metadata (ModelMetadata): Metadata information for the model.\n model_ref: Reference identifier for the model obtained from metadata.\n label_annotator (sv.LabelAnnotator): Utility for adding labels to the output,\n initialized with text padding and border radius.\n box_annotator (sv.BoxAnnotator): Utility for annotating bounding boxes.\n mask_annotator (sv.MaskAnnotator): Utility for annotating masks.\n runtime (ONNXRuntime): Inference runtime initialized with the specified runtime type,\n model path, metadata, and warmup iterations.\n\n The method verifies the existence of the model directory, reads the model metadata,\n and initializes the runtime for inference using the provided runtime type. Annotation\n utilities are also prepared for visualizing model outputs.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n\n logger.debug(f\"Runtime type: {runtime_type}, Loading model from {model_dir},\")\n if not os.path.exists(model_dir):\n raise FileNotFoundError(f\"Model directory not found: {model_dir}\")\n self.model_dir: Union[str, Path] = model_dir\n self.metadata: ModelMetadata = self._read_metadata()\n self.model_ref = self.metadata.ref\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n self.runtime: ONNXRuntime = get_runtime(\n runtime_type,\n str(os.path.join(model_dir, \"model.onnx\")),\n self.metadata,\n FOCOOS_CONFIG.warmup_iter,\n )\n\n def _read_metadata(self) -> ModelMetadata:\n \"\"\"\n Reads the model metadata from a JSON file.\n\n Returns:\n ModelMetadata: Metadata for the model.\n\n Raises:\n FileNotFoundError: If the metadata file does not exist in the model directory.\n \"\"\"\n metadata_path = os.path.join(self.model_dir, \"focoos_metadata.json\")\n return ModelMetadata.from_json(metadata_path)\n\n def _annotate(self, im: np.ndarray, detections: sv.Detections) -> np.ndarray:\n \"\"\"\n Annotates the input image with detection or segmentation results.\n\n Args:\n im (np.ndarray): The input image to annotate.\n detections (sv.Detections): Detected objects or segmented regions.\n\n Returns:\n np.ndarray: The annotated image with bounding boxes or masks.\n \"\"\"\n classes = self.metadata.classes\n labels = [\n f\"{classes[int(class_id)] if classes is not None else str(class_id)}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence) # type: ignore\n ]\n if self.metadata.task == FocoosTask.DETECTION:\n annotated_im = self.box_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n\n annotated_im = self.label_annotator.annotate(\n scene=annotated_im, detections=detections, 
labels=labels\n )\n elif self.metadata.task in [\n FocoosTask.SEMSEG,\n FocoosTask.INSTANCE_SEGMENTATION,\n ]:\n annotated_im = self.mask_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n return annotated_im\n\n def infer(\n self,\n image: Union[bytes, str, Path, np.ndarray, Image.Image],\n threshold: float = 0.5,\n annotate: bool = False,\n ) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Run inference on an input image and optionally annotate the results.\n\n Args:\n image (Union[bytes, str, Path, np.ndarray, Image.Image]): The input image to infer on.\n This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n Detections with confidence scores below this threshold will be discarded.\n annotate (bool, optional): Whether to annotate the image with detection results. Defaults to False.\n If set to True, the method will return the image with bounding boxes or segmentation masks.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing:\n - `FocoosDetections`: The detections from the inference, represented as a custom object (`FocoosDetections`).\n This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable).\n - `Optional[np.ndarray]`: The annotated image, if `annotate=True`.\n This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks.\n If `annotate=False`, this value will be `None`.\n\n Raises:\n ValueError: If the model is not deployed locally (i.e., `self.runtime` is `None`).\n \"\"\"\n assert self.runtime is not None, \"Model is not deployed (locally)\"\n resize = None #!TODO check for segmentation\n if self.metadata.task == FocoosTask.DETECTION:\n resize = 640 if not self.metadata.im_size else self.metadata.im_size\n logger.debug(f\"Resize: {resize}\")\n t0 = perf_counter()\n im1, im0 = image_preprocess(image, resize=resize)\n t1 = perf_counter()\n detections = self.runtime(im1.astype(np.float32), threshold)\n t2 = perf_counter()\n if resize:\n detections = scale_detections(\n detections, (resize, resize), (im0.shape[1], im0.shape[0])\n )\n logger.debug(f\"Inference time: {t2-t1:.3f} seconds\")\n im = None\n if annotate:\n im = self._annotate(im0, detections)\n\n out = sv_to_focoos_detections(detections, classes=self.metadata.classes)\n t3 = perf_counter()\n out.latency = {\n \"inference\": round(t2 - t1, 3),\n \"preprocess\": round(t1 - t0, 3),\n \"postprocess\": round(t3 - t2, 3),\n }\n return out, im\n\n def benchmark(self, iterations: int, size: int) -> LatencyMetrics:\n \"\"\"\n Benchmark the model's inference performance over multiple iterations.\n\n Args:\n iterations (int): Number of iterations to run for benchmarking.\n size (int): The input size for each benchmark iteration.\n\n Returns:\n LatencyMetrics: Latency metrics including time taken for inference.\n \"\"\"\n return self.runtime.benchmark(iterations, size)\n
"},{"location":"api/local_model/#focoos.local_model.LocalModel.__init__","title":"__init__(model_dir, runtime_type=None)
","text":"Initialize a LocalModel instance.
This class sets up a local model for inference by initializing the runtime environment, loading metadata, and preparing annotation utilities.
Parameters:
Name Type Description Default model_dir
Union[str, Path]
The path to the directory containing the model files.
required runtime_type
Optional[RuntimeTypes]
Specifies the runtime type to use for inference. Defaults to the value of FOCOOS_CONFIG.runtime_type
if not provided.
None
Raises:
Type Description ValueError
If no runtime type is provided and FOCOOS_CONFIG.runtime_type is not set.
FileNotFoundError
If the specified model directory does not exist.
Attributes:
Name Type Description model_dir
Union[str, Path]
Path to the model directory.
metadata
ModelMetadata
Metadata information for the model.
model_ref
str
Reference identifier for the model obtained from metadata.
label_annotator
LabelAnnotator
Utility for adding labels to the output, initialized with text padding and border radius.
box_annotator
BoxAnnotator
Utility for annotating bounding boxes.
mask_annotator
MaskAnnotator
Utility for annotating masks.
runtime
ONNXRuntime
Inference runtime initialized with the specified runtime type, model path, metadata, and warmup iterations.
The method verifies the existence of the model directory, reads the model metadata, and initializes the runtime for inference using the provided runtime type. Annotation utilities are also prepared for visualizing model outputs.
Source code in focoos/local_model.py
def __init__(\n self,\n model_dir: Union[str, Path],\n runtime_type: Optional[RuntimeTypes] = None,\n):\n \"\"\"\n Initialize a LocalModel instance.\n\n This class sets up a local model for inference by initializing the runtime environment,\n loading metadata, and preparing annotation utilities.\n\n Args:\n model_dir (Union[str, Path]): The path to the directory containing the model files.\n runtime_type (Optional[RuntimeTypes]): Specifies the runtime type to use for inference.\n Defaults to the value of `FOCOOS_CONFIG.runtime_type` if not provided.\n\n Raises:\n ValueError: If no runtime type is provided and `FOCOOS_CONFIG.runtime_type` is not set.\n FileNotFoundError: If the specified model directory does not exist.\n\n Attributes:\n model_dir (Union[str, Path]): Path to the model directory.\n metadata (ModelMetadata): Metadata information for the model.\n model_ref: Reference identifier for the model obtained from metadata.\n label_annotator (sv.LabelAnnotator): Utility for adding labels to the output,\n initialized with text padding and border radius.\n box_annotator (sv.BoxAnnotator): Utility for annotating bounding boxes.\n mask_annotator (sv.MaskAnnotator): Utility for annotating masks.\n runtime (ONNXRuntime): Inference runtime initialized with the specified runtime type,\n model path, metadata, and warmup iterations.\n\n The method verifies the existence of the model directory, reads the model metadata,\n and initializes the runtime for inference using the provided runtime type. Annotation\n utilities are also prepared for visualizing model outputs.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n\n logger.debug(f\"Runtime type: {runtime_type}, Loading model from {model_dir},\")\n if not os.path.exists(model_dir):\n raise FileNotFoundError(f\"Model directory not found: {model_dir}\")\n self.model_dir: Union[str, Path] = model_dir\n self.metadata: ModelMetadata = self._read_metadata()\n self.model_ref = self.metadata.ref\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n self.runtime: ONNXRuntime = get_runtime(\n runtime_type,\n str(os.path.join(model_dir, \"model.onnx\")),\n self.metadata,\n FOCOOS_CONFIG.warmup_iter,\n )\n
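A direct-construction sketch (normally Focoos.get_local_model builds this for you); <MODEL_REF> is a placeholder, and the directory must already contain model.onnx and focoos_metadata.json:

import os

from focoos.local_model import LocalModel

model_dir = os.path.join(os.path.expanduser("~"), ".cache", "focoos", "<MODEL_REF>")
# runtime_type is omitted, so FOCOOS_CONFIG.runtime_type applies.
model = LocalModel(model_dir)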
"},{"location":"api/local_model/#focoos.local_model.LocalModel.benchmark","title":"benchmark(iterations, size)
","text":"Benchmark the model's inference performance over multiple iterations.
Parameters:
Name Type Description Default iterations
int
Number of iterations to run for benchmarking.
required size
int
The input size for each benchmark iteration.
required Returns:
Name Type Description LatencyMetrics
LatencyMetrics
Latency metrics including time taken for inference.
Source code in focoos/local_model.py
def benchmark(self, iterations: int, size: int) -> LatencyMetrics:\n \"\"\"\n Benchmark the model's inference performance over multiple iterations.\n\n Args:\n iterations (int): Number of iterations to run for benchmarking.\n size (int): The input size for each benchmark iteration.\n\n Returns:\n LatencyMetrics: Latency metrics including time taken for inference.\n \"\"\"\n return self.runtime.benchmark(iterations, size)\n
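For example (<MODEL_REF> is a placeholder; 640 matches the default detection input size used by infer):

from focoos.focoos import Focoos

model = Focoos(api_key="<YOUR_API_KEY>").get_local_model("<MODEL_REF>")
metrics = model.benchmark(iterations=50, size=640)  # LatencyMetrics
print(metrics)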
"},{"location":"api/local_model/#focoos.local_model.LocalModel.infer","title":"infer(image, threshold=0.5, annotate=False)
","text":"Run inference on an input image and optionally annotate the results.
Parameters:
Name Type Description Default image
Union[bytes, str, Path, ndarray, Image]
The input image to infer on. This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.
required threshold
float
The confidence threshold for detections. Defaults to 0.5. Detections with confidence scores below this threshold will be discarded.
0.5
annotate
bool
Whether to annotate the image with detection results. Defaults to False. If set to True, the method will return the image with bounding boxes or segmentation masks.
False
Returns:
Type Description Tuple[FocoosDetections, Optional[ndarray]]
Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing: - FocoosDetections: The detections from the inference, represented as a custom object (FocoosDetections). This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable). - Optional[np.ndarray]: The annotated image, if annotate=True. This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks. If annotate=False, this value will be None.
Raises:
Type Description ValueError
If the model is not deployed locally (i.e., self.runtime is None).
Source code in focoos/local_model.py
def infer(\n self,\n image: Union[bytes, str, Path, np.ndarray, Image.Image],\n threshold: float = 0.5,\n annotate: bool = False,\n) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Run inference on an input image and optionally annotate the results.\n\n Args:\n image (Union[bytes, str, Path, np.ndarray, Image.Image]): The input image to infer on.\n This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n Detections with confidence scores below this threshold will be discarded.\n annotate (bool, optional): Whether to annotate the image with detection results. Defaults to False.\n If set to True, the method will return the image with bounding boxes or segmentation masks.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing:\n - `FocoosDetections`: The detections from the inference, represented as a custom object (`FocoosDetections`).\n This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable).\n - `Optional[np.ndarray]`: The annotated image, if `annotate=True`.\n This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks.\n If `annotate=False`, this value will be `None`.\n\n Raises:\n ValueError: If the model is not deployed locally (i.e., `self.runtime` is `None`).\n \"\"\"\n assert self.runtime is not None, \"Model is not deployed (locally)\"\n resize = None #!TODO check for segmentation\n if self.metadata.task == FocoosTask.DETECTION:\n resize = 640 if not self.metadata.im_size else self.metadata.im_size\n logger.debug(f\"Resize: {resize}\")\n t0 = perf_counter()\n im1, im0 = image_preprocess(image, resize=resize)\n t1 = perf_counter()\n detections = self.runtime(im1.astype(np.float32), threshold)\n t2 = perf_counter()\n if resize:\n detections = scale_detections(\n detections, (resize, resize), (im0.shape[1], im0.shape[0])\n )\n logger.debug(f\"Inference time: {t2-t1:.3f} seconds\")\n im = None\n if annotate:\n im = self._annotate(im0, detections)\n\n out = sv_to_focoos_detections(detections, classes=self.metadata.classes)\n t3 = perf_counter()\n out.latency = {\n \"inference\": round(t2 - t1, 3),\n \"preprocess\": round(t1 - t0, 3),\n \"postprocess\": round(t3 - t2, 3),\n }\n return out, im\n
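A minimal local-inference sketch ("image.jpg" and <MODEL_REF> are placeholders):

from focoos.focoos import Focoos

model = Focoos(api_key="<YOUR_API_KEY>").get_local_model("<MODEL_REF>")
detections, annotated = model.infer("image.jpg", threshold=0.5, annotate=True)
print(detections.latency)  # {"inference": ..., "preprocess": ..., "postprocess": ...}
# annotated is a NumPy image when annotate=True, otherwise None.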
"},{"location":"api/remote_model/","title":"remote model","text":"RemoteModel Module
This module provides a class to manage remote models in the Focoos ecosystem. It supports various functionalities including model training, deployment, inference, and monitoring.
Classes:
Name Description RemoteModel
A class for interacting with remote models, managing their lifecycle, and performing inference.
Modules:
Name Description HttpClient
Handles HTTP requests.
logger
Logging utility.
BoxAnnotator, LabelAnnotator, MaskAnnotator
Annotation tools for visualizing detections and segmentation tasks.
FocoosDet, FocoosDetections
Classes for representing and managing detections.
FocoosTask
Enum for defining supported tasks (e.g., DETECTION, SEMSEG).
Hyperparameters
Structure for training configuration parameters.
ModelMetadata
Contains metadata for the model.
ModelStatus
Enum for representing the current status of the model.
TrainInstance
Enum for defining available training instances.
image_loader
Utility function for loading images.
focoos_detections_to_supervision
Converter for Focoos detections to supervision format.
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel","title":"RemoteModel
","text":"Represents a remote model in the Focoos platform.
Attributes:
Name Type Description model_ref
str
Reference ID for the model.
http_client
HttpClient
Client for making HTTP requests.
max_deploy_wait
int
Maximum wait time for model deployment.
metadata
ModelMetadata
Metadata of the model.
label_annotator
LabelAnnotator
Annotator for adding labels to images.
box_annotator
BoxAnnotator
Annotator for drawing bounding boxes.
mask_annotator
MaskAnnotator
Annotator for drawing masks on images.
Source code in focoos/remote_model.py
class RemoteModel:\n \"\"\"\n Represents a remote model in the Focoos platform.\n\n Attributes:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): Client for making HTTP requests.\n max_deploy_wait (int): Maximum wait time for model deployment.\n metadata (ModelMetadata): Metadata of the model.\n label_annotator (LabelAnnotator): Annotator for adding labels to images.\n box_annotator (sv.BoxAnnotator): Annotator for drawing bounding boxes.\n mask_annotator (sv.MaskAnnotator): Annotator for drawing masks on images.\n \"\"\"\n\n def __init__(self, model_ref: str, http_client: HttpClient):\n \"\"\"\n Initialize the RemoteModel instance.\n\n Args:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): HTTP client instance for communication.\n\n Raises:\n ValueError: If model metadata retrieval fails.\n \"\"\"\n self.model_ref = model_ref\n self.http_client = http_client\n self.max_deploy_wait = 10\n self.metadata: ModelMetadata = self.get_info()\n\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n logger.info(\n f\"[RemoteModel]: ref: {self.model_ref} name: {self.metadata.name} description: {self.metadata.description} status: {self.metadata.status}\"\n )\n\n def get_info(self) -> ModelMetadata:\n \"\"\"\n Retrieve model metadata.\n\n Returns:\n ModelMetadata: Metadata of the model.\n\n Raises:\n ValueError: If the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n self.metadata = ModelMetadata(**res.json())\n return self.metadata\n\n def train(\n self,\n dataset_ref: str,\n hyperparameters: Hyperparameters,\n anyma_version: str = \"anyma-sagemaker-cu12-torch22-0111\",\n instance_type: TrainInstance = TrainInstance.ML_G4DN_XLARGE,\n volume_size: int = 50,\n max_runtime_in_seconds: int = 36000,\n ) -> dict | None:\n \"\"\"\n Initiate the training of a remote model on the Focoos platform.\n\n This method sends a request to the Focoos platform to start the training process for the model\n referenced by `self.model_ref`. It requires a dataset reference and hyperparameters for training,\n as well as optional configuration options for the instance type, volume size, and runtime.\n\n Args:\n dataset_ref (str): The reference ID of the dataset to be used for training.\n hyperparameters (Hyperparameters): A structure containing the hyperparameters for the training process.\n anyma_version (str, optional): The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".\n instance_type (TrainInstance, optional): The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.\n volume_size (int, optional): The size of the disk volume (in GB) for the training instance. Defaults to 50.\n max_runtime_in_seconds (int, optional): The maximum runtime for training in seconds. Defaults to 36000.\n\n Returns:\n dict: A dictionary containing the response from the training initiation request. 
The content depends on the Focoos platform's response.\n\n Raises:\n ValueError: If the request to start training fails (e.g., due to incorrect parameters or server issues).\n \"\"\"\n res = self.http_client.post(\n f\"models/{self.model_ref}/train\",\n data={\n \"dataset_ref\": dataset_ref,\n \"anyma_version\": anyma_version,\n \"instance_type\": instance_type,\n \"volume_size\": volume_size,\n \"max_runtime_in_seconds\": max_runtime_in_seconds,\n \"hyperparameters\": hyperparameters.model_dump(),\n },\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to train model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to train model: {res.status_code} {res.text}\")\n return res.json()\n\n def train_status(self) -> dict | None:\n \"\"\"\n Retrieve the current status of the model training.\n\n Sends a request to check the training status of the model referenced by `self.model_ref`.\n\n Returns:\n dict: A dictionary containing the training status information.\n\n Raises:\n ValueError: If the request to get training status fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/status\")\n if res.status_code != 200:\n logger.error(f\"Failed to get train status: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get train status: {res.status_code} {res.text}\"\n )\n return res.json()\n\n def train_logs(self) -> list[str]:\n \"\"\"\n Retrieve the training logs for the model.\n\n This method sends a request to fetch the logs of the model's training process. If the request\n is successful (status code 200), it returns the logs as a list of strings. If the request fails,\n it logs a warning and returns an empty list.\n\n Returns:\n list[str]: A list of training logs as strings.\n\n Raises:\n None: Returns an empty list if the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/logs\")\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return []\n return res.json()\n\n def _annotate(self, im: np.ndarray, detections: sv.Detections) -> np.ndarray:\n \"\"\"\n Annotate an image with detection results.\n\n This method adds visual annotations to the provided image based on the model's detection results.\n It handles different tasks (e.g., object detection, semantic segmentation, instance segmentation)\n and uses the corresponding annotator (bounding box, label, or mask) to draw on the image.\n\n Args:\n im (np.ndarray): The image to be annotated, represented as a NumPy array.\n detections (sv.Detections): The detection results to be annotated, including class IDs and confidence scores.\n\n Returns:\n np.ndarray: The annotated image as a NumPy array.\n \"\"\"\n classes = self.metadata.classes\n if classes is not None:\n labels = [\n f\"{classes[int(class_id)]}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence)\n ]\n else:\n labels = [\n f\"{str(class_id)}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence)\n ]\n if self.metadata.task == FocoosTask.DETECTION:\n annotated_im = self.box_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n\n annotated_im = self.label_annotator.annotate(\n scene=annotated_im, detections=detections, labels=labels\n )\n elif self.metadata.task in [\n FocoosTask.SEMSEG,\n FocoosTask.INSTANCE_SEGMENTATION,\n ]:\n annotated_im = self.mask_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n return annotated_im\n\n def 
infer(\n self,\n image: Union[str, Path, np.ndarray, bytes],\n threshold: float = 0.5,\n annotate: bool = False,\n ) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Perform inference on the provided image using the remote model.\n\n This method sends an image to the remote model for inference and retrieves the detection results.\n Optionally, it can annotate the image with the detection results.\n\n Args:\n image (Union[str, Path, bytes]): The image to infer on, which can be a file path, a string representing the path, or raw bytes.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n annotate (bool, optional): Whether to annotate the image with the detection results. Defaults to False.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]:\n - FocoosDetections: The detection results including class IDs, confidence scores, etc.\n - Optional[np.ndarray]: The annotated image if `annotate` is True, else None.\n\n Raises:\n FileNotFoundError: If the provided image file path is invalid.\n ValueError: If the inference request fails.\n \"\"\"\n image_bytes = None\n if isinstance(image, str) or isinstance(image, Path):\n if not os.path.exists(image):\n logger.error(f\"Image file not found: {image}\")\n raise FileNotFoundError(f\"Image file not found: {image}\")\n image_bytes = open(image, \"rb\").read()\n elif isinstance(image, np.ndarray):\n _, buffer = cv2.imencode(\".jpg\", image)\n image_bytes = buffer.tobytes()\n else:\n image_bytes = image\n files = {\"file\": image_bytes}\n t0 = time.time()\n res = self.http_client.post(\n f\"models/{self.model_ref}/inference?confidence_threshold={threshold}\",\n files=files,\n )\n t1 = time.time()\n if res.status_code == 200:\n logger.debug(f\"Inference time: {t1-t0:.3f} seconds\")\n detections = FocoosDetections(\n detections=[\n FocoosDet.from_json(d) for d in res.json().get(\"detections\", [])\n ],\n latency=res.json().get(\"latency\", None),\n )\n preview = None\n if annotate:\n im0 = image_loader(image)\n sv_detections = focoos_detections_to_supervision(detections)\n preview = self._annotate(im0, sv_detections)\n return detections, preview\n else:\n logger.error(f\"Failed to infer: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to infer: {res.status_code} {res.text}\")\n\n def train_metrics(self, period=60) -> dict | None:\n \"\"\"\n Retrieve training metrics for the model over a specified period.\n\n This method fetches the training metrics for the remote model, including aggregated values,\n such as average performance metrics over the given period.\n\n Args:\n period (int, optional): The period (in seconds) for which to fetch the metrics. Defaults to 60.\n\n Returns:\n Optional[dict]: A dictionary containing the training metrics if the request is successful,\n or None if the request fails.\n \"\"\"\n res = self.http_client.get(\n f\"models/{self.model_ref}/train/all-metrics?period={period}&aggregation_type=Average\"\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return None\n return res.json()\n\n def _log_metrics(self):\n \"\"\"\n Log the latest training metrics for the model.\n\n This method retrieves the current training metrics, such as iteration, total loss, and evaluation\n metrics (like mIoU for segmentation tasks or AP50 for detection tasks). 
It logs the most recent values\n for these metrics, helping monitor the model's training progress.\n\n The logged metrics depend on the model's task:\n - For segmentation tasks (SEMSEG), the mean Intersection over Union (mIoU) is logged.\n - For detection tasks, the Average Precision at 50% IoU (AP50) is logged.\n\n Returns:\n None: The method only logs the metrics without returning any value.\n\n Logs:\n - Iteration number.\n - Total loss value.\n - Relevant evaluation metric (mIoU or AP50).\n \"\"\"\n metrics = self.train_metrics()\n if metrics:\n iter = (\n metrics[\"iter\"][-1]\n if \"iter\" in metrics and len(metrics[\"iter\"]) > 0\n else -1\n )\n total_loss = (\n metrics[\"total_loss\"][-1]\n if \"total_loss\" in metrics and len(metrics[\"total_loss\"]) > 0\n else -1\n )\n if self.metadata.task == FocoosTask.SEMSEG:\n accuracy = (\n metrics[\"mIoU\"][-1]\n if \"mIoU\" in metrics and len(metrics[\"mIoU\"]) > 0\n else \"-\"\n )\n eval_metric = \"mIoU\"\n else:\n accuracy = (\n metrics[\"AP50\"][-1]\n if \"AP50\" in metrics and len(metrics[\"AP50\"]) > 0\n else \"-\"\n )\n eval_metric = \"AP50\"\n logger.info(\n f\"Iter {iter:.0f}: Loss {total_loss:.2f}, {eval_metric} {accuracy}\"\n )\n\n def monitor_train(self, update_period=30) -> None:\n \"\"\"\n Monitor the training process of the model and log its status periodically.\n\n This method continuously checks the model's training status and logs updates based on the current state.\n It monitors the primary and secondary statuses of the model, and performs the following actions:\n - If the status is \"Pending\", it logs a waiting message and waits for resources.\n - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training.\n - If the status is \"Completed\", it logs the final metrics and exits.\n - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.\n\n Args:\n update_period (int, optional): The time (in seconds) to wait between status checks. 
Default is 30 seconds.\n\n Returns:\n None: This method does not return any value but logs information about the training process.\n\n Logs:\n - The current training status, including elapsed time.\n - Training metrics at regular intervals while the model is training.\n \"\"\"\n completed_status = [\"Completed\", \"Failed\", \"Stopped\"]\n # init to make do-while\n status = {\"main_status\": \"Flag\", \"secondary_status\": \"Flag\"}\n prev_status = status\n while status[\"main_status\"] not in completed_status:\n prev_status = status\n status = self.train_status()\n elapsed = status.get(\"elapsed_time\", 0)\n # Model at the startup\n if not status[\"main_status\"] or status[\"main_status\"] in [\"Pending\"]:\n if prev_status[\"main_status\"] != status[\"main_status\"]:\n logger.info(\"[0s] Waiting for resources...\")\n sleep(update_period)\n continue\n # Training in progress\n if status[\"main_status\"] in [\"InProgress\"]:\n if prev_status[\"secondary_status\"] != status[\"secondary_status\"]:\n if status[\"secondary_status\"] in [\"Starting\", \"Pending\"]:\n logger.info(\n f\"[0s] {status['main_status']}: {status['secondary_status']}\"\n )\n else:\n logger.info(\n f\"[{elapsed//60}m:{elapsed%60}s] {status['main_status']}: {status['secondary_status']}\"\n )\n if status[\"secondary_status\"] in [\"Training\"]:\n self._log_metrics()\n sleep(update_period)\n continue\n if status[\"main_status\"] == \"Completed\":\n self._log_metrics()\n return\n else:\n logger.info(f\"Model is not training, status: {status['main_status']}\")\n return\n\n def stop_training(self) -> None:\n \"\"\"\n Stop the training process of the model.\n\n This method sends a request to stop the training of the model identified by `model_ref`.\n If the request fails, an error is logged and a `ValueError` is raised.\n\n Raises:\n ValueError: If the stop training request fails.\n\n Logs:\n - Error message if the request to stop training fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}/train\")\n if res.status_code != 200:\n logger.error(f\"Failed to stop training: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to stop training: {res.status_code} {res.text}\"\n )\n\n def delete_model(self) -> None:\n \"\"\"\n Delete the model from the system.\n\n This method sends a request to delete the model identified by `model_ref`.\n If the request fails or the status code is not 204 (No Content), an error is logged\n and a `ValueError` is raised.\n\n Raises:\n ValueError: If the delete model request fails or does not return a 204 status code.\n\n Logs:\n - Error message if the request to delete the model fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}\")\n if res.status_code != 204:\n logger.error(f\"Failed to delete model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to delete model: {res.status_code} {res.text}\")\n
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.__init__","title":"__init__(model_ref, http_client)
","text":"Initialize the RemoteModel instance.
Parameters:
Name Type Description Default model_ref
str
Reference ID for the model.
required http_client
HttpClient
HTTP client instance for communication.
required Raises:
Type Description ValueError
If model metadata retrieval fails.
Source code in focoos/remote_model.py
def __init__(self, model_ref: str, http_client: HttpClient):\n \"\"\"\n Initialize the RemoteModel instance.\n\n Args:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): HTTP client instance for communication.\n\n Raises:\n ValueError: If model metadata retrieval fails.\n \"\"\"\n self.model_ref = model_ref\n self.http_client = http_client\n self.max_deploy_wait = 10\n self.metadata: ModelMetadata = self.get_info()\n\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n logger.info(\n f\"[RemoteModel]: ref: {self.model_ref} name: {self.metadata.name} description: {self.metadata.description} status: {self.metadata.status}\"\n )\n
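In practice you rarely call this constructor directly: the Focoos client builds the instance for you. A minimal sketch (the model reference is a placeholder):
import os\nfrom focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\nmodel = focoos.get_remote_model(\"<YOUR-MODEL-ID>\")  # returns a ready RemoteModel\n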
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.delete_model","title":"delete_model()
","text":"Delete the model from the system.
This method sends a request to delete the model identified by model_ref
. If the request fails or the status code is not 204 (No Content), an error is logged and a ValueError
is raised.
Raises:
Type Description ValueError
If the delete model request fails or does not return a 204 status code.
Logs - Error message if the request to delete the model fails, including the status code and response text.
Returns:
Name Type Description None
None
This method does not return any value.
Source code in focoos/remote_model.py
def delete_model(self) -> None:\n \"\"\"\n Delete the model from the system.\n\n This method sends a request to delete the model identified by `model_ref`.\n If the request fails or the status code is not 204 (No Content), an error is logged\n and a `ValueError` is raised.\n\n Raises:\n ValueError: If the delete model request fails or does not return a 204 status code.\n\n Logs:\n - Error message if the request to delete the model fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}\")\n if res.status_code != 204:\n logger.error(f\"Failed to delete model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to delete model: {res.status_code} {res.text}\")\n
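A minimal usage sketch, assuming model is a RemoteModel obtained from the Focoos client:
model.delete_model()  # removes the remote model; raises ValueError if the API call fails\n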
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.get_info","title":"get_info()
","text":"Retrieve model metadata.
Returns:
Name Type Description ModelMetadata
ModelMetadata
Metadata of the model.
Raises:
Type Description ValueError
If the request fails.
Source code in focoos/remote_model.py
def get_info(self) -> ModelMetadata:\n \"\"\"\n Retrieve model metadata.\n\n Returns:\n ModelMetadata: Metadata of the model.\n\n Raises:\n ValueError: If the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n self.metadata = ModelMetadata(**res.json())\n return self.metadata\n
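A minimal usage sketch; the printed fields follow the metadata attributes logged by __init__:
metadata = model.get_info()\nprint(metadata.name, metadata.status)  # the refreshed metadata is also stored in model.metadata\n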
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.infer","title":"infer(image, threshold=0.5, annotate=False)
","text":"Perform inference on the provided image using the remote model.
This method sends an image to the remote model for inference and retrieves the detection results. Optionally, it can annotate the image with the detection results.
Parameters:
Name Type Description Default image
Union[str, Path, ndarray, bytes]
The image to infer on, which can be a file path (as a string or Path), a NumPy array, or raw bytes.
required threshold
float
The confidence threshold for detections. Defaults to 0.5.
0.5
annotate
bool
Whether to annotate the image with the detection results. Defaults to False.
False
Returns:
Type Description Tuple[FocoosDetections, Optional[ndarray]]
Tuple[FocoosDetections, Optional[np.ndarray]]: - FocoosDetections: The detection results including class IDs, confidence scores, etc. - Optional[np.ndarray]: The annotated image if annotate
is True, else None.
Raises:
Type Description FileNotFoundError
If the provided image file path is invalid.
ValueError
If the inference request fails.
Source code in focoos/remote_model.py
def infer(\n self,\n image: Union[str, Path, np.ndarray, bytes],\n threshold: float = 0.5,\n annotate: bool = False,\n) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Perform inference on the provided image using the remote model.\n\n This method sends an image to the remote model for inference and retrieves the detection results.\n Optionally, it can annotate the image with the detection results.\n\n Args:\n image (Union[str, Path, np.ndarray, bytes]): The image to infer on, which can be a file path (as a string or Path), a NumPy array, or raw bytes.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n annotate (bool, optional): Whether to annotate the image with the detection results. Defaults to False.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]:\n - FocoosDetections: The detection results including class IDs, confidence scores, etc.\n - Optional[np.ndarray]: The annotated image if `annotate` is True, else None.\n\n Raises:\n FileNotFoundError: If the provided image file path is invalid.\n ValueError: If the inference request fails.\n \"\"\"\n image_bytes = None\n if isinstance(image, str) or isinstance(image, Path):\n if not os.path.exists(image):\n logger.error(f\"Image file not found: {image}\")\n raise FileNotFoundError(f\"Image file not found: {image}\")\n image_bytes = open(image, \"rb\").read()\n elif isinstance(image, np.ndarray):\n _, buffer = cv2.imencode(\".jpg\", image)\n image_bytes = buffer.tobytes()\n else:\n image_bytes = image\n files = {\"file\": image_bytes}\n t0 = time.time()\n res = self.http_client.post(\n f\"models/{self.model_ref}/inference?confidence_threshold={threshold}\",\n files=files,\n )\n t1 = time.time()\n if res.status_code == 200:\n logger.debug(f\"Inference time: {t1-t0:.3f} seconds\")\n detections = FocoosDetections(\n detections=[\n FocoosDet.from_json(d) for d in res.json().get(\"detections\", [])\n ],\n latency=res.json().get(\"latency\", None),\n )\n preview = None\n if annotate:\n im0 = image_loader(image)\n sv_detections = focoos_detections_to_supervision(detections)\n preview = self._annotate(im0, sv_detections)\n return detections, preview\n else:\n logger.error(f\"Failed to infer: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to infer: {res.status_code} {res.text}\")\n
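A minimal usage sketch, assuming model is a RemoteModel; writing the preview with cv2 assumes the array uses the channel ordering cv2 expects:
detections, preview = model.infer(\"./image.jpg\", threshold=0.4, annotate=True)\nfor det in detections.detections:\n    print(det)  # one FocoosDet per detection\nif preview is not None:\n    import cv2\n    cv2.imwrite(\"./preview.jpg\", preview)  # preview is a NumPy image array\n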
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.monitor_train","title":"monitor_train(update_period=30)
","text":"Monitor the training process of the model and log its status periodically.
This method continuously checks the model's training status and logs updates based on the current state. It monitors the primary and secondary statuses of the model, and performs the following actions: - If the status is \"Pending\", it logs a waiting message and waits for resources. - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training. - If the status is \"Completed\", it logs the final metrics and exits. - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.
Parameters:
Name Type Description Default update_period
int
The time (in seconds) to wait between status checks. Default is 30 seconds.
30
Returns:
Name Type Description None
None
This method does not return any value but logs information about the training process.
Logs - The current training status, including elapsed time.
- Training metrics at regular intervals while the model is training.
Source code in focoos/remote_model.py
def monitor_train(self, update_period=30) -> None:\n \"\"\"\n Monitor the training process of the model and log its status periodically.\n\n This method continuously checks the model's training status and logs updates based on the current state.\n It monitors the primary and secondary statuses of the model, and performs the following actions:\n - If the status is \"Pending\", it logs a waiting message and waits for resources.\n - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training.\n - If the status is \"Completed\", it logs the final metrics and exits.\n - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.\n\n Args:\n update_period (int, optional): The time (in seconds) to wait between status checks. Default is 30 seconds.\n\n Returns:\n None: This method does not return any value but logs information about the training process.\n\n Logs:\n - The current training status, including elapsed time.\n - Training metrics at regular intervals while the model is training.\n \"\"\"\n completed_status = [\"Completed\", \"Failed\", \"Stopped\"]\n # init to make do-while\n status = {\"main_status\": \"Flag\", \"secondary_status\": \"Flag\"}\n prev_status = status\n while status[\"main_status\"] not in completed_status:\n prev_status = status\n status = self.train_status()\n elapsed = status.get(\"elapsed_time\", 0)\n # Model at the startup\n if not status[\"main_status\"] or status[\"main_status\"] in [\"Pending\"]:\n if prev_status[\"main_status\"] != status[\"main_status\"]:\n logger.info(\"[0s] Waiting for resources...\")\n sleep(update_period)\n continue\n # Training in progress\n if status[\"main_status\"] in [\"InProgress\"]:\n if prev_status[\"secondary_status\"] != status[\"secondary_status\"]:\n if status[\"secondary_status\"] in [\"Starting\", \"Pending\"]:\n logger.info(\n f\"[0s] {status['main_status']}: {status['secondary_status']}\"\n )\n else:\n logger.info(\n f\"[{elapsed//60}m:{elapsed%60}s] {status['main_status']}: {status['secondary_status']}\"\n )\n if status[\"secondary_status\"] in [\"Training\"]:\n self._log_metrics()\n sleep(update_period)\n continue\n if status[\"main_status\"] == \"Completed\":\n self._log_metrics()\n return\n else:\n logger.info(f\"Model is not training, status: {status['main_status']}\")\n return\n
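A minimal usage sketch, assuming a training job was already started on model:
model.monitor_train(update_period=30)  # blocks, logging status and metrics until the job completes, fails, or is stopped\n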
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.stop_training","title":"stop_training()
","text":"Stop the training process of the model.
This method sends a request to stop the training of the model identified by model_ref
. If the request fails, an error is logged and a ValueError
is raised.
Raises:
Type Description ValueError
If the stop training request fails.
Logs - Error message if the request to stop training fails, including the status code and response text.
Returns:
Name Type Description None
None
This method does not return any value.
Source code in focoos/remote_model.py
def stop_training(self) -> None:\n \"\"\"\n Stop the training process of the model.\n\n This method sends a request to stop the training of the model identified by `model_ref`.\n If the request fails, an error is logged and a `ValueError` is raised.\n\n Raises:\n ValueError: If the stop training request fails.\n\n Logs:\n - Error message if the request to stop training fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}/train\")\n if res.status_code != 200:\n logger.error(f\"Failed to stop training: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to stop training: {res.status_code} {res.text}\"\n )\n
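A minimal usage sketch, assuming model has an active training job:
try:\n    model.stop_training()\nexcept ValueError as e:\n    print(f\"Could not stop training: {e}\")\n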
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train","title":"train(dataset_ref, hyperparameters, anyma_version='anyma-sagemaker-cu12-torch22-0111', instance_type=TrainInstance.ML_G4DN_XLARGE, volume_size=50, max_runtime_in_seconds=36000)
","text":"Initiate the training of a remote model on the Focoos platform.
This method sends a request to the Focoos platform to start the training process for the model referenced by self.model_ref
. It requires a dataset reference and hyperparameters for training, as well as optional configuration options for the instance type, volume size, and runtime.
Parameters:
Name Type Description Default dataset_ref
str
The reference ID of the dataset to be used for training.
required hyperparameters
Hyperparameters
A structure containing the hyperparameters for the training process.
required anyma_version
str
The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".
'anyma-sagemaker-cu12-torch22-0111'
instance_type
TrainInstance
The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.
ML_G4DN_XLARGE
volume_size
int
The size of the disk volume (in GB) for the training instance. Defaults to 50.
50
max_runtime_in_seconds
int
The maximum runtime for training in seconds. Defaults to 36000.
36000
Returns:
Name Type Description dict
dict | None
A dictionary containing the response from the training initiation request. The content depends on the Focoos platform's response.
Raises:
Type Description ValueError
If the request to start training fails (e.g., due to incorrect parameters or server issues).
Source code in focoos/remote_model.py
def train(\n self,\n dataset_ref: str,\n hyperparameters: Hyperparameters,\n anyma_version: str = \"anyma-sagemaker-cu12-torch22-0111\",\n instance_type: TrainInstance = TrainInstance.ML_G4DN_XLARGE,\n volume_size: int = 50,\n max_runtime_in_seconds: int = 36000,\n) -> dict | None:\n \"\"\"\n Initiate the training of a remote model on the Focoos platform.\n\n This method sends a request to the Focoos platform to start the training process for the model\n referenced by `self.model_ref`. It requires a dataset reference and hyperparameters for training,\n as well as optional configuration options for the instance type, volume size, and runtime.\n\n Args:\n dataset_ref (str): The reference ID of the dataset to be used for training.\n hyperparameters (Hyperparameters): A structure containing the hyperparameters for the training process.\n anyma_version (str, optional): The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".\n instance_type (TrainInstance, optional): The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.\n volume_size (int, optional): The size of the disk volume (in GB) for the training instance. Defaults to 50.\n max_runtime_in_seconds (int, optional): The maximum runtime for training in seconds. Defaults to 36000.\n\n Returns:\n dict: A dictionary containing the response from the training initiation request. The content depends on the Focoos platform's response.\n\n Raises:\n ValueError: If the request to start training fails (e.g., due to incorrect parameters or server issues).\n \"\"\"\n res = self.http_client.post(\n f\"models/{self.model_ref}/train\",\n data={\n \"dataset_ref\": dataset_ref,\n \"anyma_version\": anyma_version,\n \"instance_type\": instance_type,\n \"volume_size\": volume_size,\n \"max_runtime_in_seconds\": max_runtime_in_seconds,\n \"hyperparameters\": hyperparameters.model_dump(),\n },\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to train model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to train model: {res.status_code} {res.text}\")\n return res.json()\n
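A minimal usage sketch (the dataset reference is a placeholder; see Hyperparameters in focoos.ports for the available fields):
from focoos.ports import Hyperparameters\n\nres = model.train(\n    dataset_ref=\"<YOUR-DATASET-ID>\",\n    hyperparameters=Hyperparameters(\n        learning_rate=0.0001,\n        batch_size=16,\n        max_iters=1500,\n    ),\n)\nprint(res)  # response payload from the Focoos platform\n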
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_logs","title":"train_logs()
","text":"Retrieve the training logs for the model.
This method sends a request to fetch the logs of the model's training process. If the request is successful (status code 200), it returns the logs as a list of strings. If the request fails, it logs a warning and returns an empty list.
Returns:
Type Description list[str]
list[str]: A list of training logs as strings.
Raises:
Type Description None
Returns an empty list if the request fails.
Source code in focoos/remote_model.py
def train_logs(self) -> list[str]:\n \"\"\"\n Retrieve the training logs for the model.\n\n This method sends a request to fetch the logs of the model's training process. If the request\n is successful (status code 200), it returns the logs as a list of strings. If the request fails,\n it logs a warning and returns an empty list.\n\n Returns:\n list[str]: A list of training logs as strings.\n\n Raises:\n None: Returns an empty list if the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/logs\")\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return []\n return res.json()\n
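A minimal usage sketch:
for line in model.train_logs():\n    print(line)  # an empty list is returned if the request fails\n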
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_metrics","title":"train_metrics(period=60)
","text":"Retrieve training metrics for the model over a specified period.
This method fetches the training metrics for the remote model, including aggregated values, such as average performance metrics over the given period.
Parameters:
Name Type Description Default period
int
The period (in seconds) for which to fetch the metrics. Defaults to 60.
60
Returns:
Type Description dict | None
Optional[dict]: A dictionary containing the training metrics if the request is successful, or None if the request fails.
Source code in focoos/remote_model.py
def train_metrics(self, period=60) -> dict | None:\n \"\"\"\n Retrieve training metrics for the model over a specified period.\n\n This method fetches the training metrics for the remote model, including aggregated values,\n such as average performance metrics over the given period.\n\n Args:\n period (int, optional): The period (in seconds) for which to fetch the metrics. Defaults to 60.\n\n Returns:\n Optional[dict]: A dictionary containing the training metrics if the request is successful,\n or None if the request fails.\n \"\"\"\n res = self.http_client.get(\n f\"models/{self.model_ref}/train/all-metrics?period={period}&aggregation_type=Average\"\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to get train metrics: {res.status_code} {res.text}\")\n return None\n return res.json()\n
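A minimal usage sketch; the metric keys shown (total_loss, mIoU, AP50) are the ones read by _log_metrics, and their availability depends on the task:
metrics = model.train_metrics(period=60)\nif metrics and metrics.get(\"total_loss\"):\n    print(f\"Latest loss: {metrics['total_loss'][-1]:.2f}\")\n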
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_status","title":"train_status()
","text":"Retrieve the current status of the model training.
Sends a request to check the training status of the model referenced by self.model_ref
.
Returns:
Name Type Description dict
dict | None
A dictionary containing the training status information.
Raises:
Type Description ValueError
If the request to get training status fails.
Source code in focoos/remote_model.py
def train_status(self) -> dict | None:\n \"\"\"\n Retrieve the current status of the model training.\n\n Sends a request to check the training status of the model referenced by `self.model_ref`.\n\n Returns:\n dict: A dictionary containing the training status information.\n\n Raises:\n ValueError: If the request to get training status fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/status\")\n if res.status_code != 200:\n logger.error(f\"Failed to get train status: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get train status: {res.status_code} {res.text}\"\n )\n return res.json()\n
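A minimal polling sketch; the status values mirror the ones checked by monitor_train:
import time\n\nstatus = model.train_status()\nwhile status[\"main_status\"] not in [\"Completed\", \"Failed\", \"Stopped\"]:\n    print(status[\"main_status\"], status.get(\"secondary_status\"))\n    time.sleep(30)\n    status = model.train_status()\n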
"},{"location":"api/runtime/","title":"runtime","text":"Runtime Module for ONNX-based Models
This module provides the necessary functionality for loading, preprocessing, running inference, and benchmarking ONNX-based models using different execution providers such as CUDA, TensorRT, OpenVINO, and CPU. It includes utility functions for image preprocessing, postprocessing, and interfacing with the ONNXRuntime library.
Functions:
Name Description det_postprocess
Postprocesses detection model outputs into sv.Detections.
semseg_postprocess
Postprocesses semantic segmentation model outputs into sv.Detections.
get_runtime
Returns an ONNXRuntime instance configured for the given runtime type.
Classes:
Name Description ONNXRuntime
A class that interfaces with ONNX Runtime for model inference.
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime","title":"ONNXRuntime
","text":"A class that interfaces with ONNX Runtime for model inference using different execution providers (CUDA, TensorRT, OpenVINO, CoreML, etc.). It manages preprocessing, inference, and postprocessing of data, as well as benchmarking the performance of the model.
Attributes:
Name Type Description logger
Logger
Logger for the ONNXRuntime instance.
name
str
The name of the model (derived from its path).
opts
OnnxEngineOpts
Options used for configuring the ONNX Runtime.
model_metadata
ModelMetadata
Metadata related to the model.
postprocess_fn
Callable
The function used to postprocess the model's output.
ort_sess
InferenceSession
The ONNXRuntime inference session.
dtype
dtype
The data type for the model input.
binding
Optional[str]
The binding type for the runtime (e.g., CUDA, CPU).
Source code in focoos/runtime.py
class ONNXRuntime:\n \"\"\"\n A class that interfaces with ONNX Runtime for model inference using different execution providers\n (CUDA, TensorRT, OpenVINO, CoreML, etc.). It manages preprocessing, inference, and postprocessing\n of data, as well as benchmarking the performance of the model.\n\n Attributes:\n logger (Logger): Logger for the ONNXRuntime instance.\n name (str): The name of the model (derived from its path).\n opts (OnnxEngineOpts): Options used for configuring the ONNX Runtime.\n model_metadata (ModelMetadata): Metadata related to the model.\n postprocess_fn (Callable): The function used to postprocess the model's output.\n ort_sess (InferenceSession): The ONNXRuntime inference session.\n dtype (np.dtype): The data type for the model input.\n binding (Optional[str]): The binding type for the runtime (e.g., CUDA, CPU).\n \"\"\"\n\n def __init__(\n self, model_path: str, opts: OnnxEngineOpts, model_metadata: ModelMetadata\n ):\n \"\"\"\n Initializes the ONNXRuntime instance with the specified model and configuration options.\n\n Args:\n model_path (str): Path to the ONNX model file.\n opts (OnnxEngineOpts): The configuration options for ONNX Runtime.\n model_metadata (ModelMetadata): Metadata for the model (e.g., task type).\n \"\"\"\n self.logger = get_logger()\n self.logger.debug(f\"[onnxruntime device] {ort.get_device()}\")\n self.logger.debug(\n f\"[onnxruntime available providers] {ort.get_available_providers()}\"\n )\n self.name = Path(model_path).stem\n self.opts = opts\n self.model_metadata = model_metadata\n self.postprocess_fn = (\n det_postprocess\n if model_metadata.task == FocoosTask.DETECTION\n else semseg_postprocess\n )\n options = ort.SessionOptions()\n if opts.verbose:\n options.log_severity_level = 0\n options.enable_profiling = opts.verbose\n # options.intra_op_num_threads = 1\n available_providers = ort.get_available_providers()\n if opts.cuda and \"CUDAExecutionProvider\" not in available_providers:\n self.logger.warning(\"CUDA ExecutionProvider not found.\")\n if opts.trt and \"TensorrtExecutionProvider\" not in available_providers:\n self.logger.warning(\"Tensorrt ExecutionProvider not found.\")\n if opts.vino and \"OpenVINOExecutionProvider\" not in available_providers:\n self.logger.warning(\"OpenVINO ExecutionProvider not found.\")\n if opts.coreml and \"CoreMLExecutionProvider\" not in available_providers:\n self.logger.warning(\"CoreML ExecutionProvider not found.\")\n # Set providers\n providers = []\n dtype = np.float32\n binding = None\n if opts.trt and \"TensorrtExecutionProvider\" in available_providers:\n providers.append(\n (\n \"TensorrtExecutionProvider\",\n {\n \"device_id\": 0,\n # 'trt_max_workspace_size': 1073741824, # 1 GB\n \"trt_fp16_enable\": opts.fp16,\n \"trt_force_sequential_engine_build\": False,\n },\n )\n )\n dtype = np.float32\n elif opts.vino and \"OpenVINOExecutionProvider\" in available_providers:\n providers.append(\n (\n \"OpenVINOExecutionProvider\",\n {\n \"device_type\": \"MYRIAD_FP16\",\n \"enable_vpu_fast_compile\": True,\n \"num_of_threads\": 1,\n },\n # 'use_compiled_network': False}\n )\n )\n options.graph_optimization_level = (\n ort.GraphOptimizationLevel.ORT_DISABLE_ALL\n )\n dtype = np.float32\n binding = None\n elif opts.cuda and \"CUDAExecutionProvider\" in available_providers:\n binding = \"cuda\"\n options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\n (\n \"CUDAExecutionProvider\",\n {\n \"device_id\": GPU_ID,\n \"arena_extend_strategy\": 
\"kSameAsRequested\",\n \"gpu_mem_limit\": 16 * 1024 * 1024 * 1024,\n \"cudnn_conv_algo_search\": \"EXHAUSTIVE\",\n \"do_copy_in_default_stream\": True,\n },\n )\n )\n elif opts.coreml and \"CoreMLExecutionProvider\" in available_providers:\n # # options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\"CoreMLExecutionProvider\")\n else:\n binding = None\n\n binding = None # TODO: remove this\n providers.append(\"CPUExecutionProvider\")\n self.dtype = dtype\n self.binding = binding\n self.ort_sess = ort.InferenceSession(model_path, options, providers=providers)\n self.active_providers = self.ort_sess.get_providers()\n self.logger.info(\n f\"[onnxruntime] Active providers:{self.ort_sess.get_providers()}\"\n )\n if self.ort_sess.get_inputs()[0].type == \"tensor(uint8)\":\n self.dtype = np.uint8\n else:\n self.dtype = np.float32\n if self.opts.warmup_iter > 0:\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Warming up model ..\")\n for _ in range(self.opts.warmup_iter):\n np_image = np.random.rand(1, 3, 640, 640).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n t0 = perf_counter()\n if self.binding is not None:\n io_binding = self.ort_sess.io_binding()\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=np_image.shape,\n buffer_ptr=np_image.ctypes.data,\n )\n io_binding.bind_cpu_input(input_name, np_image)\n io_binding.bind_output(out_name[0], self.binding)\n t0 = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n t1 = perf_counter()\n io_binding.copy_outputs_to_cpu()\n else:\n self.ort_sess.run(out_name, {input_name: np_image})\n\n self.logger.info(f\"\u23f1\ufe0f [onnxruntime] {self.name} WARMUP DONE\")\n\n def __call__(self, im: np.ndarray, conf_threshold: float) -> sv.Detections:\n \"\"\"\n Runs inference on the provided input image and returns the model's detections.\n\n Args:\n im (np.ndarray): The preprocessed input image.\n conf_threshold (float): The confidence threshold for filtering results.\n\n Returns:\n sv.Detections: A sv.Detections object containing the model's output detections.\n \"\"\"\n out_name = None\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n if self.binding is not None:\n self.logger.info(f\"binding {self.binding}\")\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=im.shape,\n buffer_ptr=im.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, im)\n io_binding.bind_output(out_name[0], self.binding)\n self.ort_sess.run_with_iobinding(io_binding)\n out = io_binding.copy_outputs_to_cpu()\n else:\n out = self.ort_sess.run(out_name, {input_name: im})\n\n detections = self.postprocess_fn(\n out, (im.shape[2], im.shape[3]), conf_threshold\n )\n return detections\n\n def benchmark(self, iterations=20, size=640) -> LatencyMetrics:\n \"\"\"\n Benchmarks the model by running multiple inference iterations and measuring the latency.\n\n Args:\n iterations (int, optional): Number of iterations to run for benchmarking. Defaults to 20.\n size (int, optional): The input image size for benchmarking. 
Defaults to 640.\n\n Returns:\n LatencyMetrics: The latency metrics (e.g., FPS, mean, min, max, and standard deviation).\n \"\"\"\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Benchmarking latency..\")\n size = size if isinstance(size, (tuple, list)) else (size, size)\n\n durations = []\n np_input = (255 * np.random.random((1, 3, size[0], size[1]))).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = self.ort_sess.get_outputs()[0].name\n if self.binding:\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n \"cuda\",\n device_id=0,\n element_type=self.dtype,\n shape=np_input.shape,\n buffer_ptr=np_input.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, np_input)\n io_binding.bind_output(out_name, \"cuda\")\n else:\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n\n for step in range(iterations + 5):\n if self.binding:\n start = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n end = perf_counter()\n # out = io_binding.copy_outputs_to_cpu()\n else:\n start = perf_counter()\n out = self.ort_sess.run(out_name, {input_name: np_input})\n end = perf_counter()\n\n if step >= 5:\n durations.append((end - start) * 1000)\n durations = np.array(durations)\n provider = self.active_providers[0]\n if provider in [\"CUDAExecutionProvider\", \"TensorrtExecutionProvider\"]:\n device = get_gpu_name()\n else:\n device = get_cpu_name()\n metrics = LatencyMetrics(\n fps=int(1000 / durations.mean()),\n engine=f\"onnx.{provider}\",\n mean=round(durations.mean(), 3),\n max=round(durations.max(), 3),\n min=round(durations.min(), 3),\n std=round(durations.std(), 3),\n im_size=size[0],\n device=str(device),\n )\n self.logger.info(f\"\ud83d\udd25 FPS: {metrics.fps}\")\n return metrics\n
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.__call__","title":"__call__(im, conf_threshold)
","text":"Runs inference on the provided input image and returns the model's detections.
Parameters:
Name Type Description Default im
ndarray
The preprocessed input image.
required conf_threshold
float
The confidence threshold for filtering results.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the model's output detections.
Source code in focoos/runtime.py
def __call__(self, im: np.ndarray, conf_threshold: float) -> sv.Detections:\n \"\"\"\n Runs inference on the provided input image and returns the model's detections.\n\n Args:\n im (np.ndarray): The preprocessed input image.\n conf_threshold (float): The confidence threshold for filtering results.\n\n Returns:\n sv.Detections: A sv.Detections object containing the model's output detections.\n \"\"\"\n out_name = None\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n if self.binding is not None:\n self.logger.info(f\"binding {self.binding}\")\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=im.shape,\n buffer_ptr=im.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, im)\n io_binding.bind_output(out_name[0], self.binding)\n self.ort_sess.run_with_iobinding(io_binding)\n out = io_binding.copy_outputs_to_cpu()\n else:\n out = self.ort_sess.run(out_name, {input_name: im})\n\n detections = self.postprocess_fn(\n out, (im.shape[2], im.shape[3]), conf_threshold\n )\n return detections\n
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.__init__","title":"__init__(model_path, opts, model_metadata)
","text":"Initializes the ONNXRuntime instance with the specified model and configuration options.
Parameters:
Name Type Description Default model_path
str
Path to the ONNX model file.
required opts
OnnxEngineOpts
The configuration options for ONNX Runtime.
required model_metadata
ModelMetadata
Metadata for the model (e.g., task type).
required Source code in focoos/runtime.py
def __init__(\n self, model_path: str, opts: OnnxEngineOpts, model_metadata: ModelMetadata\n):\n \"\"\"\n Initializes the ONNXRuntime instance with the specified model and configuration options.\n\n Args:\n model_path (str): Path to the ONNX model file.\n opts (OnnxEngineOpts): The configuration options for ONNX Runtime.\n model_metadata (ModelMetadata): Metadata for the model (e.g., task type).\n \"\"\"\n self.logger = get_logger()\n self.logger.debug(f\"[onnxruntime device] {ort.get_device()}\")\n self.logger.debug(\n f\"[onnxruntime available providers] {ort.get_available_providers()}\"\n )\n self.name = Path(model_path).stem\n self.opts = opts\n self.model_metadata = model_metadata\n self.postprocess_fn = (\n det_postprocess\n if model_metadata.task == FocoosTask.DETECTION\n else semseg_postprocess\n )\n options = ort.SessionOptions()\n if opts.verbose:\n options.log_severity_level = 0\n options.enable_profiling = opts.verbose\n # options.intra_op_num_threads = 1\n available_providers = ort.get_available_providers()\n if opts.cuda and \"CUDAExecutionProvider\" not in available_providers:\n self.logger.warning(\"CUDA ExecutionProvider not found.\")\n if opts.trt and \"TensorrtExecutionProvider\" not in available_providers:\n self.logger.warning(\"Tensorrt ExecutionProvider not found.\")\n if opts.vino and \"OpenVINOExecutionProvider\" not in available_providers:\n self.logger.warning(\"OpenVINO ExecutionProvider not found.\")\n if opts.coreml and \"CoreMLExecutionProvider\" not in available_providers:\n self.logger.warning(\"CoreML ExecutionProvider not found.\")\n # Set providers\n providers = []\n dtype = np.float32\n binding = None\n if opts.trt and \"TensorrtExecutionProvider\" in available_providers:\n providers.append(\n (\n \"TensorrtExecutionProvider\",\n {\n \"device_id\": 0,\n # 'trt_max_workspace_size': 1073741824, # 1 GB\n \"trt_fp16_enable\": opts.fp16,\n \"trt_force_sequential_engine_build\": False,\n },\n )\n )\n dtype = np.float32\n elif opts.vino and \"OpenVINOExecutionProvider\" in available_providers:\n providers.append(\n (\n \"OpenVINOExecutionProvider\",\n {\n \"device_type\": \"MYRIAD_FP16\",\n \"enable_vpu_fast_compile\": True,\n \"num_of_threads\": 1,\n },\n # 'use_compiled_network': False}\n )\n )\n options.graph_optimization_level = (\n ort.GraphOptimizationLevel.ORT_DISABLE_ALL\n )\n dtype = np.float32\n binding = None\n elif opts.cuda and \"CUDAExecutionProvider\" in available_providers:\n binding = \"cuda\"\n options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\n (\n \"CUDAExecutionProvider\",\n {\n \"device_id\": GPU_ID,\n \"arena_extend_strategy\": \"kSameAsRequested\",\n \"gpu_mem_limit\": 16 * 1024 * 1024 * 1024,\n \"cudnn_conv_algo_search\": \"EXHAUSTIVE\",\n \"do_copy_in_default_stream\": True,\n },\n )\n )\n elif opts.coreml and \"CoreMLExecutionProvider\" in available_providers:\n # # options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\"CoreMLExecutionProvider\")\n else:\n binding = None\n\n binding = None # TODO: remove this\n providers.append(\"CPUExecutionProvider\")\n self.dtype = dtype\n self.binding = binding\n self.ort_sess = ort.InferenceSession(model_path, options, providers=providers)\n self.active_providers = self.ort_sess.get_providers()\n self.logger.info(\n f\"[onnxruntime] Active providers:{self.ort_sess.get_providers()}\"\n )\n if self.ort_sess.get_inputs()[0].type == \"tensor(uint8)\":\n self.dtype = np.uint8\n else:\n self.dtype = np.float32\n 
if self.opts.warmup_iter > 0:\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Warming up model ..\")\n for _ in range(self.opts.warmup_iter):\n np_image = np.random.rand(1, 3, 640, 640).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n t0 = perf_counter()\n if self.binding is not None:\n io_binding = self.ort_sess.io_binding()\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=np_image.shape,\n buffer_ptr=np_image.ctypes.data,\n )\n io_binding.bind_cpu_input(input_name, np_image)\n io_binding.bind_output(out_name[0], self.binding)\n t0 = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n t1 = perf_counter()\n io_binding.copy_outputs_to_cpu()\n else:\n self.ort_sess.run(out_name, {input_name: np_image})\n\n self.logger.info(f\"\u23f1\ufe0f [onnxruntime] {self.name} WARMUP DONE\")\n
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.benchmark","title":"benchmark(iterations=20, size=640)
","text":"Benchmarks the model by running multiple inference iterations and measuring the latency.
Parameters:
Name Type Description Default iterations
int
Number of iterations to run for benchmarking. Defaults to 20.
20
size
int
The input image size for benchmarking. Defaults to 640.
640
Returns:
Name Type Description LatencyMetrics
LatencyMetrics
The latency metrics (e.g., FPS, mean, min, max, and standard deviation).
Source code in focoos/runtime.py
def benchmark(self, iterations=20, size=640) -> LatencyMetrics:\n \"\"\"\n Benchmarks the model by running multiple inference iterations and measuring the latency.\n\n Args:\n iterations (int, optional): Number of iterations to run for benchmarking. Defaults to 20.\n size (int, optional): The input image size for benchmarking. Defaults to 640.\n\n Returns:\n LatencyMetrics: The latency metrics (e.g., FPS, mean, min, max, and standard deviation).\n \"\"\"\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Benchmarking latency..\")\n size = size if isinstance(size, (tuple, list)) else (size, size)\n\n durations = []\n np_input = (255 * np.random.random((1, 3, size[0], size[1]))).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = self.ort_sess.get_outputs()[0].name\n if self.binding:\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n \"cuda\",\n device_id=0,\n element_type=self.dtype,\n shape=np_input.shape,\n buffer_ptr=np_input.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, np_input)\n io_binding.bind_output(out_name, \"cuda\")\n else:\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n\n for step in range(iterations + 5):\n if self.binding:\n start = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n end = perf_counter()\n # out = io_binding.copy_outputs_to_cpu()\n else:\n start = perf_counter()\n out = self.ort_sess.run(out_name, {input_name: np_input})\n end = perf_counter()\n\n if step >= 5:\n durations.append((end - start) * 1000)\n durations = np.array(durations)\n provider = self.active_providers[0]\n if provider in [\"CUDAExecutionProvider\", \"TensorrtExecutionProvider\"]:\n device = get_gpu_name()\n else:\n device = get_cpu_name()\n metrics = LatencyMetrics(\n fps=int(1000 / durations.mean()),\n engine=f\"onnx.{provider}\",\n mean=round(durations.mean(), 3),\n max=round(durations.max(), 3),\n min=round(durations.min(), 3),\n std=round(durations.std(), 3),\n im_size=size[0],\n device=str(device),\n )\n self.logger.info(f\"\ud83d\udd25 FPS: {metrics.fps}\")\n return metrics\n
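A minimal usage sketch, assuming runtime is an ONNXRuntime instance (e.g. built via get_runtime):
metrics = runtime.benchmark(iterations=50, size=640)\nprint(metrics.fps, metrics.mean)  # throughput and mean latency in milliseconds\n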
"},{"location":"api/runtime/#focoos.runtime.det_postprocess","title":"det_postprocess(out, im0_shape, conf_threshold)
","text":"Postprocesses the output of an object detection model and filters detections based on a confidence threshold.
Parameters:
Name Type Description Default out
List[ndarray]
The output of the detection model.
required im0_shape
Tuple[int, int]
The original shape of the input image (height, width).
required conf_threshold
float
The confidence threshold for filtering detections.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the filtered bounding boxes, class ids, and confidences.
Source code in focoos/runtime.py
def det_postprocess(\n out: List[np.ndarray], im0_shape: Tuple[int, int], conf_threshold: float\n) -> sv.Detections:\n \"\"\"\n Postprocesses the output of an object detection model and filters detections\n based on a confidence threshold.\n\n Args:\n out (List[np.ndarray]): The output of the detection model.\n im0_shape (Tuple[int, int]): The original shape of the input image (height, width).\n conf_threshold (float): The confidence threshold for filtering detections.\n\n Returns:\n sv.Detections: A sv.Detections object containing the filtered bounding boxes, class ids, and confidences.\n \"\"\"\n cls_ids, boxes, confs = out\n boxes[:, 0::2] *= im0_shape[1]\n boxes[:, 1::2] *= im0_shape[0]\n high_conf_indices = (confs > conf_threshold).nonzero()\n\n return sv.Detections(\n xyxy=boxes[high_conf_indices].astype(int),\n class_id=cls_ids[high_conf_indices].astype(int),\n confidence=confs[high_conf_indices].astype(float),\n )\n
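A minimal sketch with hypothetical raw outputs; the normalized xyxy box format is inferred from the scaling above:
import numpy as np\nfrom focoos.runtime import det_postprocess\n\n# hypothetical raw outputs: two candidate detections with normalized xyxy boxes\ncls_ids = np.array([0, 2])\nboxes = np.array([[0.1, 0.2, 0.4, 0.5], [0.3, 0.3, 0.9, 0.8]])\nconfs = np.array([0.9, 0.3])\ndets = det_postprocess([cls_ids, boxes, confs], (480, 640), conf_threshold=0.5)\nprint(dets.xyxy)  # only the first detection survives, scaled to pixel coordinates\n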
"},{"location":"api/runtime/#focoos.runtime.get_runtime","title":"get_runtime(runtime_type, model_path, model_metadata, warmup_iter=0)
","text":"Creates and returns an ONNXRuntime instance based on the specified runtime type and model path, with options for various execution providers (CUDA, TensorRT, CPU, etc.).
Parameters:
Name Type Description Default runtime_type
RuntimeTypes
The type of runtime to use (e.g., ONNX_CUDA32, ONNX_TRT32).
required model_path
str
The path to the ONNX model.
required model_metadata
ModelMetadata
Metadata describing the model.
required warmup_iter
int
Number of warmup iterations before benchmarking. Defaults to 0.
0
Returns:
Name Type Description ONNXRuntime
ONNXRuntime
A fully configured ONNXRuntime instance.
Source code in focoos/runtime.py
def get_runtime(\n runtime_type: RuntimeTypes,\n model_path: str,\n model_metadata: ModelMetadata,\n warmup_iter: int = 0,\n) -> ONNXRuntime:\n \"\"\"\n Creates and returns an ONNXRuntime instance based on the specified runtime type\n and model path, with options for various execution providers (CUDA, TensorRT, CPU, etc.).\n\n Args:\n runtime_type (RuntimeTypes): The type of runtime to use (e.g., ONNX_CUDA32, ONNX_TRT32).\n model_path (str): The path to the ONNX model.\n model_metadata (ModelMetadata): Metadata describing the model.\n warmup_iter (int, optional): Number of warmup iterations before benchmarking. Defaults to 0.\n\n Returns:\n ONNXRuntime: A fully configured ONNXRuntime instance.\n \"\"\"\n opts = OnnxEngineOpts(\n cuda=runtime_type == RuntimeTypes.ONNX_CUDA32,\n trt=runtime_type in [RuntimeTypes.ONNX_TRT32, RuntimeTypes.ONNX_TRT16],\n fp16=runtime_type == RuntimeTypes.ONNX_TRT16,\n warmup_iter=warmup_iter,\n coreml=runtime_type == RuntimeTypes.ONNX_COREML,\n verbose=False,\n )\n return ONNXRuntime(model_path, opts, model_metadata)\n
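A minimal sketch; the import path for RuntimeTypes is an assumption, and model_metadata and preprocessed_image are placeholders. The input must be a (1, 3, H, W) array matching runtime.dtype, as in the warmup code:
from focoos.ports import RuntimeTypes  # assumed import path\nfrom focoos.runtime import get_runtime\n\nruntime = get_runtime(RuntimeTypes.ONNX_CUDA32, \"./model.onnx\", model_metadata, warmup_iter=2)\ndetections = runtime(preprocessed_image, 0.5)  # sv.Detections, thresholded at 0.5\n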
"},{"location":"api/runtime/#focoos.runtime.semseg_postprocess","title":"semseg_postprocess(out, im0_shape, conf_threshold)
","text":"Postprocesses the output of a semantic segmentation model and filters based on a confidence threshold.
Parameters:
Name Type Description Default out
List[ndarray]
The output of the semantic segmentation model.
required im0_shape
Tuple[int, int]
The original shape of the input image (height, width).
required conf_threshold
float
The confidence threshold for filtering detections.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the masks, class ids, and confidences.
Source code in focoos/runtime.py
def semseg_postprocess(\n out: List[np.ndarray], im0_shape: Tuple[int, int], conf_threshold: float\n) -> sv.Detections:\n \"\"\"\n Postprocesses the output of a semantic segmentation model and filters based\n on a confidence threshold.\n\n Args:\n out (List[np.ndarray]): The output of the semantic segmentation model.\n im0_shape (Tuple[int, int]): The original shape of the input image (height, width).\n conf_threshold (float): The confidence threshold for filtering detections.\n\n Returns:\n sv.Detections: A sv.Detections object containing the masks, class ids, and confidences.\n \"\"\"\n cls_ids, mask, confs = out[0][0], out[1][0], out[2][0]\n masks = np.equal(mask, np.arange(len(cls_ids))[:, None, None])\n high_conf_indices = np.where(confs > conf_threshold)[0]\n masks = masks[high_conf_indices].astype(bool)\n cls_ids = cls_ids[high_conf_indices].astype(int)\n confs = confs[high_conf_indices].astype(float)\n return sv.Detections(\n mask=masks,\n # xyxy is required from supervision\n xyxy=np.zeros(shape=(len(high_conf_indices), 4), dtype=np.uint8),\n class_id=cls_ids,\n confidence=confs,\n )\n
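A minimal sketch with hypothetical raw outputs; the leading batch dimension is inferred from the [0] indexing above:
import numpy as np\nfrom focoos.runtime import semseg_postprocess\n\n# hypothetical raw outputs with a leading batch dimension\ncls_ids = np.array([[1, 7]])  # class id per predicted segment\nmask = np.zeros((1, 4, 4), dtype=int)  # per-pixel segment index\nmask[0, :2] = 1\nconfs = np.array([[0.8, 0.2]])\ndets = semseg_postprocess([cls_ids, mask, confs], (4, 4), conf_threshold=0.5)\nprint(dets.mask.shape)  # (1, 4, 4): only the confident segment is kept\n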
"},{"location":"development/changelog/","title":"Changelog","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"development/code_of_conduct/","title":"Code of Conduct","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"development/contributing/","title":"Contributing","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"getting_started/installation/","title":"Installation","text":"The focoos SDK provides flexibility for installation based on the execution environment you plan to use. The package supports CPU
, NVIDIA GPU
, and NVIDIA GPU with TensorRT
environments. Please note that only one execution environment should be selected during installation.
"},{"location":"getting_started/installation/#requirements","title":"Requirements","text":"For local inference, ensure that you have CUDA 12 and cuDNN 9 installed, as they are required for onnxruntime version 1.20.1.
To install cuDNN 9:
apt-get -y install cudnn9-cuda-12\n
To perform inference using TensorRT, ensure you have TensorRT version 10.5 installed.
"},{"location":"getting_started/installation/#installation-options","title":"Installation Options","text":" If you plan to run the SDK on a CPU-only environment:
pip install 'focoos[cpu] @ git+https://github.com/FocoosAI/focoos.git'\n
For execution using NVIDIA GPUs (with ONNX Runtime GPU support):
pip install 'focoos[gpu] @ git+https://github.com/FocoosAI/focoos.git'\n
For optimized execution using NVIDIA GPUs with TensorRT:
pip install 'focoos[tensorrt] @ git+https://github.com/FocoosAI/focoos.git'\n
Note
\ud83d\udee0\ufe0f Installation Tip: If you want to install a specific version, for example v0.1.3
, use:
pip install 'focoos[tensorrt] @ git+https://github.com/FocoosAI/focoos.git@v0.1.3'\n
\ud83d\udccb Check Versions: Visit https://github.com/FocoosAI/focoos/tags for available versions."},{"location":"getting_started/introduction/","title":"Focoos Python SDK \ud83d\udce6","text":"Unlock the full potential of Focoos AI with the Focoos Python SDK! \ud83d\ude80 This powerful SDK gives you seamless access to our cutting-edge computer vision models and tools, allowing you to effortlessly interact with the Focoos API. With just a few lines of code, you can easily select, customize, test, and deploy pre-trained models tailored to your specific needs. Whether you're deploying in the cloud or on edge devices, the Focoos Python SDK integrates smoothly into your workflow, speeding up your development process.
Ready to dive in? Get started with the setup in just a few simple steps!
\ud83d\ude80 Install the Focoos Python SDK
"},{"location":"getting_started/quickstart/","title":"Quickstart \ud83d\ude80","text":"Getting started with Focoos AI has never been easier! In just a few steps, you can quickly set up remote inference using our built-in models. Here's a simple example of how to perform object detection with the focoos_object365 model:
"},{"location":"getting_started/quickstart/#step-1-install-the-sdk","title":"Step 1: Install the SDK","text":"First, make sure you've installed the Focoos Python SDK by following the installation guide.
"},{"location":"getting_started/quickstart/#step-2-set-up-remote-inference","title":"Step 2: Set Up Remote Inference","text":"With the SDK installed, you can start using the Focoos API to run inference remotely. Here's a basic code snippet to detect objects in an image using a pre-trained model:
from focoos import Focoos\nimport os\n\n# Initialize the Focoos client with your API key\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\n# Get the remote model (focoos_object365) from Focoos API\nmodel = focoos.get_remote_model(\"focoos_object365\")\n\n# Run inference on an image\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n\n# Output the detections\nprint(detections)\n
"},{"location":"helpers/wip/","title":"Wip","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"how_to/cloud_training/","title":"Cloud Training","text":"This section covers the steps to train a model in the cloud using the focoos
library. The following example demonstrates how to interact with the Focoos API to manage models, datasets, and training jobs.
"},{"location":"how_to/cloud_training/#listing-available-datasets","title":"Listing Available Datasets","text":"Before training a model, you can list all available shared datasets:
from pprint import pprint\nimport os\nfrom focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\ndatasets = focoos.list_shared_datasets()\npprint(datasets)\n
"},{"location":"how_to/cloud_training/#initiating-a-cloud-training-job","title":"Initiating a Cloud Training Job","text":"To start training, configure the model, dataset, and training parameters as shown below:
from focoos.ports import Hyperparameters, TrainInstance\n\nmodel = focoos.get_remote_model(\"<YOUR-MODEL-ID>\")\n\nres = model.train(\n anyma_version=\"0.11.1\",\n dataset_ref=\"<YOUR-DATASET-ID>\",\n instance_type=TrainInstance.ML_G4DN_XLARGE,\n volume_size=50,\n max_runtime_in_seconds=36000,\n hyperparameters=Hyperparameters(\n learning_rate=0.0001,\n batch_size=16,\n max_iters=1500,\n eval_period=100,\n resolution=640,\n ), # type: ignore\n)\npprint(res)\n
"},{"location":"how_to/cloud_training/#monitoring-training-progress","title":"Monitoring Training Progress","text":"Once the training job is initiated, monitor its progress by polling the training status. Use the following code:
import time\nfrom pprint import pprint\nfrom focoos.utils.logger import get_logger\n\ncompleted_status = [\"Completed\", \"Failed\"]\nlogger = get_logger(__name__)\n\nmodel = focoos.get_remote_model(\"<YOUR-MODEL-ID>\")\nstatus = model.train_status()\n\nwhile status[\"main_status\"] not in completed_status:\n status = model.train_status()\n logger.info(f\"Training status: {status['main_status']}\")\n pprint(f\"Training progress: {status['status_transitions']}\")\n time.sleep(30)\n
"},{"location":"how_to/cloud_training/#retrieving-training-logs","title":"Retrieving Training Logs","text":"After the training process is complete, retrieve the logs for detailed insights:
logs = model.train_logs()\npprint(logs)\n
"},{"location":"how_to/inference/","title":"Inferece","text":"This section covers how to perform inference using the focoos
library. You can deploy models to the cloud for predictions, integrate with Gradio for interactive demos, or run inference locally.
"},{"location":"how_to/inference/#cloud-inference","title":"\ud83e\udd16 Cloud Inference","text":"from focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\nmodel = focoos.get_remote_model(\"focoos_object365\")\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n
"},{"location":"how_to/inference/#cloud-inference-with-gradio","title":"\ud83e\udd16 Cloud Inference with Gradio","text":"setup FOCOOS_API_KEY_GRADIO
environment variable with your Focoos API key
pip install '.[gradio]'\n
python gradio/app.py\n
"},{"location":"how_to/inference/#local-inference","title":"\ud83e\udd16 Local Inference","text":"from focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\nmodel = focoos.get_local_model(\"focoos_object365\")\n\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Focoos AI \ud83d\udd25","text":"Focoos AI provides an advanced development platform designed to empower developers and businesses with efficient, customizable computer vision solutions. Whether you're working with data from cloud infrastructures or deploying on edge devices, Focoos AI enables you to select, fine-tune, and deploy state-of-the-art models optimized for your unique needs.
"},{"location":"#what-we-offer","title":"What We Offer \ud83c\udfaf","text":""},{"location":"#ai-ready-models-platform-for-computer-vision-applications","title":"AI-Ready Models Platform for Computer Vision Applications \ud83e\udd16","text":"Focoos AI offers a versatile platform for developing computer vision solutions. Our platform includes a suite of services to support the end-to-end development process:
- Ready-to-use models: Choose from a variety of pre-trained models, optimized for different data, applications, and hardware.
- Customization: Tailor models to your specific needs by selecting relevant classes and fine-tuning them on your own dataset.
- Testing and Validation: Verify model accuracy and efficiency using your own data samples, ensuring the model meets your requirements before deployment.
"},{"location":"#key-features","title":"Key Features \ud83d\udd11","text":" -
Select Ready-to-use Models \ud83e\udde9 Get started quickly by selecting one of our efficient, pre-trained models that best suits your data and application needs.
-
Personalize Your Model \u2728 Customize the selected model for higher accuracy through fine-tuning. Adapt the model to your specific use case by training it on your own dataset and selecting useful classes.
-
Test and Validate \ud83e\uddea Upload your data sample to test the model\u2019s accuracy and efficiency. Iterate the process to ensure the model performs to your expectations.
-
Cloud Deployment \u2601\ufe0f Deploy the model on your preferred cloud infrastructure, whether it's your own private cloud or a public cloud service. Your data stays private, as it remains within your servers.
-
Edge Deployment \ud83d\udda5\ufe0f Deploy the model on edge devices. Download the Focoos Engine to run the model locally, without sending any data over the network, ensuring full privacy.
"},{"location":"#why-choose-focoos-ai","title":"Why Choose Focoos AI? \ud83e\udd29","text":"Using Focoos AI helps you save both time and money while delivering high-performance AI models:
- 80% Faster Development \u23f3: Save significant development time compared to traditional methods.
- +5% Model Accuracy \ud83c\udfaf: Achieve some of the most accurate models in the market, as demonstrated by our scientific benchmarks.
- Up to 20x Faster Models \u26a1: Run real-time data analysis with some of the fastest models available today.
"},{"location":"#pre-trained-models-with-minimum-training-data","title":"Pre-Trained Models with Minimum Training Data \ud83d\udcca","text":"Our pre-trained models reduce the need for large datasets, making it easier to deploy computer vision solutions. Here's how Focoos AI helps you minimize your resources:
- 80% Less Training Data \ud83d\udcc9: Leverage pre-trained models that are ready to tackle a variety of use cases.
- 50% Lower Infrastructure Costs \ud83d\udca1: Use less expensive hardware and reduce energy consumption.
- 75% Reduction in CO2 Emissions \ud83c\udf31: Deploy energy-efficient models that help you reduce your carbon footprint.
"},{"location":"#proven-efficiency-and-accuracy","title":"Proven Efficiency and Accuracy \ud83d\udd0d","text":"Focoos AI models outperform other solutions in terms of both accuracy and efficiency. Our technical report highlights how our models lead in academic benchmarks across multiple domains. Contact us to learn more about the scientific benchmarks that set Focoos AI apart.
"},{"location":"#pricing-model","title":"Pricing Model \ud83d\udcb5","text":"We offer a flexible pricing model based on your deployment preferences:
- Public Cloud \ud83c\udf10: Pay for model usage when deployed on public cloud providers.
- Private Infrastructure \ud83c\udfe2: Pay for usage when deploying on your own infrastructure.
Contact us for a tailored quote based on your specific use case.
By choosing Focoos AI, you can save time, reduce costs, and achieve superior model performance, all while ensuring the privacy and efficiency of your deployments. Ready to get started? Reach out to us today to explore how Focoos AI can power your computer vision projects. \ud83d\ude80
"},{"location":"datasets/","title":"Datasets","text":"With the Focoos SDK, you can leverage a diverse collection of foundational datasets specifically tailored for computer vision tasks. These datasets, spanning tasks such as segmentation, detection, and instance segmentation, provide a strong foundation for building and optimizing models across a variety of domains.
Datasets:
Name Task Description Layout Aeroscapes semseg A drone dataset to recognize many classes! supervisely Blister instseg A dataset to find blisters roboflow_coco Boxes detection Finding different boxes on the conveyor belt roboflow_coco Cable detection A dataset for detecting damages in cables (from Roboflow 100) - https://universe.roboflow.com/roboflow-100/cable-damage/dataset/2# roboflow_coco Circuit dataset detection A dataset with electronic circuits roboflow_coco Concrete instseg A dataset to find defect in concrete roboflow_coco Crack Segmentation instseg A dataset for segmenting cracks in buildings with 4k images. roboflow_coco Football-detection detection Football-detection by Roboflow roboflow_coco Peanuts detection Finding Molded or non Molded Peanuts roboflow_coco Strawberries instseg Finding defects on strawberries roboflow_coco aquarium detection aquarium roboflow_coco bottles detection bottles roboflow_coco chess_pieces detection A chess detector dataset by roboflow https://universe.roboflow.com/roboflow-100/chess-pieces-mjzgj roboflow_coco coco_2017_det detection COCO Detection catalog halo detection Halo fps by Roboflow roboflow_coco lettuce detection A dataset to find lettuce roboflow_coco safety detection From roboflow Universe: https://universe.roboflow.com/roboflow-100/construction-safety-gsnvb roboflow_coco screw detection Screw by Roboflow roboflow_coco"},{"location":"models/","title":"Focoos Foundational Models","text":"With the Focoos SDK, you can take advantage of a collection of foundational models that are optimized for a range of computer vision tasks. These pre-trained models, covering detection and semantic segmentation across various domains, provide an excellent starting point for your specific use case. Whether you need to fine-tune for custom requirements or adapt them to your application, these models offer a solid foundation to accelerate your development process.
Models:
Model Name Task Metrics Domain focoos_object365 Detection - Common Objects, 365 classes focoos_rtdetr Detection - Common Objects, 80 classes focoos_cts_medium Semantic Segmentation - Autonomous driving, 30 classes focoos_cts_large Semantic Segmentation - Autonomous driving, 30 classes focoos_ade_nano Semantic Segmentation - Common Scenes, 150 classes focoos_ade_small Semantic Segmentation - Common Scenes, 150 classes focoos_ade_medium Semantic Segmentation - Common Scenes, 150 classes focoos_ade_large Semantic Segmentation - Common Scenes, 150 classes focoos_aeroscapes Semantic Segmentation - Drone Aerial Scenes, 11 classes focoos_isaid_nano Semantic Segmentation - Satellite Imagery, 15 classes focoos_isaid_medium Semantic Segmentation - Satellite Imagery, 15 classes"},{"location":"api/focoos/","title":"focoos","text":"Focoos Module
This module provides a Python interface for interacting with Focoos APIs, allowing users to manage machine learning models and datasets in the Focoos ecosystem. The module supports operations such as retrieving model metadata, downloading models, and listing shared datasets.
Classes:
Name Description Focoos
Main class to interface with Focoos APIs.
Raises:
Type Description ValueError
Raised for invalid API responses or missing parameters.
"},{"location":"api/focoos/#focoos.focoos.Focoos","title":"Focoos
","text":"Main class to interface with Focoos APIs.
This class provides methods to interact with Focoos-hosted models and datasets. It supports functionalities such as listing models, retrieving model metadata, downloading models, and creating new models.
Attributes:
Name Type Description api_key
str
The API key for authentication.
http_client
HttpClient
HTTP client for making API requests.
user_info
dict
Information about the currently authenticated user.
cache_dir
str
Local directory for caching downloaded models.
Source code in focoos/focoos.py
class Focoos:\n \"\"\"\n Main class to interface with Focoos APIs.\n\n This class provides methods to interact with Focoos-hosted models and datasets.\n It supports functionalities such as listing models, retrieving model metadata,\n downloading models, and creating new models.\n\n Attributes:\n api_key (str): The API key for authentication.\n http_client (HttpClient): HTTP client for making API requests.\n user_info (dict): Information about the currently authenticated user.\n cache_dir (str): Local directory for caching downloaded models.\n \"\"\"\n\n def __init__(\n self,\n api_key: Optional[str] = None,\n host_url: Optional[str] = None,\n ):\n \"\"\"\n Initializes the Focoos API client.\n\n This client provides authenticated access to the Focoos API, enabling various operations\n through the configured HTTP client. It retrieves user information upon initialization and\n logs the environment details.\n\n Args:\n api_key (Optional[str]): API key for authentication. Defaults to the `focoos_api_key`\n specified in the FOCOOS_CONFIG.\n host_url (Optional[str]): Base URL for the Focoos API. Defaults to the `default_host_url`\n specified in the FOCOOS_CONFIG.\n\n Raises:\n ValueError: If the API key is not provided, or if the host URL is not specified in the\n arguments or the configuration.\n\n Attributes:\n api_key (str): The API key used for authentication.\n http_client (HttpClient): An HTTP client instance configured with the API key and host URL.\n user_info (dict): Information about the authenticated user retrieved from the API.\n cache_dir (str): Path to the cache directory used by the client.\n\n Logs:\n - Error if the API key or host URL is missing.\n - Info about the authenticated user and environment upon successful initialization.\n \"\"\"\n self.api_key = api_key or FOCOOS_CONFIG.focoos_api_key\n if not self.api_key:\n logger.error(\"API key is required \ud83e\udd16\")\n raise ValueError(\"API key is required \ud83e\udd16\")\n\n host_url = host_url or FOCOOS_CONFIG.default_host_url\n\n self.http_client = HttpClient(self.api_key, host_url)\n self.user_info = self.get_user_info()\n self.cache_dir = os.path.join(os.path.expanduser(\"~\"), \".cache\", \"focoos\")\n logger.info(\n f\"Currently logged as: {self.user_info.email} environment: {host_url}\"\n )\n\n def get_user_info(self) -> User:\n \"\"\"\n Retrieves information about the authenticated user.\n\n Returns:\n dict: Information about the user (e.g., email).\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"user/\")\n if res.status_code != 200:\n logger.error(f\"Failed to get user info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get user info: {res.status_code} {res.text}\")\n return User.from_json(res.json())\n\n def get_model_info(self, model_name: str) -> ModelMetadata:\n \"\"\"\n Retrieves metadata for a specific model.\n\n Args:\n model_name (str): Name of the model.\n\n Returns:\n ModelMetadata: Metadata of the specified model.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{model_name}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n return ModelMetadata.from_json(res.json())\n\n def list_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists all available models.\n\n Returns:\n list[ModelPreview]: List of model previews.\n\n Raises:\n ValueError: If the API request 
fails.\n \"\"\"\n res = self.http_client.get(\"models/\")\n if res.status_code != 200:\n logger.error(f\"Failed to list models: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list models: {res.status_code} {res.text}\")\n return [ModelPreview.from_json(r) for r in res.json()]\n\n def list_focoos_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists models specific to Focoos.\n\n Returns:\n list[ModelPreview]: List of Focoos models.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/focoos-models\")\n if res.status_code != 200:\n logger.error(f\"Failed to list focoos models: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to list focoos models: {res.status_code} {res.text}\"\n )\n return [ModelPreview.from_json(r) for r in res.json()]\n\n def get_local_model(\n self,\n model_ref: str,\n runtime_type: Optional[RuntimeTypes] = None,\n ) -> LocalModel:\n \"\"\"\n Retrieves a local model for the specified reference.\n\n Downloads the model if it does not already exist in the local cache.\n\n Args:\n model_ref (str): Reference identifier for the model.\n runtime_type (Optional[RuntimeTypes]): Runtime type for the model. Defaults to the\n `runtime_type` specified in FOCOOS_CONFIG.\n\n Returns:\n LocalModel: An instance of the local model.\n\n Raises:\n ValueError: If the runtime type is not specified.\n\n Notes:\n The model is cached in the directory specified by `self.cache_dir`.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n model_dir = os.path.join(self.cache_dir, model_ref)\n if not os.path.exists(os.path.join(model_dir, \"model.onnx\")):\n self._download_model(model_ref)\n return LocalModel(model_dir, runtime_type)\n\n def get_remote_model(self, model_ref: str) -> RemoteModel:\n \"\"\"\n Retrieves a remote model instance.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n RemoteModel: The remote model instance.\n \"\"\"\n return RemoteModel(model_ref, self.http_client)\n\n def new_model(\n self, name: str, focoos_model: str, description: str\n ) -> Optional[RemoteModel]:\n \"\"\"\n Creates a new model in the Focoos system.\n\n Args:\n name (str): Name of the new model.\n focoos_model (str): Reference to the base Focoos model.\n description (str): Description of the new model.\n\n Returns:\n Optional[RemoteModel]: The created model instance, or None if creation fails.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.post(\n \"models/\",\n data={\n \"name\": name,\n \"focoos_model\": focoos_model,\n \"description\": description,\n },\n )\n if res.status_code in [200, 201]:\n return RemoteModel(res.json()[\"ref\"], self.http_client)\n if res.status_code == 409:\n logger.warning(f\"Model already exists: {name}\")\n return self.get_model_by_name(name, remote=True)\n logger.warning(f\"Failed to create new model: {res.status_code} {res.text}\")\n return None\n\n def list_shared_datasets(self) -> list[DatasetMetadata]:\n \"\"\"\n Lists datasets shared with the user.\n\n Returns:\n list[DatasetMetadata]: List of shared datasets.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"datasets/shared\")\n if res.status_code != 200:\n logger.error(f\"Failed to list datasets: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list datasets: {res.status_code} {res.text}\")\n return [DatasetMetadata.from_json(dataset) for dataset in res.json()]\n\n def _download_model(self, model_ref: str) -> 
str:\n \"\"\"\n Downloads a model from the Focoos API.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n str: Path to the downloaded model.\n\n Raises:\n ValueError: If the API request fails or the download fails.\n \"\"\"\n model_dir = os.path.join(self.cache_dir, model_ref)\n model_path = os.path.join(model_dir, \"model.onnx\")\n metadata_path = os.path.join(model_dir, \"focoos_metadata.json\")\n if os.path.exists(model_path) and os.path.exists(metadata_path):\n logger.info(\"\ud83d\udce5 Model already downloaded\")\n return model_path\n\n ## download model metadata\n res = self.http_client.get(f\"models/{model_ref}/download?format=onnx\")\n if res.status_code != 200:\n logger.error(f\"Failed to download model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to download model: {res.status_code} {res.text}\")\n\n download_data = res.json()\n metadata = ModelMetadata.from_json(download_data[\"model_metadata\"])\n download_uri = download_data[\"download_uri\"]\n\n ## download model from Focoos Cloud\n logger.debug(f\"Model URI: {download_uri}\")\n logger.info(\"\ud83d\udce5 Downloading model from Focoos Cloud.. \")\n response = self.http_client.get_external_url(download_uri, stream=True)\n if response.status_code != 200:\n logger.error(\n f\"Failed to download model: {response.status_code} {response.text}\"\n )\n raise ValueError(\n f\"Failed to download model: {response.status_code} {response.text}\"\n )\n total_size = int(response.headers.get(\"content-length\", 0))\n logger.info(f\"\ud83d\udce5 Size: {total_size / (1024**2):.2f} MB\")\n\n if not os.path.exists(model_dir):\n os.makedirs(model_dir)\n\n with open(metadata_path, \"w\") as f:\n f.write(metadata.model_dump_json())\n logger.debug(f\"Dumped metadata to {metadata_path}\")\n\n with (\n open(model_path, \"wb\") as f,\n tqdm(\n desc=str(model_path).split(\"/\")[-1],\n total=total_size,\n unit=\"B\",\n unit_scale=True,\n unit_divisor=1024,\n ) as bar,\n ):\n for chunk in response.iter_content(chunk_size=8192):\n f.write(chunk)\n bar.update(len(chunk))\n logger.info(f\"\ud83d\udce5 File downloaded: {model_path}\")\n return model_path\n\n def get_dataset_by_name(self, name: str) -> Optional[DatasetMetadata]:\n \"\"\"\n Retrieves a dataset by its name.\n\n Args:\n name (str): Name of the dataset.\n\n Returns:\n Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.\n \"\"\"\n datasets = self.list_shared_datasets()\n name_lower = name.lower()\n for dataset in datasets:\n if name_lower == dataset.name.lower():\n return dataset\n\n def get_model_by_name(\n self, name: str, remote: bool = True\n ) -> Optional[Union[RemoteModel, LocalModel]]:\n \"\"\"\n Retrieves a model by its name.\n\n Args:\n name (str): Name of the model.\n remote (bool): If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.\n\n Returns:\n Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.\n \"\"\"\n models = self.list_models()\n name_lower = name.lower()\n for model in models:\n if name_lower == model.name.lower():\n if remote:\n return self.get_remote_model(model.ref)\n else:\n return self.get_local_model(model.ref)\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.__init__","title":"__init__(api_key=None, host_url=None)
","text":"Initializes the Focoos API client.
This client provides authenticated access to the Focoos API, enabling various operations through the configured HTTP client. It retrieves user information upon initialization and logs the environment details.
Parameters:
Name Type Description Default api_key
Optional[str]
API key for authentication. Defaults to the focoos_api_key
specified in the FOCOOS_CONFIG.
None
host_url
Optional[str]
Base URL for the Focoos API. Defaults to the default_host_url
specified in the FOCOOS_CONFIG.
None
Raises:
Type Description ValueError
If the API key is not provided, or if the host URL is not specified in the arguments or the configuration.
Attributes:
Name Type Description api_key
str
The API key used for authentication.
http_client
HttpClient
An HTTP client instance configured with the API key and host URL.
user_info
dict
Information about the authenticated user retrieved from the API.
cache_dir
str
Path to the cache directory used by the client.
Logs - Error if the API key or host URL is missing.
- Info about the authenticated user and environment upon successful initialization.
Source code in focoos/focoos.py
def __init__(\n self,\n api_key: Optional[str] = None,\n host_url: Optional[str] = None,\n):\n \"\"\"\n Initializes the Focoos API client.\n\n This client provides authenticated access to the Focoos API, enabling various operations\n through the configured HTTP client. It retrieves user information upon initialization and\n logs the environment details.\n\n Args:\n api_key (Optional[str]): API key for authentication. Defaults to the `focoos_api_key`\n specified in the FOCOOS_CONFIG.\n host_url (Optional[str]): Base URL for the Focoos API. Defaults to the `default_host_url`\n specified in the FOCOOS_CONFIG.\n\n Raises:\n ValueError: If the API key is not provided, or if the host URL is not specified in the\n arguments or the configuration.\n\n Attributes:\n api_key (str): The API key used for authentication.\n http_client (HttpClient): An HTTP client instance configured with the API key and host URL.\n user_info (dict): Information about the authenticated user retrieved from the API.\n cache_dir (str): Path to the cache directory used by the client.\n\n Logs:\n - Error if the API key or host URL is missing.\n - Info about the authenticated user and environment upon successful initialization.\n \"\"\"\n self.api_key = api_key or FOCOOS_CONFIG.focoos_api_key\n if not self.api_key:\n logger.error(\"API key is required \ud83e\udd16\")\n raise ValueError(\"API key is required \ud83e\udd16\")\n\n host_url = host_url or FOCOOS_CONFIG.default_host_url\n\n self.http_client = HttpClient(self.api_key, host_url)\n self.user_info = self.get_user_info()\n self.cache_dir = os.path.join(os.path.expanduser(\"~\"), \".cache\", \"focoos\")\n logger.info(\n f\"Currently logged as: {self.user_info.email} environment: {host_url}\"\n )\n
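A minimal construction sketch; with both arguments omitted, the client falls back to the values in FOCOOS_CONFIG:
import os\nfrom focoos import Focoos\n\n# Explicit API key from the environment; host_url defaults to\n# FOCOOS_CONFIG.default_host_url when not provided.\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n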
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_dataset_by_name","title":"get_dataset_by_name(name)
","text":"Retrieves a dataset by its name.
Parameters:
Name Type Description Default name
str
Name of the dataset.
required Returns:
Type Description Optional[DatasetMetadata]
Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.
Source code in focoos/focoos.py
def get_dataset_by_name(self, name: str) -> Optional[DatasetMetadata]:\n \"\"\"\n Retrieves a dataset by its name.\n\n Args:\n name (str): Name of the dataset.\n\n Returns:\n Optional[DatasetMetadata]: The dataset metadata if found, or None otherwise.\n \"\"\"\n datasets = self.list_shared_datasets()\n name_lower = name.lower()\n for dataset in datasets:\n if name_lower == dataset.name.lower():\n return dataset\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_local_model","title":"get_local_model(model_ref, runtime_type=None)
","text":"Retrieves a local model for the specified reference.
Downloads the model if it does not already exist in the local cache.
Parameters:
Name Type Description Default model_ref
str
Reference identifier for the model.
required runtime_type
Optional[RuntimeTypes]
Runtime type for the model. Defaults to the runtime_type
specified in FOCOOS_CONFIG.
None
Returns:
Name Type Description LocalModel
LocalModel
An instance of the local model.
Raises:
Type Description ValueError
If the runtime type is not specified.
Notes The model is cached in the directory specified by self.cache_dir
.
Source code in focoos/focoos.py
def get_local_model(\n self,\n model_ref: str,\n runtime_type: Optional[RuntimeTypes] = None,\n) -> LocalModel:\n \"\"\"\n Retrieves a local model for the specified reference.\n\n Downloads the model if it does not already exist in the local cache.\n\n Args:\n model_ref (str): Reference identifier for the model.\n runtime_type (Optional[RuntimeTypes]): Runtime type for the model. Defaults to the\n `runtime_type` specified in FOCOOS_CONFIG.\n\n Returns:\n LocalModel: An instance of the local model.\n\n Raises:\n ValueError: If the runtime type is not specified.\n\n Notes:\n The model is cached in the directory specified by `self.cache_dir`.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n model_dir = os.path.join(self.cache_dir, model_ref)\n if not os.path.exists(os.path.join(model_dir, \"model.onnx\")):\n self._download_model(model_ref)\n return LocalModel(model_dir, runtime_type)\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_model_by_name","title":"get_model_by_name(name, remote=True)
","text":"Retrieves a model by its name.
Parameters:
Name Type Description Default name
str
Name of the model.
required remote
bool
If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.
True
Returns:
Type Description Optional[Union[RemoteModel, LocalModel]]
Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.
Source code in focoos/focoos.py
def get_model_by_name(\n self, name: str, remote: bool = True\n) -> Optional[Union[RemoteModel, LocalModel]]:\n \"\"\"\n Retrieves a model by its name.\n\n Args:\n name (str): Name of the model.\n remote (bool): If True, retrieve as a RemoteModel. Otherwise, as a LocalModel. Defaults to True.\n\n Returns:\n Optional[Union[RemoteModel, LocalModel]]: The model instance if found, or None otherwise.\n \"\"\"\n models = self.list_models()\n name_lower = name.lower()\n for model in models:\n if name_lower == model.name.lower():\n if remote:\n return self.get_remote_model(model.ref)\n else:\n return self.get_local_model(model.ref)\n
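A usage sketch (the model name below is hypothetical):
# Case-insensitive lookup over list_models(); returns None when nothing matches.\nmodel = focoos.get_model_by_name(\"my-finetuned-model\", remote=True)\nif model is None:\n    print(\"No model with that name\")\n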
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_model_info","title":"get_model_info(model_name)
","text":"Retrieves metadata for a specific model.
Parameters:
Name Type Description Default model_name
str
Name of the model.
required Returns:
Name Type Description ModelMetadata
ModelMetadata
Metadata of the specified model.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def get_model_info(self, model_name: str) -> ModelMetadata:\n \"\"\"\n Retrieves metadata for a specific model.\n\n Args:\n model_name (str): Name of the model.\n\n Returns:\n ModelMetadata: Metadata of the specified model.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{model_name}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n return ModelMetadata.from_json(res.json())\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_remote_model","title":"get_remote_model(model_ref)
","text":"Retrieves a remote model instance.
Parameters:
Name Type Description Default model_ref
str
Reference name of the model.
required Returns:
Name Type Description RemoteModel
RemoteModel
The remote model instance.
Source code in focoos/focoos.py
def get_remote_model(self, model_ref: str) -> RemoteModel:\n \"\"\"\n Retrieves a remote model instance.\n\n Args:\n model_ref (str): Reference name of the model.\n\n Returns:\n RemoteModel: The remote model instance.\n \"\"\"\n return RemoteModel(model_ref, self.http_client)\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.get_user_info","title":"get_user_info()
","text":"Retrieves information about the authenticated user.
Returns:
Name Type Description User
User
Information about the authenticated user (e.g., email).
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def get_user_info(self) -> User:\n \"\"\"\n Retrieves information about the authenticated user.\n\n Returns:\n User: Information about the authenticated user (e.g., email).\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"user/\")\n if res.status_code != 200:\n logger.error(f\"Failed to get user info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get user info: {res.status_code} {res.text}\")\n return User.from_json(res.json())\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_focoos_models","title":"list_focoos_models()
","text":"Lists models specific to Focoos.
Returns:
Type Description list[ModelPreview]
list[ModelPreview]: List of Focoos models.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_focoos_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists models specific to Focoos.\n\n Returns:\n list[ModelPreview]: List of Focoos models.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/focoos-models\")\n if res.status_code != 200:\n logger.error(f\"Failed to list focoos models: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to list focoos models: {res.status_code} {res.text}\"\n )\n return [ModelPreview.from_json(r) for r in res.json()]\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_models","title":"list_models()
","text":"Lists all available models.
Returns:
Type Description list[ModelPreview]
list[ModelPreview]: List of model previews.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_models(self) -> list[ModelPreview]:\n \"\"\"\n Lists all available models.\n\n Returns:\n list[ModelPreview]: List of model previews.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"models/\")\n if res.status_code != 200:\n logger.error(f\"Failed to list models: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list models: {res.status_code} {res.text}\")\n return [ModelPreview.from_json(r) for r in res.json()]\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.list_shared_datasets","title":"list_shared_datasets()
","text":"Lists datasets shared with the user.
Returns:
Type Description list[DatasetMetadata]
list[DatasetMetadata]: List of shared datasets.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def list_shared_datasets(self) -> list[DatasetMetadata]:\n \"\"\"\n Lists datasets shared with the user.\n\n Returns:\n list[DatasetMetadata]: List of shared datasets.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.get(\"datasets/shared\")\n if res.status_code != 200:\n logger.error(f\"Failed to list datasets: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to list datasets: {res.status_code} {res.text}\")\n return [DatasetMetadata.from_json(dataset) for dataset in res.json()]\n
"},{"location":"api/focoos/#focoos.focoos.Focoos.new_model","title":"new_model(name, focoos_model, description)
","text":"Creates a new model in the Focoos system.
Parameters:
Name Type Description Default name
str
Name of the new model.
required focoos_model
str
Reference to the base Focoos model.
required description
str
Description of the new model.
required Returns:
Type Description Optional[RemoteModel]
Optional[RemoteModel]: The created model instance, or None if creation fails.
Raises:
Type Description ValueError
If the API request fails.
Source code in focoos/focoos.py
def new_model(\n self, name: str, focoos_model: str, description: str\n) -> Optional[RemoteModel]:\n \"\"\"\n Creates a new model in the Focoos system.\n\n Args:\n name (str): Name of the new model.\n focoos_model (str): Reference to the base Focoos model.\n description (str): Description of the new model.\n\n Returns:\n Optional[RemoteModel]: The created model instance, or None if creation fails.\n\n Raises:\n ValueError: If the API request fails.\n \"\"\"\n res = self.http_client.post(\n \"models/\",\n data={\n \"name\": name,\n \"focoos_model\": focoos_model,\n \"description\": description,\n },\n )\n if res.status_code in [200, 201]:\n return RemoteModel(res.json()[\"ref\"], self.http_client)\n if res.status_code == 409:\n logger.warning(f\"Model already exists: {name}\")\n return self.get_model_by_name(name, remote=True)\n logger.warning(f\"Failed to create new model: {res.status_code} {res.text}\")\n return None\n
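A usage sketch (the name and description are hypothetical; on a 409 conflict the existing model is returned instead):
model = focoos.new_model(\n    name=\"my-detector\",\n    focoos_model=\"focoos_object365\",\n    description=\"Detector fine-tuned on my data\",\n)\nif model is None:\n    print(\"Model creation failed\")\n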
"},{"location":"api/local_model/","title":"local model","text":"LocalModel Module
This module provides the LocalModel
class that allows loading, inference, and benchmark testing of models in a local environment. It supports detection and segmentation tasks, and utilizes ONNXRuntime for model execution.
Classes:
Name Description LocalModel
A class for managing and interacting with local models.
Functions:
Name Description __init__
Initializes the LocalModel instance, loading the model, metadata, and setting up the runtime.
_read_metadata
Reads the model metadata from a JSON file.
_annotate
Annotates the input image with detection or segmentation results.
infer
Runs inference on an input image, with optional annotation.
benchmark
Benchmarks the model's inference performance over a specified number of iterations and input size.
"},{"location":"api/local_model/#focoos.local_model.LocalModel","title":"LocalModel
","text":"Source code in focoos/local_model.py
class LocalModel:\n def __init__(\n self,\n model_dir: Union[str, Path],\n runtime_type: Optional[RuntimeTypes] = None,\n ):\n \"\"\"\n Initialize a LocalModel instance.\n\n This class sets up a local model for inference by initializing the runtime environment,\n loading metadata, and preparing annotation utilities.\n\n Args:\n model_dir (Union[str, Path]): The path to the directory containing the model files.\n runtime_type (Optional[RuntimeTypes]): Specifies the runtime type to use for inference.\n Defaults to the value of `FOCOOS_CONFIG.runtime_type` if not provided.\n\n Raises:\n ValueError: If no runtime type is provided and `FOCOOS_CONFIG.runtime_type` is not set.\n FileNotFoundError: If the specified model directory does not exist.\n\n Attributes:\n model_dir (Union[str, Path]): Path to the model directory.\n metadata (ModelMetadata): Metadata information for the model.\n model_ref: Reference identifier for the model obtained from metadata.\n label_annotator (sv.LabelAnnotator): Utility for adding labels to the output,\n initialized with text padding and border radius.\n box_annotator (sv.BoxAnnotator): Utility for annotating bounding boxes.\n mask_annotator (sv.MaskAnnotator): Utility for annotating masks.\n runtime (ONNXRuntime): Inference runtime initialized with the specified runtime type,\n model path, metadata, and warmup iterations.\n\n The method verifies the existence of the model directory, reads the model metadata,\n and initializes the runtime for inference using the provided runtime type. Annotation\n utilities are also prepared for visualizing model outputs.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n\n logger.debug(f\"Runtime type: {runtime_type}, Loading model from {model_dir},\")\n if not os.path.exists(model_dir):\n raise FileNotFoundError(f\"Model directory not found: {model_dir}\")\n self.model_dir: Union[str, Path] = model_dir\n self.metadata: ModelMetadata = self._read_metadata()\n self.model_ref = self.metadata.ref\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n self.runtime: ONNXRuntime = get_runtime(\n runtime_type,\n str(os.path.join(model_dir, \"model.onnx\")),\n self.metadata,\n FOCOOS_CONFIG.warmup_iter,\n )\n\n def _read_metadata(self) -> ModelMetadata:\n \"\"\"\n Reads the model metadata from a JSON file.\n\n Returns:\n ModelMetadata: Metadata for the model.\n\n Raises:\n FileNotFoundError: If the metadata file does not exist in the model directory.\n \"\"\"\n metadata_path = os.path.join(self.model_dir, \"focoos_metadata.json\")\n return ModelMetadata.from_json(metadata_path)\n\n def _annotate(self, im: np.ndarray, detections: sv.Detections) -> np.ndarray:\n \"\"\"\n Annotates the input image with detection or segmentation results.\n\n Args:\n im (np.ndarray): The input image to annotate.\n detections (sv.Detections): Detected objects or segmented regions.\n\n Returns:\n np.ndarray: The annotated image with bounding boxes or masks.\n \"\"\"\n classes = self.metadata.classes\n labels = [\n f\"{classes[int(class_id)] if classes is not None else str(class_id)}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence) # type: ignore\n ]\n if self.metadata.task == FocoosTask.DETECTION:\n annotated_im = self.box_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n\n annotated_im = self.label_annotator.annotate(\n scene=annotated_im, detections=detections, 
labels=labels\n )\n elif self.metadata.task in [\n FocoosTask.SEMSEG,\n FocoosTask.INSTANCE_SEGMENTATION,\n ]:\n annotated_im = self.mask_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n return annotated_im\n\n def infer(\n self,\n image: Union[bytes, str, Path, np.ndarray, Image.Image],\n threshold: float = 0.5,\n annotate: bool = False,\n ) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Run inference on an input image and optionally annotate the results.\n\n Args:\n image (Union[bytes, str, Path, np.ndarray, Image.Image]): The input image to infer on.\n This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n Detections with confidence scores below this threshold will be discarded.\n annotate (bool, optional): Whether to annotate the image with detection results. Defaults to False.\n If set to True, the method will return the image with bounding boxes or segmentation masks.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing:\n - `FocoosDetections`: The detections from the inference, represented as a custom object (`FocoosDetections`).\n This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable).\n - `Optional[np.ndarray]`: The annotated image, if `annotate=True`.\n This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks.\n If `annotate=False`, this value will be `None`.\n\n Raises:\n ValueError: If the model is not deployed locally (i.e., `self.runtime` is `None`).\n \"\"\"\n assert self.runtime is not None, \"Model is not deployed (locally)\"\n resize = None #!TODO check for segmentation\n if self.metadata.task == FocoosTask.DETECTION:\n resize = 640 if not self.metadata.im_size else self.metadata.im_size\n logger.debug(f\"Resize: {resize}\")\n t0 = perf_counter()\n im1, im0 = image_preprocess(image, resize=resize)\n t1 = perf_counter()\n detections = self.runtime(im1.astype(np.float32), threshold)\n t2 = perf_counter()\n if resize:\n detections = scale_detections(\n detections, (resize, resize), (im0.shape[1], im0.shape[0])\n )\n logger.debug(f\"Inference time: {t2-t1:.3f} seconds\")\n im = None\n if annotate:\n im = self._annotate(im0, detections)\n\n out = sv_to_focoos_detections(detections, classes=self.metadata.classes)\n t3 = perf_counter()\n out.latency = {\n \"inference\": round(t2 - t1, 3),\n \"preprocess\": round(t1 - t0, 3),\n \"postprocess\": round(t3 - t2, 3),\n }\n return out, im\n\n def benchmark(self, iterations: int, size: int) -> LatencyMetrics:\n \"\"\"\n Benchmark the model's inference performance over multiple iterations.\n\n Args:\n iterations (int): Number of iterations to run for benchmarking.\n size (int): The input size for each benchmark iteration.\n\n Returns:\n LatencyMetrics: Latency metrics including time taken for inference.\n \"\"\"\n return self.runtime.benchmark(iterations, size)\n
"},{"location":"api/local_model/#focoos.local_model.LocalModel.__init__","title":"__init__(model_dir, runtime_type=None)
","text":"Initialize a LocalModel instance.
This class sets up a local model for inference by initializing the runtime environment, loading metadata, and preparing annotation utilities.
Parameters:
Name Type Description Default model_dir
Union[str, Path]
The path to the directory containing the model files.
required runtime_type
Optional[RuntimeTypes]
Specifies the runtime type to use for inference. Defaults to the value of FOCOOS_CONFIG.runtime_type
if not provided.
None
Raises:
Type Description ValueError
If no runtime type is provided and FOCOOS_CONFIG.runtime_type
is not set.
FileNotFoundError
If the specified model directory does not exist.
Attributes:
Name Type Description model_dir
Union[str, Path]
Path to the model directory.
metadata
ModelMetadata
Metadata information for the model.
model_ref
ModelMetadata
Reference identifier for the model obtained from metadata.
label_annotator
LabelAnnotator
Utility for adding labels to the output, initialized with text padding and border radius.
box_annotator
BoxAnnotator
Utility for annotating bounding boxes.
mask_annotator
MaskAnnotator
Utility for annotating masks.
runtime
ONNXRuntime
Inference runtime initialized with the specified runtime type, model path, metadata, and warmup iterations.
The method verifies the existence of the model directory, reads the model metadata, and initializes the runtime for inference using the provided runtime type. Annotation utilities are also prepared for visualizing model outputs.
Source code in focoos/local_model.py
def __init__(\n self,\n model_dir: Union[str, Path],\n runtime_type: Optional[RuntimeTypes] = None,\n):\n \"\"\"\n Initialize a LocalModel instance.\n\n This class sets up a local model for inference by initializing the runtime environment,\n loading metadata, and preparing annotation utilities.\n\n Args:\n model_dir (Union[str, Path]): The path to the directory containing the model files.\n runtime_type (Optional[RuntimeTypes]): Specifies the runtime type to use for inference.\n Defaults to the value of `FOCOOS_CONFIG.runtime_type` if not provided.\n\n Raises:\n ValueError: If no runtime type is provided and `FOCOOS_CONFIG.runtime_type` is not set.\n FileNotFoundError: If the specified model directory does not exist.\n\n Attributes:\n model_dir (Union[str, Path]): Path to the model directory.\n metadata (ModelMetadata): Metadata information for the model.\n model_ref: Reference identifier for the model obtained from metadata.\n label_annotator (sv.LabelAnnotator): Utility for adding labels to the output,\n initialized with text padding and border radius.\n box_annotator (sv.BoxAnnotator): Utility for annotating bounding boxes.\n mask_annotator (sv.MaskAnnotator): Utility for annotating masks.\n runtime (ONNXRuntime): Inference runtime initialized with the specified runtime type,\n model path, metadata, and warmup iterations.\n\n The method verifies the existence of the model directory, reads the model metadata,\n and initializes the runtime for inference using the provided runtime type. Annotation\n utilities are also prepared for visualizing model outputs.\n \"\"\"\n runtime_type = runtime_type or FOCOOS_CONFIG.runtime_type\n\n logger.debug(f\"Runtime type: {runtime_type}, Loading model from {model_dir},\")\n if not os.path.exists(model_dir):\n raise FileNotFoundError(f\"Model directory not found: {model_dir}\")\n self.model_dir: Union[str, Path] = model_dir\n self.metadata: ModelMetadata = self._read_metadata()\n self.model_ref = self.metadata.ref\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n self.runtime: ONNXRuntime = get_runtime(\n runtime_type,\n str(os.path.join(model_dir, \"model.onnx\")),\n self.metadata,\n FOCOOS_CONFIG.warmup_iter,\n )\n
"},{"location":"api/local_model/#focoos.local_model.LocalModel.benchmark","title":"benchmark(iterations, size)
","text":"Benchmark the model's inference performance over multiple iterations.
Parameters:
Name Type Description Default iterations
int
Number of iterations to run for benchmarking.
required size
int
The input size for each benchmark iteration.
required Returns:
Name Type Description LatencyMetrics
LatencyMetrics
Latency metrics including time taken for inference.
Source code in focoos/local_model.py
def benchmark(self, iterations: int, size: int) -> LatencyMetrics:\n \"\"\"\n Benchmark the model's inference performance over multiple iterations.\n\n Args:\n iterations (int): Number of iterations to run for benchmarking.\n size (int): The input size for each benchmark iteration.\n\n Returns:\n LatencyMetrics: Latency metrics including time taken for inference.\n \"\"\"\n return self.runtime.benchmark(iterations, size)\n
"},{"location":"api/local_model/#focoos.local_model.LocalModel.infer","title":"infer(image, threshold=0.5, annotate=False)
","text":"Run inference on an input image and optionally annotate the results.
Parameters:
Name Type Description Default image
Union[bytes, str, Path, ndarray, Image]
The input image to infer on. This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.
required threshold
float
The confidence threshold for detections. Defaults to 0.5. Detections with confidence scores below this threshold will be discarded.
0.5
annotate
bool
Whether to annotate the image with detection results. Defaults to False. If set to True, the method will return the image with bounding boxes or segmentation masks.
False
Returns:
Type Description Tuple[FocoosDetections, Optional[ndarray]]
Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing: - FocoosDetections
: The detections from the inference, represented as a custom object (FocoosDetections
). This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable). - Optional[np.ndarray]
: The annotated image, if annotate=True
. This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks. If annotate=False
, this value will be None
.
Raises:
Type Description ValueError
If the model is not deployed locally (i.e., self.runtime
is None
).
Source code in focoos/local_model.py
def infer(\n self,\n image: Union[bytes, str, Path, np.ndarray, Image.Image],\n threshold: float = 0.5,\n annotate: bool = False,\n) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Run inference on an input image and optionally annotate the results.\n\n Args:\n image (Union[bytes, str, Path, np.ndarray, Image.Image]): The input image to infer on.\n This can be a byte array, file path, or a PIL Image object, or a NumPy array representing the image.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n Detections with confidence scores below this threshold will be discarded.\n annotate (bool, optional): Whether to annotate the image with detection results. Defaults to False.\n If set to True, the method will return the image with bounding boxes or segmentation masks.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]: A tuple containing:\n - `FocoosDetections`: The detections from the inference, represented as a custom object (`FocoosDetections`).\n This includes the details of the detected objects such as class, confidence score, and bounding box (if applicable).\n - `Optional[np.ndarray]`: The annotated image, if `annotate=True`.\n This will be a NumPy array representation of the image with drawn bounding boxes or segmentation masks.\n If `annotate=False`, this value will be `None`.\n\n Raises:\n ValueError: If the model is not deployed locally (i.e., `self.runtime` is `None`).\n \"\"\"\n assert self.runtime is not None, \"Model is not deployed (locally)\"\n resize = None #!TODO check for segmentation\n if self.metadata.task == FocoosTask.DETECTION:\n resize = 640 if not self.metadata.im_size else self.metadata.im_size\n logger.debug(f\"Resize: {resize}\")\n t0 = perf_counter()\n im1, im0 = image_preprocess(image, resize=resize)\n t1 = perf_counter()\n detections = self.runtime(im1.astype(np.float32), threshold)\n t2 = perf_counter()\n if resize:\n detections = scale_detections(\n detections, (resize, resize), (im0.shape[1], im0.shape[0])\n )\n logger.debug(f\"Inference time: {t2-t1:.3f} seconds\")\n im = None\n if annotate:\n im = self._annotate(im0, detections)\n\n out = sv_to_focoos_detections(detections, classes=self.metadata.classes)\n t3 = perf_counter()\n out.latency = {\n \"inference\": round(t2 - t1, 3),\n \"preprocess\": round(t1 - t0, 3),\n \"postprocess\": round(t3 - t2, 3),\n }\n return out, im\n
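A usage sketch reading the per-stage timings that infer attaches to its output (assuming local_model was obtained via Focoos.get_local_model):
detections, _ = local_model.infer(\"./image.jpg\", threshold=0.5)\n# latency holds preprocess, inference and postprocess times in seconds.\nprint(detections.latency)\n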
"},{"location":"api/remote_model/","title":"remote model","text":"RemoteModel Module
This module provides a class to manage remote models in the Focoos ecosystem. It supports various functionalities including model training, deployment, inference, and monitoring.
Classes:
Name Description RemoteModel
A class for interacting with remote models, managing their lifecycle, and performing inference.
Modules:
Name Description HttpClient
Handles HTTP requests.
logger
Logging utility.
BoxAnnotator, LabelAnnotator, MaskAnnotator
Annotation tools for visualizing detections and segmentation tasks.
FocoosDet, FocoosDetections
Classes for representing and managing detections.
FocoosTask
Enum for defining supported tasks (e.g., DETECTION, SEMSEG).
Hyperparameters
Structure for training configuration parameters.
ModelMetadata
Contains metadata for the model.
ModelStatus
Enum for representing the current status of the model.
TrainInstance
Enum for defining available training instances.
image_loader
Utility function for loading images.
focoos_detections_to_supervision
Converter for Focoos detections to supervision format.
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel","title":"RemoteModel
","text":"Represents a remote model in the Focoos platform.
Attributes:
Name Type Description model_ref
str
Reference ID for the model.
http_client
HttpClient
Client for making HTTP requests.
max_deploy_wait
int
Maximum wait time for model deployment.
metadata
ModelMetadata
Metadata of the model.
label_annotator
LabelAnnotator
Annotator for adding labels to images.
box_annotator
BoxAnnotator
Annotator for drawing bounding boxes.
mask_annotator
MaskAnnotator
Annotator for drawing masks on images.
Source code in focoos/remote_model.py
class RemoteModel:\n \"\"\"\n Represents a remote model in the Focoos platform.\n\n Attributes:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): Client for making HTTP requests.\n max_deploy_wait (int): Maximum wait time for model deployment.\n metadata (ModelMetadata): Metadata of the model.\n label_annotator (LabelAnnotator): Annotator for adding labels to images.\n box_annotator (sv.BoxAnnotator): Annotator for drawing bounding boxes.\n mask_annotator (sv.MaskAnnotator): Annotator for drawing masks on images.\n \"\"\"\n\n def __init__(self, model_ref: str, http_client: HttpClient):\n \"\"\"\n Initialize the RemoteModel instance.\n\n Args:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): HTTP client instance for communication.\n\n Raises:\n ValueError: If model metadata retrieval fails.\n \"\"\"\n self.model_ref = model_ref\n self.http_client = http_client\n self.max_deploy_wait = 10\n self.metadata: ModelMetadata = self.get_info()\n\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n logger.info(\n f\"[RemoteModel]: ref: {self.model_ref} name: {self.metadata.name} description: {self.metadata.description} status: {self.metadata.status}\"\n )\n\n def get_info(self) -> ModelMetadata:\n \"\"\"\n Retrieve model metadata.\n\n Returns:\n ModelMetadata: Metadata of the model.\n\n Raises:\n ValueError: If the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n self.metadata = ModelMetadata(**res.json())\n return self.metadata\n\n def train(\n self,\n dataset_ref: str,\n hyperparameters: Hyperparameters,\n anyma_version: str = \"anyma-sagemaker-cu12-torch22-0111\",\n instance_type: TrainInstance = TrainInstance.ML_G4DN_XLARGE,\n volume_size: int = 50,\n max_runtime_in_seconds: int = 36000,\n ) -> dict | None:\n \"\"\"\n Initiate the training of a remote model on the Focoos platform.\n\n This method sends a request to the Focoos platform to start the training process for the model\n referenced by `self.model_ref`. It requires a dataset reference and hyperparameters for training,\n as well as optional configuration options for the instance type, volume size, and runtime.\n\n Args:\n dataset_ref (str): The reference ID of the dataset to be used for training.\n hyperparameters (Hyperparameters): A structure containing the hyperparameters for the training process.\n anyma_version (str, optional): The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".\n instance_type (TrainInstance, optional): The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.\n volume_size (int, optional): The size of the disk volume (in GB) for the training instance. Defaults to 50.\n max_runtime_in_seconds (int, optional): The maximum runtime for training in seconds. Defaults to 36000.\n\n Returns:\n dict: A dictionary containing the response from the training initiation request. 
The content depends on the Focoos platform's response.\n\n Raises:\n ValueError: If the request to start training fails (e.g., due to incorrect parameters or server issues).\n \"\"\"\n res = self.http_client.post(\n f\"models/{self.model_ref}/train\",\n data={\n \"dataset_ref\": dataset_ref,\n \"anyma_version\": anyma_version,\n \"instance_type\": instance_type,\n \"volume_size\": volume_size,\n \"max_runtime_in_seconds\": max_runtime_in_seconds,\n \"hyperparameters\": hyperparameters.model_dump(),\n },\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to train model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to train model: {res.status_code} {res.text}\")\n return res.json()\n\n def train_status(self) -> dict | None:\n \"\"\"\n Retrieve the current status of the model training.\n\n Sends a request to check the training status of the model referenced by `self.model_ref`.\n\n Returns:\n dict: A dictionary containing the training status information.\n\n Raises:\n ValueError: If the request to get training status fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/status\")\n if res.status_code != 200:\n logger.error(f\"Failed to get train status: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get train status: {res.status_code} {res.text}\"\n )\n return res.json()\n\n def train_logs(self) -> list[str]:\n \"\"\"\n Retrieve the training logs for the model.\n\n This method sends a request to fetch the logs of the model's training process. If the request\n is successful (status code 200), it returns the logs as a list of strings. If the request fails,\n it logs a warning and returns an empty list.\n\n Returns:\n list[str]: A list of training logs as strings.\n\n Raises:\n None: Returns an empty list if the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/logs\")\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return []\n return res.json()\n\n def _annotate(self, im: np.ndarray, detections: sv.Detections) -> np.ndarray:\n \"\"\"\n Annotate an image with detection results.\n\n This method adds visual annotations to the provided image based on the model's detection results.\n It handles different tasks (e.g., object detection, semantic segmentation, instance segmentation)\n and uses the corresponding annotator (bounding box, label, or mask) to draw on the image.\n\n Args:\n im (np.ndarray): The image to be annotated, represented as a NumPy array.\n detections (sv.Detections): The detection results to be annotated, including class IDs and confidence scores.\n\n Returns:\n np.ndarray: The annotated image as a NumPy array.\n \"\"\"\n classes = self.metadata.classes\n if classes is not None:\n labels = [\n f\"{classes[int(class_id)]}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence)\n ]\n else:\n labels = [\n f\"{str(class_id)}: {confid*100:.0f}%\"\n for class_id, confid in zip(detections.class_id, detections.confidence)\n ]\n if self.metadata.task == FocoosTask.DETECTION:\n annotated_im = self.box_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n\n annotated_im = self.label_annotator.annotate(\n scene=annotated_im, detections=detections, labels=labels\n )\n elif self.metadata.task in [\n FocoosTask.SEMSEG,\n FocoosTask.INSTANCE_SEGMENTATION,\n ]:\n annotated_im = self.mask_annotator.annotate(\n scene=im.copy(), detections=detections\n )\n return annotated_im\n\n def 
infer(\n self,\n image: Union[str, Path, np.ndarray, bytes],\n threshold: float = 0.5,\n annotate: bool = False,\n ) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Perform inference on the provided image using the remote model.\n\n This method sends an image to the remote model for inference and retrieves the detection results.\n Optionally, it can annotate the image with the detection results.\n\n Args:\n image (Union[str, Path, bytes]): The image to infer on, which can be a file path, a string representing the path, or raw bytes.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n annotate (bool, optional): Whether to annotate the image with the detection results. Defaults to False.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]:\n - FocoosDetections: The detection results including class IDs, confidence scores, etc.\n - Optional[np.ndarray]: The annotated image if `annotate` is True, else None.\n\n Raises:\n FileNotFoundError: If the provided image file path is invalid.\n ValueError: If the inference request fails.\n \"\"\"\n image_bytes = None\n if isinstance(image, str) or isinstance(image, Path):\n if not os.path.exists(image):\n logger.error(f\"Image file not found: {image}\")\n raise FileNotFoundError(f\"Image file not found: {image}\")\n image_bytes = open(image, \"rb\").read()\n elif isinstance(image, np.ndarray):\n _, buffer = cv2.imencode(\".jpg\", image)\n image_bytes = buffer.tobytes()\n else:\n image_bytes = image\n files = {\"file\": image_bytes}\n t0 = time.time()\n res = self.http_client.post(\n f\"models/{self.model_ref}/inference?confidence_threshold={threshold}\",\n files=files,\n )\n t1 = time.time()\n if res.status_code == 200:\n logger.debug(f\"Inference time: {t1-t0:.3f} seconds\")\n detections = FocoosDetections(\n detections=[\n FocoosDet.from_json(d) for d in res.json().get(\"detections\", [])\n ],\n latency=res.json().get(\"latency\", None),\n )\n preview = None\n if annotate:\n im0 = image_loader(image)\n sv_detections = focoos_detections_to_supervision(detections)\n preview = self._annotate(im0, sv_detections)\n return detections, preview\n else:\n logger.error(f\"Failed to infer: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to infer: {res.status_code} {res.text}\")\n\n def train_metrics(self, period=60) -> dict | None:\n \"\"\"\n Retrieve training metrics for the model over a specified period.\n\n This method fetches the training metrics for the remote model, including aggregated values,\n such as average performance metrics over the given period.\n\n Args:\n period (int, optional): The period (in seconds) for which to fetch the metrics. Defaults to 60.\n\n Returns:\n Optional[dict]: A dictionary containing the training metrics if the request is successful,\n or None if the request fails.\n \"\"\"\n res = self.http_client.get(\n f\"models/{self.model_ref}/train/all-metrics?period={period}&aggregation_type=Average\"\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return None\n return res.json()\n\n def _log_metrics(self):\n \"\"\"\n Log the latest training metrics for the model.\n\n This method retrieves the current training metrics, such as iteration, total loss, and evaluation\n metrics (like mIoU for segmentation tasks or AP50 for detection tasks). 
It logs the most recent values\n for these metrics, helping monitor the model's training progress.\n\n The logged metrics depend on the model's task:\n - For segmentation tasks (SEMSEG), the mean Intersection over Union (mIoU) is logged.\n - For detection tasks, the Average Precision at 50% IoU (AP50) is logged.\n\n Returns:\n None: The method only logs the metrics without returning any value.\n\n Logs:\n - Iteration number.\n - Total loss value.\n - Relevant evaluation metric (mIoU or AP50).\n \"\"\"\n metrics = self.train_metrics()\n if metrics:\n iter = (\n metrics[\"iter\"][-1]\n if \"iter\" in metrics and len(metrics[\"iter\"]) > 0\n else -1\n )\n total_loss = (\n metrics[\"total_loss\"][-1]\n if \"total_loss\" in metrics and len(metrics[\"total_loss\"]) > 0\n else -1\n )\n if self.metadata.task == FocoosTask.SEMSEG:\n accuracy = (\n metrics[\"mIoU\"][-1]\n if \"mIoU\" in metrics and len(metrics[\"mIoU\"]) > 0\n else \"-\"\n )\n eval_metric = \"mIoU\"\n else:\n accuracy = (\n metrics[\"AP50\"][-1]\n if \"AP50\" in metrics and len(metrics[\"AP50\"]) > 0\n else \"-\"\n )\n eval_metric = \"AP50\"\n logger.info(\n f\"Iter {iter:.0f}: Loss {total_loss:.2f}, {eval_metric} {accuracy}\"\n )\n\n def monitor_train(self, update_period=30) -> None:\n \"\"\"\n Monitor the training process of the model and log its status periodically.\n\n This method continuously checks the model's training status and logs updates based on the current state.\n It monitors the primary and secondary statuses of the model, and performs the following actions:\n - If the status is \"Pending\", it logs a waiting message and waits for resources.\n - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training.\n - If the status is \"Completed\", it logs the final metrics and exits.\n - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.\n\n Args:\n update_period (int, optional): The time (in seconds) to wait between status checks. 
Default is 30 seconds.\n\n Returns:\n None: This method does not return any value but logs information about the training process.\n\n Logs:\n - The current training status, including elapsed time.\n - Training metrics at regular intervals while the model is training.\n \"\"\"\n completed_status = [\"Completed\", \"Failed\", \"Stopped\"]\n # init to make do-while\n status = {\"main_status\": \"Flag\", \"secondary_status\": \"Flag\"}\n prev_status = status\n while status[\"main_status\"] not in completed_status:\n prev_status = status\n status = self.train_status()\n elapsed = status.get(\"elapsed_time\", 0)\n # Model at the startup\n if not status[\"main_status\"] or status[\"main_status\"] in [\"Pending\"]:\n if prev_status[\"main_status\"] != status[\"main_status\"]:\n logger.info(\"[0s] Waiting for resources...\")\n sleep(update_period)\n continue\n # Training in progress\n if status[\"main_status\"] in [\"InProgress\"]:\n if prev_status[\"secondary_status\"] != status[\"secondary_status\"]:\n if status[\"secondary_status\"] in [\"Starting\", \"Pending\"]:\n logger.info(\n f\"[0s] {status['main_status']}: {status['secondary_status']}\"\n )\n else:\n logger.info(\n f\"[{elapsed//60}m:{elapsed%60}s] {status['main_status']}: {status['secondary_status']}\"\n )\n if status[\"secondary_status\"] in [\"Training\"]:\n self._log_metrics()\n sleep(update_period)\n continue\n if status[\"main_status\"] == \"Completed\":\n self._log_metrics()\n return\n else:\n logger.info(f\"Model is not training, status: {status['main_status']}\")\n return\n\n def stop_training(self) -> None:\n \"\"\"\n Stop the training process of the model.\n\n This method sends a request to stop the training of the model identified by `model_ref`.\n If the request fails, an error is logged and a `ValueError` is raised.\n\n Raises:\n ValueError: If the stop training request fails.\n\n Logs:\n - Error message if the request to stop training fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}/train\")\n if res.status_code != 200:\n logger.error(f\"Failed to get stop training: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get stop training: {res.status_code} {res.text}\"\n )\n\n def delete_model(self) -> None:\n \"\"\"\n Delete the model from the system.\n\n This method sends a request to delete the model identified by `model_ref`.\n If the request fails or the status code is not 204 (No Content), an error is logged\n and a `ValueError` is raised.\n\n Raises:\n ValueError: If the delete model request fails or does not return a 204 status code.\n\n Logs:\n - Error message if the request to delete the model fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}\")\n if res.status_code != 204:\n logger.error(f\"Failed to delete model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to delete model: {res.status_code} {res.text}\")\n
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.__init__","title":"__init__(model_ref, http_client)
","text":"Initialize the RemoteModel instance.
Parameters:
Name Type Description Default model_ref
str
Reference ID for the model.
required http_client
HttpClient
HTTP client instance for communication.
required Raises:
Type Description ValueError
If model metadata retrieval fails.
Source code in focoos/remote_model.py
def __init__(self, model_ref: str, http_client: HttpClient):\n \"\"\"\n Initialize the RemoteModel instance.\n\n Args:\n model_ref (str): Reference ID for the model.\n http_client (HttpClient): HTTP client instance for communication.\n\n Raises:\n ValueError: If model metadata retrieval fails.\n \"\"\"\n self.model_ref = model_ref\n self.http_client = http_client\n self.max_deploy_wait = 10\n self.metadata: ModelMetadata = self.get_info()\n\n self.label_annotator = sv.LabelAnnotator(text_padding=10, border_radius=10)\n self.box_annotator = sv.BoxAnnotator()\n self.mask_annotator = sv.MaskAnnotator()\n logger.info(\n f\"[RemoteModel]: ref: {self.model_ref} name: {self.metadata.name} description: {self.metadata.description} status: {self.metadata.status}\"\n )\n
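In practice you rarely construct a RemoteModel directly; instances are returned by the Focoos client, as in the quickstart:
from focoos import Focoos\nimport os\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\nmodel = focoos.get_remote_model(\"focoos_object365\")  # returns a RemoteModel\n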
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.delete_model","title":"delete_model()
","text":"Delete the model from the system.
This method sends a request to delete the model identified by model_ref
. If the request fails or the status code is not 204 (No Content), an error is logged and a ValueError
is raised.
Raises:
Type Description ValueError
If the delete model request fails or does not return a 204 status code.
Logs - Error message if the request to delete the model fails, including the status code and response text.
Returns:
Name Type Description None
None
This method does not return any value.
Source code in focoos/remote_model.py
def delete_model(self) -> None:\n \"\"\"\n Delete the model from the system.\n\n This method sends a request to delete the model identified by `model_ref`.\n If the request fails or the status code is not 204 (No Content), an error is logged\n and a `ValueError` is raised.\n\n Raises:\n ValueError: If the delete model request fails or does not return a 204 status code.\n\n Logs:\n - Error message if the request to delete the model fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}\")\n if res.status_code != 204:\n logger.error(f\"Failed to delete model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to delete model: {res.status_code} {res.text}\")\n
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.get_info","title":"get_info()
","text":"Retrieve model metadata.
Returns:
Name Type Description ModelMetadata
ModelMetadata
Metadata of the model.
Raises:
Type Description ValueError
If the request fails.
Source code in focoos/remote_model.py
def get_info(self) -> ModelMetadata:\n \"\"\"\n Retrieve model metadata.\n\n Returns:\n ModelMetadata: Metadata of the model.\n\n Raises:\n ValueError: If the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}\")\n if res.status_code != 200:\n logger.error(f\"Failed to get model info: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to get model info: {res.status_code} {res.text}\")\n self.metadata = ModelMetadata(**res.json())\n return self.metadata\n
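A short usage sketch (assuming model was obtained via Focoos.get_remote_model):
metadata = model.get_info()\nprint(metadata.name, metadata.status)  # same fields logged at initialization\n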
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.infer","title":"infer(image, threshold=0.5, annotate=False)
","text":"Perform inference on the provided image using the remote model.
This method sends an image to the remote model for inference and retrieves the detection results. Optionally, it can annotate the image with the detection results.
Parameters:
Name Type Description Default image
Union[str, Path, np.ndarray, bytes]
The image to infer on, which can be a file path (as str or Path), a NumPy array, or raw bytes.
required threshold
float
The confidence threshold for detections. Defaults to 0.5.
0.5
annotate
bool
Whether to annotate the image with the detection results. Defaults to False.
False
Returns:
Type Description Tuple[FocoosDetections, Optional[ndarray]]
Tuple[FocoosDetections, Optional[np.ndarray]]: - FocoosDetections: The detection results including class IDs, confidence scores, etc. - Optional[np.ndarray]: The annotated image if annotate
is True, else None.
Raises:
Type Description FileNotFoundError
If the provided image file path is invalid.
ValueError
If the inference request fails.
Source code in focoos/remote_model.py
def infer(\n self,\n image: Union[str, Path, np.ndarray, bytes],\n threshold: float = 0.5,\n annotate: bool = False,\n) -> Tuple[FocoosDetections, Optional[np.ndarray]]:\n \"\"\"\n Perform inference on the provided image using the remote model.\n\n This method sends an image to the remote model for inference and retrieves the detection results.\n Optionally, it can annotate the image with the detection results.\n\n Args:\n image (Union[str, Path, bytes]): The image to infer on, which can be a file path, a string representing the path, or raw bytes.\n threshold (float, optional): The confidence threshold for detections. Defaults to 0.5.\n annotate (bool, optional): Whether to annotate the image with the detection results. Defaults to False.\n\n Returns:\n Tuple[FocoosDetections, Optional[np.ndarray]]:\n - FocoosDetections: The detection results including class IDs, confidence scores, etc.\n - Optional[np.ndarray]: The annotated image if `annotate` is True, else None.\n\n Raises:\n FileNotFoundError: If the provided image file path is invalid.\n ValueError: If the inference request fails.\n \"\"\"\n image_bytes = None\n if isinstance(image, str) or isinstance(image, Path):\n if not os.path.exists(image):\n logger.error(f\"Image file not found: {image}\")\n raise FileNotFoundError(f\"Image file not found: {image}\")\n image_bytes = open(image, \"rb\").read()\n elif isinstance(image, np.ndarray):\n _, buffer = cv2.imencode(\".jpg\", image)\n image_bytes = buffer.tobytes()\n else:\n image_bytes = image\n files = {\"file\": image_bytes}\n t0 = time.time()\n res = self.http_client.post(\n f\"models/{self.model_ref}/inference?confidence_threshold={threshold}\",\n files=files,\n )\n t1 = time.time()\n if res.status_code == 200:\n logger.debug(f\"Inference time: {t1-t0:.3f} seconds\")\n detections = FocoosDetections(\n detections=[\n FocoosDet.from_json(d) for d in res.json().get(\"detections\", [])\n ],\n latency=res.json().get(\"latency\", None),\n )\n preview = None\n if annotate:\n im0 = image_loader(image)\n sv_detections = focoos_detections_to_supervision(detections)\n preview = self._annotate(im0, sv_detections)\n return detections, preview\n else:\n logger.error(f\"Failed to infer: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to infer: {res.status_code} {res.text}\")\n
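For example, a sketch assuming model as above; FocoosDetections exposes its list of FocoosDet via the detections field, as in the constructor shown in the source, and preview is None unless annotate=True:
detections, preview = model.infer(\"./image.jpg\", threshold=0.5, annotate=True)\nfor det in detections.detections:  # each entry is a FocoosDet\n print(det)\n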
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.monitor_train","title":"monitor_train(update_period=30)
","text":"Monitor the training process of the model and log its status periodically.
This method continuously checks the model's training status and logs updates based on the current state. It monitors the primary and secondary statuses of the model, and performs the following actions: - If the status is \"Pending\", it logs a waiting message and waits for resources. - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training. - If the status is \"Completed\", it logs the final metrics and exits. - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.
Parameters:
Name Type Description Default update_period
int
The time (in seconds) to wait between status checks. Default is 30 seconds.
30
Returns:
Name Type Description None
None
This method does not return any value but logs information about the training process.
Logs - The current training status, including elapsed time.
- Training metrics at regular intervals while the model is training.
Source code in focoos/remote_model.py
def monitor_train(self, update_period=30) -> None:\n \"\"\"\n Monitor the training process of the model and log its status periodically.\n\n This method continuously checks the model's training status and logs updates based on the current state.\n It monitors the primary and secondary statuses of the model, and performs the following actions:\n - If the status is \"Pending\", it logs a waiting message and waits for resources.\n - If the status is \"InProgress\", it logs the current status and elapsed time, and logs the training metrics if the model is actively training.\n - If the status is \"Completed\", it logs the final metrics and exits.\n - If the training fails, is stopped, or any unexpected status occurs, it logs the status and exits.\n\n Args:\n update_period (int, optional): The time (in seconds) to wait between status checks. Default is 30 seconds.\n\n Returns:\n None: This method does not return any value but logs information about the training process.\n\n Logs:\n - The current training status, including elapsed time.\n - Training metrics at regular intervals while the model is training.\n \"\"\"\n completed_status = [\"Completed\", \"Failed\", \"Stopped\"]\n # init to make do-while\n status = {\"main_status\": \"Flag\", \"secondary_status\": \"Flag\"}\n prev_status = status\n while status[\"main_status\"] not in completed_status:\n prev_status = status\n status = self.train_status()\n elapsed = status.get(\"elapsed_time\", 0)\n # Model at the startup\n if not status[\"main_status\"] or status[\"main_status\"] in [\"Pending\"]:\n if prev_status[\"main_status\"] != status[\"main_status\"]:\n logger.info(\"[0s] Waiting for resources...\")\n sleep(update_period)\n continue\n # Training in progress\n if status[\"main_status\"] in [\"InProgress\"]:\n if prev_status[\"secondary_status\"] != status[\"secondary_status\"]:\n if status[\"secondary_status\"] in [\"Starting\", \"Pending\"]:\n logger.info(\n f\"[0s] {status['main_status']}: {status['secondary_status']}\"\n )\n else:\n logger.info(\n f\"[{elapsed//60}m:{elapsed%60}s] {status['main_status']}: {status['secondary_status']}\"\n )\n if status[\"secondary_status\"] in [\"Training\"]:\n self._log_metrics()\n sleep(update_period)\n continue\n if status[\"main_status\"] == \"Completed\":\n self._log_metrics()\n return\n else:\n logger.info(f\"Model is not training, status: {status['main_status']}\")\n return\n
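A typical call blocks until training reaches a terminal state (a sketch, assuming model as above):
model.monitor_train(update_period=60)  # logs status and metrics every 60 seconds until completion\n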
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.stop_training","title":"stop_training()
","text":"Stop the training process of the model.
This method sends a request to stop the training of the model identified by model_ref
. If the request fails, an error is logged and a ValueError
is raised.
Raises:
Type Description ValueError
If the stop training request fails.
Logs - Error message if the request to stop training fails, including the status code and response text.
Returns:
Name Type Description None
None
This method does not return any value.
Source code in focoos/remote_model.py
def stop_training(self) -> None:\n \"\"\"\n Stop the training process of the model.\n\n This method sends a request to stop the training of the model identified by `model_ref`.\n If the request fails, an error is logged and a `ValueError` is raised.\n\n Raises:\n ValueError: If the stop training request fails.\n\n Logs:\n - Error message if the request to stop training fails, including the status code and response text.\n\n Returns:\n None: This method does not return any value.\n \"\"\"\n res = self.http_client.delete(f\"models/{self.model_ref}/train\")\n if res.status_code != 200:\n logger.error(f\"Failed to get stop training: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get stop training: {res.status_code} {res.text}\"\n )\n
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train","title":"train(dataset_ref, hyperparameters, anyma_version='anyma-sagemaker-cu12-torch22-0111', instance_type=TrainInstance.ML_G4DN_XLARGE, volume_size=50, max_runtime_in_seconds=36000)
","text":"Initiate the training of a remote model on the Focoos platform.
This method sends a request to the Focoos platform to start the training process for the model referenced by self.model_ref
. It requires a dataset reference and hyperparameters for training, as well as optional configuration options for the instance type, volume size, and runtime.
Parameters:
Name Type Description Default dataset_ref
str
The reference ID of the dataset to be used for training.
required hyperparameters
Hyperparameters
A structure containing the hyperparameters for the training process.
required anyma_version
str
The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".
'anyma-sagemaker-cu12-torch22-0111'
instance_type
TrainInstance
The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.
ML_G4DN_XLARGE
volume_size
int
The size of the disk volume (in GB) for the training instance. Defaults to 50.
50
max_runtime_in_seconds
int
The maximum runtime for training in seconds. Defaults to 36000.
36000
Returns:
Name Type Description dict
dict | None
A dictionary containing the response from the training initiation request. The content depends on the Focoos platform's response.
Raises:
Type Description ValueError
If the request to start training fails (e.g., due to incorrect parameters or server issues).
Source code in focoos/remote_model.py
def train(\n self,\n dataset_ref: str,\n hyperparameters: Hyperparameters,\n anyma_version: str = \"anyma-sagemaker-cu12-torch22-0111\",\n instance_type: TrainInstance = TrainInstance.ML_G4DN_XLARGE,\n volume_size: int = 50,\n max_runtime_in_seconds: int = 36000,\n) -> dict | None:\n \"\"\"\n Initiate the training of a remote model on the Focoos platform.\n\n This method sends a request to the Focoos platform to start the training process for the model\n referenced by `self.model_ref`. It requires a dataset reference and hyperparameters for training,\n as well as optional configuration options for the instance type, volume size, and runtime.\n\n Args:\n dataset_ref (str): The reference ID of the dataset to be used for training.\n hyperparameters (Hyperparameters): A structure containing the hyperparameters for the training process.\n anyma_version (str, optional): The version of Anyma to use for training. Defaults to \"anyma-sagemaker-cu12-torch22-0111\".\n instance_type (TrainInstance, optional): The type of training instance to use. Defaults to TrainInstance.ML_G4DN_XLARGE.\n volume_size (int, optional): The size of the disk volume (in GB) for the training instance. Defaults to 50.\n max_runtime_in_seconds (int, optional): The maximum runtime for training in seconds. Defaults to 36000.\n\n Returns:\n dict: A dictionary containing the response from the training initiation request. The content depends on the Focoos platform's response.\n\n Raises:\n ValueError: If the request to start training fails (e.g., due to incorrect parameters or server issues).\n \"\"\"\n res = self.http_client.post(\n f\"models/{self.model_ref}/train\",\n data={\n \"dataset_ref\": dataset_ref,\n \"anyma_version\": anyma_version,\n \"instance_type\": instance_type,\n \"volume_size\": volume_size,\n \"max_runtime_in_seconds\": max_runtime_in_seconds,\n \"hyperparameters\": hyperparameters.model_dump(),\n },\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to train model: {res.status_code} {res.text}\")\n raise ValueError(f\"Failed to train model: {res.status_code} {res.text}\")\n return res.json()\n
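A minimal launch sketch, mirroring the cloud training guide (the dataset reference is a placeholder):
from focoos.ports import Hyperparameters, TrainInstance\n\nres = model.train(\n dataset_ref=\"<YOUR-DATASET-ID>\",\n hyperparameters=Hyperparameters(learning_rate=0.0001, batch_size=16),\n instance_type=TrainInstance.ML_G4DN_XLARGE,\n)\n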
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_logs","title":"train_logs()
","text":"Retrieve the training logs for the model.
This method sends a request to fetch the logs of the model's training process. If the request is successful (status code 200), it returns the logs as a list of strings. If the request fails, it logs a warning and returns an empty list.
Returns:
Type Description list[str]
list[str]: A list of training logs as strings.
This method does not raise; if the request fails, a warning is logged and an empty list is returned.
Source code in focoos/remote_model.py
def train_logs(self) -> list[str]:\n \"\"\"\n Retrieve the training logs for the model.\n\n This method sends a request to fetch the logs of the model's training process. If the request\n is successful (status code 200), it returns the logs as a list of strings. If the request fails,\n it logs a warning and returns an empty list.\n\n Returns:\n list[str]: A list of training logs as strings.\n\n Raises:\n None: Returns an empty list if the request fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/logs\")\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return []\n return res.json()\n
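For example (note that this method returns an empty list on failure rather than raising):
for line in model.train_logs():\n print(line)\n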
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_metrics","title":"train_metrics(period=60)
","text":"Retrieve training metrics for the model over a specified period.
This method fetches the training metrics for the remote model, including aggregated values, such as average performance metrics over the given period.
Parameters:
Name Type Description Default period
int
The period (in seconds) for which to fetch the metrics. Defaults to 60.
60
Returns:
Type Description dict | None
Optional[dict]: A dictionary containing the training metrics if the request is successful, or None if the request fails.
Source code in focoos/remote_model.py
def train_metrics(self, period=60) -> dict | None:\n \"\"\"\n Retrieve training metrics for the model over a specified period.\n\n This method fetches the training metrics for the remote model, including aggregated values,\n such as average performance metrics over the given period.\n\n Args:\n period (int, optional): The period (in seconds) for which to fetch the metrics. Defaults to 60.\n\n Returns:\n Optional[dict]: A dictionary containing the training metrics if the request is successful,\n or None if the request fails.\n \"\"\"\n res = self.http_client.get(\n f\"models/{self.model_ref}/train/all-metrics?period={period}&aggregation_type=Average\"\n )\n if res.status_code != 200:\n logger.warning(f\"Failed to get train logs: {res.status_code} {res.text}\")\n return None\n return res.json()\n
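Metric series are keyed by name with the most recent value last; a short sketch (the total_loss key follows the internal _log_metrics helper):
metrics = model.train_metrics(period=60)\nif metrics and metrics.get(\"total_loss\"):\n print(\"latest loss:\", metrics[\"total_loss\"][-1])\n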
"},{"location":"api/remote_model/#focoos.remote_model.RemoteModel.train_status","title":"train_status()
","text":"Retrieve the current status of the model training.
Sends a request to check the training status of the model referenced by self.model_ref
.
Returns:
Name Type Description dict
dict | None
A dictionary containing the training status information.
Raises:
Type Description ValueError
If the request to get training status fails.
Source code in focoos/remote_model.py
def train_status(self) -> dict | None:\n \"\"\"\n Retrieve the current status of the model training.\n\n Sends a request to check the training status of the model referenced by `self.model_ref`.\n\n Returns:\n dict: A dictionary containing the training status information.\n\n Raises:\n ValueError: If the request to get training status fails.\n \"\"\"\n res = self.http_client.get(f\"models/{self.model_ref}/train/status\")\n if res.status_code != 200:\n logger.error(f\"Failed to get train status: {res.status_code} {res.text}\")\n raise ValueError(\n f\"Failed to get train status: {res.status_code} {res.text}\"\n )\n return res.json()\n
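A quick one-off check (see the cloud training guide for a full polling loop):
status = model.train_status()\nprint(status[\"main_status\"], status.get(\"secondary_status\"))\n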
"},{"location":"api/runtime/","title":"runtime","text":"Runtime Module for ONNX-based Models
This module provides the necessary functionality for loading, preprocessing, running inference, and benchmarking ONNX-based models using different execution providers such as CUDA, TensorRT, OpenVINO, and CPU. It includes utility functions for image preprocessing, postprocessing, and interfacing with the ONNXRuntime library.
Functions:
Name Description det_postprocess
Postprocesses detection model outputs into sv.Detections.
semseg_postprocess
Postprocesses semantic segmentation model outputs into sv.Detections.
get_runtime
Returns an ONNXRuntime instance configured for the given runtime type.
Classes:
Name Description ONNXRuntime
A class that interfaces with ONNX Runtime for model inference.
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime","title":"ONNXRuntime
","text":"A class that interfaces with ONNX Runtime for model inference using different execution providers (CUDA, TensorRT, OpenVINO, CoreML, etc.). It manages preprocessing, inference, and postprocessing of data, as well as benchmarking the performance of the model.
Attributes:
Name Type Description logger
Logger
Logger for the ONNXRuntime instance.
name
str
The name of the model (derived from its path).
opts
OnnxEngineOpts
Options used for configuring the ONNX Runtime.
model_metadata
ModelMetadata
Metadata related to the model.
postprocess_fn
Callable
The function used to postprocess the model's output.
ort_sess
InferenceSession
The ONNXRuntime inference session.
dtype
dtype
The data type for the model input.
binding
Optional[str]
The binding type for the runtime (e.g., CUDA, CPU).
Source code in focoos/runtime.py
class ONNXRuntime:\n \"\"\"\n A class that interfaces with ONNX Runtime for model inference using different execution providers\n (CUDA, TensorRT, OpenVINO, CoreML, etc.). It manages preprocessing, inference, and postprocessing\n of data, as well as benchmarking the performance of the model.\n\n Attributes:\n logger (Logger): Logger for the ONNXRuntime instance.\n name (str): The name of the model (derived from its path).\n opts (OnnxEngineOpts): Options used for configuring the ONNX Runtime.\n model_metadata (ModelMetadata): Metadata related to the model.\n postprocess_fn (Callable): The function used to postprocess the model's output.\n ort_sess (InferenceSession): The ONNXRuntime inference session.\n dtype (np.dtype): The data type for the model input.\n binding (Optional[str]): The binding type for the runtime (e.g., CUDA, CPU).\n \"\"\"\n\n def __init__(\n self, model_path: str, opts: OnnxEngineOpts, model_metadata: ModelMetadata\n ):\n \"\"\"\n Initializes the ONNXRuntime instance with the specified model and configuration options.\n\n Args:\n model_path (str): Path to the ONNX model file.\n opts (OnnxEngineOpts): The configuration options for ONNX Runtime.\n model_metadata (ModelMetadata): Metadata for the model (e.g., task type).\n \"\"\"\n self.logger = get_logger()\n self.logger.debug(f\"[onnxruntime device] {ort.get_device()}\")\n self.logger.debug(\n f\"[onnxruntime available providers] {ort.get_available_providers()}\"\n )\n self.name = Path(model_path).stem\n self.opts = opts\n self.model_metadata = model_metadata\n self.postprocess_fn = (\n det_postprocess\n if model_metadata.task == FocoosTask.DETECTION\n else semseg_postprocess\n )\n options = ort.SessionOptions()\n if opts.verbose:\n options.log_severity_level = 0\n options.enable_profiling = opts.verbose\n # options.intra_op_num_threads = 1\n available_providers = ort.get_available_providers()\n if opts.cuda and \"CUDAExecutionProvider\" not in available_providers:\n self.logger.warning(\"CUDA ExecutionProvider not found.\")\n if opts.trt and \"TensorrtExecutionProvider\" not in available_providers:\n self.logger.warning(\"Tensorrt ExecutionProvider not found.\")\n if opts.vino and \"OpenVINOExecutionProvider\" not in available_providers:\n self.logger.warning(\"OpenVINO ExecutionProvider not found.\")\n if opts.coreml and \"CoreMLExecutionProvider\" not in available_providers:\n self.logger.warning(\"CoreML ExecutionProvider not found.\")\n # Set providers\n providers = []\n dtype = np.float32\n binding = None\n if opts.trt and \"TensorrtExecutionProvider\" in available_providers:\n providers.append(\n (\n \"TensorrtExecutionProvider\",\n {\n \"device_id\": 0,\n # 'trt_max_workspace_size': 1073741824, # 1 GB\n \"trt_fp16_enable\": opts.fp16,\n \"trt_force_sequential_engine_build\": False,\n },\n )\n )\n dtype = np.float32\n elif opts.vino and \"OpenVINOExecutionProvider\" in available_providers:\n providers.append(\n (\n \"OpenVINOExecutionProvider\",\n {\n \"device_type\": \"MYRIAD_FP16\",\n \"enable_vpu_fast_compile\": True,\n \"num_of_threads\": 1,\n },\n # 'use_compiled_network': False}\n )\n )\n options.graph_optimization_level = (\n ort.GraphOptimizationLevel.ORT_DISABLE_ALL\n )\n dtype = np.float32\n binding = None\n elif opts.cuda and \"CUDAExecutionProvider\" in available_providers:\n binding = \"cuda\"\n options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\n (\n \"CUDAExecutionProvider\",\n {\n \"device_id\": GPU_ID,\n \"arena_extend_strategy\": 
\"kSameAsRequested\",\n \"gpu_mem_limit\": 16 * 1024 * 1024 * 1024,\n \"cudnn_conv_algo_search\": \"EXHAUSTIVE\",\n \"do_copy_in_default_stream\": True,\n },\n )\n )\n elif opts.coreml and \"CoreMLExecutionProvider\" in available_providers:\n # # options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\"CoreMLExecutionProvider\")\n else:\n binding = None\n\n binding = None # TODO: remove this\n providers.append(\"CPUExecutionProvider\")\n self.dtype = dtype\n self.binding = binding\n self.ort_sess = ort.InferenceSession(model_path, options, providers=providers)\n self.active_providers = self.ort_sess.get_providers()\n self.logger.info(\n f\"[onnxruntime] Active providers:{self.ort_sess.get_providers()}\"\n )\n if self.ort_sess.get_inputs()[0].type == \"tensor(uint8)\":\n self.dtype = np.uint8\n else:\n self.dtype = np.float32\n if self.opts.warmup_iter > 0:\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Warming up model ..\")\n for _ in range(self.opts.warmup_iter):\n np_image = np.random.rand(1, 3, 640, 640).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n t0 = perf_counter()\n if self.binding is not None:\n io_binding = self.ort_sess.io_binding()\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=np_image.shape,\n buffer_ptr=np_image.ctypes.data,\n )\n io_binding.bind_cpu_input(input_name, np_image)\n io_binding.bind_output(out_name[0], self.binding)\n t0 = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n t1 = perf_counter()\n io_binding.copy_outputs_to_cpu()\n else:\n self.ort_sess.run(out_name, {input_name: np_image})\n\n self.logger.info(f\"\u23f1\ufe0f [onnxruntime] {self.name} WARMUP DONE\")\n\n def __call__(self, im: np.ndarray, conf_threshold: float) -> sv.Detections:\n \"\"\"\n Runs inference on the provided input image and returns the model's detections.\n\n Args:\n im (np.ndarray): The preprocessed input image.\n conf_threshold (float): The confidence threshold for filtering results.\n\n Returns:\n sv.Detections: A sv.Detections object containing the model's output detections.\n \"\"\"\n out_name = None\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n if self.binding is not None:\n self.logger.info(f\"binding {self.binding}\")\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=im.shape,\n buffer_ptr=im.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, im)\n io_binding.bind_output(out_name[0], self.binding)\n self.ort_sess.run_with_iobinding(io_binding)\n out = io_binding.copy_outputs_to_cpu()\n else:\n out = self.ort_sess.run(out_name, {input_name: im})\n\n detections = self.postprocess_fn(\n out, (im.shape[2], im.shape[3]), conf_threshold\n )\n return detections\n\n def benchmark(self, iterations=20, size=640) -> LatencyMetrics:\n \"\"\"\n Benchmarks the model by running multiple inference iterations and measuring the latency.\n\n Args:\n iterations (int, optional): Number of iterations to run for benchmarking. Defaults to 20.\n size (int, optional): The input image size for benchmarking. 
Defaults to 640.\n\n Returns:\n LatencyMetrics: The latency metrics (e.g., FPS, mean, min, max, and standard deviation).\n \"\"\"\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Benchmarking latency..\")\n size = size if isinstance(size, (tuple, list)) else (size, size)\n\n durations = []\n np_input = (255 * np.random.random((1, 3, size[0], size[1]))).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = self.ort_sess.get_outputs()[0].name\n if self.binding:\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n \"cuda\",\n device_id=0,\n element_type=self.dtype,\n shape=np_input.shape,\n buffer_ptr=np_input.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, np_input)\n io_binding.bind_output(out_name, \"cuda\")\n else:\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n\n for step in range(iterations + 5):\n if self.binding:\n start = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n end = perf_counter()\n # out = io_binding.copy_outputs_to_cpu()\n else:\n start = perf_counter()\n out = self.ort_sess.run(out_name, {input_name: np_input})\n end = perf_counter()\n\n if step >= 5:\n durations.append((end - start) * 1000)\n durations = np.array(durations)\n provider = self.active_providers[0]\n if provider in [\"CUDAExecutionProvider\", \"TensorrtExecutionProvider\"]:\n device = get_gpu_name()\n else:\n device = get_cpu_name()\n metrics = LatencyMetrics(\n fps=int(1000 / durations.mean()),\n engine=f\"onnx.{provider}\",\n mean=round(durations.mean(), 3),\n max=round(durations.max(), 3),\n min=round(durations.min(), 3),\n std=round(durations.std(), 3),\n im_size=size[0],\n device=str(device),\n )\n self.logger.info(f\"\ud83d\udd25 FPS: {metrics.fps}\")\n return metrics\n
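ONNXRuntime instances are typically created through the get_runtime helper documented below; a minimal sketch, in which the import path for RuntimeTypes and the model_metadata value are assumptions:
from focoos.runtime import get_runtime\nfrom focoos.ports import RuntimeTypes  # import path assumed\n\n# model_metadata: a ModelMetadata instance describing the exported model (placeholder)\nruntime = get_runtime(RuntimeTypes.ONNX_CUDA32, \"./model.onnx\", model_metadata, warmup_iter=2)\nprint(runtime.benchmark(iterations=20, size=640))\n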
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.__call__","title":"__call__(im, conf_threshold)
","text":"Runs inference on the provided input image and returns the model's detections.
Parameters:
Name Type Description Default im
ndarray
The preprocessed input image.
required conf_threshold
float
The confidence threshold for filtering results.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the model's output detections.
Source code in focoos/runtime.py
def __call__(self, im: np.ndarray, conf_threshold: float) -> sv.Detections:\n \"\"\"\n Runs inference on the provided input image and returns the model's detections.\n\n Args:\n im (np.ndarray): The preprocessed input image.\n conf_threshold (float): The confidence threshold for filtering results.\n\n Returns:\n sv.Detections: A sv.Detections object containing the model's output detections.\n \"\"\"\n out_name = None\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n if self.binding is not None:\n self.logger.info(f\"binding {self.binding}\")\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=im.shape,\n buffer_ptr=im.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, im)\n io_binding.bind_output(out_name[0], self.binding)\n self.ort_sess.run_with_iobinding(io_binding)\n out = io_binding.copy_outputs_to_cpu()\n else:\n out = self.ort_sess.run(out_name, {input_name: im})\n\n detections = self.postprocess_fn(\n out, (im.shape[2], im.shape[3]), conf_threshold\n )\n return detections\n
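With a runtime in hand (see the sketch above), inference takes a preprocessed NCHW tensor; the warmup loop uses the same 1x3x640x640 shape:
import numpy as np\n\nim = np.random.rand(1, 3, 640, 640).astype(runtime.dtype)  # batch of one preprocessed image\ndetections = runtime(im, conf_threshold=0.5)\nprint(len(detections))  # number of detections above threshold\n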
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.__init__","title":"__init__(model_path, opts, model_metadata)
","text":"Initializes the ONNXRuntime instance with the specified model and configuration options.
Parameters:
Name Type Description Default model_path
str
Path to the ONNX model file.
required opts
OnnxEngineOpts
The configuration options for ONNX Runtime.
required model_metadata
ModelMetadata
Metadata for the model (e.g., task type).
required Source code in focoos/runtime.py
def __init__(\n self, model_path: str, opts: OnnxEngineOpts, model_metadata: ModelMetadata\n):\n \"\"\"\n Initializes the ONNXRuntime instance with the specified model and configuration options.\n\n Args:\n model_path (str): Path to the ONNX model file.\n opts (OnnxEngineOpts): The configuration options for ONNX Runtime.\n model_metadata (ModelMetadata): Metadata for the model (e.g., task type).\n \"\"\"\n self.logger = get_logger()\n self.logger.debug(f\"[onnxruntime device] {ort.get_device()}\")\n self.logger.debug(\n f\"[onnxruntime available providers] {ort.get_available_providers()}\"\n )\n self.name = Path(model_path).stem\n self.opts = opts\n self.model_metadata = model_metadata\n self.postprocess_fn = (\n det_postprocess\n if model_metadata.task == FocoosTask.DETECTION\n else semseg_postprocess\n )\n options = ort.SessionOptions()\n if opts.verbose:\n options.log_severity_level = 0\n options.enable_profiling = opts.verbose\n # options.intra_op_num_threads = 1\n available_providers = ort.get_available_providers()\n if opts.cuda and \"CUDAExecutionProvider\" not in available_providers:\n self.logger.warning(\"CUDA ExecutionProvider not found.\")\n if opts.trt and \"TensorrtExecutionProvider\" not in available_providers:\n self.logger.warning(\"Tensorrt ExecutionProvider not found.\")\n if opts.vino and \"OpenVINOExecutionProvider\" not in available_providers:\n self.logger.warning(\"OpenVINO ExecutionProvider not found.\")\n if opts.coreml and \"CoreMLExecutionProvider\" not in available_providers:\n self.logger.warning(\"CoreML ExecutionProvider not found.\")\n # Set providers\n providers = []\n dtype = np.float32\n binding = None\n if opts.trt and \"TensorrtExecutionProvider\" in available_providers:\n providers.append(\n (\n \"TensorrtExecutionProvider\",\n {\n \"device_id\": 0,\n # 'trt_max_workspace_size': 1073741824, # 1 GB\n \"trt_fp16_enable\": opts.fp16,\n \"trt_force_sequential_engine_build\": False,\n },\n )\n )\n dtype = np.float32\n elif opts.vino and \"OpenVINOExecutionProvider\" in available_providers:\n providers.append(\n (\n \"OpenVINOExecutionProvider\",\n {\n \"device_type\": \"MYRIAD_FP16\",\n \"enable_vpu_fast_compile\": True,\n \"num_of_threads\": 1,\n },\n # 'use_compiled_network': False}\n )\n )\n options.graph_optimization_level = (\n ort.GraphOptimizationLevel.ORT_DISABLE_ALL\n )\n dtype = np.float32\n binding = None\n elif opts.cuda and \"CUDAExecutionProvider\" in available_providers:\n binding = \"cuda\"\n options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\n (\n \"CUDAExecutionProvider\",\n {\n \"device_id\": GPU_ID,\n \"arena_extend_strategy\": \"kSameAsRequested\",\n \"gpu_mem_limit\": 16 * 1024 * 1024 * 1024,\n \"cudnn_conv_algo_search\": \"EXHAUSTIVE\",\n \"do_copy_in_default_stream\": True,\n },\n )\n )\n elif opts.coreml and \"CoreMLExecutionProvider\" in available_providers:\n # # options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\n providers.append(\"CoreMLExecutionProvider\")\n else:\n binding = None\n\n binding = None # TODO: remove this\n providers.append(\"CPUExecutionProvider\")\n self.dtype = dtype\n self.binding = binding\n self.ort_sess = ort.InferenceSession(model_path, options, providers=providers)\n self.active_providers = self.ort_sess.get_providers()\n self.logger.info(\n f\"[onnxruntime] Active providers:{self.ort_sess.get_providers()}\"\n )\n if self.ort_sess.get_inputs()[0].type == \"tensor(uint8)\":\n self.dtype = np.uint8\n else:\n self.dtype = np.float32\n 
if self.opts.warmup_iter > 0:\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Warming up model ..\")\n for _ in range(self.opts.warmup_iter):\n np_image = np.random.rand(1, 3, 640, 640).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n t0 = perf_counter()\n if self.binding is not None:\n io_binding = self.ort_sess.io_binding()\n io_binding.bind_input(\n input_name,\n self.binding,\n device_id=GPU_ID,\n element_type=self.dtype,\n shape=np_image.shape,\n buffer_ptr=np_image.ctypes.data,\n )\n io_binding.bind_cpu_input(input_name, np_image)\n io_binding.bind_output(out_name[0], self.binding)\n t0 = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n t1 = perf_counter()\n io_binding.copy_outputs_to_cpu()\n else:\n self.ort_sess.run(out_name, {input_name: np_image})\n\n self.logger.info(f\"\u23f1\ufe0f [onnxruntime] {self.name} WARMUP DONE\")\n
"},{"location":"api/runtime/#focoos.runtime.ONNXRuntime.benchmark","title":"benchmark(iterations=20, size=640)
","text":"Benchmarks the model by running multiple inference iterations and measuring the latency.
Parameters:
Name Type Description Default iterations
int
Number of iterations to run for benchmarking. Defaults to 20.
20
size
int
The input image size for benchmarking. Defaults to 640.
640
Returns:
Name Type Description LatencyMetrics
LatencyMetrics
The latency metrics (e.g., FPS, mean, min, max, and standard deviation).
Source code in focoos/runtime.py
def benchmark(self, iterations=20, size=640) -> LatencyMetrics:\n \"\"\"\n Benchmarks the model by running multiple inference iterations and measuring the latency.\n\n Args:\n iterations (int, optional): Number of iterations to run for benchmarking. Defaults to 20.\n size (int, optional): The input image size for benchmarking. Defaults to 640.\n\n Returns:\n LatencyMetrics: The latency metrics (e.g., FPS, mean, min, max, and standard deviation).\n \"\"\"\n self.logger.info(\"\u23f1\ufe0f [onnxruntime] Benchmarking latency..\")\n size = size if isinstance(size, (tuple, list)) else (size, size)\n\n durations = []\n np_input = (255 * np.random.random((1, 3, size[0], size[1]))).astype(self.dtype)\n input_name = self.ort_sess.get_inputs()[0].name\n out_name = self.ort_sess.get_outputs()[0].name\n if self.binding:\n io_binding = self.ort_sess.io_binding()\n\n io_binding.bind_input(\n input_name,\n \"cuda\",\n device_id=0,\n element_type=self.dtype,\n shape=np_input.shape,\n buffer_ptr=np_input.ctypes.data,\n )\n\n io_binding.bind_cpu_input(input_name, np_input)\n io_binding.bind_output(out_name, \"cuda\")\n else:\n out_name = [output.name for output in self.ort_sess.get_outputs()]\n\n for step in range(iterations + 5):\n if self.binding:\n start = perf_counter()\n self.ort_sess.run_with_iobinding(io_binding)\n end = perf_counter()\n # out = io_binding.copy_outputs_to_cpu()\n else:\n start = perf_counter()\n out = self.ort_sess.run(out_name, {input_name: np_input})\n end = perf_counter()\n\n if step >= 5:\n durations.append((end - start) * 1000)\n durations = np.array(durations)\n provider = self.active_providers[0]\n if provider in [\"CUDAExecutionProvider\", \"TensorrtExecutionProvider\"]:\n device = get_gpu_name()\n else:\n device = get_cpu_name()\n metrics = LatencyMetrics(\n fps=int(1000 / durations.mean()),\n engine=f\"onnx.{provider}\",\n mean=round(durations.mean(), 3),\n max=round(durations.max(), 3),\n min=round(durations.min(), 3),\n std=round(durations.std(), 3),\n im_size=size[0],\n device=str(device),\n )\n self.logger.info(f\"\ud83d\udd25 FPS: {metrics.fps}\")\n return metrics\n
"},{"location":"api/runtime/#focoos.runtime.det_postprocess","title":"det_postprocess(out, im0_shape, conf_threshold)
","text":"Postprocesses the output of an object detection model and filters detections based on a confidence threshold.
Parameters:
Name Type Description Default out
List[ndarray]
The output of the detection model.
required im0_shape
Tuple[int, int]
The original shape of the input image (height, width).
required conf_threshold
float
The confidence threshold for filtering detections.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the filtered bounding boxes, class ids, and confidences.
Source code in focoos/runtime.py
def det_postprocess(\n out: List[np.ndarray], im0_shape: Tuple[int, int], conf_threshold: float\n) -> sv.Detections:\n \"\"\"\n Postprocesses the output of an object detection model and filters detections\n based on a confidence threshold.\n\n Args:\n out (List[np.ndarray]): The output of the detection model.\n im0_shape (Tuple[int, int]): The original shape of the input image (height, width).\n conf_threshold (float): The confidence threshold for filtering detections.\n\n Returns:\n sv.Detections: A sv.Detections object containing the filtered bounding boxes, class ids, and confidences.\n \"\"\"\n cls_ids, boxes, confs = out\n boxes[:, 0::2] *= im0_shape[1]\n boxes[:, 1::2] *= im0_shape[0]\n high_conf_indices = (confs > conf_threshold).nonzero()\n\n return sv.Detections(\n xyxy=boxes[high_conf_indices].astype(int),\n class_id=cls_ids[high_conf_indices].astype(int),\n confidence=confs[high_conf_indices].astype(float),\n )\n
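To illustrate the expected shapes, a synthetic sketch (boxes are normalized xyxy and are scaled to the image size inside the function):
import numpy as np\n\ncls_ids = np.array([0, 1])\nboxes = np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]])\nconfs = np.array([0.9, 0.3])\ndets = det_postprocess([cls_ids, boxes, confs], (480, 640), conf_threshold=0.5)\nprint(dets.xyxy)  # only the first box survives the 0.5 threshold\n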
"},{"location":"api/runtime/#focoos.runtime.get_runtime","title":"get_runtime(runtime_type, model_path, model_metadata, warmup_iter=0)
","text":"Creates and returns an ONNXRuntime instance based on the specified runtime type and model path, with options for various execution providers (CUDA, TensorRT, CPU, etc.).
Parameters:
Name Type Description Default runtime_type
RuntimeTypes
The type of runtime to use (e.g., ONNX_CUDA32, ONNX_TRT32).
required model_path
str
The path to the ONNX model.
required model_metadata
ModelMetadata
Metadata describing the model.
required warmup_iter
int
Number of warmup iterations before benchmarking. Defaults to 0.
0
Returns:
Name Type Description ONNXRuntime
ONNXRuntime
A fully configured ONNXRuntime instance.
Source code in focoos/runtime.py
def get_runtime(\n runtime_type: RuntimeTypes,\n model_path: str,\n model_metadata: ModelMetadata,\n warmup_iter: int = 0,\n) -> ONNXRuntime:\n \"\"\"\n Creates and returns an ONNXRuntime instance based on the specified runtime type\n and model path, with options for various execution providers (CUDA, TensorRT, CPU, etc.).\n\n Args:\n runtime_type (RuntimeTypes): The type of runtime to use (e.g., ONNX_CUDA32, ONNX_TRT32).\n model_path (str): The path to the ONNX model.\n model_metadata (ModelMetadata): Metadata describing the model.\n warmup_iter (int, optional): Number of warmup iterations before benchmarking. Defaults to 0.\n\n Returns:\n ONNXRuntime: A fully configured ONNXRuntime instance.\n \"\"\"\n opts = OnnxEngineOpts(\n cuda=runtime_type == RuntimeTypes.ONNX_CUDA32,\n trt=runtime_type in [RuntimeTypes.ONNX_TRT32, RuntimeTypes.ONNX_TRT16],\n fp16=runtime_type == RuntimeTypes.ONNX_TRT16,\n warmup_iter=warmup_iter,\n coreml=runtime_type == RuntimeTypes.ONNX_COREML,\n verbose=False,\n )\n return ONNXRuntime(model_path, opts, model_metadata)\n
"},{"location":"api/runtime/#focoos.runtime.semseg_postprocess","title":"semseg_postprocess(out, im0_shape, conf_threshold)
","text":"Postprocesses the output of a semantic segmentation model and filters based on a confidence threshold.
Parameters:
Name Type Description Default out
List[ndarray]
The output of the semantic segmentation model.
required im0_shape
Tuple[int, int]
The original shape of the input image (height, width).
required conf_threshold
float
The confidence threshold for filtering detections.
required Returns:
Type Description Detections
sv.Detections: A sv.Detections object containing the masks, class ids, and confidences.
Source code in focoos/runtime.py
def semseg_postprocess(\n out: List[np.ndarray], im0_shape: Tuple[int, int], conf_threshold: float\n) -> sv.Detections:\n \"\"\"\n Postprocesses the output of a semantic segmentation model and filters based\n on a confidence threshold.\n\n Args:\n out (List[np.ndarray]): The output of the semantic segmentation model.\n im0_shape (Tuple[int, int]): The original shape of the input image (height, width).\n conf_threshold (float): The confidence threshold for filtering detections.\n\n Returns:\n sv.Detections: A sv.Detections object containing the masks, class ids, and confidences.\n \"\"\"\n cls_ids, mask, confs = out[0][0], out[1][0], out[2][0]\n masks = np.equal(mask, np.arange(len(cls_ids))[:, None, None])\n high_conf_indices = np.where(confs > conf_threshold)[0]\n masks = masks[high_conf_indices].astype(bool)\n cls_ids = cls_ids[high_conf_indices].astype(int)\n confs = confs[high_conf_indices].astype(float)\n return sv.Detections(\n mask=masks,\n # xyxy is required from supervision\n xyxy=np.zeros(shape=(len(high_conf_indices), 4), dtype=np.uint8),\n class_id=cls_ids,\n confidence=confs,\n )\n
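A synthetic sketch of the expected input layout (each of the three outputs carries a leading batch dimension):
import numpy as np\n\ncls_ids = np.array([[0, 1]])  # (1, N) class ids\nmask = np.zeros((1, 64, 64), dtype=int)  # (1, H, W) per-pixel class index map\nconfs = np.array([[0.8, 0.2]])  # (1, N) confidences\ndets = semseg_postprocess([cls_ids, mask, confs], (64, 64), conf_threshold=0.5)\nprint(dets.mask.shape)  # (1, 64, 64): one mask kept above the threshold\n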
"},{"location":"development/changelog/","title":"Changelog","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"development/code_of_conduct/","title":"Code of Conduct","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"development/contributing/","title":"Contributing","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"getting_started/installation/","title":"Installation","text":"The focoos SDK provides flexibility for installation based on the execution environment you plan to use. The package supports CPU
, NVIDIA GPU
, and NVIDIA GPU with TensorRT
environments. Please note that only one execution environment should be selected during installation.
"},{"location":"getting_started/installation/#requirements","title":"Requirements","text":"For local inference, ensure that you have CUDA 12 and cuDNN 9 installed, as they are required for onnxruntime version 1.20.1.
To install cuDNN 9:
apt-get -y install cudnn9-cuda-12\n
To perform inference using TensorRT, ensure you have TensorRT version 10.5 installed.
"},{"location":"getting_started/installation/#installation-options","title":"Installation Options","text":" If you plan to run the SDK on a CPU-only environment:
pip install 'focoos[cpu] @ git+https://github.com/FocoosAI/focoos.git'\n
For execution using NVIDIA GPUs (with ONNX Runtime GPU support):
pip install 'focoos[gpu] @ git+https://github.com/FocoosAI/focoos.git'\n
For optimized execution using NVIDIA GPUs with TensorRT:
pip install 'focoos[tensorrt] @ git+https://github.com/FocoosAI/focoos.git'\n
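After installing, you can verify which execution providers ONNX Runtime detects in your environment:
import onnxruntime as ort\n\nprint(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']\n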
Note
\ud83d\udee0\ufe0f Installation Tip: If you want to install a specific version, for example v0.1.3
, use:
pip install 'focoos[tensorrt] @ git+https://github.com/FocoosAI/focoos.git@v0.1.3'\n
\ud83d\udccb Check Versions: Visit https://github.com/FocoosAI/focoos/tags for available versions."},{"location":"getting_started/introduction/","title":"Focoos Python SDK \ud83d\udce6","text":"Unlock the full potential of Focoos AI with the Focoos Python SDK! \ud83d\ude80 This powerful SDK gives you seamless access to our cutting-edge computer vision models and tools, allowing you to effortlessly interact with the Focoos API. With just a few lines of code, you can easily select, customize, test, and deploy pre-trained models tailored to your specific needs. Whether you're deploying in the cloud or on edge devices, the Focoos Python SDK integrates smoothly into your workflow, speeding up your development process.
Ready to dive in? Get started with the setup in just a few simple steps!
\ud83d\ude80 Install the Focoos Python SDK
"},{"location":"getting_started/quickstart/","title":"Quickstart \ud83d\ude80","text":"Getting started with Focoos AI has never been easier! In just a few steps, you can quickly set up remote inference using our built-in models. Here's a simple example of how to perform object detection with the focoos_object365 model:
"},{"location":"getting_started/quickstart/#step-1-install-the-sdk","title":"Step 1: Install the SDK","text":"First, make sure you've installed the Focoos Python SDK by following the installation guide.
"},{"location":"getting_started/quickstart/#step-2-set-up-remote-inference","title":"Step 2: Set Up Remote Inference","text":"With the SDK installed, you can start using the Focoos API to run inference remotely. Here's a basic code snippet to detect objects in an image using a pre-trained model:
from focoos import Focoos\nimport os\n\n# Initialize the Focoos client with your API key\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\n# Get the remote model (focoos_object365) from Focoos API\nmodel = focoos.get_remote_model(\"focoos_object365\")\n\n# Run inference on an image\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n\n# Output the detections\nprint(detections)\n
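The same remote model handle can be reused across many images. As a small illustrative sketch (the ./images folder and glob pattern are placeholders, not part of the SDK), you could loop over a directory:
from glob import glob\n\n# Run remote inference on every JPEG in a folder, reusing the model handle\nfor path in glob(\"./images/*.jpg\"):\n    detections = model.infer(path, threshold=0.4)\n    print(path, detections)\n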
"},{"location":"helpers/wip/","title":"Wip","text":"\ud83d\udea7 Work in Progress \ud83d\udea7
This page is currently being developed and may not be complete.
Feel free to contribute to this page! If you have suggestions or would like to help improve it, please contact us.
"},{"location":"how_to/cloud_training/","title":"Cloud Training","text":"This section covers the steps to train a model in the cloud using the focoos
library. The following example demonstrates how to interact with the Focoos API to manage models, datasets, and training jobs.
"},{"location":"how_to/cloud_training/#listing-available-datasets","title":"Listing Available Datasets","text":"Before training a model, you can list all available shared datasets:
from pprint import pprint\nimport os\nfrom focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\ndatasets = focoos.list_shared_datasets()\npprint(datasets)\n
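Pick the reference of the dataset you want to train on from this listing; it is the value used as dataset_ref in the next step.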
"},{"location":"how_to/cloud_training/#initiating-a-cloud-training-job","title":"Initiating a Cloud Training Job","text":"To start training, configure the model, dataset, and training parameters as shown below:
from pprint import pprint\n\nfrom focoos.ports import Hyperparameters, TrainInstance\n\nmodel = focoos.get_remote_model(\"<YOUR-MODEL-ID>\")\n\nres = model.train(\n    anyma_version=\"0.11.1\",\n    dataset_ref=\"<YOUR-DATASET-ID>\",\n    instance_type=TrainInstance.ML_G4DN_XLARGE,\n    volume_size=50,\n    max_runtime_in_seconds=36000,\n    hyperparameters=Hyperparameters(\n        learning_rate=0.0001,\n        batch_size=16,\n        max_iters=1500,\n        eval_period=100,\n        resolution=640,\n    ),  # type: ignore\n)\npprint(res)\n
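In this example, max_runtime_in_seconds=36000 caps the job at 10 hours, and the Hyperparameters values (learning rate, batch size, iteration count, evaluation period, and input resolution) are a starting point to tune for your dataset rather than universal defaults.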
"},{"location":"how_to/cloud_training/#monitoring-training-progress","title":"Monitoring Training Progress","text":"Once the training job is initiated, monitor its progress by polling the training status. Use the following code:
import time\nfrom pprint import pprint\nfrom focoos.utils.logger import get_logger\n\ncompleted_status = [\"Completed\", \"Failed\"]\nlogger = get_logger(__name__)\n\nmodel = focoos.get_remote_model(\"<YOUR-MODEL-ID>\")\nstatus = model.train_status()\n\nwhile status[\"main_status\"] not in completed_status:\n    status = model.train_status()\n    logger.info(f\"Training status: {status['main_status']}\")\n    pprint(f\"Training progress: {status['status_transitions']}\")\n    time.sleep(30)\n
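The loop re-polls train_status() every 30 seconds and exits once main_status reaches Completed or Failed; for long-running jobs you can lengthen the sleep interval to reduce API traffic.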
"},{"location":"how_to/cloud_training/#retrieving-training-logs","title":"Retrieving Training Logs","text":"After the training process is complete, retrieve the logs for detailed insights:
logs = model.train_logs()\npprint(logs)\n
"},{"location":"how_to/inference/","title":"Inferece","text":"This section covers how to perform inference using the focoos
library. You can deploy models to the cloud for predictions, integrate with Gradio for interactive demos, or run inference locally.
"},{"location":"how_to/inference/#cloud-inference","title":"\ud83e\udd16 Cloud Inference","text":"from focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\nmodel = focoos.get_remote_model(\"focoos_object365\")\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n
"},{"location":"how_to/inference/#cloud-inference-with-gradio","title":"\ud83e\udd16 Cloud Inference with Gradio","text":"setup FOCOOS_API_KEY_GRADIO
environment variable with your Focoos API key
pip install '.[gradio]'\n
python gradio/app.py\n
"},{"location":"how_to/inference/#local-inference","title":"\ud83e\udd16 Local Inference","text":"from focoos import Focoos\n\nfocoos = Focoos(api_key=os.getenv(\"FOCOOS_API_KEY\"))\n\nmodel = focoos.get_local_model(\"focoos_object365\")\n\ndetections = model.infer(\"./image.jpg\", threshold=0.4)\n
"}]}
\ No newline at end of file