From 90a9d2520f7feb3203544b6c5c298192cd00982b Mon Sep 17 00:00:00 2001 From: david Date: Fri, 25 Aug 2023 13:35:02 -0700 Subject: [PATCH] spelling fixes again --- ...ta_aquisition.png => data_acquisition.png} | Bin ...-perfomance.PNG => device-performance.png} | Bin .wordlist.txt | 64 +++++++ SUMMARY-v2.md | 156 ++++++++++++++++++ ai-leukemia-classifier.md | 12 +- ...-nicla-voice-syntiant-snoring-detection.md | 2 +- ...o-portenta-h7-byom-wound-classification.md | 12 +- bean-leaf-classification.md | 2 +- detect-harmful-gases.md | 2 +- fire-detection-with-arduino-and-tinyml.md | 8 +- food-irradiation-detection.md | 12 +- nvidia-deepstream-community-guide.md | 2 +- parcel-detection.md | 2 +- renesas-rzv2l-pose-detection.md | 40 ++--- silabs-xg24-posture-detection.md | 24 +-- sony-spresense-smart-hvac-system.md | 2 +- tinyml-gastroscopic-image-processing.md | 2 +- warehouse-shipment-monitoring.md | 2 +- 18 files changed, 282 insertions(+), 62 deletions(-) rename .gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/{data_aquisition.png => data_acquisition.png} (100%) rename .gitbook/assets/arduino-portenta-h7-byom-wound-classification/{device-perfomance.PNG => device-performance.png} (100%) create mode 100644 SUMMARY-v2.md diff --git a/.gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_aquisition.png b/.gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_acquisition.png similarity index 100% rename from .gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_aquisition.png rename to .gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_acquisition.png diff --git a/.gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-perfomance.PNG b/.gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-performance.png similarity index 100% rename from .gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-perfomance.PNG rename to .gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-performance.png diff --git a/.wordlist.txt b/.wordlist.txt index bc4e047e..97fa0650 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -1641,4 +1641,68 @@ XZjF PCBs Joos Korstanje +PdRwiQ +fzT +DX +jT +seyV +isJb +bbcgoodfood +fbioe +POBpR +jPE +NvDsObectMeta +NvDsInferParseYolo +SMEs +mispredicted +SabLvJqSaM +BYO +DRV +ERM +Eslov +synpkg +jKJgnxQAnQ +piropodatabase +omnidirectional +SNSD +incentivizing +uroot +nanoSieverts +RELU +Bingsu +dimensionally +SBC's +EIM's +MPU’s +hPa +timeframe +SoC's +DaytimeForest +spectrogram's +downstroke +ecommerce +KK +cpe +workpiece +Makerere +NaCRRI +RPAS +programmability +researchgate +manageBootloader +bootloaders +bn +AZH +BYOM +Colab +Anisuzzaman +Rostami +yasinpratomo +Hyperconverged +smoothening +KKB +Minto +Zanuttigh +Faris +Salman diff --git a/SUMMARY-v2.md b/SUMMARY-v2.md new file mode 100644 index 00000000..decc2b3f --- /dev/null +++ b/SUMMARY-v2.md @@ -0,0 +1,156 @@ +# Table of contents + +* [Welcome](README.md) + +## Image Projects + +* [Recyclable Materials Sorter - Nvidia Jetson Nano](recyclable-materials-sorter.md) +* [Analog Meter Reading - Arduino Nicla Vision](analog-meter-reading-with-nicla-vision.md) +* [Creating Synthetic Data with Nvidia Omniverse Replicator](nvidia-omniverse-replicator.md) +* [Traffic Monitoring using the Brainchip Akida Neuromorphic Processor](brainchip-akida-traffic-monitoring.md) +* [Workplace Organizer - Nvidia Jetson Nano](workplace-organizer.md) +* [Container Counting with a Nicla Vision & 
FOMO](container-counting-nicla-vision.md) +* [Smart Smoke Alarm Using Thermal Imaging](smart-smoke-alarm.md) +* [Shield Bot Autonomous Security Robot](shieldbot.md) +* [Cyclist Blind Spot Detection](cyclist-blind-spot-detection.md) +* [IV Drip Fluid-Level Monitoring](iv-drip-fluid-level-monitoring.md) +* [Worker Safety Monitoring with Nvidia Jetson Nano](worker-safety-monitoring.md) +* [Delivered Package Detection with Computer Vision](parcel-detection.md) +* [Bean Leaf Classification with Sony Spresense](bean-leaf-classification.md) +* [Oil Tank Measurement and Delivery Improvement Using Computer Vision](oil-tank-gauge-monitoring.md) +* [Adaptable Vision Counters for Smart Industries](adaptable-vision-counters.md) +* [Smart Cashier with FOMO on a Raspberry Pi](smart-cashier.md) +* [Identifying PCB Defects with Machine Learning](identifying-pcb-defects.md) +* [Counting Eggs with Computer Vision](egg-counting-openmv.md) +* [Elevator Passenger Counting Using Computer Vision](elevator-passenger-counting.md) +* [Bicycle Counting with a Sony Spresense](spresense-bicycle-counter.md) +* [ESD Protection using Computer Vision](esd-protection-using-computer-vision.md) +* [Solar Panel Defect Detection with FOMO on an Arduino Portenta](solar-panel-defect-detection.md) +* [Automated Label Inspection With FOMO](label-inspection.md) +* [Knob Eye: Monitor Analog Dials and Knobs with Computer Vision](ml-knob-eye.md) +* [Posture Detection for Worker Safety](worker-safety-posture-detection.md) +* [TinyML Digital Counter for Electric Metering System](tinyml-digital-counter-openmv.md) +* [Corrosion Detection with Seeed reTerminal](corrosion-detection-reterminal.md) +* [Automated Inventory Management with Computer Vision](automated-inventory-management.md) +* [Monitoring Retail Checkout Lines with Computer Vision on the RZ/V2L](monitoring-checkout-lines-rzv2l.md) +* [Smart Grocery Cart Using Computer Vision](smart-grocery-cart-with-computer-vision.md) +* [Driver Drowsiness Detection With FOMO](driver-drowsiness-detection-with-computer-vision.md) +* [TinyML for Gastroscopic Image Processing](tinyml-gastroscopic-image-processing.md) +* [Pharmaceutical Pill Quality Control and Defect Detection](pharmaceutical-pill-defect-detection.md) +* [Counting Retail Inventory with Computer Vision on the RZ/V2L](counting-retail-inventory-rzv2l.md) +* [Use Computer Vision on a TI TDA4VM to Deter Shoplifting](deter-shoplifting-with-computer-vision.md) +* [Retail Image Classification with a Jetson Nano](retail-image-classification-jetson-nano.md) +* [Smart Factory Prototype with Texas Instruments TDA4VM](smart-factory-with-tda4vm.md) +* [Fall Detection using Computer Vision for Industrial Workers](worker-fall-detection-computer-vision.md) +* [Surface Crack Detection and Localization with Texas Instruments TDA4VM](surface-crack-detection-ti-tda4vm.md) +* [Surface Crack Detection with Seeed reTerminal](surface-crack-detection.md) +* [The SiLabs xG24 Plus Arducam - Sorting Objects with Computer Vision and Robotics - Part 1](silabs-xg24-card-sorting-and-robotics-1.md) +* [The SiLabs xG24 Plus Arducam - Sorting Objects with Computer Vision and Robotics - Part 2](silabs-xg24-card-sorting-and-robotics-2.md) +* [Object Detection and Visualization with the Seeed Studio Grove Vision AI Module](object-detection-ubidots-seeed-grove-ai.md) +* [Renesas RZ/V2L DRP-AI Pose Detection](renesas-rzv2l-pose-detection.md) +* [Computer Vision for Product Quality Inspection with Renesas RZ/V2L](renesas-rzv2l-product-quality-inspection.md) +* [Build a 
Path-Following, Self-Driving Vehicle Using an Arduino Portenta H7 and Computer Vision](arduino-portenta-h7-self-driving-rc-car.md) +* [TI TDA4VM - Correct Posture Detection and Enforcement](ti-tda4vm-posture-enforcer.md) +* [Using a "Bring Your Own Model" Image Classifier for Wound Identification](arduino-portenta-h7-byom-wound-classification.md) +* [Acute Lymphoblastic Leukemia Classifier](ai-leukemia-classifier.md) + +## Audio Projects + +* [Glass Window Break Detection - Nordic Thingy:53](glass-break-detection-thingy53.md) +* [Occupancy Sensing - SiLabs EFR32MG24](occupancy-sensing-with-silabs.md) +* [Smart Appliance Control Using Voice Commands - Nordic Thingy:53](smart-appliance-voice-commands.md) +* [Illegal Logging Detection - Syntiant TinyML](illegal-logging-detection-syntiant.md) +* [Illegal Logging Detection - Nordic Thingy:53](illegal-logging-detection-nordic-thingy53.md) +* [Wearable Cough Sensor and Monitoring](wearable-cough-sensor.md) +* [Shield Bot Autonomous Security Robot](shieldbot.md) +* [Collect Data for Keyword Spotting with Raspberry Pi Pico and Edge Impulse](collect-data-raspberrypi-pico.md) +* [Voice-Activated LED Strip for $10: Raspberry Pi Pico and Edge Impulse](voice-activated-led-controller.md) +* [Snoring Detection on a Smart Phone](snoring-detection-on-smartphone.md) +* [Gunshot Audio Classification](gunshot-audio-classification.md) +* [AI-Powered Patient Assistance](ai-patient-assistance.md) +* [Acoustic Pipe Leak Detection](acoustic-pipe-leak-detection.md) +* [Location Identification using Sound](location-sound.md) +* [Environmental Noise Classification with a Nordic Thingy:53](environmental-noise-classification.md) +* [Running Faucet Detection with a Seeed XIAO Sense + Blues Cellular](running-faucet-detection.md) +* [Vandalism Detection via Audio Classification](vandalism-detection-audio-classification.md) +* [Predictive Maintenance Using Audio Classification](predictive-maintenance-with-sound.md) +* [Porting an Audio Project from the SiLabs Thunderboard Sense 2 to xG24](audio-recognition-on-silabs-xg24.md) +* [Environmental Audio Monitoring Wearable with Syntiant TinyML Board](environmental-audio-monitoring-syntiant-tinyml.md) +* [Environmental Audio Monitoring Wearable with Syntiant TinyML Board - Part 2](environmental-audio-monitoring-syntiant-tinyml-part-2.md) +* [Keyword Spotting on the Nordic Thingy:53](keyword-spotting-on-nordic-thingy53.md) +* [Detecting Worker Accidents with Audio Classification](detecting-worker-accidents-with-ai.md) +* [Snoring Detection with Syntiant NDP120 Neural Decision Processor on Nicla Voice](arduino-nicla-voice-syntiant-snoring-detection.md) + +## Predictive Maintenance & Fault Classification + +* [Predictive Maintenance - Nordic Thingy:91](predictive-maintenance-with-nordic-thingy-91.md) +* [Brushless DC Motor Anomaly Detection](brushless-dc-motor-anomaly-detection.md) +* [Industrial Compressor Predictive Maintenance - Nordic Thingy:53](compressor-predictive-maintenance-thingy53.md) +* [EdenOff: Anticipate Power Outages with Machine Learning](edenoff-anticipate-power-outages.md) +* [Faulty Lithium-Ion Cell Identification in Battery Packs](faulty-lithium-ion-cell-identification.md) +* [Acoustic Pipe Leak Detection](acoustic-pipe-leak-detection.md) +* [Upgrade a Stretch-film Machine: Weight Scale and Predictive Maintenance](stretch-film-machine.md) +* [Fluid Leak Detection With a Flowmeter and AI](fluid-leak-detection-with-flowmeter-and-ai.md) +* [Pipeline Clog Detection with a Flowmeter and 
TinyML](clog-detection-with-ai.md) +* [Refrigerator Predictive Maintenance](refrigerator-predictive-maintenance.md) + +## Accelerometer & Activity Projects + +* [Arduino x K-Way - Outdoor Activity Tracker](arduino-kway-outdoor-activity-tracker.md) +* [Arduino x K-Way - Gesture Recognition for Hiking](arduino-kway-gesture-recognition-weather.md) +* [Arduino x K-Way - TinyML Fall Detection](arduino-kway-fall-detection.md) +* [Hand Gesture Recognition using TinyML on OpenMV](hand-gesture-recgnition-using-tinyml-on-openmv.md) +* [Arduin-Row, a TinyML Rowing Machine Coach](arduin-row-tinyml-rowing-machine-coach.md) +* [Gesture Recognition with a Bangle.js Smartwatch](gesture-recognition-with-banglejs-smartwatch.md) +* [Bluetooth Fall Detection](bt-fall-detection.md) +* [Safeguarding Packages During Transit with AI](secure-packages-with-ai.md) +* [Smart Baby Swing](smart-baby-swing.md) +* [Warehouse Shipment Monitoring using a Thunderboard Sense 2](warehouse-shipment-monitoring.md) +* [Patient Communication with Gesture Recognition](patient-gesture-recognition.md) +* [Hospital Bed Occupancy Detection with TinyML](hospital-bed-occupancy-detection.md) +* [Fall Detection using a Transformer Model with Arduino Giga R1 WiFi](fall-detection-with-transformers-arduino-giga-r1.md) +* [Porting a Posture Detection Project from the SiLabs Thunderboard Sense 2 to xG24](silabs-xg24-posture-detection.md) + +## Air Quality & Environmental Projects + +* [Arduino x K-Way - Environmental Asthma Risk Assessment](arduino-kway-environmental-risk-assessment.md) +* [Gas Detection in the Oil and Gas Industry - Nordic Thingy:91](gas-detection-thingy-91.md) +* [Smart HVAC System with an Arduino Nicla Vision](arduino-nicla-vision-smart-hvac.md) +* [Smart HVAC System with a Sony Spresense](sony-spresense-smart-hvac-system.md) +* [Indoor CO2 Level Estimation Using TinyML](indoor-co2-level-estimation-using-tinyml.md) +* [Bhopal 84, Detect Harmful Gases](detect-harmful-gases.md) +* [AI-Assisted Monitoring of Dairy Manufacturing Conditions](dairy-manufacturing-with-ai.md) +* [Fire Detection Using Sensor Fusion and TinyML](fire-detection-with-arduino-and-tinyml.md) +* [AI-Assisted Air Quality Monitoring with a DFRobot Firebeetle ESP32](air-quality-monitoring-firebeetle-esp32.md) +* [Air Quality Monitoring with Sipeed Longan Nano - RISC-V Gigadevice](air-quality-monitoring-sipeed-longan-nano-riscv.md) +* [Methane Monitoring in Mines - Silabs xG24 Dev Kit](methane-monitoring-silabs-xg24.md) + +## Novel Sensor Projects +* [8x8 ToF Gesture Classification](tof-gesture-classification.md) +* [Food Irradiation Dose Detection](food-irradiation-detection.md) +* [Bike Rearview Radar](bike-rearview-radar.md) +* [Applying EEG Data to Machine Learning, Part 1](eeg-data-machine-learning-part-1.md) +* [Applying EEG Data to Machine Learning, Part 2](eeg-data-machine-learning-part-2.md) +* [Applying EEG Data to Machine Learning, Part 3](eeg-data-machine-learning-part-3.md) +* [Porting a Gesture Recognition Project from the SiLabs Thunderboard Sense 2 to xG24](gesture-recognition-on-silabs-xg24.md) +* [Fluid Leak Detection With a Flowmeter and AI](fluid-leak-detection-with-flowmeter-and-ai.md) +* [Pipeline Clog Detection with a Flowmeter and TinyML](clog-detection-with-ai.md) +* [Liquid Classification with TinyML](liquid-classification-tinyml.md) +* [AI-Assisted Pipeline Diagnostics and Inspection with mmWave Radar](ai-pipeline-inspection-mmwave.md) +* [Sensecap A1101 - Soil Quality Detection Using AI and 
LoRaWAN](sensecap-a1101-lorawan-soil-quality.md) +* [Smart Diaper Prototype with an Arduino Nicla Sense ME](arduino-nicla-sense-smart-diaper.md) + +## Software Integration Demos + +* [Azure Machine Learning with Kubernetes Compute and Edge Impulse](azure-machine-learning-EI.md) +* [Community Guide – Using Edge Impulse with Nvidia DeepStream](nvidia-deepstream-community-guide.md) +* [Creating Synthetic Data with Nvidia Omniverse Replicator](nvidia-omniverse-replicator.md) +* [NVIDIA Omniverse - Synthetic Data Generation For Edge Impulse Projects](nvidia-omniverse-synthetic-data.md) +* [ROS2 + Edge Impulse, Part 1: Pub/Sub Node in Python](ros2-part1-pubsub-node.md) +* [ROS2 + Edge Impulse, Part 2: MicroROS](ros2-part2-microros.md) +* [Using Hugging Face Datasets in Edge Impulse](using-huggingface-dataset-with-edge-impulse.md) +* [How to Use a Hugging Face Image Classification Dataset with Edge Impulse](hugging-face-image-classification.md) +* [Edge Impulse API Usage Sample Application - Jetson Nano Trainer](api-sample-application-jetson-nano.md) +* [Renesas CK-RA6M5 Cloud Kit - Getting Started with Machine Learning](renesas-ra6m5-getting-started.md) +* [TI CC1352P Launchpad - Getting Started with Machine Learning](ti-cc1352p-getting-started.md) +* [MLOps with Edge Impulse and Azure IoT Edge](mlops-azure-iot-edge.md) diff --git a/ai-leukemia-classifier.md b/ai-leukemia-classifier.md index ef3e5c3c..35eb9303 100644 --- a/ai-leukemia-classifier.md +++ b/ai-leukemia-classifier.md @@ -18,11 +18,11 @@ GitHub Repo: [Edge Impulse Acute Lymphoblastic Leukemia Classifier](https://gith Acute Lymphoblastic Leukemia (ALL), also known as acute lymphocytic leukemia, is a cancer that affects the lymphoid blood cell lineage. It is the most common leukemia in children, and it accounts for 10-20% of acute leukemias in adults. The prognosis for both adult and especially childhood ALL has improved substantially since the 1970s. The 5-year survival is approximately 95% in children. In adults, the 5-year survival varies between 25% and 75%, with more favorable results in younger than in older patients. -Since 2018 I have worked on numerous projects exploring the use of AI for medical diagnostics, in particular, leukemia. In 2018 my grandfather was diagnosed as terminal with Actue Myeloid leukemia one month after an all clear blood test completely missed the disease. I was convinced that there must have been signs of the disease that were missed in the blood test, and began a research project with the goals of utilizing Artificial Intelligence to solve early detection of leukemia. The project grew to a non-profit association in Spain and is now a UK community interest company. +Since 2018 I have worked on numerous projects exploring the use of AI for medical diagnostics, in particular, leukemia. In 2018 my grandfather was diagnosed as terminal with Acute Myeloid leukemia one month after an all clear blood test completely missed the disease. I was convinced that there must have been signs of the disease that were missed in the blood test, and began a research project with the goals of utilizing Artificial Intelligence to solve early detection of leukemia. The project grew to a non-profit association in Spain and is now a UK community interest company. ## Investigation -One of the objectives of our mission is to experiment with different types of AI, different frameworks/programming languages, and hardwares. 
This project aims to show researchers the potential of the Edge Impulse platform and the NVIDIA Jetson Nano to quickly create and deploy prototypes for medical diagnosis research. +One of the objectives of our mission is to experiment with different types of AI, different frameworks/programming languages, and hardware. This project aims to show researchers the potential of the Edge Impulse platform and the NVIDIA Jetson Nano to quickly create and deploy prototypes for medical diagnosis research. ## Hardware @@ -38,7 +38,7 @@ One of the objectives of our mission is to experiment with different types of AI ## Dataset -For this project we are going to use the [Acute Lymphoblastic Leukemia (ALL) image dataset](https://www.kaggle.com/datasets/mehradaria/leukemia). Acute Lymphoblastic Leukemia can be either T-lineage, or B-lineage. This datset includes 4 classes: Benign, Early Pre-B, Pre-B, and Pro-B Acute Lymphoblastic Leukemia. +For this project we are going to use the [Acute Lymphoblastic Leukemia (ALL) image dataset](https://www.kaggle.com/datasets/mehradaria/leukemia). Acute Lymphoblastic Leukemia can be either T-lineage, or B-lineage. This dataset includes 4 classes: Benign, Early Pre-B, Pre-B, and Pro-B Acute Lymphoblastic Leukemia. Pre-B Lymphoblastic Leukemia, or precursor B-Lymphoblastic leukemia, is a very aggressive type of leukemia where there are too many B-cell lymphoblasts in the bone marrow and blood. B-cell lymphoblasts are immature white blood cells that have not formed correctly. The expressions ("early pre-b", "pre-b" and "pro-b") are related to the differentiation of B-cells. We can distinguish the different phases based on different cell markers expression, although this is complex because the "normal profile" may be altered in malignant cells. @@ -62,7 +62,7 @@ Now it is time to import your data. You should have already downloaded the datas ![Upload data](.gitbook/assets/ai-leukemia-classifier/5-upload-data.jpg) -Once downloaded head over the to **Data aquisition** in Edge Impulse Studio, click on the **Add data** button and then **Upload data**. +Once downloaded head over the to **Data acquisition** in Edge Impulse Studio, click on the **Add data** button and then **Upload data**. ![Uploading data](.gitbook/assets/ai-leukemia-classifier/6-data-uploading.jpg) @@ -112,7 +112,7 @@ Let's see how the model performs on unseen data. Head over to the **Model testin You will see the output of the testing in the output window, and once testing is complete you will see the results. In our case we can see that we have achieved 91.67% accuracy on the unseen data. -## Jeston Nano Setup +## Jetson Nano Setup Now we are ready to set up our Jetson Nano project. @@ -166,7 +166,7 @@ The code has been provided for you in the `classifier.py` file. To run the class python3 classifier.py ``` -You should see similar to the following output. In our case, our model performed exceptionally well at classifying the variuous stages of leukemia, only classifying 3 samples out of 14 incorrectly. +You should see similar to the following output. In our case, our model performed exceptionally well at classifying the various stages of leukemia, only classifying 3 samples out of 14 incorrectly. 
``` Loaded runner for "Edge Impulse Experts / Acute Lymphoblastic Leukemia Classifier" diff --git a/arduino-nicla-voice-syntiant-snoring-detection.md b/arduino-nicla-voice-syntiant-snoring-detection.md index 5efb3134..8b5b0dda 100644 --- a/arduino-nicla-voice-syntiant-snoring-detection.md +++ b/arduino-nicla-voice-syntiant-snoring-detection.md @@ -72,7 +72,7 @@ $ edge-impulse-uploader --category split --label z_openset downsampled/non-snori To ensure accurate prediction, the Syntiant NDP chips necessitate a negative class that should not be predicted. For the datasets without snoring, the **z_openset** class label is utilized to ensure that it appears last in alphabetical order. By using the commands provided, the datasets are divided into **Training** and **Testing** samples. Access to the uploaded datasets can be found on the **Data Acquisition** page of the Edge Impulse Studio. -![Data Aquisition](.gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_aquisition.png) +![Data Acquisition](.gitbook/assets/arduino-nicla-voice-syntiant-snoring-detection/data_acquisition.png) ## Model Training diff --git a/arduino-portenta-h7-byom-wound-classification.md b/arduino-portenta-h7-byom-wound-classification.md index b85c3eda..f61cee64 100644 --- a/arduino-portenta-h7-byom-wound-classification.md +++ b/arduino-portenta-h7-byom-wound-classification.md @@ -75,7 +75,7 @@ Open your OpenMV IDE, and click on the **Connect** Icon. ![OpenMV CONNECT](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/CONNECT.PNG) -A screen will appear with the message **A board in DFU mode was detected**. Select the "Install the latest release firmware". This will install the latest openMV firmware in the development board. Optionaly, leave the "Erase all files" option as it is and click **OK**. +A screen will appear with the message **A board in DFU mode was detected**. Select the "Install the latest release firmware". This will install the latest openMV firmware in the development board. Optionally, leave the "Erase all files" option as it is and click **OK**. ![OpenMV CONNECT](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/install-latest-firmware.png) @@ -105,7 +105,7 @@ In this project, we leverage Bring Your Own Model (BYOM) capabilities offered by The model training process took place on Google Colab. We utilize Transfer Learning with a MobileNet architecture and TensorFlow framework. By using the pre-trained MobileNetV2, we can benefit from its high-performance feature extraction capabilities while training on the wound dataset. -We then optimize the model for edge deployment by applying **Post Training Quantization**, a technique that reduces the model size without significant loss in accuracy. This technique aids in minimizing the memory footprint and storage requirements of the model while maintaining its perfomance. +We then optimize the model for edge deployment by applying **Post Training Quantization**, a technique that reduces the model size without significant loss in accuracy. This technique aids in minimizing the memory footprint and storage requirements of the model while maintaining its performance. Once the model is trained and optimized, we converted it to the TensorFlow Lite format, which is compatible with the Edge Impulse Platform. We saved and downloaded the model for further use. 
Find all the [code on GitHub](https://github.com/tum-jackie/Wound-Classification-with-Edge-Impulse/tree/main) with the necessary steps, including training, quantization, conversion, and saving, to enable the deployment of the Model to Edge Impulse Platform. @@ -131,13 +131,13 @@ Once the model is uploaded, set the model configurations. The **Model input** is These are the **Profiling** results. The model uses 338.7KB of RAM and 573.6KB of Flash memory. -![Profiling Model](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-perfomance.PNG) +![Profiling Model](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/device-performance.png) We can check the model behavior by uploading a test image. We use an image that was not used during the training process in order to better identify how the model performs with unseen data. ![Test Model](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/test-model.PNG) -The model performs well on one image. We then upload a set of images as Test images to further test the model perfomance before deploying the model on Arduino Portenta. +The model performs well on one image. We then upload a set of images as Test images to further test the model performance before deploying the model on Arduino Portenta. From the **Dashboard**, head to the **Data acquisition** tab, and upload a set of images as test data. @@ -147,7 +147,7 @@ From the Dashboard, head to the **Model testing** tab and click **Classify all** ![Test Model](.gitbook/assets/arduino-portenta-h7-byom-wound-classification/classifyall.PNG) -The model perfomance is quite satisfactory and we can now deploy to our target device. +The model performance is quite satisfactory and we can now deploy to our target device. There are different ways to deploy a model to the Arduino Portenta H7: as an arduino library, an OpenMV library, firmware, or a C++ library. @@ -183,7 +183,7 @@ And a sample **diabetic wound** classification: In this project, we have seen how it is possible to leverage AI-powered cameras to classify wounds, which offers a great advantage in reducing the time taken to diagnose wounds as well as reduce cost associated with the process. -This project could be scaled further by sending the inference results over a web platform for results to be conveniently accessed by clinicians. This enables accurate administration of treatment to patients regardless of their location and mitigates the severe effects of misdiagnosis, leading to improved human health, especially in rural areas without local expertise. Adding more dataset images in order to improve the model perfomance on diabetic wound classification would also be helpful. +This project could be scaled further by sending the inference results over a web platform for results to be conveniently accessed by clinicians. This enables accurate administration of treatment to patients regardless of their location and mitigates the severe effects of misdiagnosis, leading to improved human health, especially in rural areas without local expertise. Adding more dataset images in order to improve the model performance on diabetic wound classification would also be helpful. With **Bring Your Own Model** on Edge Impulse, ML engineers can build robust, state of the art models and deploy to edge devices. This creates a huge opportunity to solve challenges with Machine Learning and deploy to suitable hardware devices. 
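
The post-training quantization and TensorFlow Lite conversion step described above can be summarized with a minimal sketch. This assumes a trained Keras model saved locally; the model and file names here are placeholders, and the notebook in the linked GitHub repository remains the authoritative version.

```
# Illustrative sketch only: post-training quantization of a trained Keras model
# and conversion to TensorFlow Lite for upload as a BYOM model.
# "wound_classifier.h5" and the output path are placeholder names.
import tensorflow as tf

model = tf.keras.models.load_model("wound_classifier.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("wound_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```
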
diff --git a/bean-leaf-classification.md b/bean-leaf-classification.md index e429dee8..bfe804e0 100644 --- a/bean-leaf-classification.md +++ b/bean-leaf-classification.md @@ -64,7 +64,7 @@ First, on the Project Dashboard, we have set Labeling method to "Bounding boxes Our dataset is ready to train our model. This requires two important features: a processing block and learning block. Documentation on Impulse Design can be found [here](https://docs.edgeimpulse.com/docs/edge-impulse-studio/create-impulse). -We first click "Create Impulse". Here, set image width and heigh to 96x96; and Resize mode to Squash. The Processing block is set to "Image" and the Learning block is "Transfer Learning (Images)". Click 'Save Impulse' to use this configuration. We have used a 96x96 image size to lower the RAM usage, shown in Figure 4. +We first click "Create Impulse". Here, set image width and height to 96x96; and Resize mode to Squash. The Processing block is set to "Image" and the Learning block is "Transfer Learning (Images)". Click 'Save Impulse' to use this configuration. We have used a 96x96 image size to lower the RAM usage, shown in Figure 4. ![Figure 4: Create impulse figure with image input range, transfer learning and output class](.gitbook/assets/bean-leaf-classification/impulse.jpg) diff --git a/detect-harmful-gases.md b/detect-harmful-gases.md index a0271bd7..b7106163 100644 --- a/detect-harmful-gases.md +++ b/detect-harmful-gases.md @@ -68,7 +68,7 @@ Processing block will be raw data with all axis checked. For classification use In Raw Data you can see all values for regular and harmful inside every windows size. Then you have to click Generate Features. -For NN Classifier use 60 training cycles, 0.0005 Learning Rate, Validation 20 and Autobalance dataset. Add an extra layer Droput Rate 0.1 Click Start Training and check if you get good accuracy. +For NN Classifier use 60 training cycles, 0.0005 Learning Rate, Validation 20 and Autobalance dataset. Add an extra layer Dropout Rate 0.1 Click Start Training and check if you get good accuracy. If you are ok with results you can go to Model Testing and check the performance with new data. If there are lots of readings with wrong classification you should check again data acquisition procedure. diff --git a/fire-detection-with-arduino-and-tinyml.md b/fire-detection-with-arduino-and-tinyml.md index 9ebd853c..9c089a2c 100644 --- a/fire-detection-with-arduino-and-tinyml.md +++ b/fire-detection-with-arduino-and-tinyml.md @@ -38,7 +38,7 @@ We have only two classes in this project: **No Fire** and **Fire**. For the **No ![](.gitbook/assets/fire-detection-with-arduino-and-tinyml/data-explorer.jpg) -This tool is very useful for quickly looking for outliers and discrepencies in your labels and data points. +This tool is very useful for quickly looking for outliers and discrepancies in your labels and data points. # Impulse Design @@ -46,7 +46,7 @@ This is our machine learning pipeline, known as an **Impulse** ![](.gitbook/assets/fire-detection-with-arduino-and-tinyml/impulse.jpg) -For the Processing block we used **Spectral analysis** and for the Learning block we used **Classification**. Other options such as **Flatten** and **Raw Data** are also available as Processing blocks. Each Proceessing block has it's features and uses, if you need to dive into that, you can find [information covering each of them here](https://docs.edgeimpulse.com/docs/edge-impulse-studio/processing-blocks). 
+For the Processing block we used **Spectral analysis** and for the Learning block we used **Classification**. Other options such as **Flatten** and **Raw Data** are also available as Processing blocks. Each Processing block has it's features and uses, if you need to dive into that, you can find [information covering each of them here](https://docs.edgeimpulse.com/docs/edge-impulse-studio/processing-blocks). These are our Spectral Analysis parameters of **Filter** and **Spectral power**. We didn't use any filter for the raw data. @@ -58,7 +58,7 @@ The below image shows the **Generated features** for the collected data, and we ## Model Training -After sucessfully extracting the features from the DSP block, it's time to train the machine learning model. +After successfully extracting the features from the DSP block, it's time to train the machine learning model. Here are our Neural Network settings and architecture, which work very well for our data. @@ -70,7 +70,7 @@ After training, we achieved 98% validation accuracy for the data, so the model s ![](.gitbook/assets/fire-detection-with-arduino-and-tinyml/training-output.jpg) -The **Confusion matrix** is a great tool for evaluating the model, as you can see below, 2.1% of the data samples are missclassified as **No Fire**. +The **Confusion matrix** is a great tool for evaluating the model, as you can see below, 2.1% of the data samples are misclassified as **No Fire**. ![](.gitbook/assets/fire-detection-with-arduino-and-tinyml/confusion-matrix.jpg) diff --git a/food-irradiation-detection.md b/food-irradiation-detection.md index 22dd2adb..f5c4cbf0 100644 --- a/food-irradiation-detection.md +++ b/food-irradiation-detection.md @@ -36,7 +36,7 @@ After completing my data set and creating samples, I built my artificial neural After training and testing my neural network model, I deployed and uploaded the model on Beetle ESP32-C3. Therefore, the device is capable of detecting precise food irradiation dose levels (classes) by running the model independently without any additional procedures. -Lastly, to make the device as robust and compact as possible while experimenting with a motley collection of foods, I designed a Hulk-inspired structure with a moveable visible light sensor handle (3D printable). +Lastly, to make the device as robust and compact as possible while experimenting with a motley collection of foods, I designed a Hulk-inspired structure with a movable visible light sensor handle (3D printable). So, this is my project in a nutshell 😃 @@ -74,9 +74,9 @@ In the following steps, you can find more detailed information on coding, loggin ## Step 1: Designing and printing a Hulk-inspired structure -Since this project is for detecting irradiation doses of foods treated with ionizing radiation, I got inspired by the most prominent fictional Gamma radiation expert, Bruce Banner (aka, The Incredible Hulk), to design a unique structure so as to create a robust and compact device flawlessly operating while collecting data from foods. To collect data with the visible light sensor at different angles, I added a moveable handle to the structure, including a slot and a hook for hanging the sensor. +Since this project is for detecting irradiation doses of foods treated with ionizing radiation, I got inspired by the most prominent fictional Gamma radiation expert, Bruce Banner (aka, The Incredible Hulk), to design a unique structure so as to create a robust and compact device flawlessly operating while collecting data from foods. 
To collect data with the visible light sensor at different angles, I added a movable handle to the structure, including a slot and a hook for hanging the sensor. -I designed the structure and its moveable handle in Autodesk Fusion 360. You can download their STL files below. +I designed the structure and its movable handle in Autodesk Fusion 360. You can download their STL files below. ![image](.gitbook/assets/food-irradiation/model_1.PNG) @@ -100,7 +100,7 @@ Then, I sliced all 3D models (STL files) in Ultimaker Cura. ![image](.gitbook/assets/food-irradiation/model_8.PNG) -Since I wanted to create a solid structure for this device with a moveable handle and complement the Hulk theme gloriously, I utilized these PLA filaments: +Since I wanted to create a solid structure for this device with a movable handle and complement the Hulk theme gloriously, I utilized these PLA filaments: - eMarble Natural - Peak Green @@ -202,7 +202,7 @@ After completing sensor connections and adjustments on breadboards successfully, After printing all parts (models), I fastened all components except the visible light sensor to their corresponding slots on the structure via the hot glue gun. -Then, I attached the visible light sensor to the moveable handle and hung it via its slot in the structure. +Then, I attached the visible light sensor to the movable handle and hung it via its slot in the structure. ![image](.gitbook/assets/food-irradiation/connections_1.jpg) @@ -1067,7 +1067,7 @@ After uploading and running the code for collecting data and transmitting data p - F1 (405 - 425 nm) - CPM (Counts per Minute) -☢:bento: The device allows the user to collect visible light (color) data at different angles with the moveable handle. +☢:bento: The device allows the user to collect visible light (color) data at different angles with the movable handle. ![image](.gitbook/assets/food-irradiation/collect_1.jpg) diff --git a/nvidia-deepstream-community-guide.md b/nvidia-deepstream-community-guide.md index 2a7966a3..30ee28e4 100644 --- a/nvidia-deepstream-community-guide.md +++ b/nvidia-deepstream-community-guide.md @@ -77,7 +77,7 @@ YOLOv5 is therefore the best option to use with DeepStream. The workflow is the Image models, including object detection, are machine learning models focused on visual data, as opposed to models focused on audio and sound, or sensor data coming from analog or digital measuring devices. -Image models built with Edge Impulse use raw pixels as input features. The input image is scaled down to reduce the model input layer size, in order to maintain processing throughput on lower-powered hardware. With DeepStream you are only limited by the power of the chosen Nvdia platform to run the model on. The resolution and input layer size can be made larger, and experimentation for each platform is useful to determine the best choice. +Image models built with Edge Impulse use raw pixels as input features. The input image is scaled down to reduce the model input layer size, in order to maintain processing throughput on lower-powered hardware. With DeepStream you are only limited by the power of the chosen Nvidia platform to run the model on. The resolution and input layer size can be made larger, and experimentation for each platform is useful to determine the best choice. In addition to resolution, the model can be trained on RGB colour or Grayscale. Edge Impulse takes care of removing the alpha channel and allows you to select the input resolution and colour depth. 
Grayscale is ideal for tinyML applications due to the limited performance of most hardware, but on Nvidia hardware, color images can be utilized. diff --git a/parcel-detection.md b/parcel-detection.md index ca7af2fb..d666ac93 100644 --- a/parcel-detection.md +++ b/parcel-detection.md @@ -54,7 +54,7 @@ Next, we annotate the images and label a parcel in each image. We can now use our dataset to train our model. This requires two important features: a processing block and learning block. Documentation on Impulse Design can be found [here](https://docs.edgeimpulse.com/docs/edge-impulse-studio/create-impulse). -We first click ”Create Impulse”. Here, set image width and heigh to 96x96; and Resize mode to Squash. The Processing block is set to “Image” and the Learning block is “Object Detection (images)”. Click ‘Save Impulse’ to use this configuration. +We first click ”Create Impulse”. Here, set image width and height to 96x96; and Resize mode to Squash. The Processing block is set to “Image” and the Learning block is “Object Detection (images)”. Click ‘Save Impulse’ to use this configuration. Since the ESP-EYE is resource-constrained device (4MB flash and 8MB PSRAM), we have used 96x96 image size to lower RAM usage. diff --git a/renesas-rzv2l-pose-detection.md b/renesas-rzv2l-pose-detection.md index bb99a174..62ca052f 100644 --- a/renesas-rzv2l-pose-detection.md +++ b/renesas-rzv2l-pose-detection.md @@ -21,13 +21,13 @@ Renesas is a leading producer of a variety of specialized Microprocessor and Mic Their RZ family of Microprocessors includes a range of Arm Cortex-A based multicore models targeting a wide range of applications from, industrial networking (RZ/N) and real time control (RZ/T), to general purpose/HMI and graphics applications (RZ/A, RZ/G) and finally AI-based computer vision applications (RZ/V). -At the heart of the AI focused RZ/V MPU series is Renesas’ own DRP-AI ML accelerator. DRP-AI is a low power high performance ML accelerator that was designed around Renesas Dynamic Reconfigurable Processor (DRP) technology originally created to accelerate computer vision applications with DRP being especially useful in speeding up pre and post processing of image data in a computer vision pipeline. +At the heart of the AI focused RZ/V MPU series is Renesas' own DRP-AI ML accelerator. DRP-AI is a low power high performance ML accelerator that was designed around Renesas Dynamic Reconfigurable Processor (DRP) technology originally created to accelerate computer vision applications with DRP being especially useful in speeding up pre and post processing of image data in a computer vision pipeline. A traditional CPU has fixed data paths and algorithms are implemented by instructions or software written by a developer to manipulate how these fixed data paths are used. DRP is a form of reprogrammable hardware that is able to change its processing data paths during run time. This capability is referred to as its Dynamic Reconfiguration feature which enables DRP to provide the optimal hardware based implementation of an algorithm adapting its computing path ways to implement the algorithm in the most efficient way possible. The data path configuration that is loaded into the DRP specifies the operations and interconnections that the DRP will implement in hardware. DRP contains a Finite State Machine known as a State Transition Controller (STC) that manages the data path configuration in hardware and allows for changing out of data path configurations during run time. 
![Dynamic Reconfiguration of Data Paths](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled1.png) -Changing of hardware configurations between different data path configuration is referred to as *context switching* meaning that the hardware adapts in real time to the computing needs of complex algorithms. DRP works like an FPGA except instead of being programmed once during the development phase, it is being dynamically reconfigured in real time by the STC to ensure its constantly adapting to the to the processing requirements of the algorithm. This provides runtime configuration capabilities that are somewhat limited on ASICS and FPGA’s. +Changing of hardware configurations between different data path configuration is referred to as *context switching* meaning that the hardware adapts in real time to the computing needs of complex algorithms. DRP works like an FPGA except instead of being programmed once during the development phase, it is being dynamically reconfigured in real time by the STC to ensure its constantly adapting to the to the processing requirements of the algorithm. This provides runtime configuration capabilities that are somewhat limited on ASICS and FPGA's. Thanks to the other Dynamic Loading feature of DRP it is also able to load an entire new configuration (STC and data paths) in 1ms to completely change configuration as as an processing pipeline executes. Being able to completely load new sets of data path configurations in real-time means that you can effectively load complete processing algorithms and execute them as needed in hardware. @@ -53,7 +53,7 @@ The DRP component of the DRP-AI accelerator is also able to boost performance of ![DRP-AI Operation](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled4.png) -All of this translates into DRP-AI being a DNN accelerator that is not only fast but also uses very little power and doesn’t require heatsinks and extensive cooling as compared to GPU’s. +All of this translates into DRP-AI being a DNN accelerator that is not only fast but also uses very little power and doesn't require heatsinks and extensive cooling as compared to GPU's. DRP-AI brings low power tinyML like characteristics to Edge applications while allowing you run complex unconstrained models with more parameters such as YOLO using less power than other competing solutions. @@ -87,7 +87,7 @@ The Avnet RZBoard V2L is an alternative option also based on the RZ/V2L that is ![Avnet RZBoard V2L](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled9.png) -The RZBoard does not have all the features of the Renesas Eval kit but is useful for deployments and most common scenarios where you would use a SBC and can actually act as a drop in replacement for similar SBC’s especially when ML acceleration is required. +The RZBoard does not have all the features of the Renesas Eval kit but is useful for deployments and most common scenarios where you would use a SBC and can actually act as a drop in replacement for similar SBC's especially when ML acceleration is required. ## Using DRP-AI with Your Own Model @@ -127,9 +127,9 @@ The input files themselves require and understanding of the model input and outp Whether you wish to use DRP-AI Translator or DRP-AI TVM both tools that require understanding and expertise to use. The learning curve and effort required adds additional delays and costs into creating your end application which is why you are using ML in the first place. 
Unless you are working with a custom model architecture you are most likely needing to use Deep Learning to for Object Detection and Image classification which are the most common applications of AI vision. -Edge Impulse includes built in DRP-AI support for YOLOv5 and Edge Impulse’s own FOMO for Object Detection as well as MobileNet V1 and V2 for Image Classification. +Edge Impulse includes built in DRP-AI support for YOLOv5 and Edge Impulse's own FOMO for Object Detection as well as MobileNet V1 and V2 for Image Classification. -With Edge Impulse’s support of DRP-AI all of this is done behind the scenes with DRP-AI Translator and the associated configurations taking place in the back for the supported models. There is no need to work with the configuration files or read and understand lengthy manuals or understand the whole process of working with DRP-AI Translator and the associated input and output files. All that is needed is a few clicks to add DRP-AI support to existing or new models. +With Edge Impulse's support of DRP-AI all of this is done behind the scenes with DRP-AI Translator and the associated configurations taking place in the back for the supported models. There is no need to work with the configuration files or read and understand lengthy manuals or understand the whole process of working with DRP-AI Translator and the associated input and output files. All that is needed is a few clicks to add DRP-AI support to existing or new models. ![DRP-AI Translator vs Edge Impulse](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled14.png) @@ -145,17 +145,17 @@ When using FOMO you can use the standard FOMO model however for YOLO there is a ![YOLO for DRP-AI in Edge Impulse Studio](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled16.png) -This allows developers to transparently leverage the benefits of DRP-AI and instead be focused on the final application. This is in line with Edge Impulse’s philosophy of enabling developers at all skill levels to build production ready ML based applications as quickly and easily as possible. +This allows developers to transparently leverage the benefits of DRP-AI and instead be focused on the final application. This is in line with Edge Impulse's philosophy of enabling developers at all skill levels to build production ready ML based applications as quickly and easily as possible. ## Deployment with Edge Impulse -Once you have completed the process of building your model the next step is to actually deploy the model your hardware. For quick testing of your model directly on the RZ/V2L Evaluation kit you can use the Edge Impulse CLI specifically the `edge-impulse-linux-runner` command from the RZ/V2L board itself after installing all Edge Impulse CLI. This deploy the model directly to your board hosted in Edge Impulse’s TypeScript based Web Deployment and you can connect to the running model from your browser and evaluate performance. +Once you have completed the process of building your model the next step is to actually deploy the model your hardware. For quick testing of your model directly on the RZ/V2L Evaluation kit you can use the Edge Impulse CLI specifically the `edge-impulse-linux-runner` command from the RZ/V2L board itself after installing all Edge Impulse CLI. This deploy the model directly to your board hosted in Edge Impulse's TypeScript based Web Deployment and you can connect to the running model from your browser and evaluate performance. 
You will ultimately want to deploy the model into a custom application on your own custom application and the two choices you have are to use the C++ DRP-AI Library for embedding in a custom C++ application or the EIM deployment. ![](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled17.png) -The DRP-AI C++ library is based on Edge Impulse’s standard C++ SDK library and contains the DRP-AI acceleration configuration built in making it easy for developers to use the SDK in their code with DRP-AI through the same API that is used across all platforms. The DRP-AI C++ library can be dropped into your application without you needing understand the underlying configuration. +The DRP-AI C++ library is based on Edge Impulse's standard C++ SDK library and contains the DRP-AI acceleration configuration built in making it easy for developers to use the SDK in their code with DRP-AI through the same API that is used across all platforms. The DRP-AI C++ library can be dropped into your application without you needing understand the underlying configuration. Edge Impulse has a created an a packaged executable called an EIM (Edge Impulse Model) that is essentially a Linux executable that wraps up your model and all associated feature processing and acceleration into one executable which is accessed via a simple Inter process Communication Interface (IPC). You can easily pass your input data to this executable via the IPC interface and the receive the results via the same interface. @@ -163,7 +163,7 @@ Edge Impulse has a created an a packaged executable called an EIM (Edge Impulse The EIM DRP-AI deployment is easily accessible from the Edge Impulse Studio by selecting the Renesas RZ/V option under Build Firmware. This automatically results an EIM download. The EIM file can then be copied to your RZ/V2L board and called from your C++, Python, NodeJS or GoLang application. -Not only do both of these options provide you flexibility for most applications and allow your developer to be able to use DRP-AI transparently, you also benefit from the additional optimization provided by the Edge Impulse’s EON Tuner. EON assists with improving accuracy and is supported on RZ/V for image classification. +Not only do both of these options provide you flexibility for most applications and allow your developer to be able to use DRP-AI transparently, you also benefit from the additional optimization provided by the Edge Impulse's EON Tuner. EON assists with improving accuracy and is supported on RZ/V for image classification. ## Deployment Examples @@ -177,7 +177,7 @@ The 2 stage pipeline runs sequentially and the more objects detected the more cl While this pipeline can be deployed to any Linux board that supports EIM, it can be used with DRP-AI on the Renesas RZ/V2L Eval kit or RZ/Board leveraging the highly performant and low power DRP-AI by selecting these options in Edge Impulse Studio as shown earlier. By deploying to the RZ/V2L you will achieve the lowest power consumption vs framerate against any of the other supported platforms. YOLO Object Detection also ensures you get the level of performance needed for demanding applications. 
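
To give a feel for how such a two stage pipeline can be wired together, here is a heavily simplified sketch using the Edge Impulse Linux Python SDK. It is not the actual application code described below; the model file names and input image are placeholders, and the exact result fields may vary between SDK versions.

```
# Simplified two-stage sketch (not the actual app.py): an object detection EIM finds
# regions of interest, and each cropped detection is passed to a second classification EIM.
# Model file names and the input image are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

detector = ImageImpulseRunner("object-detection.eim")
classifier = ImageImpulseRunner("classifier.eim")

try:
    detector.init()
    classifier.init()

    img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)

    det_features, cropped = detector.get_features_from_image(img)
    detections = detector.classify(det_features)

    for bb in detections["result"].get("bounding_boxes", []):
        # Crop each detected region and run it through the second-stage classifier
        crop = cropped[bb["y"]:bb["y"] + bb["height"], bb["x"]:bb["x"] + bb["width"]]
        cls_features, _ = classifier.get_features_from_image(crop)
        result = classifier.classify(cls_features)
        print(bb["label"], "->", result["result"].get("classification"))
finally:
    detector.stop()
    classifier.stop()
```
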
-The application consists of two files `app.py` which contains the main 2 stage pipeline and web server and `eim.py` which is a custom Python SDK for using EIM’s in your own application +The application consists of two files `app.py` which contains the main 2 stage pipeline and web server and `eim.py` which is a custom Python SDK for using EIM's in your own application To configure the application various configuration options are available in the Application Configuration Options section near the top of the application: @@ -211,7 +211,7 @@ Power consumption figures are shown running on an actual RZ/V2L Eval Kit measuri ![](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled21.png) -As can be seen the power current draw for YOLOv5 Object Detection is under 500mA in total whereas Image Classification is just under 400mA whereas the board draws just under 300mA while idle with a single user logged in via SSH. This shows the phenomaly low power operation of DRP-AI which also does not require any heatsinks to be attached to the RZ/V2L MPU. +As can be seen the power current draw for YOLOv5 Object Detection is under 500mA in total whereas Image Classification is just under 400mA whereas the board draws just under 300mA while idle with a single user logged in via SSH. This shows the phenominal low power operation of DRP-AI which also does not require any heatsinks to be attached to the RZ/V2L MPU. ### Pose Detection on Renesas RZ/V2L with DRP-AI @@ -225,7 +225,7 @@ The generated keypoints are fed to a normal classifier instead of an Transfer Le Provided the labels were correctly done the classifier learns to detect different kinds of poses. This is very useful for working with pose models to actually classify types of poses and figure out activities being performed by by people detected in a scene. -The PoseNet block requires that you run it locally on your own machine if you don’t have an Enterprise account more details can be found at [https://github.com/edgeimpulse/pose-estimation-processing-block](https://github.com/edgeimpulse/pose-estimation-processing-block) +The PoseNet block requires that you run it locally on your own machine if you don't have an Enterprise account more details can be found at [https://github.com/edgeimpulse/pose-estimation-processing-block](https://github.com/edgeimpulse/pose-estimation-processing-block) When using the PoseNet feature extractor with DRP-AI the feature extraction block runs in CPU whereas Edge Impulse will generate DRP-AI accelerated implementation of the Classifier provided the *Renesas RZ/V2L (with DRP-AI accelerator)* target is selected. @@ -233,27 +233,27 @@ The Classification model can be dropped into the two stage pipeline as the secon The number of people in the scene will impact performance in this way however with DRP-AI this is achieved with a lower power draw. -An example web based application written in Python with Flask is available (source) to test the two stage pipeline using Edge Impulse’s PoseNet (link to PoseNet) pipeline as part of the second stage classification in the pipeline. This demonstrates the power of using Python for simplification of the application logic while still being able to utilize the power of DRP-AI with Edge Impulse’s EIM deployments thereby making life easier. +An example web based application written in Python with Flask is available (source) to test the two stage pipeline using Edge Impulse's PoseNet (link to PoseNet) pipeline as part of the second stage classification in the pipeline. 
This demonstrates the power of using Python for simplification of the application logic while still being able to utilize the power of DRP-AI with Edge Impulse's EIM deployments thereby making life easier. The output of the Object Detection step draws a bounding box with the label shown on the left in this case a PERSON was detected. The second stage classifier shows the output of the classification in this case using the PoseNet pipeline showing a person POINTING. ![](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled24.png) -The exact same application was use as the Candy Detection above by simply substituting EIM’s. +The exact same application was use as the Candy Detection above by simply substituting EIM's. -Edge Impulse will make it easy for you to build different cases and deploy a new pipeline by simply building your models and downloading the EIM’s. +Edge Impulse will make it easy for you to build different cases and deploy a new pipeline by simply building your models and downloading the EIM's. ## Summary -Renesas has created a low power highly performant and novel ML accelerator in the form of DRP-AI. The DRP-AI is provided as part of the RZ/V series of Arm Cortex-A MPU’s which were developed for AI based vision applications, offering a wide variety of peripherals to suit most applications from B2B to B2C. +Renesas has created a low power highly performant and novel ML accelerator in the form of DRP-AI. The DRP-AI is provided as part of the RZ/V series of Arm Cortex-A MPU's which were developed for AI based vision applications, offering a wide variety of peripherals to suit most applications from B2B to B2C. -DRP AI is a full ML accelerator that exploits the dynamically configurable capabilities which Renesas has designed to allow the hardware to effectively adapt itself in real time to the ML model being executed thereby accelerating the inference process while also consuming a lower amount of power than other solutions such GPU’s. +DRP AI is a full ML accelerator that exploits the dynamically configurable capabilities which Renesas has designed to allow the hardware to effectively adapt itself in real time to the ML model being executed thereby accelerating the inference process while also consuming a lower amount of power than other solutions such GPU's. Edge Impulse makes it easy to use DRP-AI without needing understand or implement the workflows required to convert your models to work with DRP. Everything needed for DRP-AI translation baked in to allow you to leverage the benefits of DRP-AI in your vision based AI applications with a few clicks greatly simplifying the developer experience. -Deployment options available start with Edge Impulse’s own EIM executable models that have the DRP-AI acceleration requirements built in to help you get going quickly on Linux and can be used with applications written in Python as we have demonstrated earlier. +Deployment options available start with Edge Impulse's own EIM executable models that have the DRP-AI acceleration requirements built in to help you get going quickly on Linux and can be used with applications written in Python as we have demonstrated earlier. 
-Alternatively there is also a C++ DRP-AI library built around Edge Impulse’s SDK that allows you to build in DRP-AI support into your custom applications [https://docs.edgeimpulse.com/renesas/deployment/drp-ai-library/deploy-your-model-as-a-drp-ai-library](https://docs.edgeimpulse.com/renesas/deployment/drp-ai-library/deploy-your-model-as-a-drp-ai-library)
+Alternatively, there is also a C++ DRP-AI library built around Edge Impulse's SDK that allows you to build DRP-AI support into your custom applications: [https://docs.edgeimpulse.com/renesas/deployment/drp-ai-library/deploy-your-model-as-a-drp-ai-library](https://docs.edgeimpulse.com/renesas/deployment/drp-ai-library/deploy-your-model-as-a-drp-ai-library)
When used in combination with Renesas DRP-AI, the RZ/V2L becomes an even more powerful tool for developers working on AI-powered applications. Together, these two products offer a high-performance, low-power consumption solution for processing large amounts of data quickly and efficiently.
diff --git a/silabs-xg24-posture-detection.md b/silabs-xg24-posture-detection.md
index cc64d804..a2f9a32e 100644
--- a/silabs-xg24-posture-detection.md
+++ b/silabs-xg24-posture-detection.md
@@ -12,7 +12,7 @@ Public Project:
## Introduction
-In this project I'm going to walkthough how to port an exisiting project developed on the SiLabs Thunderboard Sense 2, to SiLabs' newer and more powerful xG24 development board.
+In this project I'm going to walk through how to port an existing project developed on the SiLabs Thunderboard Sense 2 to SiLabs' newer and more powerful xG24 development board.
The original project was developed by [Manivnnan Sivan](https://www.hackster.io/manivannan) to detect correct / incorrect posture of manufacturing workers using a wearable belt.
@@ -24,7 +24,7 @@ I will walk you through how you can clone his Public Edge Impulse project, deplo
You can find more about the project here in the original project documentation, [Worker Safety Posture Detection](https://docs.edgeimpulse.com/experts/prototype-and-concept-projects/worker-safety-posture-detection).
-The project is intended to help workers in manufaturing. They work in conditions that can put a lot of stress on their bodies. Depending on the worker's role in the production process, they might experience issues related to cramped working conditions, heavy lifting, or repetitive stress.
+The project is intended to help workers in manufacturing. They work in conditions that can put a lot of stress on their bodies. Depending on the worker's role in the production process, they might experience issues related to cramped working conditions, heavy lifting, or repetitive stress.
Poor posture can cause problems for the health of those who work in manufacturing. Along with that, research suggests that making efforts to improve posture among manufacturing employees can lead to significant increases in production. Workers can improve their posture by physical therapy, or simply by being more mindful during their work day.
@@ -32,7 +32,7 @@ Poor posture can cause problems for the health of those who work in manufacturin
## Running the Project on Thunderboard Sense 2
-Before porting, we need to run the project on the existing platform to understand how it's run and familiarize ourselves with it's paramters. So let's get started.
+Before porting, we need to run the project on the existing platform to understand how it's run and familiarize ourselves with its parameters. So let's get started.
### Installing Dependencies
@@ -45,7 +45,7 @@ Before you proceed further, there are few other software packages you need to in
Go to the Edge Impulse project page using the [link here](https://studio.edgeimpulse.com/public/148375/latest), and clone it.
-Click **Clone** on the right corner buttton to create a copy of the project.
+Click the **Clone** button in the right corner to create a copy of the project.
![](.gitbook/assets/silabs-xg24-posture-detection/clone-step1.png)
@@ -57,7 +57,7 @@ Done, the project is successfully cloned into your Edge Impulse account:
![](.gitbook/assets/silabs-xg24-posture-detection/clone-step3.png)
-As we clone the project, it will be loaded with dataset collected by Manivnnan.
+As we clone the project, it will be loaded with the dataset collected by Manivannan.
![](.gitbook/assets/silabs-xg24-posture-detection/clone-step4.png)
@@ -105,7 +105,7 @@ Done, we can now open the LightBlue mobile app to run and see the inference:
![](.gitbook/assets/silabs-xg24-posture-detection/runonSense2.jpg)
-Alternativly you can run it on a computer, if you dont't have access to a phone. Run the command below to see if the tinyML model is inferencing.
+Alternatively, you can run it on a computer if you don't have access to a phone. Run the command below to see if the tinyML model is inferencing.
`edge-impulse-run-impulse`
@@ -173,7 +173,7 @@ Now, we can build the model and deploy to the xG24. For the build, we need to ch
![](.gitbook/assets/silabs-xg24-posture-detection/xG24BuildModel.png)
-After genrating the `.bin` file, we need to use [Simplicity Commander](https://community.silabs.com/s/article/simplicity-commander) to flash your xG24 Dev Kit with this firmware. To do this, first select your board from the dropdown list on the top left corner:
+After generating the `.bin` file, we need to use [Simplicity Commander](https://community.silabs.com/s/article/simplicity-commander) to flash your xG24 Dev Kit with this firmware. To do this, first select your board from the dropdown list in the top left corner:
![](.gitbook/assets/silabs-xg24-posture-detection/xg24-dk-commander-select-board.webp)
@@ -185,23 +185,23 @@ Next, we can use the LightBlue mobile app to run and see the inference.
![](.gitbook/assets/silabs-xg24-posture-detection/xG24App.jpg)
-Alternatively, we can run on computer as we did for the Thunderboard Sense 2, if you dont't have access to a phone. Run the command below to see if the tinyML model is inferencing.
+Alternatively, we can run it on a computer as we did for the Thunderboard Sense 2, if you don't have access to a phone. Run the command below to see if the tinyML model is inferencing.
`edge-impulse-run-impulse`
![](.gitbook/assets/silabs-xg24-posture-detection/xg24_inference.png)
-Awesome, we have now sucessfully ported a project from Thunderboard Sense 2 to the xG24 Dev Kit!
+Awesome, we have now successfully ported a project from Thunderboard Sense 2 to the xG24 Dev Kit!
## Conclusion
-We can see here, the xG24 does a faster classfication of these tinyML datasets without compromising the accuracy.
+We can see here that the xG24 does a faster classification of these tinyML datasets without compromising the accuracy.
-Here you can see the comparison data, and we can see **91.1765%** increased inferencing speed in the NN Classfier, while the RAM and Flash usage are the same.
+Here you can see the comparison data, and we can see a **91.1765%** increase in inferencing speed in the NN Classifier, while the RAM and Flash usage are the same.
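Figures like these read most naturally as the relative reduction in inference time between the two boards; a small sketch of that calculation is shown below, using made-up timings rather than the values measured in this comparison.

```python
# Hedged sketch of how a percentage "increase in inferencing speed" can be derived
# from two measured inference times. The numbers are placeholders, not values
# measured in this project; the original comparison may use a different definition.
def inference_time_reduction_pct(before_ms: float, after_ms: float) -> float:
    """Relative reduction in inference time, as a percentage of the original time."""
    return (before_ms - after_ms) / before_ms * 100.0

# e.g. a hypothetical drop from 40 ms to 4 ms would be reported as a 90% improvement
print(round(inference_time_reduction_pct(40.0, 4.0), 4))  # -> 90.0
```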
![](.gitbook/assets/silabs-xg24-posture-detection/model-optimization_time.png)
-Similar results are acheived in the field data when we are inferencing the live data stream. Here we can see a 92.3077% increase in speed in the classification, which is more than what was calculated in the model optimization.
+Similar results are achieved in the field data when we are inferencing on the live data stream. Here we can see a 92.3077% increase in classification speed, which is more than what was calculated in the model optimization.
![](.gitbook/assets/silabs-xg24-posture-detection/InferenceTime.png)
diff --git a/sony-spresense-smart-hvac-system.md b/sony-spresense-smart-hvac-system.md
index 75a64937..b8ece004 100644
--- a/sony-spresense-smart-hvac-system.md
+++ b/sony-spresense-smart-hvac-system.md
@@ -107,7 +107,7 @@ If everything is OK, then we can test the model. Go to Model Testing on the left
![](.gitbook/assets/sony-spresense-smart-hvac-system/image15.png)
-You should have the Arduino IDE installed on your computer for the following step. On the navigation menu, choose **Deployment** on the left, search for and select Arduino Library in the Selected Deployment box, and then click **Build** Once the Edge Impulse Arduino Library is built, downloaded and unzipped, you should download the *spresense_camera_smartHVAC_oled.ino* code which [can be found here](https://github.com/Jallson/Smart_HVAC/blob/main/spresense_camera_smartHVAC_oled.ino) and place it inside the unzipped folder from Edge Impulse. Once the *.ino* code is inside Edge Impulse unzipped folder, move it to your Arduino folder on your computer. Now you can upload the *.ino* code to your Spresense board via the Arduino IDE.
+You should have the Arduino IDE installed on your computer for the following step. On the navigation menu, choose **Deployment** on the left, search for and select Arduino Library in the Selected Deployment box, and then click **Build**. Once the Edge Impulse Arduino Library is built, downloaded and unzipped, you should download the *spresense_camera_smartHVAC_oled.ino* code which [can be found here](https://github.com/Jallson/Smart_HVAC/blob/main/spresense_camera_smartHVAC_oled.ino) and place it inside the unzipped folder from Edge Impulse. Once the *.ino* code is inside the Edge Impulse unzipped folder, move it to your Arduino folder on your computer. Now you can upload the *.ino* code to your Spresense board via the Arduino IDE.
![](.gitbook/assets/sony-spresense-smart-hvac-system/image19.png)
diff --git a/tinyml-gastroscopic-image-processing.md b/tinyml-gastroscopic-image-processing.md
index be23780e..419544a8 100644
--- a/tinyml-gastroscopic-image-processing.md
+++ b/tinyml-gastroscopic-image-processing.md
@@ -50,7 +50,7 @@ After the training is complete, the **Live classification** page allows us to te
In order to deploy the model on a microcontroller, the model is converted to a `.tflite` file then deployed on the MCU. Here in our case, we must build firmware using the Edge Impulse platform. We're going to use two different platforms for the sake of testing, the OpenMV Cam H7 and a Sony Spresense. Impulses can also be deployed as a C++ library and can be included in your own application to run the impulse locally.
-To use the model on the OpenMV Cam H7, copy the `.tflite` and label file from the folder that is downloaded from the **Deployment** page. Next, paste it into the OpenMV drive that get's mounted on your computer when the OpenMV Cam is attached via USB. Open the python script file in the OpenMV IDE and start inference.
+To use the model on the OpenMV Cam H7, copy the `.tflite` and label file from the folder that is downloaded from the **Deployment** page. Next, paste it into the OpenMV drive that gets mounted on your computer when the OpenMV Cam is attached via USB. Open the Python script file in the OpenMV IDE and start inference.
For the Sony Spresense, unzip the downloaded file that is provided on the **Deployment** page, and click on the `flash` command that corresponds to your operating system. In my case, this was Windows. A Terminal will open and it will begin flashing the board, then open a new Terminal and run the command **edge-impulse-run-impulse –continuous** as shown in the below image.
diff --git a/warehouse-shipment-monitoring.md b/warehouse-shipment-monitoring.md
index dcb13b70..bf94a904 100644
--- a/warehouse-shipment-monitoring.md
+++ b/warehouse-shipment-monitoring.md
@@ -85,7 +85,7 @@ Back in the Studio, move approximately 30 seconds of data from each category to
![](.gitbook/assets/warehouse-shipment-monitoring/train-test.jpg)
-After data collection is completed, now go the Iimpulse settings and select Raw data in Processing. A Window size of 3000ms and Window increase of 500 ms is good.
+After data collection is completed, now go to the Impulse settings and select Raw data in Processing. A Window size of 3000 ms and a Window increase of 500 ms is good.
![](.gitbook/assets/warehouse-shipment-monitoring/impulse.jpg)
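The Window size and Window increase settings above determine how each recording is sliced into overlapping training windows. The sketch below illustrates that arithmetic; Edge Impulse's exact handling of any trailing partial window may differ.

```python
# Hedged sketch: how a sliding window with the settings above cuts one recording
# into training windows. Edge Impulse's exact treatment of trailing samples may differ.
def count_windows(sample_ms: int, window_ms: int = 3000, increase_ms: int = 500) -> int:
    """Number of full windows a recording yields with the given window size and stride."""
    if sample_ms < window_ms:
        return 0
    return (sample_ms - window_ms) // increase_ms + 1

# e.g. one of the ~30 second recordings mentioned above yields 55 overlapping 3 s windows
print(count_windows(30_000))  # -> 55
```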