From 47e8070ebc36794aab3bf3bcfe8752703088d108 Mon Sep 17 00:00:00 2001 From: david Date: Wed, 23 Aug 2023 13:50:39 -0700 Subject: [PATCH] spelling fixes --- .wordlist.txt | 106 ++++++++++++++++++ adaptable-vision-counters.md | 8 +- ai-patient-assistance.md | 16 +-- arduino-kway-fall-detection.md | 6 +- arduino-kway-gesture-recognition-weather.md | 4 +- audio-recognition-on-silabs-xg24.md | 10 +- container-counting-nicla-vision.md | 2 +- detecting-worker-accidents-with-ai.md | 46 ++++---- edenoff-anticipate-power-outages.md | 4 +- esd-protection-using-computer-vision.md | 2 +- fluid-leak-detection-with-flowmeter-and-ai.md | 2 +- hospital-bed-occupancy-detection.md | 4 +- nvidia-omniverse-replicator.md | 2 +- predictive-maintenance-with-sound.md | 4 +- smart-baby-swing.md | 4 +- solar-panel-defect-detection.md | 4 +- surface-crack-detection.md | 2 +- tinyml-digital-counter-openmv.md | 10 +- worker-safety-posture-detection.md | 10 +- 19 files changed, 176 insertions(+), 70 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index b7dd3e63..e37b527b 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -1419,3 +1419,109 @@ ek rXkz stripboard veroboard +Brocolli +Chillis +msec +Multicore +erroring +PIO +multicore +AGC +Neopixel +Neopixels +ebFyFq +sX +Piezo +piezo +FoV +MLX +IZDHoQUmEg +BCM +Broadcom +authDomain +databaseURL +dataset's +banglejs +AQG +FUsKiQHVg +GIP +ZCz +aGpTxexuEA +crf +daHeR +dms +fp +kX +licdn +vCISf +Vikström +Truphone +siW +GraphQL +NNtam +SQk +Adamkiewicz +Heroux +NBK +ncbi +nlm +EnvNotification +photochemical +AQI +IQAir +env +PPB +NOx +Tropospheric +VOCs +NOx +NOy +PV +misclassification +spm +Arduin +arduin +frombuffer +Atmega +SX +WaziAct +aaagh +actuations +Grămescu +Niţu +BHA +Elecrow's +frombytes +Pushbutton +qU +rFyNGM +mins +BwoOB +vJc +armsleeve +Labeler +SesEx +zU +olsCrJkXi +nanocamera +CuNENU +ivLA +Electro +SRfa +fb +InvenSense +TDK +DNS +SSL +AKS +vec +Vec +washwater +Rg +UUKy +X'ers +OpenMV's +Edenor +prewarned +ukkinstituutti + diff --git a/adaptable-vision-counters.md b/adaptable-vision-counters.md index 1b3eaadd..ed5aa2e0 100644 --- a/adaptable-vision-counters.md +++ b/adaptable-vision-counters.md @@ -22,7 +22,7 @@ Weight based counting assumes that each part has the exact same weight and uses Consider if there are a higher number of defective parts, we can assume that something might be wrong with the production units. This data can also be used to improve the quality of production and thus industry can make more products in less time. So our adaptable counters are evolving as a solution to the world's accurate and flexible counting needs. -The Adaptable Counter is a device consisting of a Rapsberry Pi 4 and camera module, and the counting process is fully powered by FOMO. So it can count faster and more accurately than any other method. Adaptable counters are integrated with a cool looking website. +The Adaptable Counter is a device consisting of a Raspberry Pi 4 and camera module, and the counting process is fully powered by FOMO. So it can count faster and more accurately than any other method. Adaptable counters are integrated with a cool looking website. ## Use-Cases @@ -36,7 +36,7 @@ In this case, we are counting defective and non-defective washers. ### 2. Counting in Motion -In this case, we are counting bolts and washers and faulty washers passing through the conveyer belt. +In this case, we are counting bolts and washers and faulty washers passing through the conveyor belt. 
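To make the counting step more concrete, below is a small illustrative sketch (not part of the original build) of how a FOMO model exported from Edge Impulse could count detections per class on a Raspberry Pi 4. It assumes the Edge Impulse Linux Python SDK and a model file downloaded as `modelfile.eim`; the camera index, label names, and confidence threshold are placeholders.

```python
# Illustrative sketch only: count FOMO detections per class on a Raspberry Pi.
# Assumes `pip install edge_impulse_linux opencv-python` and a model file
# downloaded from the Edge Impulse project as "modelfile.eim" (placeholder name).
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # placeholder path to the exported model
CONFIDENCE = 0.5               # placeholder confidence threshold

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    camera = cv2.VideoCapture(0)            # first attached camera
    ok, frame = camera.read()
    camera.release()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Resize/crop the frame to the model input size and extract features
        features, _cropped = runner.get_features_from_image(rgb)
        result = runner.classify(features)
        counts = {}
        for box in result["result"].get("bounding_boxes", []):
            if box["value"] >= CONFIDENCE:
                counts[box["label"]] = counts.get(box["label"], 0) + 1
        # e.g. {'washer': 4, 'defective_washer': 1} -- labels depend on the project
        print("Counts per class:", counts)
```

A production counter would run this in a continuous loop and track objects across frames so that items on a moving conveyor are not counted twice.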
![](.gitbook/assets/adaptable-vision-counters/IMG_1678.jpg) @@ -62,7 +62,7 @@ Edge Impulse is one of the leading development platforms for machine learning on ### Data Acquisition -Every machine learining project starts with data collection. A goood collection of data is one of the major factors that influences the performance of the model. Make sure you have a wide range of perspectives and zoom levels of the items that are being collected. You may take data from any device or development board, or upload your own datasets, for data acquisition. As we have our own dataset, we are uploading them using the Data Acquisition tab. +Every machine learning project starts with data collection. A good collection of data is one of the major factors that influences the performance of the model. Make sure you have a wide range of perspectives and zoom levels of the items that are being collected. You may take data from any device or development board, or upload your own datasets, for data acquisition. As we have our own dataset, we are uploading them using the Data Acquisition tab. ![](.gitbook/assets/adaptable-vision-counters/Data_Acquisition.png) @@ -188,4 +188,4 @@ This Raspberry Pi Camera Module is a custom-designed add-on for Raspberry Pi. It For powering up the system we used a 5V 2A adapter. In this case we don't have any power hungry peripherals, so 2A current is enough. If you have 3A supply, please go for that. -For the sake of convienence we also used a acrylic case for setting up all the hardware. +For the sake of convenience we also used a acrylic case for setting up all the hardware. diff --git a/ai-patient-assistance.md b/ai-patient-assistance.md index 987d5668..fe41bd5f 100644 --- a/ai-patient-assistance.md +++ b/ai-patient-assistance.md @@ -96,7 +96,7 @@ Once complete head over to the devices tab of your project and you should see th We are going to create our own dataset, using the built in microphone on the Arduino Nano 33 BLE Sense. We are going to collect data that will allow us to train a machine learning model that can detect the words/phrases **Doctor**, **Nurse**, and **Help**. -We will use the **Record new data** feature on Edge Impulse to record 15 sets of 10 utterences of each of our keywords, and then we will split them into individual samples. +We will use the **Record new data** feature on Edge Impulse to record 15 sets of 10 utterances of each of our keywords, and then we will split them into individual samples. Ensuring your device is connected to the Edge Impulse platform, head over to the **Data Aqcquisition** tab to continue. @@ -146,7 +146,7 @@ Now we are going to create our network and train our model. ![Add processing block](.gitbook/assets/ai-patient-assistance/processing-block.jpg) -Head to the **Create Impulse** tab and change the window size to 2000ms. Next click **Add processing block** and select **Audio (MFCC)**, then click **Add learning block** and select **Clasification (Keras)**. +Head to the **Create Impulse** tab and change the window size to 2000ms. Next click **Add processing block** and select **Audio (MFCC)**, then click **Add learning block** and select **Classification (Keras)**. ![Created Impulse](.gitbook/assets/ai-patient-assistance/impulse-2.jpg) @@ -196,17 +196,17 @@ You will see the output of the testing in the output window, and once testing is ![Live testing](.gitbook/assets/ai-patient-assistance/testing-3.jpg) -Now we need to test how the model works on our device. 
Use the **Live classification** feature to record some samples for clasification. Your model should correctly identify the class for each sample. +Now we need to test how the model works on our device. Use the **Live classification** feature to record some samples for classification. Your model should correctly identify the class for each sample. -## Performance Callibration +## Performance Calibration -![Performance Callibration](.gitbook/assets/ai-patient-assistance/calibration.jpg) +![Performance Calibration](.gitbook/assets/ai-patient-assistance/calibration.jpg) -Edge Impulse has a great new feature called **Performance Callibration**, or **PerfCal**. This feature allows you to run a test on your model and see how well it will perform in the real world. The system will create a set of post processing configurations for you to choose from. These configurations help to minimize either false activations or false rejections +Edge Impulse has a great new feature called **Performance Calibration**, or **PerfCal**. This feature allows you to run a test on your model and see how well it will perform in the real world. The system will create a set of post processing configurations for you to choose from. These configurations help to minimize either false activations or false rejections ![Turn on perfcal](.gitbook/assets/ai-patient-assistance/calibration-2.jpg) -Once you turn on perfcal, you will see a new tab in the menu called **Performance callibration**. Navigate to the perfcal page and you will be met with some configuration options. +Once you turn on perfcal, you will see a new tab in the menu called **Performance calibration**. Navigate to the perfcal page and you will be met with some configuration options. ![Perfcal settings](.gitbook/assets/ai-patient-assistance/calibration-3.jpg) @@ -214,7 +214,7 @@ Select the **Noise** class from the drop down, and check the Unknown class in th ![Perfcal configs](.gitbook/assets/ai-patient-assistance/calibration-4.jpg) -The system will provide a number of configs for you to choose from. Choose the one that best suits your needs and click **Save selected config**. This config will be deployed to your device once you download and install the libray on your device. +The system will provide a number of configs for you to choose from. Choose the one that best suits your needs and click **Save selected config**. This config will be deployed to your device once you download and install the library on your device. ## Versioning diff --git a/arduino-kway-fall-detection.md b/arduino-kway-fall-detection.md index 6682074d..f4b0c5c9 100644 --- a/arduino-kway-fall-detection.md +++ b/arduino-kway-fall-detection.md @@ -34,11 +34,11 @@ Finns in general, and elderly people in particular, are made of a tough and hard Many existing fall detection systems use signals from **accelerometers**, sometimes together with gyroscope sensors, to detect falls. Accelerometers are very sensitively monitoring the acceleration in x, y, and z directions, and are as such very suitable for the purpose. The challenge with developing a fall detection system with the help of accelerometers, is that the data frequency typically needs to be quite high (> 100 Hz) and that the signals need to be filtered and processed further to be of use. -Apart from accelerometers, it is also possible to use e.g. **barometers** to sense if a person suddenly has dropped a meter or more. 
Barometers sense the air pressure, and as the air pressure is higher closer to the ground, one only needs grade school mathematics to create a bare bones fall detection system this way. Easiest is to first convert air pressure to altitude in **meters**, and then use e.g. this formula `previous altitude in meters - current altitude in meters`, and if the difference is higher than e.g. 1.2 meters within 1-2 seconds, a fall might have happened. With barometers the data frequency does often not need to be as high as with accelerometers, and only one parameter (air pressure=altitude) is recorded. One major drawback is the rate of false positives (a fall detected where no fall occured). These might happen because of quick changes in air pressure, e.g. someone opening or closing a door in a confined space like a car, someone shouting, sneezing, coughing close to the sensor etc. +Apart from accelerometers, it is also possible to use e.g. **barometers** to sense if a person suddenly has dropped a meter or more. Barometers sense the air pressure, and as the air pressure is higher closer to the ground, one only needs grade school mathematics to create a bare bones fall detection system this way. Easiest is to first convert air pressure to altitude in **meters**, and then use e.g. this formula `previous altitude in meters - current altitude in meters`, and if the difference is higher than e.g. 1.2 meters within 1-2 seconds, a fall might have happened. With barometers the data frequency does often not need to be as high as with accelerometers, and only one parameter (air pressure=altitude) is recorded. One major drawback is the rate of false positives (a fall detected where no fall occurred). These might happen because of quick changes in air pressure, e.g. someone opening or closing a door in a confined space like a car, someone shouting, sneezing, coughing close to the sensor etc. ![](.gitbook/assets/arduino-kway-fall-detection/fall_det_05.png) -Some modern and more expensive smartwatches, e.g. Apple Watch, already have in-built fall detection systems, that can automatically call for help in case a fall has been detected, and the person has been immobile for a minute or so. In case the watch has cellular connectivity, it does not even neeed to be paired to a smart phone. +Some modern and more expensive smartwatches, e.g. Apple Watch, already have in-built fall detection systems, that can automatically call for help in case a fall has been detected, and the person has been immobile for a minute or so. In case the watch has cellular connectivity, it does not even need to be paired to a smart phone. ## Project Introduction @@ -54,7 +54,7 @@ Initially I intended to collect data for normal behaviour and activities like e. To be able to use anomaly detection, you just need to collect data for what is considered normal behaviour. Later, when the resulting ML model is deployed to an edge device, it will calculate anomaly scores from the sensor data used. When this score is low it indicates normal behaviour, and when it's high it means an anomaly has been detected. -I followed [this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect accelometer data. +I followed [this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla to get up and running. 
The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect accelerometer data. I started to collect 8-second samples when walking, running, etc. For the sake of simplicity, I had the Nicla device tethered through USB to a laptop as the alternative would have been to use a more complex data gathering program using BLE. I thus held Nicla in one hand and my laptop in the other and started walking and jogging indoors. To get a feeling for how the anomaly detection model works, I only collected 1m 17s of data, with the intention of collecting at least 10 times more data later on. Astonishingly, I soon found out that this tiny data amount was enough for this proof of concept! Obviously, in a real scenario you would need to secure you have covered all the expected different types of activities a person might get involved in. diff --git a/arduino-kway-gesture-recognition-weather.md b/arduino-kway-gesture-recognition-weather.md index 58eb710a..cfdaf5d9 100644 --- a/arduino-kway-gesture-recognition-weather.md +++ b/arduino-kway-gesture-recognition-weather.md @@ -2,7 +2,7 @@ description: Use a Nicla Sense ME attached to the sleeve of a K-way jacket for gesture recognition and bad weather prediction --- -# Arduino x K-Way - Gesture Recognition and Weather Prediciton for Hiking +# Arduino x K-Way - Gesture Recognition and Weather Prediction for Hiking Created By: Justin Lutz @@ -38,7 +38,7 @@ Ideally, there would be a pouch on the armsleeve to slide the Nicla Sense ME int ![](.gitbook/assets/arduino-kway-gesture-recognition-weather/wrist.jpg) -To complete this project I used my go-to source, Edge Impulse, to ingest raw data, develop a model, and export it as an Arduino library. I [followed this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla Sense ME from Edge Impulse to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect the raw accelometer data. I created 3 classes: idle (no movement), walking, and checkpoint. The **Checkpoint** class was essentially me drawing the letter "C" in the air to tell the app to mark a checkpoint on the map while out on a hike. +To complete this project I used my go-to source, Edge Impulse, to ingest raw data, develop a model, and export it as an Arduino library. I [followed this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla Sense ME from Edge Impulse to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect the raw accelerometer data. I created 3 classes: idle (no movement), walking, and checkpoint. The **Checkpoint** class was essentially me drawing the letter "C" in the air to tell the app to mark a checkpoint on the map while out on a hike. You of course could add additional gestures if you wanted to expand the functionality of the jacket and Nicla Sense ME (an "S" for "selfie" maybe?). 
Even with just 15 minutes of data (split between Training and Test), there was great clustering of class data: diff --git a/audio-recognition-on-silabs-xg24.md b/audio-recognition-on-silabs-xg24.md index 74d3b615..3b3c53aa 100644 --- a/audio-recognition-on-silabs-xg24.md +++ b/audio-recognition-on-silabs-xg24.md @@ -12,7 +12,7 @@ Public Project: ## Intro -This project focuses on how to port an existing audio recognition project built with a SiLabs Thunderboard Sense 2, to the latest [EFR32MG24](https://www.silabs.com/wireless/zigbee/efr32mg24-series-2-socs) as used in the newer SiLabs xG24 Dev Kit. For demostration purposes, we will be porting [Manivannan Sivan's](https://www.hackster.io/manivannan) ["Vehicle Predictive Maintenance"](https://www.hackster.io/manivannan/vehicle-predictive-maintenance-cf2ee3) project, which is an Edge Impulse based TinyML model to predict various vehicle failures like faulty drive shaft and brake-pad noises. Check out his work for more information. +This project focuses on how to port an existing audio recognition project built with a SiLabs Thunderboard Sense 2, to the latest [EFR32MG24](https://www.silabs.com/wireless/zigbee/efr32mg24-series-2-socs) as used in the newer SiLabs xG24 Dev Kit. For demonstration purposes, we will be porting [Manivannan Sivan's](https://www.hackster.io/manivannan) ["Vehicle Predictive Maintenance"](https://www.hackster.io/manivannan/vehicle-predictive-maintenance-cf2ee3) project, which is an Edge Impulse based TinyML model to predict various vehicle failures like faulty drive shaft and brake-pad noises. Check out his work for more information. The audio sensor on the Thunderboard Sense 2 and the xG24 Dev Kit are the same (TDK InvenSense ICS-43434), so ideally we're not required to collect any new data using the xG24 Dev Kit for the model to work properly. Had the audio sensor been a different model, it would most likely be necessary to capture a new dataset. @@ -56,16 +56,16 @@ You can follow the guide below to go through the process, if you are interested > ["Edge Impulse xG24 Dev Kit Guide"](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/silabs-xg24-devkit). -With default value of Window Size (10s) and Window Increase (500 ms), the processiong block will throw an error, as represented below: +With default value of Window Size (10s) and Window Increase (500 ms), the processing block will throw an error, as represented below: ![](.gitbook/assets/audio-recognition-on-silabs-xg24/frame_stride_error.jpg) -This is because some of the features in Edge Impulse's processing block have been updated since this project was created, so you need to update some of the paramaters in the Timer Series block such as Window Size and Window Increase, +This is because some of the features in Edge Impulse's processing block have been updated since this project was created, so you need to update some of the parameters in the Timer Series block such as Window Size and Window Increase, or increase the frame stride parameter in the MFE processing block. This is what my updated window parameters look like: ![](.gitbook/assets/audio-recognition-on-silabs-xg24/window_increase_updated.jpg) -If you added some new data and are not sure of the model design, then the [EON tuner](https://docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner) can come to the rescue. 
You just have to select the target device as SiLabs EFR32MG24 (Cortex-M33 78MHz) and configure your desired paramters, then the Eon tuner will come up with suggested architetures which you can use. +If you added some new data and are not sure of the model design, then the [EON tuner](https://docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner) can come to the rescue. You just have to select the target device as SiLabs EFR32MG24 (Cortex-M33 78MHz) and configure your desired parameters, then the Eon tuner will come up with suggested architectures which you can use. ![](.gitbook/assets/audio-recognition-on-silabs-xg24/EON_tuner.jpg) @@ -117,7 +117,7 @@ Note that this is a newer command supported by the Edge Impulse CLI, hence you m Now your model should be running, and recognize the same audio data and perform inferencing on the newer xG24 Dev Kit hardware, with little to no modifications to actual data or to the model architecture. -This higlights the platform agnostic nature of Edge Impulse, and was possible in this case because the audio sensor on both the Thunderboard and xG24 are the same. However, you would need do your own due diligence for migrating projects built with other sensor data such as humidity/temperature, or the light sensor, as those do vary between the boards. +This highlights the platform agnostic nature of Edge Impulse, and was possible in this case because the audio sensor on both the Thunderboard and xG24 are the same. However, you would need do your own due diligence for migrating projects built with other sensor data such as humidity/temperature, or the light sensor, as those do vary between the boards. One final note is that in this project, the xG24 is roughly 2x as fast as the Thunderboard Sense 2 in running the DSP, and 8x faster in running the inference: diff --git a/container-counting-nicla-vision.md b/container-counting-nicla-vision.md index c8e6deff..f0d742cf 100644 --- a/container-counting-nicla-vision.md +++ b/container-counting-nicla-vision.md @@ -13,7 +13,7 @@ Public Project Link: ## Introduction -Accurate inventory management is critical for any business that relies on the sale of physical goods. Inventories can represent a significant investment of capital, and even a small error in inventory levels can have a major impact on a company's bottom line. Furthermore, customers expect to be able to find the products they need when they want them, and out-of-stock items can lead to lost sales. In order to properly manage their inventories, businesses need to keep track of both the level of stock on hand and the rate at which stock is being sold. By using this information to forecast future demand, businesses can avoid both overstock and stockouts. In today's competitive marketplace, effective inventory management can be the difference between success and failure. +Accurate inventory management is critical for any business that relies on the sale of physical goods. Inventories can represent a significant investment of capital, and even a small error in inventory levels can have a major impact on a company's bottom line. Furthermore, customers expect to be able to find the products they need when they want them, and out-of-stock items can lead to lost sales. In order to properly manage their inventories, businesses need to keep track of both the level of stock on hand and the rate at which stock is being sold. By using this information to forecast future demand, businesses can avoid both overstock and out of stock events. 
In today's competitive marketplace, effective inventory management can be the difference between success and failure. Machine Learning algorithms power automatic inventory tracking systems that can automatically detect and classify objects in images, even as items are moved around. This is important because it helps to ensure that inventory levels are accurate, which is essential for businesses to run smoothly. Machine Learning can also be used to automatically count items in containers, such as boxes on a shelf. This is important because it helps to reduce the amount of time that employees need to spend counting inventory manually. As a result, automatic inventory tracking can save businesses time and money. diff --git a/detecting-worker-accidents-with-ai.md b/detecting-worker-accidents-with-ai.md index 91bbfa38..0416d7f9 100644 --- a/detecting-worker-accidents-with-ai.md +++ b/detecting-worker-accidents-with-ai.md @@ -23,7 +23,7 @@ Some accidents which are difficult to be detected in industries includes: ## Detecting Worker Accidents with AI -Sound classification is one of the most widely used applications of Machine Learning. When in danger or scared, we humans respond with audible actions such as screaming, crying, or with words such as: “stop”, or “help” . This alerts other people that we are in trouble and can also give them instructions such as stopping a machine, or opening/closing a system. We can use sound classification to give hearing to machines and manufacturing setups so that they can be aware of the environment status. +Sound classification is one of the most widely used applications of Machine Learning. When in danger or scared, we humans respond with audible actions such as screaming, crying, or with words such as: "stop", or "help" . This alerts other people that we are in trouble and can also give them instructions such as stopping a machine, or opening/closing a system. We can use sound classification to give hearing to machines and manufacturing setups so that they can be aware of the environment status. TinyML has enabled us to bring machine learning models to low-cost and low-power microcontrollers. We will use Edge Impulse to develop a machine learning model which is capable of detecting accidents from workers screams and cries. This event can then be used to trigger safety measures such as machine/actuator stop, and sound alarms. @@ -34,15 +34,15 @@ The [Syntiant](https://www.syntiant.com/) TinyML Board is a tiny development boa ## Quick Start -You can find the public project here: [Acoustic Sensing of Worker Accidents](https://studio.edgeimpulse.com/public/111611/latest). To add this project into your account projects, click “Clone this project” at the top of the window. Next, go to the “Deploying to Syntiant TinyML Board” section below to see how you can deploy the model to the Syntiant TinyML board. +You can find the public project here: [Acoustic Sensing of Worker Accidents](https://studio.edgeimpulse.com/public/111611/latest). To add this project into your account projects, click "Clone this project" at the top of the window. Next, go to the "Deploying to Syntiant TinyML Board" section below to see how you can deploy the model to the Syntiant TinyML board. Alternatively, to create a similar project, follow the next steps after creating a new Edge Impulse project. ## Data Acquisition -We want to create a model that can recognize both key words and human sounds like cries and screams. For these, we have 4 classes in our model: stop, help, cry and scream. 
In addition to these classes, we also need another class that is not part of our 4 keywords. We label this class as “unknown” and it has sound of people speaking, machines, and vehicles, among others. Each class has 1 second of audio sounds. +We want to create a model that can recognize both key words and human sounds like cries and screams. For these, we have 4 classes in our model: stop, help, cry and scream. In addition to these classes, we also need another class that is not part of our 4 keywords. We label this class as "unknown" and it has sound of people speaking, machines, and vehicles, among others. Each class has 1 second of audio sounds. -In total, we have 31 minutes of data for training and 8 minutes of data for testing. For the “unknown” class, we can use Edge Impulse Key Spotting Dataset, which can be obtained [here](https://docs.edgeimpulse.com/docs/pre-built-datasets/keyword-spotting). From this dataset we use the “noise” audio files. +In total, we have 31 minutes of data for training and 8 minutes of data for testing. For the "unknown" class, we can use Edge Impulse Key Spotting Dataset, which can be obtained [here](https://docs.edgeimpulse.com/docs/pre-built-datasets/keyword-spotting). From this dataset we use the "noise" audio files. ![Training data](.gitbook/assets/detecting-worker-accidents-with-AI/img2_screenshot%20Data%20Acquisition%20training.png) @@ -50,21 +50,21 @@ In total, we have 31 minutes of data for training and 8 minutes of data for test ## Impulse Design -The Impulse design is very unique as we are targeting the Syntiant TinyML board. Under ‘Create Impulse’ we set the following configurations: +The Impulse design is very unique as we are targeting the Syntiant TinyML board. Under 'Create Impulse' we set the following configurations: -Our window size is 968ms, and window increase is 484ms milliseconds(ms). Click ‘Add a processing block’ and select Audio (Syntiant). Next, we add a learning block by clicking ‘Add a learning block’ and select Classification (Keras). Click ‘Save Impulse’ to use this configuration. +Our window size is 968ms, and window increase is 484ms milliseconds(ms). Click 'Add a processing block' and select Audio (Syntiant). Next, we add a learning block by clicking 'Add a learning block' and select Classification (Keras). Click 'Save Impulse' to use this configuration. ![Create impulse](.gitbook/assets/detecting-worker-accidents-with-AI/img4_screenshot%20Create%20Impulse.png) -Next we go to our processing block configuration, Syntiant, and first click ‘Save parameters’. The preset parameters will work well so we can use them in our case. +Next we go to our processing block configuration, Syntiant, and first click 'Save parameters'. The preset parameters will work well so we can use them in our case. -On the window ‘Generate features’, we click the “Generate features” button. Upon completion we see a 3D representation of our dataset. These are the Syntiant blocks that will be passed into the neural network. +On the window 'Generate features', we click the "Generate features" button. Upon completion we see a 3D representation of our dataset. These are the Syntiant blocks that will be passed into the neural network. ![Features](.gitbook/assets/detecting-worker-accidents-with-AI/img5_screenshot%20Generate%20Features.png) -Lastly, we need to configure our neural network. Start by clicking “NN Classifier” . Here we set the number of training cycle to 80, with a learning rate of 0.0005. 
Edge Impulse automatically designs a default Neural Network architecture that works very well without requiring the parameters to be changed. However, if you wish to update some parameters, Data Augmentation can improve your model accuracy. Try adding noise, masking time and frequency bands and asses your model performance with each setting. +Lastly, we need to configure our neural network. Start by clicking "NN Classifier" . Here we set the number of training cycle to 80, with a learning rate of 0.0005. Edge Impulse automatically designs a default Neural Network architecture that works very well without requiring the parameters to be changed. However, if you wish to update some parameters, Data Augmentation can improve your model accuracy. Try adding noise, masking time and frequency bands and asses your model performance with each setting. -With the training cycles and learning rate set, click “Start training”, and you will have a neural network when the task is complete. We get an accuracy of 94%, which is pretty good! +With the training cycles and learning rate set, click "Start training", and you will have a neural network when the task is complete. We get an accuracy of 94%, which is pretty good! ![NN parameters](.gitbook/assets/detecting-worker-accidents-with-AI/img6_screenshot%20NN%20Classifier%20parameters.png) @@ -75,23 +75,23 @@ With the training cycles and learning rate set, click “Start training”, and When training our model, we used 80% of the data in our dataset. The remaining 20% is used to test the accuracy of the model in classifying unseen data. We need to verify that our model has not overfit by testing it on new data. If your model performs poorly, then it means that it overfit (crammed your dataset). This can be resolved by adding more data and/or reconfiguring the processing and learning blocks if needed. Increasing performance tricks can be found in this [guide](https://docs.edgeimpulse.com/docs/tips-and-tricks/increasing-model-performance). -On the left bar, click “Model testing” then “classify all”. Our current model has a performance of 91% which is pretty good and acceptable. +On the left bar, click "Model testing" then "classify all". Our current model has a performance of 91% which is pretty good and acceptable. -From the results we can see new data called “testing” which was obtained from the environment and sent to Edge Impulse. The Expected Outcome column shows which class the collected data belong to. In all cases, our model classifies the sounds correctly as seen in the Result column; it matches the Expected outcome column. +From the results we can see new data called "testing" which was obtained from the environment and sent to Edge Impulse. The Expected Outcome column shows which class the collected data belong to. In all cases, our model classifies the sounds correctly as seen in the Result column; it matches the Expected outcome column. ![Model testing](.gitbook/assets/detecting-worker-accidents-with-AI/img8_screenshot%20Model%20testing.png) ## Deploying to the Syntiant TinyML Board -To deploy our model to the Syntiant Board, first click “Deployment”. Here, we will first deploy our model as a firmware on the board. When our audible events (cry, scream, help, stop) are detected, the onboard RGB LED will turn on. When the unknown sounds are detected, the on board RGB LED will be off. This runs locally on the board without requiring an internet connection, and runs with minimal power consumption. 
+To deploy our model to the Syntiant Board, first click "Deployment". Here, we will first deploy our model as a firmware on the board. When our audible events (cry, scream, help, stop) are detected, the onboard RGB LED will turn on. When the unknown sounds are detected, the on board RGB LED will be off. This runs locally on the board without requiring an internet connection, and runs with minimal power consumption. -Under “Build Firmware” select Syntiant TinyML. +Under "Build Firmware" select Syntiant TinyML. ![Deploy firmware](.gitbook/assets/detecting-worker-accidents-with-AI/img9_screenshot%20Deploying%20Firmware.png) -Next, we need to configure posterior parameters. These are used to tune the precision and recall of our Neural Network activations, to minimize False Rejection Rate and False Activation Rate. More information on posterior parameters can be found here: [Responding to your voice - Syntiant - RC Commands](https://docs.edgeimpulse.com/docs/tutorials/hardware-specific-tutorials/responding-to-your-voice-syntiant-rc-commands-go-stop), in “Deploying to your device” section. +Next, we need to configure posterior parameters. These are used to tune the precision and recall of our Neural Network activations, to minimize False Rejection Rate and False Activation Rate. More information on posterior parameters can be found here: [Responding to your voice - Syntiant - RC Commands](https://docs.edgeimpulse.com/docs/tutorials/hardware-specific-tutorials/responding-to-your-voice-syntiant-rc-commands-go-stop), in "Deploying to your device" section. -Under “Configure posterior parameters” click “Find posterior parameters”. Check all classes apart from “unknown”, and for calibration dataset we use “No calibration (fastest)”. After setting the configurations, click “Find parameters”. +Under "Configure posterior parameters" click "Find posterior parameters". Check all classes apart from "unknown", and for calibration dataset we use "No calibration (fastest)". After setting the configurations, click "Find parameters". ![Find posterior parameters](.gitbook/assets/detecting-worker-accidents-with-AI/img10_screenshot%20Deploying%20Find%20posterior%20parameters.png) @@ -99,17 +99,17 @@ This will start a new task which we have to wait until it is finished. ![posterior parameters done](.gitbook/assets/detecting-worker-accidents-with-AI/img11_screenshot%20Deploying%20Configure%20posterior%20parameters%20Job%20complete.png) -When the job is completed, close the popup window and then click “Build” options to build our firmware. The firmware will be downloaded automatically when the build job completes. Once the firmware is downloaded, we first need to unzip it. Connect a Syntiant TinyML board to your computer using a USB cable. Next, open the unzipped folder and run the flashing script based on your Operating System. +When the job is completed, close the popup window and then click "Build" options to build our firmware. The firmware will be downloaded automatically when the build job completes. Once the firmware is downloaded, we first need to unzip it. Connect a Syntiant TinyML board to your computer using a USB cable. Next, open the unzipped folder and run the flashing script based on your Operating System. -We can connect to the board’s firmware over Serial. To do this, open a terminal, select the COM Port of the Syntiant TinyML board with settings 115200 8-N-1 settings (in Arduino IDE, that is 115200 baud Carriage return). +We can connect to the board's firmware over Serial. 
To do this, open a terminal, select the COM Port of the Syntiant TinyML board with settings 115200 8-N-1 settings (in Arduino IDE, that is 115200 baud Carriage return). -Sounds such as “stop”, “help”, “aaagh!” or crying will turn the RGB LED to red. +Sounds such as "stop", "help", "aaagh!" or crying will turn the RGB LED to red. ![Syntiant red-light green-light](.gitbook/assets/detecting-worker-accidents-with-AI/img12_Syntiant%20TinyML%20board%20-%20inference%20red%20green.png) -![Predicitons on serial port](.gitbook/assets/detecting-worker-accidents-with-AI/img13_Serial%20running%20model%20on%20Syntiant%20board.png) +![Predictions on serial port](.gitbook/assets/detecting-worker-accidents-with-AI/img13_Serial%20running%20model%20on%20Syntiant%20board.png) -For the “unknown” sounds, the RGB LED is off. While configuring the posterior parameters, the detected classes that we selected are the ones which trigger the RGB LED lighting. +For the "unknown" sounds, the RGB LED is off. While configuring the posterior parameters, the detected classes that we selected are the ones which trigger the RGB LED lighting. ## Taking it one step further @@ -125,7 +125,7 @@ A custom firmware was then created to turn on GPIO 1 HIGH (3.3V) of the Syntiant ![Custom firmware](.gitbook/assets/detecting-worker-accidents-with-AI/img16_screenshot%20Arduino%20custom%20firmware%20code.png) -Awesome! What’s next now? Checkout the custom firmware [here](https://github.com/SolomonGithu/syntiant-tinyml-firmware-acoustic-detection) and add intelligent sensing to your actuators and also home automation devices! +Awesome! What's next now? Checkout the custom firmware [here](https://github.com/SolomonGithu/syntiant-tinyml-firmware-acoustic-detection) and add intelligent sensing to your actuators and also home automation devices! ## Intelligent sensing for 8-bit LoRaWAN actuator @@ -135,7 +135,7 @@ I leveraged my TinyML solution and used it to add more sensing to my LoRaWAN act ![Accident detected](.gitbook/assets/detecting-worker-accidents-with-AI/img18_Arduino%20accident%20detected.png) -Below is a sneak peak of an indoor test… Now my “press a button” LoRaWAN actuations can run without causing harm such as turning on a faulty device, pouring water via solenoid/pump in unsafe conditions, and other accidental events! +Below is a sneak peak of an indoor test… Now my "press a button" LoRaWAN actuations can run without causing harm such as turning on a faulty device, pouring water via solenoid/pump in unsafe conditions, and other accidental events! ![Syntiant TinyML LoRaWAN testing](.gitbook/assets/detecting-worker-accidents-with-AI/gif_syntiant%20stop.gif) diff --git a/edenoff-anticipate-power-outages.md b/edenoff-anticipate-power-outages.md index 00b509e1..85342e63 100644 --- a/edenoff-anticipate-power-outages.md +++ b/edenoff-anticipate-power-outages.md @@ -88,12 +88,12 @@ Veff = (((VeffD - 420.76) / -90.24) * -210.2) + 210.2; ## Software - Install HTS221 library. Even when this is an on board module, the HTS221 library is required. Go to Sketch > Include Library > Manage Libraries > Search HTS221 - - Download this ZIP file > Add vía Sketch > Add Zip. + - Download this ZIP file > Add via Sketch > Add Zip. - Download the .ino file > load it into Arduino BLE 33 > connect the Arduino using micro USB cable, and upload Regarding code settings: -**Threesold** is used to compare against **result.classification[ix].value** for failure dataset. 
See below: +**Threshold** is used to compare against **result.classification[ix].value** for failure dataset. See below: ``` float threesold=0.85; diff --git a/esd-protection-using-computer-vision.md b/esd-protection-using-computer-vision.md index c4cb5525..e7e4856b 100644 --- a/esd-protection-using-computer-vision.md +++ b/esd-protection-using-computer-vision.md @@ -60,7 +60,7 @@ Once I had the model saved to Jetson reComputer, I started coding. This is where As I mentioned before, the Edge Impulse Python SDK is really straightforward to use in my opinion. Along with the [Advantech sample code](https://github.com/edgeimpulse/workshop-advantech-jetson-nano/tree/main/code_samples/inference) that I mentioned earlier, I really had a great framework in place. If Edge Impulse and Advantech offer the course again this year, I highly recommend taking it. -I coded options to read in live video from the attached CSI camera, or streaming via RSTP. Both options worked well for me. I had a third option for reading in video files for testing archived video. I had some minor issues getting camera drivers working (nanocamera ended up working for me) but once I was through those, it was really fun and straightforward. You can see a couple videos below. +I coded options to read in live video from the attached CSI camera, or streaming via RTSP. Both options worked well for me. I had a third option for reading in video files for testing archived video. I had some minor issues getting camera drivers working (nanocamera ended up working for me) but once I was through those, it was really fun and straightforward. You can see a couple videos below. {% embed url="https://www.youtube.com/watch?v=ivLA7CuNENU" %} diff --git a/fluid-leak-detection-with-flowmeter-and-ai.md b/fluid-leak-detection-with-flowmeter-and-ai.md index 77b3a582..0def43c3 100644 --- a/fluid-leak-detection-with-flowmeter-and-ai.md +++ b/fluid-leak-detection-with-flowmeter-and-ai.md @@ -134,7 +134,7 @@ From the **Deployment** tab, build an **Arduino Library**. You can enable optimi ![](.gitbook/assets/fluid-leak-detection-with-flowmeter-and-ai/deployment.png) -The build will output a `.zip` file containing the model and some examples. Add the library to the **Adruino IDE** using Sketch > Include Library > Add .ZIP library +The build will output a `.zip` file containing the model and some examples. Add the library to the **Arduino IDE** using Sketch > Include Library > Add .ZIP library ![](.gitbook/assets/fluid-leak-detection-with-flowmeter-and-ai/arduino-ide.png) diff --git a/hospital-bed-occupancy-detection.md b/hospital-bed-occupancy-detection.md index 6dda0efe..af58b451 100644 --- a/hospital-bed-occupancy-detection.md +++ b/hospital-bed-occupancy-detection.md @@ -2,7 +2,7 @@ description: Use machine learning and an Arduino Nano BLE Sense to monitor bed occupancy in hospitals or care facilities. --- -# Hospital Bed Occupancy Detetction with TinyML +# Hospital Bed Occupancy Detection with TinyML Created By: [Adam Milton-Barker](https://www.adammiltonbarker.com/) @@ -173,7 +173,7 @@ Before we deploy the software to the Nano 33 BLE Sense, lets test using the Edge ![Live testing: Occupied](.gitbook/assets/hospital-bed-occupancy-detection/16-model-testing.jpg "Live testing: Occupied") -Use the **Live classification** feature to record some samples for clasification from the Nano BLE Sense. Your model should correctly identify the class for each sample. 
+Use the **Live classification** feature to record some samples for classification from the Nano BLE Sense. Your model should correctly identify the class for each sample. ## Deployment diff --git a/nvidia-omniverse-replicator.md b/nvidia-omniverse-replicator.md index 8b414c65..b48db80e 100644 --- a/nvidia-omniverse-replicator.md +++ b/nvidia-omniverse-replicator.md @@ -30,7 +30,7 @@ Consequently, models trained in a single domain are brittle and often fail when > The purpose of domain randomization is to provide enough simulated variability at training time such that at test time the model is able to generalize to real-world data.” - Tobin et al, Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World, 2017 -![Domain Randomization for Transfering Deep Neural Networks - source: Tobin et al, 2017)](.gitbook/assets/nvidia-omniverse-replicator/research-domain-rand.jpg) +![Domain Randomization for Transferring Deep Neural Networks - source: Tobin et al, 2017)](.gitbook/assets/nvidia-omniverse-replicator/research-domain-rand.jpg) Nvidia Replicator enables us to perform Domain Randomization. The Replicator is one module within the Omniverse family, and it offers tools and workflow to generate data for various computer vision and non-visual tasks. The Replicator is a highly interoperable tool that integrates with over 40+ modelling/rendering applications across different verticals. The seamless integration is possible thanks to Pixar's Universal Scene Description (USD), which serves as a protocol for various applications such as Blender, 3DMax, Maya, Revit, C4D etc., to work with the Nvidia Replicator. diff --git a/predictive-maintenance-with-sound.md b/predictive-maintenance-with-sound.md index e1d4ea9d..e859b663 100644 --- a/predictive-maintenance-with-sound.md +++ b/predictive-maintenance-with-sound.md @@ -1,5 +1,5 @@ --- -description: A proof-of-concept that uses an Arduino to listen for anomolies in the sound of a running motor. +description: A proof-of-concept that uses an Arduino to listen for anomalies in the sound of a running motor. --- # Predictive Maintenance Using Audio Classification @@ -14,7 +14,7 @@ Public Project Link: Every manufacturing environment is equipped with machines. For a better-performing manufacturing unit, the health of machines plays a major role and hence maintenance of the machines is important. We have three strategies of maintenance namely - Preventive maintenance, Corrective maintenance, and Predictive maintenance. -If you want to find the best balance between preventing failures and avoiding over-maintenance, Predictive Mainenance (PdM) is the way to go. Equip your factory with relatively affordable sensors to track temperature, vibrations, and motion data, use predictive techniques to schedule maintenance when a failure is about to occur, and you'll see a nice reduction in operating costs. +If you want to find the best balance between preventing failures and avoiding over-maintenance, Predictive Maintenance (PdM) is the way to go. Equip your factory with relatively affordable sensors to track temperature, vibrations, and motion data, use predictive techniques to schedule maintenance when a failure is about to occur, and you'll see a nice reduction in operating costs. In the newest era of technology, teaching computers to make sense of the acoustic world is now a hot research topic. So in this project, we use sound to do some predictive maintenance using an Arduino Nano 33 BLE Sense. 
diff --git a/smart-baby-swing.md b/smart-baby-swing.md index 4902cd66..a19a5cc7 100644 --- a/smart-baby-swing.md +++ b/smart-baby-swing.md @@ -102,7 +102,7 @@ Then next step is deployment to the hardware. ## Deployment -Once the testing is complete, go to the "Deployment" option and select *Build firmware* -> *Arduino Portenta H7* to create a downloadble firmware to flash to the board. I have chosen Quantised (Int8). In Edge Impulse, there is also an option to use the EON compiler for reducing resources and improving accuracy, as well as lower latency. +Once the testing is complete, go to the "Deployment" option and select *Build firmware* -> *Arduino Portenta H7* to create a downloadable firmware to flash to the board. I have chosen Quantized (Int8). In Edge Impulse, there is also an option to use the EON compiler for reducing resources and improving accuracy, as well as lower latency. ![](.gitbook/assets/smart-baby-swing/eon-compiler.jpg) @@ -141,7 +141,7 @@ The application code will activate the baby swing rocker for 20 seconds, wheneve ## Hardware Integration -The Arduino Portenta is connected to the 5v DC Relay module. The Common pin in the relay is connected to the Gnd of the battery and NO pin in the relay is connected to the Gnd of the motor in the baby swing rocker whereas the Vcc of the motor is connected directly to the Battery +ve terminal. +The Arduino Portenta is connected to the 5v DC Relay module. The Common pin in the relay is connected to the Gnd of the battery and NO pin in the relay is connected to the Gnd of the motor in the baby swing rocker whereas the Vcc of the motor is connected directly to the Battery positve terminal. ![](.gitbook/assets/smart-baby-swing/hardware.jpg) diff --git a/solar-panel-defect-detection.md b/solar-panel-defect-detection.md index bfd6bb26..6271ed84 100644 --- a/solar-panel-defect-detection.md +++ b/solar-panel-defect-detection.md @@ -61,7 +61,7 @@ The major steps that need to be followed for the model development are: For data acquisition, I have collected the real images of solar panels with cracks using the Arduino Portenta H7 and Vision Shield. -To connect the Portenta for the first time, follow the below setps: +To connect the Portenta for the first time, follow the below steps: 1. Download the zip file [https://cdn.edgeimpulse.com/firmware/arduino-portenta-h7.zip](https://cdn.edgeimpulse.com/firmware/arduino-portenta-h7.zip) 1. Press the Reset button twice to put the device into "boot loader" mode @@ -75,7 +75,7 @@ Now the Portenta is connected to the Edge Impulse account. I have placed the sol Go to the *Data Acquisition* section in Edge Impulse and [capture images](https://docs.edgeimpulse.com/docs/edge-impulse-studio/data-acquisition). -Then go to *Labeling queue* in the *Data acquisition* section to draw bounding boxes aroung the cracks in the collected images. +Then go to *Labeling queue* in the *Data acquisition* section to draw bounding boxes around the cracks in the collected images. ![](.gitbook/assets/solar-panel-defect-detection/labeling.jpg) diff --git a/surface-crack-detection.md b/surface-crack-detection.md index ee1559af..9708836b 100644 --- a/surface-crack-detection.md +++ b/surface-crack-detection.md @@ -92,7 +92,7 @@ $ edge-impulse-uploader --category split --label unknown unknown/*.jpg We can see the uploaded datasets on the Edge Impulse Studio's Data Acquisition page. 
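As a scripted alternative to the `edge-impulse-uploader` command shown above, the same images could be pushed through the Edge Impulse ingestion API. The sketch below is illustrative only: the API key is a placeholder, and the endpoint and form-field names follow the public ingestion API documentation as I understand it, so verify them against the current reference before relying on this.

```python
# Illustrative sketch: upload labeled images without the CLI, via the
# Edge Impulse ingestion API. EI_API_KEY and the label are placeholders.
import glob
import os
import requests

EI_API_KEY = "ei_xxxxxxxxxxxxxxxx"     # placeholder project API key
URL = "https://ingestion.edgeimpulse.com/api/training/files"

for path in glob.glob("unknown/*.jpg"):
    with open(path, "rb") as f:
        response = requests.post(
            URL,
            headers={"x-api-key": EI_API_KEY, "x-label": "unknown"},
            files={"data": (os.path.basename(path), f, "image/jpeg")},
        )
    print(os.path.basename(path), response.status_code)
```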
-![Data Aquisition](.gitbook/assets/surface-crack-detection/data_aquisition.png) +![Data Acquisition](.gitbook/assets/surface-crack-detection/data_aquisition.png) ## Training Go to the **Impulse Design** > **Create Impulse** page, click **Add a processing block**, and then choose **Image**, which preprocesses and normalizes image data, and optionally reduces the color depth. Also, on the same page, click **Add a learning block**, and choose **Transfer Learning (Images)**, which fine-tunes a pre-trained image classification model on the data. We are using a 160x160 image size. Now click on the **Save Impulse** button. diff --git a/tinyml-digital-counter-openmv.md b/tinyml-digital-counter-openmv.md index f3d6ec53..12d6c448 100644 --- a/tinyml-digital-counter-openmv.md +++ b/tinyml-digital-counter-openmv.md @@ -19,13 +19,13 @@ GitHub Repository: ## Story -Ever had this brilliant idea to sprinkle a dash of IoT on a system only to be stopped in your tracks because the system is from ages bygone? Well, you are not alone in this quagmire, a number of brownfield projects have an all too common constraint. The systems are either from times before the age of the internet or the OEM does not appreciate unsanctioned tinkering in their ‘business’. Throughout the year, I have been pondering ways to track my power consumption without taking a stroll out of my apartment, drag myself down a flight of stairs and peep, in shock, at the nearly zero credit points left on my pre-paid electric meter. I really wanted a way to get that data, with less stress and at my beck and call. +Ever had this brilliant idea to sprinkle a dash of IoT on a system only to be stopped in your tracks because the system is from ages bygone? Well, you are not alone in this quagmire, a number of brownfield projects have an all too common constraint. The systems are either from times before the age of the internet or the OEM does not appreciate unsanctioned tinkering in their 'business'. Throughout the year, I have been pondering ways to track my power consumption without taking a stroll out of my apartment, drag myself down a flight of stairs and peep, in shock, at the nearly zero credit points left on my pre-paid electric meter. I really wanted a way to get that data, with less stress and at my beck and call. ![](.gitbook/assets/tinyml-digital-counter-openmv/1_header.jpg) -Luckily, a form of display is everybody’s favorite (Boomers and Gen X’ers alike; the architect of legacy tech) giving us a unique opportunity to grab that all elusive data in a non-intrusive way: *and if you go the TinyML fashion, a low-cost solution.* Anyone in the know can affirm that the cost of this approach pales in comparison to the alternative–overhaul of the old system in part or whole. +Luckily, a form of display is everybody's favorite (Boomers and Gen X'ers alike; the architect of legacy tech) giving us a unique opportunity to grab that all elusive data in a non-intrusive way: *and if you go the TinyML fashion, a low-cost solution.* Anyone in the know can affirm that the cost of this approach pales in comparison to the alternative–overhaul of the old system in part or whole. -Most systems provide the output of process parameters in form of a display; analog or digital. [Peter Warden’s blog](https://petewarden.com/2021/02/28/how-screen-scraping-and-tinyml-can-turn-any-dial-into-an-api/) on “How screen scraping and TinyML can turn any dial into an API” provides a brilliant treatise on handling this problem. 
This work attempts to do for digital displays what [Brandon Satrom](https://www.hackster.io/brandonsatrom/monitor-the-analog-world-with-tinyml-fd59c4) implemented for analog displays, albeit with a different setup, hardware, and software choices. +Most systems provide the output of process parameters in form of a display; analog or digital. [Peter Warden's blog](https://petewarden.com/2021/02/28/how-screen-scraping-and-tinyml-can-turn-any-dial-into-an-api/) on “How screen scraping and TinyML can turn any dial into an API” provides a brilliant treatise on handling this problem. This work attempts to do for digital displays what [Brandon Satrom](https://www.hackster.io/brandonsatrom/monitor-the-analog-world-with-tinyml-fd59c4) implemented for analog displays, albeit with a different setup, hardware, and software choices. ![Screen Scraping: Analog vs Digital](.gitbook/assets/tinyml-digital-counter-openmv/2_analog_v_digital.jpg) @@ -65,7 +65,7 @@ Recall that these steps were just necessary to understand the workings of the pr ![Screen Segmentation for Digit Cropping](.gitbook/assets/tinyml-digital-counter-openmv/5_screens.jpg) -Having sorted out the process of image capture and cropping for digit recognition, I began the data collection stage of the project. The duration of the collection was based on the constraints namely the battery power of the cells and the board’s internal memory (32Mb). For some reason, my particular board was not accepting an SDCard, and all the scrolling through MicroPython and OpenMV forums did not help! I intentionally placed a limit on data collection to control the quality of images gotten–*which were taken during daytime*–and the mAh value of the power source in use. +Having sorted out the process of image capture and cropping for digit recognition, I began the data collection stage of the project. The duration of the collection was based on the constraints namely the battery power of the cells and the board's internal memory (32Mb). For some reason, my particular board was not accepting an SDCard, and all the scrolling through MicroPython and OpenMV forums did not help! I intentionally placed a limit on data collection to control the quality of images gotten–*which were taken during daytime*–and the mAh value of the power source in use. The whole data collection process generated 3260 digit data for training. The choice was made to manually label the data set for improved accuracy. I did this by selecting and moving individual images to folders labelled 0 to 9. A verification step was included in the later stage to handle errors in labelling–*FYI, I had 3 labelling errors in all.* @@ -79,7 +79,7 @@ At this juncture, it was time to have fun with Edge Impulse Studio. Considering ![Subsequent Attempt with 3 Dense Layers](.gitbook/assets/tinyml-digital-counter-openmv/9_dense.jpg) -The model was generated by Edge Impulse Studio in a `tflite` file format and it was transferred to the OpenMV board on the initial setup rig. Since the metering device was placed at a detached location from the apartment, a WiFi range extender was set up to enable communication between the OpenMV’s WiFi Shield and the platform of choice. Essentially, the OpenMV device which houses the model does the computing and sends the live data via MQTT to a dashboard *(in this case Adafruit IO)* allowing for real-time updates of the credit point pending on the meter. Mission accomplished! 
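The paragraph above compresses quite a few moving parts, so here is a rough MicroPython sketch of the on-device loop it describes. It is illustrative only: the module names (`sensor`, `tf`, `network.WINC`, `umqtt.simple`) follow common OpenMV conventions and may differ between firmware versions, and the Wi-Fi credentials, digit ROIs, and Adafruit IO feed are all placeholders.

```python
# Illustrative sketch of the OpenMV inference + MQTT loop described above.
# Firmware module names and all credentials/ROIs are assumptions -- adjust to your setup.
import sensor
import tf
import network
from umqtt.simple import MQTTClient

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

wlan = network.WINC()                                          # OpenMV Wi-Fi shield
wlan.connect("SSID", key="PASSWORD", security=wlan.WPA_PSK)    # placeholders

net = tf.load("trained.tflite")                      # model exported from Edge Impulse
DIGIT_ROIS = [(10, 40, 24, 36), (38, 40, 24, 36)]    # placeholder crop regions, one per digit

client = MQTTClient("openmv-meter", "io.adafruit.com",
                    user="AIO_USERNAME", password="AIO_KEY")   # placeholders
client.connect()

img = sensor.snapshot()
digits = []
for roi in DIGIT_ROIS:
    scores = net.classify(img, roi=roi)[0].output()  # one score per class
    digits.append(str(scores.index(max(scores))))    # assumes labels are sorted 0..9

reading = "".join(digits)
client.publish(b"AIO_USERNAME/feeds/meter-credit", reading.encode())
```

In the real project the crop regions came out of the earlier screen-segmentation step, and the reading would be published on a timer rather than once.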
+The model was generated by Edge Impulse Studio in a `tflite` file format and it was transferred to the OpenMV board on the initial setup rig. Since the metering device was placed at a detached location from the apartment, a WiFi range extender was set up to enable communication between the OpenMV's WiFi Shield and the platform of choice. Essentially, the OpenMV device which houses the model does the computing and sends the live data via MQTT to a dashboard *(in this case Adafruit IO)* allowing for real-time updates of the credit point pending on the meter. Mission accomplished! ![Screen Scraping in Action](.gitbook/assets/tinyml-digital-counter-openmv/10_gif.gif) diff --git a/worker-safety-posture-detection.md b/worker-safety-posture-detection.md index 16e78d8a..e05886dd 100644 --- a/worker-safety-posture-detection.md +++ b/worker-safety-posture-detection.md @@ -12,7 +12,7 @@ Public Project Link: ## Problem Statement -Working in manufacturing can put a lot of stress on a worker's body. Depending on the worker’s role in the production process, they might experience issues related to cramped working conditions, heavy lifting, or repetitive stress. +Working in manufacturing can put a lot of stress on a worker's body. Depending on the worker's role in the production process, they might experience issues related to cramped working conditions, heavy lifting, or repetitive stress. Poor posture is another issue that can cause problems for the health of those who work in manufacturing. Along with that, research suggests that making efforts to improve posture among manufacturing employees can lead to significant increases in production. Workers can improve their posture by physical therapy, or simply by being more mindful during their work day. @@ -32,7 +32,7 @@ Lifting can be another issue affecting the posture of those who work in manufact ## TinyML Solution -I have created a wearable device using a [SiLabs Thunderboard Sense 2](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/silabs-thunderboard-sense-2) which can be fitted to a worker’s waist. The worker can do their normal activities, and the TinyML model running on the hardware will predict the posture and communicate to the worker through BLE communication. The worker can get notified in the Light Blue App on their phone or smartwatch. +I have created a wearable device using a [SiLabs Thunderboard Sense 2](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/silabs-thunderboard-sense-2) which can be fitted to a worker's waist. The worker can do their normal activities, and the TinyML model running on the hardware will predict the posture and communicate to the worker through BLE communication. The worker can get notified in the Light Blue App on their phone or smartwatch. ![](.gitbook/assets/worker-safety-posture-detection/bluetooth-app.jpg) @@ -44,7 +44,7 @@ I have trained a model with several different postures, so that it can classify 1. Incorrect Sitting Posture 1. Walking -Now let’s see how I trained the model and tested on real hardware in detail. +Now let's see how I trained the model and tested on real hardware in detail. ## Data Acquisition @@ -88,11 +88,11 @@ For lifting objects off the ground, the correct posture is to squat down to the ![](.gitbook/assets/worker-safety-posture-detection/lifting-correct.jpg) -I have collected the "squat" type data for around two minutess for model training, and 20 seconds of data for model testing. 
+I have collected the "squat" type data for around two minutes for model training, and 20 seconds of data for model testing. ![](.gitbook/assets/worker-safety-posture-detection/data-lifting-correct.jpg) -### Inorrect Lifting Posture +### Incorrect Lifting Posture For incorrect lifting ("bent over") data, I have collected 2 minutes 30 seconds of data for model training, and another 30 seconds of data for model testing.