diff --git a/SUMMARY.md b/SUMMARY.md
index ab9b3842..0ab87ced 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -53,7 +53,7 @@
* [Acoustic Pipe Leakage Detection - Arduino Portenta H7](acoustic-pipe-leak-detection-arduino-portenta-h7.md)
* [Worker Safety Monitoring with Nvidia Jetson Nano](worker-safety-monitoring.md)
* [Bhopal 84, Detect Harmful Gases](detect-harmful-gases.md)
-* [Bean Leaf Classification with Sony Spresense](bean-leaf-classification.md)
+* [Bean Leaf Disease Classification - Sony Spresense](prototype-and-concept-projects/bean-leaf-disease-classification-sony-spresense.md)
* [Collect Data for Keyword Spotting with Raspberry Pi Pico and Edge Impulse](collect-data-raspberrypi-pico.md)
* [Voice-Activated LED Strip for $10: Raspberry Pi Pico and Edge Impulse](voice-activated-led-controller.md)
* [Oil Tank Measurement and Delivery Improvement Using Computer Vision](oil-tank-gauge-monitoring.md)
diff --git a/acoustic-pipe-leak-detection-arduino-portenta-h7.md b/acoustic-pipe-leak-detection-arduino-portenta-h7.md
index 5cc2194e..45be5d30 100644
--- a/acoustic-pipe-leak-detection-arduino-portenta-h7.md
+++ b/acoustic-pipe-leak-detection-arduino-portenta-h7.md
@@ -1,14 +1,14 @@
---
-description: Using an Arduino Portenta H7 to listen for and classify the flow of water in a pipe.
+description: >-
+  Using an Arduino Portenta H7 to listen for and classify the flow of water in a
+  pipe.
---

# Acoustic Pipe Leakage Detection - Arduino Portenta H7

-Created By:
-Manivannan Sivan
+Created By: Manivannan Sivan

-Public Project Link:
-[https://studio.edgeimpulse.com/public/111978/latest](https://studio.edgeimpulse.com/public/111978/latest)
+Public Project Link: [https://studio.edgeimpulse.com/public/111978/latest](https://studio.edgeimpulse.com/public/111978/latest)

## Project Demo

@@ -18,12 +18,11 @@ Public Project Link:

Water is the world's most precious resource, yet it is also the one that is almost universally mismanaged. As a result, water shortages are becoming ever more common. In the case of water supply and distribution networks, these manifest themselves in the intermittent operation of the system. Not only is this detrimental to the structural condition of the pipes, but it can also adversely affect the quality of the water delivered to the customer's taps. Further, leakage often exceeds 50% of production. Not only does this have a significant economic impact, but an environmental one too. Recovering leakage has a cost, however: a hydraulic study of the network must be undertaken, a permanent monitoring system created, and the leaks eliminated. So how low should leakage go, and how can a lower leakage level be maintained over time? This is the objective of the very innovative EU-funded PALM project recently completed in central Italy.

-
### Increase in Carbon Level Due to Water Leakage

-There is an increased carbon footprint of having pumps constantly running to make up for the water lost due to leakage. It is the increased pump use, and pump maintenance/replacement costs that increase CO2 in the air from the fossil fuels being burned to support it. According to a study done by Von Sacken in 2001, water utilities are the largest user of electricity accounting for 3% of the total electricity consumption in the US. In addition, it is estimated that 2-3 billion kW/h of electricity is expended pumping water due to leakage.
+There is an increased carbon footprint of having pumps constantly running to make up for the water lost due to leakage. It is the increased pump use and pump maintenance/replacement costs that increase the CO2 in the air, from the fossil fuels burned to support them. According to a study done by Von Sacken in 2001, water utilities are the largest users of electricity, accounting for 3% of the total electricity consumption in the US. In addition, it is estimated that 2-3 billion kWh of electricity is expended pumping water due to leakage.

-*Costs, health, the environment, and infrastructure are just a few things that can come into play when water system leakage goes uncorrected.*  
+_Costs, health, the environment, and infrastructure are just a few things that can come into play when water system leakage goes uncorrected._

More than 2 billion people globally live in countries with high water stress, per the 2018 statistics provided by the United Nations (UN). In order to tackle this problem, it is necessary to conserve and utilize water safely. Installation of proper water pipeline leak detection systems assists in pinpointing the leakages in installed water pipes, which ultimately avoids wasting water through cracks and holes. Therefore, the increasing scarcity of water is propelling the demand for water leak solutions, which in turn drives the market.

@@ -41,7 +40,7 @@ On the contrary, in recent years, pipeline leak detection systems have undergone

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/intro-4.jpg)

-In recent years, the increase in acoustic-based pipe leakage detection has started increasing due to investment in R&D.
+In recent years, the adoption of acoustic-based pipe leakage detection has increased due to investment in R\&D.

## A TinyML-based Solution for Pipe Leakage Detection:

@@ -53,7 +52,7 @@ My prototype is based on acoustic data collected on an Arduino Portenta H7 and a

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/prototype-2.jpg)

-In the data acquisition stage, the pipe is simulated with "Idle" mode, where the tap is fully closed so no water flows, and then slightly opened to simulate "leakage mode". Finally, is it fully opened to simulate "water flow" mode. 
+In the data acquisition stage, the pipe is simulated with "Idle" mode, where the tap is fully closed so no water flows, and then slightly opened to simulate "leakage mode". Finally, it is fully opened to simulate "water flow" mode.

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/data-acquisition.jpg)

@@ -71,7 +70,7 @@ For Neural Network configuration, I have used couple of 1D-Conv layers followed

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/nn-settings.jpg)

-The number of Training cycles is set to 100 and Learning rate is set to 0.005. The accuracy obtained was 99.1 % with loss of 0.02 only. As the model is performing well at classifying the data, we can move on. 
+The number of training cycles is set to 100 and the learning rate is set to 0.005. The accuracy obtained was 99.1%, with a loss of only 0.02. As the model is performing well at classifying the data, we can move on.

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/model-accuracy.jpg)

@@ -79,7 +78,7 @@

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/model-testing.jpg)

-In Model testing, the trained model is tested with data and it is able to predict all 3 conditions we trained on with 100% accuracy. 
+In Model testing, the trained model is tested with new data, and it is able to predict all 3 conditions we trained on with 100% accuracy.
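+
+Before moving on to deployment, here is a minimal Keras sketch of the kind of network described above (a couple of 1D-Conv layers followed by a dense output layer, trained for 100 cycles at a learning rate of 0.005), for readers who want to experiment outside of Edge Impulse Studio. The input shape and filter counts are illustrative assumptions, not the exact values generated by the Studio.
+
+```python
+import tensorflow as tf
+from tensorflow.keras import layers, models
+
+# Assumed DSP output shape: 99 audio frames x 40 MFE coefficients.
+NUM_FRAMES, NUM_COEFFS, NUM_CLASSES = 99, 40, 3  # idle / leakage / water flow
+
+model = models.Sequential([
+    layers.Input(shape=(NUM_FRAMES, NUM_COEFFS)),
+    layers.Conv1D(8, kernel_size=3, padding="same", activation="relu"),
+    layers.MaxPooling1D(2),
+    layers.Conv1D(16, kernel_size=3, padding="same", activation="relu"),
+    layers.MaxPooling1D(2),
+    layers.Flatten(),
+    layers.Dropout(0.25),
+    layers.Dense(NUM_CLASSES, activation="softmax"),  # the dense output layer
+])
+
+model.compile(
+    optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
+    loss="categorical_crossentropy",
+    metrics=["accuracy"],
+)
+# model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val))
+```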

## Deployment

@@ -102,5 +101,3 @@ The prototype demonstrated an acoustic method to predict leakage in a pipe. The

By integrating well-designed enclosures with higher-quality microphones, the Arduino Portenta H7 will be ideal for industrial pipe leakage detection use-cases.

![](.gitbook/assets/acoustic-pipe-leak-detection-arduino-portenta-h7/summary.jpg)
-
-
diff --git a/bean-leaf-classification.md b/prototype-and-concept-projects/bean-leaf-disease-classification-sony-spresense.md
similarity index 63%
rename from bean-leaf-classification.md
rename to prototype-and-concept-projects/bean-leaf-disease-classification-sony-spresense.md
index bfe804e0..efe21a89 100644
--- a/bean-leaf-classification.md
+++ b/prototype-and-concept-projects/bean-leaf-disease-classification-sony-spresense.md
@@ -2,13 +2,11 @@
description: A TinyML Approach for Bean Leaf Disease Classification using a Sony Spresense.
---

-# Bean Leaf Disease Classification Using Sony Spresense
+# Bean Leaf Disease Classification - Sony Spresense

-Created By:
-Wamiq Raza
+Created By: Wamiq Raza

-Public Project Link:
-[https://studio.edgeimpulse.com/public/119787/latest](https://studio.edgeimpulse.com/public/119787/latest)
+Public Project Link: [https://studio.edgeimpulse.com/public/119787/latest](https://studio.edgeimpulse.com/public/119787/latest)

## Project Demo

@@ -16,7 +14,7 @@ Public Project Link:

## Introduction

-![](.gitbook/assets/bean-leaf-classification/intro.jpg)
+![](../.gitbook/assets/bean-leaf-classification/intro.jpg)

Modern technology has enabled human civilization to generate enough food to feed more than 7 billion people. Nevertheless, food security continues to be challenged by a variety of factors, including climate change and disease. Plant leaf diseases have become a widespread issue, necessitating precise study and the quick application of deep learning to plant disease classification. Beans are also one of the most essential plants, utilized in cuisine across the world whether dried or fresh. There are several illnesses connected with bean leaves that impede productivity, including angular leaf spot disease and bean rust disease.

@@ -25,14 +23,16 @@ To treat the problem at an early stage, a precise classification of bean leaf di

### Things used in this project

Hardware components
- - [Spresense main boards](https://developer.sony.com/develop/spresense/) x 1
- - Extension board x 1
- - Spresense camera x 1
- - USB micro cable x 1
- - UAVs x 1
+
+* [Spresense main boards](https://developer.sony.com/develop/spresense/) x 1
+* Extension board x 1
+* Spresense camera x 1
+* USB micro cable x 1
+* UAVs x 1

Software
- - Edge Impulse Studio
+
+* Edge Impulse Studio

## Background

@@ -40,21 +40,21 @@ The computerized categorization of illnesses using photographs has piqued the in

## Dataset Description

-The dataset is of leaf images taken in the field in different districts in Uganda by the Makerere AI Lab in collaboration with the National Crops Resources Research Institute (NaCRRI), the national body in charge of research in agriculture in Uganda [1]. Figure 1 shows an example of each of three classes and their corresponding label. The data is of leaf images representing 3 classes: the healthy class of images, and two disease classes including Angular Leaf Spot and Bean Rust diseases. In total 600 images were taken from the dataset; 200 images for each class. 
+The dataset is of leaf images taken in the field in different districts of Uganda by the Makerere AI Lab in collaboration with the National Crops Resources Research Institute (NaCRRI), the national body in charge of agricultural research in Uganda \[1]. Figure 1 shows an example of each of the three classes and their corresponding labels. The data represents 3 classes: a healthy class, and two disease classes, Angular Leaf Spot and Bean Rust. In total, 600 images were taken from the dataset: 200 images for each class.
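+
+For illustration, the sketch below shows one way to assemble this balanced subset, along with the 80/20 train/test split used later in this project. The local folder layout and class directory names are assumptions about how the ibean data \[1] has been unpacked.
+
+```python
+import random
+from pathlib import Path
+
+DATASET_DIR = Path("ibean")  # hypothetical local copy of the dataset
+CLASSES = ["healthy", "angular_leaf_spot", "bean_rust"]
+PER_CLASS, TRAIN_FRACTION = 200, 0.8
+
+random.seed(42)  # reproducible split
+train_set, test_set = [], []
+for label in CLASSES:
+    images = sorted((DATASET_DIR / label).glob("*.jpg"))
+    subset = random.sample(images, PER_CLASS)   # 200 images per class
+    cut = int(PER_CLASS * TRAIN_FRACTION)       # 160 train / 40 test per class
+    train_set += [(path, label) for path in subset[:cut]]
+    test_set  += [(path, label) for path in subset[cut:]]
+
+print(len(train_set), "training images /", len(test_set), "test images")
+```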

-![Figure 1: Dataset images with respect to its class label](.gitbook/assets/bean-leaf-classification/dataset.jpg)
+![Figure 1: Dataset images with respect to their class labels](../.gitbook/assets/bean-leaf-classification/dataset.jpg)

## Hardware and Connectivity Description

-The Sony Spresense main board, Extension board, Spresense camera, a USB-micro cable, and UAVs were used in this project. The Sony Spresense is a small, but powerful development board with a 6 core Cortex-M4F microcontroller and integrated GPS, and a wide variety of add-on modules including an extension board with headphone jack, SD card slot and microphone pins, a camera board, a sensor board with accelerometer, pressure, and geomagnetism sensors, and Wi-Fi board - and it's fully supported by Edge Impulse [2].
-
-Drones, in the form of both Remotely Piloted Aerial Systems (RPAS) and unmanned aerial vehicles (UAV), are increasingly being used to revolutionize many existing applications. The Internet of Things (IoT) is becoming more ubiquitous every day, thanks to the widespread adoption and integration of mobile robots into IoT ecosystems [3]. As a basis for the development of our autonomous system, the [DJI Tello](https://store.dji.com/product/tello?vid=38421) was chosen due to its easy programmability and wide availability. The Tello drone [4] has a maximum flight time of up to 13 minutes, a weight of about 80 g (with propellers and battery), and dimensions of 98 mm x 92.5 mm x 41 mm. It mounts 3-inch propellers and has a built-in WIFI 802.11n 2.4ghz module. As for the TinyML platform, the Sony Spresense microcontroller [5] was chosen, which acted as a decision unit shown in Figure 2. The platform is a small, low power microcontroller that enables the easy and intuitive implementation of image processing applications. It can be programmed using high-level Python scripts (MicroPython).
+The Sony Spresense main board, Extension board, Spresense camera, a USB micro cable, and UAVs were used in this project. The Sony Spresense is a small but powerful development board with a 6-core Cortex-M4F microcontroller and integrated GPS, plus a wide variety of add-on modules, including an extension board with a headphone jack, SD card slot, and microphone pins; a camera board; a sensor board with accelerometer, pressure, and geomagnetism sensors; and a Wi-Fi board - and it's fully supported by Edge Impulse \[2].
+
+Drones, in the form of both Remotely Piloted Aerial Systems (RPAS) and unmanned aerial vehicles (UAV), are increasingly being used to revolutionize many existing applications. The Internet of Things (IoT) is becoming more ubiquitous every day, thanks to the widespread adoption and integration of mobile robots into IoT ecosystems \[3]. As a basis for the development of our autonomous system, the [DJI Tello](https://store.dji.com/product/tello?vid=38421) was chosen due to its easy programmability and wide availability. The Tello drone \[4] has a maximum flight time of up to 13 minutes, a weight of about 80 g (with propellers and battery), and dimensions of 98 mm x 92.5 mm x 41 mm. It mounts 3-inch propellers and has a built-in Wi-Fi 802.11n 2.4 GHz module. As for the TinyML platform, the Sony Spresense microcontroller \[5] was chosen, acting as the decision unit shown in Figure 2. The platform is a small, low-power microcontroller that enables the easy and intuitive implementation of image processing applications. It can be programmed using high-level Python scripts (MicroPython).
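+
+As an illustration of that programmability, the sketch below sends a few commands from DJI's documented plain-text SDK to the drone over Wi-Fi UDP. This wireless interface is shown only for illustration; in this project the UAV was integrated over a USB cable instead.
+
+```python
+import socket
+
+# The Tello listens for SDK commands at this address (per DJI's Tello SDK guide),
+# reachable after joining the drone's own Wi-Fi access point.
+TELLO_ADDR = ("192.168.10.1", 8889)
+
+sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+sock.bind(("", 8889))   # replies come back to the same port
+sock.settimeout(5.0)
+
+def send(cmd: str) -> str:
+    """Send one plain-text SDK command and return the drone's reply."""
+    sock.sendto(cmd.encode("ascii"), TELLO_ADDR)
+    reply, _ = sock.recvfrom(1024)
+    return reply.decode("ascii", errors="ignore")
+
+print(send("command"))  # enter SDK mode; replies "ok" on success
+print(send("takeoff"))
+print(send("land"))
+```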

-![Figure 2: Sony Spresense main board, extension board and camera](.gitbook/assets/bean-leaf-classification/spresense.jpg)
+![Figure 2: Sony Spresense main board, extension board and camera](../.gitbook/assets/bean-leaf-classification/spresense.jpg)

-For connectivity I follow the same approach used in "Energy-Efficient Inference on the Edge Exploiting TinyML Capabilities for UAVs" [3] by the authors and the block diagram shown in Figure 3 represents the workflow. The only difference is that instead of the wireless connectivity, in this project the UAVs were integrated with the USB cable.
+For connectivity, I follow the same approach the authors used in "Energy-Efficient Inference on the Edge Exploiting TinyML Capabilities for UAVs" \[3]; the block diagram shown in Figure 3 represents the workflow. The only difference is that instead of wireless connectivity, in this project the UAV was integrated with a USB cable.

-![Figure 3: Block diagram](.gitbook/assets/bean-leaf-classification/workflow.jpg)
+![Figure 3: Block diagram](../.gitbook/assets/bean-leaf-classification/workflow.jpg)

## Data Acquisition

@@ -66,47 +66,47 @@ Our dataset is ready to train our model. This requires two important features: a

We first click "Create Impulse". Here, set the image width and height to 96x96, and the Resize mode to Squash. The Processing block is set to "Image" and the Learning block is "Transfer Learning (Images)". Click 'Save Impulse' to use this configuration. We have used a 96x96 image size to lower the RAM usage, as shown in Figure 4.

-![Figure 4: Create impulse figure with image input range, transfer learning and output class](.gitbook/assets/bean-leaf-classification/impulse.jpg)
+![Figure 4: Create impulse figure with image input range, transfer learning and output class](../.gitbook/assets/bean-leaf-classification/impulse.jpg)

Next, on the "Image" processing block, set Color depth to RGB. "Save parameters", and this will open the "Generate Features" tab. On the 'Generate features' window, we click the "Generate features" button. Upon completion we see a 3D representation of our dataset. This is what will be passed into the neural network, and the visualization can be seen in Figure 5.

-![Figure 5: Generated feature representation](.gitbook/assets/bean-leaf-classification/feature-explorer.jpg)
+![Figure 5: Generated feature representation](../.gitbook/assets/bean-leaf-classification/feature-explorer.jpg)

## Building and training the model

-To train a model, MobileNetV1 96x96 0.2 algorithm was then used. As MobileNetV1 is a unique machine learning approach that extends object classification to devices with limited processing power, it allows you to count things, locate objects in an image, and track numerous objects in real time while consuming less computing power. Dataset visualization and separability of the classes is presented in Figure 6. Even after rescaling and color conversions, image features have a high dimensionality that prevents suitable visualization. Each image was resized to 96 x96 pixels, in addition to that, data augmentation technique was applied.
+To train the model, the MobileNetV1 96x96 0.2 algorithm was used. MobileNetV1 is a compact architecture that brings image classification to devices with limited processing power; it allows you to count things, locate objects in an image, and track numerous objects in real time while consuming little computing power. Dataset visualization and the separability of the classes are presented in Figure 6. Even after rescaling and color conversions, image features have a high dimensionality that prevents suitable visualization. Each image was resized to 96x96 pixels; in addition, a data augmentation technique was applied.
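+
+As a rough sketch of an equivalent setup outside the Studio, the following Keras code reuses a small-width MobileNetV1 base at 96x96 and trains a new 3-class head. The alpha=0.25 width multiplier is an assumption (the closest standard Keras value to the Studio preset named above), as are the dropout rate and the train_ds/val_ds dataset objects.
+
+```python
+import tensorflow as tf
+from tensorflow.keras import layers, models
+
+# MobileNetV1 feature extractor: 96x96 RGB input, small width multiplier.
+base = tf.keras.applications.MobileNet(
+    input_shape=(96, 96, 3), alpha=0.25,
+    include_top=False, weights="imagenet", pooling="avg",
+)
+base.trainable = False  # keep the pretrained convolutional features fixed
+
+model = models.Sequential([
+    base,
+    layers.Dropout(0.1),
+    layers.Dense(3, activation="softmax"),  # healthy / angular leaf spot / bean rust
+])
+
+model.compile(
+    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # matches the settings below
+    loss="categorical_crossentropy",
+    metrics=["accuracy"],
+)
+# model.fit(train_ds, validation_data=val_ds, epochs=60)  # train_ds/val_ds assumed
+```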

The number of epochs is the number of times the entire dataset is passed through the neural network during training. There is no ideal number for this; it depends on the data. The model was run for 60 epochs, with the learning rate set to 0.001 and the dataset split into training, validation, and testing sets. After introducing post-training quantization from 32-bit floating point to 8-bit integer, the resulting optimized model showed a significant reduction in size (106.3K). The onboard inference time was reduced to 183 ms and the use of RAM was limited to 225.6K, with an accuracy after the post-training validation of 78%. The model's confusion matrix and its performance on a mobile device can be seen in Figure 6.

-![Figure 6: Parameter with overall accuracy of model](.gitbook/assets/bean-leaf-classification/accuracy.jpg)
+![Figure 6: Parameter with overall accuracy of model](../.gitbook/assets/bean-leaf-classification/accuracy.jpg)

## Model Testing

When training our model, we used 80% of the data in our dataset. The remaining 20% is used to test the accuracy of the model in classifying unseen data. We need to verify that our model has not overfit, by testing it on new data. If the model performs poorly on this unseen data, it has likely overfit. Click "Model testing", then "Classify all". Our current model has an accuracy of 76%, as can be seen in Figure 7.

-![Figure 7: Model testing using Edge Impulse framework](.gitbook/assets/bean-leaf-classification/model-testing.jpg)
+![Figure 7: Model testing using Edge Impulse framework](../.gitbook/assets/bean-leaf-classification/model-testing.jpg)

## Model deployment on Sony Spresense

In order to deploy a model on a microcontroller, we must build firmware using the Edge Impulse platform. Figure 8 highlights the steps for the Sony Spresense with red bounding boxes. Impulses can be deployed as a C++ library and included in your own application, or full firmware with the model included can be downloaded. Choosing the firmware version, a Zip file is created and a download is generated.

-After downloading, unzip the file, as shown in Figure 9. Click on the `flash` command that corresponds to your operating system. In my case, this was Windows. 
+After downloading, unzip the file, as shown in Figure 9. Click on the `flash` command that corresponds to your operating system. In my case, this was Windows.

{% hint style="info" %}
Go through this [post](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/sony-spresense) from the official Edge Impulse documentation to learn how to connect the Sony Spresense to your computer.
{% endhint %}
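+
+The "post quantization" deployment shown in Figure 8 below corresponds to the float32-to-int8 conversion described earlier. Outside the Studio, a comparable post-training quantization step with the TensorFlow Lite converter looks roughly like this sketch; the model path and the random representative dataset are placeholders to be replaced with your trained model and real preprocessed images.
+
+```python
+import numpy as np
+import tensorflow as tf
+
+# Load the trained Keras model (hypothetical path, saved from the training sketch).
+model = tf.keras.models.load_model("bean_leaf_model.keras")
+
+def representative_data():
+    # Placeholder: yield batches of preprocessed 96x96 RGB training images.
+    for _ in range(100):
+        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]
+
+converter = tf.lite.TFLiteConverter.from_keras_model(model)
+converter.optimizations = [tf.lite.Optimize.DEFAULT]
+converter.representative_dataset = representative_data
+converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+converter.inference_input_type = tf.int8
+converter.inference_output_type = tf.int8
+
+tflite_model = converter.convert()
+with open("bean_leaf_int8.tflite", "wb") as f:
+    f.write(tflite_model)
+print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
+```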

-![Figure 8: Post quantization model deployment](.gitbook/assets/bean-leaf-classification/deployment.jpg)
+![Figure 8: Post quantization model deployment](../.gitbook/assets/bean-leaf-classification/deployment.jpg)

-![Figure 9: Flash command](.gitbook/assets/bean-leaf-classification/flash-1.jpg)
+![Figure 9: Flash command](../.gitbook/assets/bean-leaf-classification/flash-1.jpg)

A Terminal will open as shown in Figure 10. Wait for it to finish processing and flashing the board, then open a new Terminal and run the command `edge-impulse-run-impulse --continuous`. The prediction score for every class can be seen, as shown in Figure 11 and in the YouTube video.

-![Figure 10: Flash command terminal output](.gitbook/assets/bean-leaf-classification/flash-2.jpg)
+![Figure 10: Flash command terminal output](../.gitbook/assets/bean-leaf-classification/flash-2.jpg)

-![Figure 11: Flash command terminal output with class prediction](.gitbook/assets/bean-leaf-classification/flash-3.jpg)
+![Figure 11: Flash command terminal output with class prediction](../.gitbook/assets/bean-leaf-classification/flash-3.jpg)

## Conclusion

@@ -114,14 +114,12 @@ In this project, we build upon the approach used by the author of "Energy-Effici

## References

-[1] [https://github.com/AI-Lab-Makerere/ibean](https://github.com/AI-Lab-Makerere/ibean)
-
-[2] [https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/sony-spresense](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/sony-spresense)
-
-[3] (PDF) [Energy-Efficient Inference on the Edge Exploiting TinyML Capabilities for UAVs](https://www.researchgate.net/publication/355781811_Energy-Efficient_Inference_on_the_Edge_Exploiting_TinyML_Capabilities_for_UAVs) (researchgate.net)
+\[1] [https://github.com/AI-Lab-Makerere/ibean](https://github.com/AI-Lab-Makerere/ibean)

-[4] RAZE. Tello. 2019. Available online: [https://www.ryzerobotics.com/kr/tello](https://www.ryzerobotics.com/kr/tello) (accessed on 06 June 2022)
+\[2] [https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/sony-spresense](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/sony-spresense)

-[5] [https://developer.sony.com/develop/spresense/buy-now](https://developer.sony.com/develop/spresense/buy-now)
+\[3] (PDF) [Energy-Efficient Inference on the Edge Exploiting TinyML Capabilities for UAVs](https://www.researchgate.net/publication/355781811\_Energy-Efficient\_Inference\_on\_the\_Edge\_Exploiting\_TinyML\_Capabilities\_for\_UAVs) (researchgate.net)
+
+\[4] RAZE. Tello. 2019. Available online: [https://www.ryzerobotics.com/kr/tello](https://www.ryzerobotics.com/kr/tello) (accessed on 06 June 2022)
+
+\[5] [https://developer.sony.com/develop/spresense/buy-now](https://developer.sony.com/develop/spresense/buy-now)