Commit

spelling fixes
dtischler committed Aug 23, 2023
1 parent c9990f4 commit 47e8070
Showing 19 changed files with 176 additions and 70 deletions.
106 changes: 106 additions & 0 deletions .wordlist.txt
@@ -1419,3 +1419,109 @@ ek
 rXkz
 stripboard
 veroboard
+Brocolli
+Chillis
+msec
+Multicore
+erroring
+PIO
+multicore
+AGC
+Neopixel
+Neopixels
+ebFyFq
+sX
+Piezo
+piezo
+FoV
+MLX
+IZDHoQUmEg
+BCM
+Broadcom
+authDomain
+databaseURL
+dataset's
+banglejs
+AQG
+FUsKiQHVg
+GIP
+ZCz
+aGpTxexuEA
+crf
+daHeR
+dms
+fp
+kX
+licdn
+vCISf
+Vikström
+Truphone
+siW
+GraphQL
+NNtam
+SQk
+Adamkiewicz
+Heroux
+NBK
+ncbi
+nlm
+EnvNotification
+photochemical
+AQI
+IQAir
+env
+PPB
+NOx
+Tropospheric
+VOCs
+NOx
+NOy
+PV
+misclassification
+spm
+Arduin
+arduin
+frombuffer
+Atmega
+SX
+WaziAct
+aaagh
+actuations
+Grămescu
+Niţu
+BHA
+Elecrow's
+frombytes
+Pushbutton
+qU
+rFyNGM
+mins
+BwoOB
+vJc
+armsleeve
+Labeler
+SesEx
+zU
+olsCrJkXi
+nanocamera
+CuNENU
+ivLA
+Electro
+SRfa
+fb
+InvenSense
+TDK
+DNS
+SSL
+AKS
+vec
+Vec
+washwater
+Rg
+UUKy
+X'ers
+OpenMV's
+Edenor
+prewarned
+ukkinstituutti

8 changes: 4 additions & 4 deletions adaptable-vision-counters.md
@@ -22,7 +22,7 @@ Weight based counting assumes that each part has the exact same weight and uses

Consider if there are a higher number of defective parts, we can assume that something might be wrong with the production units. This data can also be used to improve the quality of production and thus industry can make more products in less time. So our adaptable counters are evolving as a solution to the world's accurate and flexible counting needs.

-The Adaptable Counter is a device consisting of a Rapsberry Pi 4 and camera module, and the counting process is fully powered by FOMO. So it can count faster and more accurately than any other method. Adaptable counters are integrated with a cool looking website.
+The Adaptable Counter is a device consisting of a Raspberry Pi 4 and camera module, and the counting process is fully powered by FOMO. So it can count faster and more accurately than any other method. Adaptable counters are integrated with a cool looking website.

## Use-Cases

@@ -36,7 +36,7 @@ In this case, we are counting defective and non-defective washers.

### 2. Counting in Motion

-In this case, we are counting bolts and washers and faulty washers passing through the conveyer belt.
+In this case, we are counting bolts and washers and faulty washers passing through the conveyor belt.

![](.gitbook/assets/adaptable-vision-counters/IMG_1678.jpg)

@@ -62,7 +62,7 @@ Edge Impulse is one of the leading development platforms for machine learning on

### Data Acquisition

-Every machine learining project starts with data collection. A goood collection of data is one of the major factors that influences the performance of the model. Make sure you have a wide range of perspectives and zoom levels of the items that are being collected. You may take data from any device or development board, or upload your own datasets, for data acquisition. As we have our own dataset, we are uploading them using the Data Acquisition tab.
+Every machine learning project starts with data collection. A good collection of data is one of the major factors that influences the performance of the model. Make sure you have a wide range of perspectives and zoom levels of the items that are being collected. You may take data from any device or development board, or upload your own datasets, for data acquisition. As we have our own dataset, we are uploading them using the Data Acquisition tab.

![](.gitbook/assets/adaptable-vision-counters/Data_Acquisition.png)

@@ -188,4 +188,4 @@ This Raspberry Pi Camera Module is a custom-designed add-on for Raspberry Pi. It

For powering up the system we used a 5V 2A adapter. In this case we don't have any power hungry peripherals, so 2A current is enough. If you have 3A supply, please go for that.

-For the sake of convienence we also used a acrylic case for setting up all the hardware.
+For the sake of convenience we also used a acrylic case for setting up all the hardware.
16 changes: 8 additions & 8 deletions ai-patient-assistance.md
@@ -96,7 +96,7 @@ Once complete head over to the devices tab of your project and you should see th

We are going to create our own dataset, using the built in microphone on the Arduino Nano 33 BLE Sense. We are going to collect data that will allow us to train a machine learning model that can detect the words/phrases **Doctor**, **Nurse**, and **Help**.

-We will use the **Record new data** feature on Edge Impulse to record 15 sets of 10 utterences of each of our keywords, and then we will split them into individual samples.
+We will use the **Record new data** feature on Edge Impulse to record 15 sets of 10 utterances of each of our keywords, and then we will split them into individual samples.

Ensuring your device is connected to the Edge Impulse platform, head over to the **Data Aqcquisition** tab to continue.

@@ -146,7 +146,7 @@ Now we are going to create our network and train our model.

![Add processing block](.gitbook/assets/ai-patient-assistance/processing-block.jpg)

-Head to the **Create Impulse** tab and change the window size to 2000ms. Next click **Add processing block** and select **Audio (MFCC)**, then click **Add learning block** and select **Clasification (Keras)**.
+Head to the **Create Impulse** tab and change the window size to 2000ms. Next click **Add processing block** and select **Audio (MFCC)**, then click **Add learning block** and select **Classification (Keras)**.

![Created Impulse](.gitbook/assets/ai-patient-assistance/impulse-2.jpg)

@@ -196,25 +196,25 @@ You will see the output of the testing in the output window, and once testing is

![Live testing](.gitbook/assets/ai-patient-assistance/testing-3.jpg)

-Now we need to test how the model works on our device. Use the **Live classification** feature to record some samples for clasification. Your model should correctly identify the class for each sample.
+Now we need to test how the model works on our device. Use the **Live classification** feature to record some samples for classification. Your model should correctly identify the class for each sample.
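
For readers wiring this up on-device, here is a minimal sketch of how the exported Arduino library's output could gate an alert for the **Doctor**, **Nurse**, and **Help** keywords. `ei_impulse_result_t`, `EI_CLASSIFIER_LABEL_COUNT`, and the `result.classification` fields come from the generated library; the header name, the 0.8 threshold, and `send_alert()` are illustrative assumptions rather than part of the original project:

```cpp
#include <Arduino.h>
#include <string.h>
// Header name depends on your project's export; this one is hypothetical.
#include <patient_assistance_inferencing.h>

const float CONFIDENCE_THRESHOLD = 0.8f;  // illustrative; tune with PerfCal

// Stand-in for the project's real notification path.
void send_alert(const char *keyword) {
    Serial.print("ALERT: ");
    Serial.println(keyword);
}

// Call this with the result filled in by run_classifier().
void handle_result(const ei_impulse_result_t &result) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        const char *label = result.classification[ix].label;
        float score = result.classification[ix].value;
        // React only to the keyword classes; exact label strings
        // depend on how the classes were named in the dataset.
        if (score >= CONFIDENCE_THRESHOLD &&
            strcmp(label, "noise") != 0 &&
            strcmp(label, "unknown") != 0) {
            send_alert(label);  // "doctor", "nurse" or "help"
        }
    }
}
```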

-## Performance Callibration
+## Performance Calibration

-![Performance Callibration](.gitbook/assets/ai-patient-assistance/calibration.jpg)
+![Performance Calibration](.gitbook/assets/ai-patient-assistance/calibration.jpg)

-Edge Impulse has a great new feature called **Performance Callibration**, or **PerfCal**. This feature allows you to run a test on your model and see how well it will perform in the real world. The system will create a set of post processing configurations for you to choose from. These configurations help to minimize either false activations or false rejections
+Edge Impulse has a great new feature called **Performance Calibration**, or **PerfCal**. This feature allows you to run a test on your model and see how well it will perform in the real world. The system will create a set of post processing configurations for you to choose from. These configurations help to minimize either false activations or false rejections

![Turn on perfcal](.gitbook/assets/ai-patient-assistance/calibration-2.jpg)

-Once you turn on perfcal, you will see a new tab in the menu called **Performance callibration**. Navigate to the perfcal page and you will be met with some configuration options.
+Once you turn on perfcal, you will see a new tab in the menu called **Performance calibration**. Navigate to the perfcal page and you will be met with some configuration options.

![Perfcal settings](.gitbook/assets/ai-patient-assistance/calibration-3.jpg)

Select the **Noise** class from the drop down, and check the Unknown class in the list of classes below, then click **Run test** and wait for the test to complete.

![Perfcal configs](.gitbook/assets/ai-patient-assistance/calibration-4.jpg)

-The system will provide a number of configs for you to choose from. Choose the one that best suits your needs and click **Save selected config**. This config will be deployed to your device once you download and install the libray on your device.
+The system will provide a number of configs for you to choose from. Choose the one that best suits your needs and click **Save selected config**. This config will be deployed to your device once you download and install the library on your device.

## Versioning

6 changes: 3 additions & 3 deletions arduino-kway-fall-detection.md
@@ -34,11 +34,11 @@ Finns in general, and elderly people in particular, are made of a tough and hard

Many existing fall detection systems use signals from **accelerometers**, sometimes together with gyroscope sensors, to detect falls. Accelerometers are very sensitively monitoring the acceleration in x, y, and z directions, and are as such very suitable for the purpose. The challenge with developing a fall detection system with the help of accelerometers, is that the data frequency typically needs to be quite high (> 100 Hz) and that the signals need to be filtered and processed further to be of use.

-Apart from accelerometers, it is also possible to use e.g. **barometers** to sense if a person suddenly has dropped a meter or more. Barometers sense the air pressure, and as the air pressure is higher closer to the ground, one only needs grade school mathematics to create a bare bones fall detection system this way. Easiest is to first convert air pressure to altitude in **meters**, and then use e.g. this formula `previous altitude in meters - current altitude in meters`, and if the difference is higher than e.g. 1.2 meters within 1-2 seconds, a fall might have happened. With barometers the data frequency does often not need to be as high as with accelerometers, and only one parameter (air pressure=altitude) is recorded. One major drawback is the rate of false positives (a fall detected where no fall occured). These might happen because of quick changes in air pressure, e.g. someone opening or closing a door in a confined space like a car, someone shouting, sneezing, coughing close to the sensor etc.
+Apart from accelerometers, it is also possible to use e.g. **barometers** to sense if a person suddenly has dropped a meter or more. Barometers sense the air pressure, and as the air pressure is higher closer to the ground, one only needs grade school mathematics to create a bare bones fall detection system this way. Easiest is to first convert air pressure to altitude in **meters**, and then use e.g. this formula `previous altitude in meters - current altitude in meters`, and if the difference is higher than e.g. 1.2 meters within 1-2 seconds, a fall might have happened. With barometers the data frequency does often not need to be as high as with accelerometers, and only one parameter (air pressure=altitude) is recorded. One major drawback is the rate of false positives (a fall detected where no fall occurred). These might happen because of quick changes in air pressure, e.g. someone opening or closing a door in a confined space like a car, someone shouting, sneezing, coughing close to the sensor etc.
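
To make that arithmetic concrete, here is a small self-contained sketch of the pressure-to-altitude conversion and the drop check. The hypsometric constants are the standard ones (1013.25 hPa sea-level reference); the 1.2 m threshold is the illustrative figure from the paragraph above:

```cpp
#include <math.h>

// Standard barometric formula: pressure in hPa to altitude in meters,
// assuming a 1013.25 hPa sea-level reference pressure.
float pressure_to_altitude_m(float pressure_hpa) {
    return 44330.0f * (1.0f - powf(pressure_hpa / 1013.25f, 0.1903f));
}

// Bare-bones fall check: did the altitude drop more than 1.2 m between
// two readings taken 1-2 seconds apart?
bool possible_fall(float previous_altitude_m, float current_altitude_m) {
    return (previous_altitude_m - current_altitude_m) > 1.2f;
}
```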

![](.gitbook/assets/arduino-kway-fall-detection/fall_det_05.png)

-Some modern and more expensive smartwatches, e.g. Apple Watch, already have in-built fall detection systems, that can automatically call for help in case a fall has been detected, and the person has been immobile for a minute or so. In case the watch has cellular connectivity, it does not even neeed to be paired to a smart phone.
+Some modern and more expensive smartwatches, e.g. Apple Watch, already have in-built fall detection systems, that can automatically call for help in case a fall has been detected, and the person has been immobile for a minute or so. In case the watch has cellular connectivity, it does not even need to be paired to a smart phone.

## Project Introduction

@@ -54,7 +54,7 @@ Initially I intended to collect data for normal behaviour and activities like e.

To be able to use anomaly detection, you just need to collect data for what is considered normal behaviour. Later, when the resulting ML model is deployed to an edge device, it will calculate anomaly scores from the sensor data used. When this score is low it indicates normal behaviour, and when it's high it means an anomaly has been detected.
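
On-device, that anomaly score surfaces as a single field of the result struct filled in by the exported Edge Impulse library. A minimal sketch of the threshold logic follows; the header name and the 0.3 cutoff are illustrative assumptions, and any real cutoff has to be tuned against your own data:

```cpp
// Header name depends on your project's export; this one is hypothetical.
#include <fall_detection_inferencing.h>

const float ANOMALY_THRESHOLD = 0.3f;  // illustrative; tune empirically

// The anomaly detection block reports one score per window in
// result.anomaly: low means normal movement, high means an anomaly
// such as a potential fall.
bool fall_candidate(const ei_impulse_result_t &result) {
    return result.anomaly > ANOMALY_THRESHOLD;
}
```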

-I followed [this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect accelometer data.
+I followed [this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect accelerometer data.

I started to collect 8-second samples when walking, running, etc. For the sake of simplicity, I had the Nicla device tethered through USB to a laptop as the alternative would have been to use a more complex data gathering program using BLE.
I thus held Nicla in one hand and my laptop in the other and started walking and jogging indoors. To get a feeling for how the anomaly detection model works, I only collected 1m 17s of data, with the intention of collecting at least 10 times more data later on. Astonishingly, I soon found out that this tiny data amount was enough for this proof of concept! Obviously, in a real scenario you would need to secure you have covered all the expected different types of activities a person might get involved in.
4 changes: 2 additions & 2 deletions arduino-kway-gesture-recognition-weather.md
@@ -2,7 +2,7 @@
description: Use a Nicla Sense ME attached to the sleeve of a K-way jacket for gesture recognition and bad weather prediction
---

-# Arduino x K-Way - Gesture Recognition and Weather Prediciton for Hiking
+# Arduino x K-Way - Gesture Recognition and Weather Prediction for Hiking

Created By:
Justin Lutz
@@ -38,7 +38,7 @@ Ideally, there would be a pouch on the armsleeve to slide the Nicla Sense ME int

![](.gitbook/assets/arduino-kway-gesture-recognition-weather/wrist.jpg)

-To complete this project I used my go-to source, Edge Impulse, to ingest raw data, develop a model, and export it as an Arduino library. I [followed this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla Sense ME from Edge Impulse to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect the raw accelometer data. I created 3 classes: idle (no movement), walking, and checkpoint. The **Checkpoint** class was essentially me drawing the letter "C" in the air to tell the app to mark a checkpoint on the map while out on a hike.
+To complete this project I used my go-to source, Edge Impulse, to ingest raw data, develop a model, and export it as an Arduino library. I [followed this tutorial](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/arduino-nicla-sense-me) on the Nicla Sense ME from Edge Impulse to get up and running. The Edge Impulse-provided `nicla_sense_ingestion.ino` sketch was used to collect the raw accelerometer data. I created 3 classes: idle (no movement), walking, and checkpoint. The **Checkpoint** class was essentially me drawing the letter "C" in the air to tell the app to mark a checkpoint on the map while out on a hike.

You of course could add additional gestures if you wanted to expand the functionality of the jacket and Nicla Sense ME (an "S" for "selfie" maybe?). Even with just 15 minutes of data (split between Training and Test), there was great clustering of class data:

Expand Down
10 changes: 5 additions & 5 deletions audio-recognition-on-silabs-xg24.md
@@ -12,7 +12,7 @@ Public Project:

## Intro

-This project focuses on how to port an existing audio recognition project built with a SiLabs Thunderboard Sense 2, to the latest [EFR32MG24](https://www.silabs.com/wireless/zigbee/efr32mg24-series-2-socs) as used in the newer SiLabs xG24 Dev Kit. For demostration purposes, we will be porting [Manivannan Sivan's](https://www.hackster.io/manivannan) ["Vehicle Predictive Maintenance"](https://www.hackster.io/manivannan/vehicle-predictive-maintenance-cf2ee3) project, which is an Edge Impulse based TinyML model to predict various vehicle failures like faulty drive shaft and brake-pad noises. Check out his work for more information.
+This project focuses on how to port an existing audio recognition project built with a SiLabs Thunderboard Sense 2, to the latest [EFR32MG24](https://www.silabs.com/wireless/zigbee/efr32mg24-series-2-socs) as used in the newer SiLabs xG24 Dev Kit. For demonstration purposes, we will be porting [Manivannan Sivan's](https://www.hackster.io/manivannan) ["Vehicle Predictive Maintenance"](https://www.hackster.io/manivannan/vehicle-predictive-maintenance-cf2ee3) project, which is an Edge Impulse based TinyML model to predict various vehicle failures like faulty drive shaft and brake-pad noises. Check out his work for more information.

The audio sensor on the Thunderboard Sense 2 and the xG24 Dev Kit are the same (TDK InvenSense ICS-43434), so ideally we're not required to collect any new data using the xG24 Dev Kit for the model to work properly. Had the audio sensor been a different model, it would most likely be necessary to capture a new dataset.

@@ -56,16 +56,16 @@ You can follow the guide below to go through the process, if you are interested

> ["Edge Impulse xG24 Dev Kit Guide"](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/silabs-xg24-devkit).
-With default value of Window Size (10s) and Window Increase (500 ms), the processiong block will throw an error, as represented below:
+With default value of Window Size (10s) and Window Increase (500 ms), the processing block will throw an error, as represented below:

![](.gitbook/assets/audio-recognition-on-silabs-xg24/frame_stride_error.jpg)

-This is because some of the features in Edge Impulse's processing block have been updated since this project was created, so you need to update some of the paramaters in the Timer Series block such as Window Size and Window Increase,
+This is because some of the features in Edge Impulse's processing block have been updated since this project was created, so you need to update some of the parameters in the Timer Series block such as Window Size and Window Increase,
or increase the frame stride parameter in the MFE processing block. This is what my updated window parameters look like:

![](.gitbook/assets/audio-recognition-on-silabs-xg24/window_increase_updated.jpg)

-If you added some new data and are not sure of the model design, then the [EON tuner](https://docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner) can come to the rescue. You just have to select the target device as SiLabs EFR32MG24 (Cortex-M33 78MHz) and configure your desired paramters, then the Eon tuner will come up with suggested architetures which you can use.
+If you added some new data and are not sure of the model design, then the [EON tuner](https://docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner) can come to the rescue. You just have to select the target device as SiLabs EFR32MG24 (Cortex-M33 78MHz) and configure your desired parameters, then the Eon tuner will come up with suggested architectures which you can use.

![](.gitbook/assets/audio-recognition-on-silabs-xg24/EON_tuner.jpg)

@@ -117,7 +117,7 @@ Note that this is a newer command supported by the Edge Impulse CLI, hence you m

Now your model should be running, and recognize the same audio data and perform inferencing on the newer xG24 Dev Kit hardware, with little to no modifications to actual data or to the model architecture.

-This higlights the platform agnostic nature of Edge Impulse, and was possible in this case because the audio sensor on both the Thunderboard and xG24 are the same. However, you would need do your own due diligence for migrating projects built with other sensor data such as humidity/temperature, or the light sensor, as those do vary between the boards.
+This highlights the platform agnostic nature of Edge Impulse, and was possible in this case because the audio sensor on both the Thunderboard and xG24 are the same. However, you would need do your own due diligence for migrating projects built with other sensor data such as humidity/temperature, or the light sensor, as those do vary between the boards.

One final note is that in this project, the xG24 is roughly 2x as fast as the Thunderboard Sense 2 in running the DSP, and 8x faster in running the inference:
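
For anyone reproducing that speed comparison, the per-window DSP and inference times are reported by the exported library's result struct, so the same measurement can be printed on both boards. A short sketch, assuming the standard `ei_impulse_result_t` timing fields of the Edge Impulse C SDK:

```cpp
// Print the per-window timing that the SDK fills in after each call to
// run_classifier(); dsp and classification are reported in milliseconds.
void print_timing(const ei_impulse_result_t &result) {
    ei_printf("DSP: %d ms, inference: %d ms\n",
              result.timing.dsp,
              result.timing.classification);
}
```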
