Merge pull request #306 from dtischler/main
spelling fixes
dtischler authored Aug 25, 2023
2 parents e863f7e + 89d9047 commit 1d68eab
Showing 7 changed files with 57 additions and 13 deletions.
46 changes: 45 additions & 1 deletion .wordlist.txt
@@ -1705,4 +1705,48 @@ Minto
Zanuttigh
Faris
Salman

+ Grămescu
+ Niţu
+ Ayoub
+ onboarding
+ EUI
+ Uplink
+ acidification
+ Decrypt
+ libusb
+ Thonny
+ decrypt
+ BLDevCube
+ evergrowing
+ Forebodingly
+ MetHb
+ macronutrients
+ mellitus
+ methemoglobinemia
+ MedTech
+ dirs
+ Gcode
+ OCI
+ QgQMWRZAlcY
+ RatPack
+ VBAT
+ CSVs
+ Bot's
+ Roomba
+ unchecking
+ lKLFVqofdu
+ MQ
+ BCI
+ TeraTerm
+ CJGDpgfY
+ MPU's
+ walkthrough
+ transductor
+ Tkinter
+ unchecking
+ Jetsons
+ recfaces
+ incase
+ EVM
+ notok
+ QgQMWRZAlcY
6 changes: 3 additions & 3 deletions deter-shoplifting-with-computer-vision.md
@@ -57,11 +57,11 @@ Next, go to the Labeling Queue. Drag a Bounding Box around the location of the b

## Model Training

- With the data all uploaded and labeled, go to **Impulse Design**. An Impulse is a machine learning pipeline. In the first block, select Image Data if it is not already chosen, and set the image width and height to 96 pixes, and Fir shortest axis. Next, choose Image for the Processing block. Then Object Detection in the Learning block. You can learn more about block choices here: [https://docs.edgeimpulse.com/docs/edge-impulse-studio/impulse-design](https://docs.edgeimpulse.com/docs/edge-impulse-studio/impulse-design).
+ With the data all uploaded and labeled, go to **Impulse Design**. An Impulse is a machine learning pipeline. In the first block, select Image Data if it is not already chosen, and set the image width and height to 96 pixels, and Fit shortest axis. Next, choose Image for the Processing block. Then Object Detection in the Learning block. You can learn more about block choices here: [https://docs.edgeimpulse.com/docs/edge-impulse-studio/impulse-design](https://docs.edgeimpulse.com/docs/edge-impulse-studio/impulse-design).

![](.gitbook/assets/deter-shoplifting-with-computer-vision/impulse.jpg)

- Save the Impulse and move on to Parameters, then next move on to Generate features. Generating features will give you a vidual representation of your data, and you should be able to notice the data is clustered.
+ Save the Impulse and move on to Parameters, then next move on to Generate features. Generating features will give you a visual representation of your data, and you should be able to notice the data is clustered.

Move on to Object Detection on the left menu, and you can select the options to begin training your model. I went with 60 cycles, a 0.01 Learning Rate, set aside 20% of my data for Validation, and chose to enable Data Augmentation. You can click the **Start Training** button to begin the process.
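
As a rough sketch of what those settings correspond to in plain Keras (a toy stand-in trained on random data purely to show the hyperparameters in code, not Edge Impulse's actual FOMO training graph):

```python
# Rough Keras stand-in for the settings above: 60 training cycles,
# 0.01 learning rate, 20% of the data held out for validation, and
# simple data augmentation. Model and data are placeholders.
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 96, 96, 3).astype("float32")  # placeholder images
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 100), 2)

inputs = tf.keras.Input(shape=(96, 96, 3))
h = tf.keras.layers.RandomFlip("horizontal")(inputs)   # data augmentation
h = tf.keras.layers.RandomRotation(0.1)(h)
h = tf.keras.layers.Conv2D(16, 3, activation="relu")(h)
h = tf.keras.layers.GlobalAveragePooling2D()(h)
outputs = tf.keras.layers.Dense(2, activation="softmax")(h)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=60, validation_split=0.2)
```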

@@ -87,7 +87,7 @@ $ npm install -g --unsafe-perm edge-impulse-linux

Follow the prompts to log in to your account, and select the Project to connect to.

- Back in the Edge Impulse Studio, click on Deployment on the left menu, and you will find all of the methods for building firmware and libraries. In this case, select Texas Instruments, TIDL-RT-Library, and download the `.zip` file that gets gererated. That file is going to be needed on the TDA4VM board, so you could use SFTP to place it onto the board, or perhaps just use a USB drive and copy the file from your laptop or desktop PC, onto the USB stick, then place the USB stick into the TDA4VM board and copy it from USB to the local filesystem.
+ Back in the Edge Impulse Studio, click on Deployment on the left menu, and you will find all of the methods for building firmware and libraries. In this case, select Texas Instruments, TIDL-RT-Library, and download the `.zip` file that gets generated. That file is needed on the TDA4VM board, so you could use SFTP to place it onto the board, or simply copy the file from your laptop or desktop PC onto a USB stick, then place the USB stick into the TDA4VM board and copy it from USB to the local filesystem.
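
If you prefer running a model from code rather than the CLI, a minimal sketch using the Edge Impulse Linux Python SDK is shown below. It assumes the `edge_impulse_linux` pip package and a Linux (`.eim`) deployment rather than the TIDL-RT library; `modelfile.eim` and `test.jpg` are hypothetical placeholders, not files produced by the steps above.

```python
# Minimal sketch: run an Edge Impulse object detection model locally.
# Assumes: pip install edge_impulse_linux opencv-python, plus an .eim
# deployment downloaded from the Studio (filename is hypothetical).
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("modelfile.eim") as runner:
    model_info = runner.init()
    print("Loaded project:", model_info["project"]["name"])

    img = cv2.imread("test.jpg")                  # placeholder test image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # SDK expects RGB
    features, cropped = runner.get_features_from_image(img)

    res = runner.classify(features)
    for bb in res["result"].get("bounding_boxes", []):
        print(f"{bb['label']} ({bb['value']:.2f}) at x={bb['x']}, y={bb['y']}")
```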

![](.gitbook/assets/deter-shoplifting-with-computer-vision/deployment.jpg)

4 changes: 2 additions & 2 deletions eeg-data-machine-learning-part-1.md
@@ -40,7 +40,7 @@ Professional or clinical EEG-devices are typically equipped with between 16 to 6

![](.gitbook/assets/eeg-data-part-1/intro-3.jpg)

- Muse EEG-devices are focused on consumers and they have four EEG-electrodes, two at the forehead, two behind the ears. In addition they also have an accelerometer/gyroscope, and newer models include a PPG-sensor which measures blood flow, breathing rhytm, and heart rate. In this tutorial however are only signals from EEG-electrodes being used.
+ Muse EEG-devices are focused on consumers and they have four EEG-electrodes: two at the forehead and two behind the ears. In addition they also have an accelerometer/gyroscope, and newer models include a PPG-sensor which measures blood flow, breathing rhythm, and heart rate. In this tutorial, however, only the signals from the EEG-electrodes are used.

---------------
## Prerequisites
@@ -130,7 +130,7 @@ In this chapter you will get detailed instructions from start to end how to coll

![](.gitbook/assets/eeg-data-part-1/muse.jpg)

- - Wait until the horseshoe in MindMonitor has disappeared and the graph lines for all sensors have calmed down like in the picture. You might need to wait a few minutes to acquire good signals, but it's possible to speed up the process a bit by moisturing the sensors with e.g. a wet finger.
+ - Wait until the horseshoe in MindMonitor has disappeared and the graph lines for all sensors have calmed down like in the picture. You might need to wait a few minutes to acquire good signals, but it's possible to speed up the process a bit by moistening the sensors with e.g. a wet finger.
- Start streaming from Mind Monitor by clicking on the button shown in the picture
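
A minimal sketch of receiving that stream on a computer is shown below; it assumes the `python-osc` package and Mind Monitor's default OSC target (the UDP port 5000 and the `/muse/eeg` address should be verified against your app settings).

```python
# Minimal sketch: listen for Mind Monitor's OSC stream of Muse EEG data.
# Assumes: pip install python-osc, and Mind Monitor configured to stream
# to this machine's IP on UDP port 5000 (default; check your app settings).
from pythonosc.dispatcher import Dispatcher
from pythonosc import osc_server

def eeg_handler(address, *args):
    # Mind Monitor sends one value per electrode (TP9, AF7, AF8, TP10).
    print(address, args)

dispatcher = Dispatcher()
dispatcher.map("/muse/eeg", eeg_handler)

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
print("Listening for Mind Monitor OSC packets on UDP 5000...")
server.serve_forever()
```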

**1. Collect EEG-data**
4 changes: 2 additions & 2 deletions elevator-passenger-counting.md
@@ -88,7 +88,7 @@ Here is our Neural Network training settings and architecture for generating the

We only changed the training cycles from **60** to **70**. Further increasing the training cycles or learning rate can overfit the data, so we stuck to this.

- For the Neural Network architecture, we used **FOMO (MobileNet V2 0.35)**. The results are great, acheiving a bit over 95% accuracy for the model (using quantized int version).
+ For the Neural Network architecture, we used **FOMO (MobileNet V2 0.35)**. The results are great, achieving a bit over 95% accuracy for the model (using the quantized int version).
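
As a minimal sketch of what running the quantized model looks like outside the Studio (the `fomo-int8.tflite` filename is a hypothetical export, not a file generated above):

```python
# Minimal sketch: invoke a quantized int8 TFLite export of the model.
# The model path is a hypothetical placeholder.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="fomo-int8.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
scale, zero_point = inp["quantization"]  # int8 quantization parameters

# Quantize a dummy RGB frame of the model's expected input size.
frame = np.random.rand(*inp["shape"][1:]).astype(np.float32)
q = np.clip(frame / scale + zero_point, -128, 127).astype(np.int8)

interpreter.set_tensor(inp["index"], q[np.newaxis, ...])
interpreter.invoke()
heatmap = interpreter.get_tensor(out["index"])  # FOMO outputs a centroid grid
print(heatmap.shape)
```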

![](.gitbook/assets/elevator-passenger-counting/Model_output.jpg)

@@ -168,4 +168,4 @@ There is a wide variety of options available and they are shown below.

This device can be easily integrated and installed in an elevator, making it so that the elevator will only start when the passenger count is in the permissible range.

- To reduce the cost of the unit, we can also try using an ESP32-EYE or similar microcontroller unit, instead of the Nicla Vision, thought the quality and capablility will need to be tested similar to how the Nicla was evaluated.
+ To reduce the cost of the unit, we can also try using an ESP32-EYE or similar microcontroller unit, instead of the Nicla Vision, though the quality and capability will need to be tested similarly to how the Nicla was evaluated.
2 changes: 1 addition & 1 deletion methane-monitoring-silabs-xg24.md
@@ -31,7 +31,7 @@ There are many different methane monitoring systems on the market, but choosing

1. **The type of work environment**: Methane monitors come in a variety of shapes and sizes, each designed for a specific type of work environment. You will need to choose a methane monitor that is designed for use in the type of workplace where it will be used. For example, personal portable gas monitors are designed to be worn by individual workers, while fixed gas monitors are designed to be placed in a specific location.

- 2. **The size of the workplace**: The size of the workplace will determine how many methane monitors you will need. For example, a small mine might only require a few fixed gas monitors, while a large mine might require dozens of personal porta- Pre-process signal: This block is used to filter and normalize the data from the methane sensor.
+ 2. **The size of the workplace**: The size of the workplace will determine how many methane monitors you will need. For example, a small mine might only require a few fixed gas monitors, while a large mine might require dozens.

3. **The methane concentration**: The level of methane present in the workplace will determine how often the methane monitor needs to be used. For example, in a workplace with a high concentration of methane, the monitor may need to be used more frequently than in a workplace with a low concentration of methane.

2 changes: 1 addition & 1 deletion renesas-rzv2l-pose-detection.md
@@ -211,7 +211,7 @@ Power consumption figures are shown running on an actual RZ/V2L Eval Kit measuri

![](.gitbook/assets/renesas-rzv2l-pose-detection/Untitled21.png)

- As can be seen the power current draw for YOLOv5 Object Detection is under 500mA in total whereas Image Classification is just under 400mA whereas the board draws just under 300mA while idle with a single user logged in via SSH. This shows the phenominal low power operation of DRP-AI which also does not require any heatsinks to be attached to the RZ/V2L MPU.
+ As can be seen, the total current draw for YOLOv5 Object Detection is under 500mA, Image Classification is just under 400mA, and the board draws just under 300mA while idle with a single user logged in via SSH. This shows the phenomenal low-power operation of DRP-AI, which also does not require any heatsinks to be attached to the RZ/V2L MPU.

### Pose Detection on Renesas RZ/V2L with DRP-AI

6 changes: 3 additions & 3 deletions worker-safety-monitoring.md
@@ -16,7 +16,7 @@ Public Project Link:

A study by the British Safety Council reveals that nearly 80% of workers work in unsafe environments and on-site deaths are 20 times higher in India than in Britain[1]. Nearly 48,000 workers die in the country due to occupational accidents, of which 24.2% are recorded in the construction industry[2].

- Since the majority of the workforce belongs to the bottom of the social pyramid, any injury and resultant expenditure adds to their financial burden, having a negative impact on their quality of life. This necessitatse an urgent need for better industrial safety systems and risk assessment technologies across all construction sites.
+ Since the majority of the workforce belongs to the bottom of the social pyramid, any injury and resultant expenditure adds to their financial burden, having a negative impact on their quality of life. This necessitates better industrial safety systems and risk assessment technologies across all construction sites.

“Increasingly high noncompliance with PPE protocols is an alarming trend and a serious threat to worker health and safety,” said Gina, manufacturing segment marketing manager for Kimberly-Clark Professional. “Whether this is a result of economic conditions, a flawed approach to safety programs, younger workers who are more inclined to take greater risks or some other reason, it’s essential that workers wear PPE when it is required. PPE protects workers against injury, but it will not work if workers fail to use it and use it properly.”

@@ -62,7 +62,7 @@ In Model training, I have selected FOMO (Faster Objects, More Objects) MobileNet

### Accuracy

- The model accuracy is 91.7%. But, the Helmet dataset only acheived 71.4% accuracy. Let's figure out why.
+ The model accuracy is 91.7%, but the Helmet dataset only achieved 71.4% accuracy. Let's figure out why.

![](.gitbook/assets/worker-safety-monitoring/accuracy.jpg)

@@ -86,7 +86,7 @@ This will start the inferencing locally on the Jetson.

## Application in Industries

- In Industrial areas, some specific work locations may needs an employee to wear Safety Goggles, but in other locations they may be required to wear a helmet and safety reflective jacket. Requirment may vary by specific job site, and even zone within a site.
+ In industrial areas, some specific work locations may need an employee to wear Safety Goggles, while in other locations they may be required to wear a helmet and safety reflective jacket. Requirements may vary by specific job site, and even by zone within a site.

Now that we have a deployed model, we need to write some application code to determine the expected gear to be used by workers in a particular area.
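
A minimal sketch of that logic is shown below. The zone names, required-gear sets, and detected labels are hypothetical placeholders; in practice the detected labels would come from the FOMO model's inference output.

```python
# Minimal sketch: compare detected PPE labels against per-zone requirements.
# Zone names and gear sets are hypothetical, not from the trained model.
REQUIRED_GEAR = {
    "welding-bay": {"helmet", "goggles"},
    "loading-dock": {"helmet", "reflective-jacket"},
}

def check_worker(zone, detected_labels):
    """Return the sorted list of required items missing from the detections."""
    return sorted(REQUIRED_GEAR.get(zone, set()) - set(detected_labels))

# Example: a worker on the loading dock detected wearing only a helmet.
missing = check_worker("loading-dock", {"helmet"})
if missing:
    print("ALERT: missing PPE:", ", ".join(missing))
```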

