
camera reports "select timeout" in RP 3b+ - uvcvideo settings problematic? #44

Open
sebasgra39 opened this issue Jan 20, 2021 · 20 comments


@sebasgra39

sebasgra39 commented Jan 20, 2021

Dear groupgets,
for quite some time now, we have not been able to get the module to work reliably over longer periods. Might there be a thermal or buffer issue with your module?

Our setup:

  • [PTmini module with Lepton 3 on RP 3b+]
  • [recently updated module to fw:1.3.0]
  • [opening the stream as webcam on windows works just fine]
  • [starting a stream with opencv 3 is unproblematic, but subscribing and displaying the images reports timeouts]
  • [changing the uvcvideo settings, especially the "quirks" parameter makes a difference, but does not solve the problem]
  • [sometimes the stream drops, by reporting "Backend can't provide camera FPS information"]

We have several modules and cameras which report the exact same issues.

Is there anything you can suggest trying or any specific uvcvideo settings to try out?

I'm looking forward to your reply!

@sebasgra39

sebasgra39 commented Jan 27, 2021

Dear purethermal team and community,

it seems like this issue can be traced back to the quirks parameter (https://www.ideasonboard.org/uvc/faq/) of the Linux uvcvideo kernel module. More or less by experimenting around, I found that quirks=0x400 seems to be a good fit for combining two PureThermal boards with a Pi 3B+. However, this setting does not work when using the cameras on my laptop (x86, Ubuntu 18.04); there, starting and running two cameras without segfaults (at least so far) was possible with quirks=0x200.

WARNING: removing and changing the uvcvideo settings in your kernel might crash other things running on your system. Please make sure you note your current settings first so you can return to a working state later.

First, get the current value (you might also want to look at the other parameters, e.g. nodrop, timeout, ...):
cat /sys/module/uvcvideo/parameters/quirks

Then remove the uvcvideo module:
sudo rmmod uvcvideo

Next, insert the module with the new parameters. Note: quirks=0x400 worked for me but might need adjusting for your setup and kernel:
sudo modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x400

Since I give both cameras a device-ID-specific /dev/video* address, I then need to reload the udev rules for the v4l subsystem:
sudo udevadm trigger --attr-match=subsystem=video4linux
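
If a quirks value turns out to work for your setup, it can also be made persistent across reboots with a modprobe configuration file instead of re-running rmmod/modprobe by hand. A minimal sketch (standard modprobe.d mechanism; the parameter values are the ones from above and may need adjusting for your system):

```shell
# Write the module options to /etc/modprobe.d so they are applied
# automatically whenever uvcvideo is loaded (e.g. at boot)
echo "options uvcvideo nodrop=1 timeout=5000 quirks=0x400" | \
    sudo tee /etc/modprobe.d/uvcvideo.conf
```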

Please comment if you encounter any problems or have some more background information on this issue!

BR
Sebastian

@agjunyent

Just to get a bit more info, is the RPi 3 using Ethernet too?

You need to keep in mind that before the RPi 4, the Ethernet and USB buses were shared, so "big" Ethernet traffic can cause problems on the USB bus. And PT boards ask for more bandwidth than they really need if you only want the radiometric info in Y16 format.

Also, even though it does not seem to be your problem, if the camera is not perfectly attached to the PT board, it can cause "select timeout" randomly. And once one happens, it stays like that until the camera is restarted.

I usually solve the select timeout problem by firmly attaching the camera to the PT board, reducing the requested bandwidth to the minimum needed to run the Y16 format (modifying the firmware source and flashing it to the PT board), and by moving to the RPi 4, whose USB bus is NOT shared with Ethernet. I've been able to run a USB RGB camera + PT Mini board for more than a month without issues.

@sebasgra39

Dear Albert,

first of all thanks a lot for your reply!

Your solution sounds highly interesting and really helpful to me, since I'm starting to run into bandwidth errors. The Pi streams both cameras' images to a workstation, where they are visualized. I sometimes use a Picam v2 at low framerate and resolution, which kills the stream within seconds, so yes, bandwidth is an issue.

First of all:
Would it be possible for you to share your modified version of the firmware with me? I'd love to test whether this might already be a proper solution. I'd also like to move away from the preset RGB 8 format, since I want the radiometric info too.

Second:
I recently tried upgrading to the Pi 4, but although I had the latest firmware flashed on the PT Mini, the cameras did not show up in the devices. Thus I switched back to the 3B+.

I look forward to hearing from you!

BR
Sebastian

@agjunyent

Hi,

Sorry for the late reply. I don't know why I did not get the notification!

So, the modifications that need to be made to the firmware are listed here:

groupgets/purethermal1-firmware#24 (comment)

Supposedly, this is already integrated in the new release, but I always apply this patch to reduce the bandwidth of the PT to the minimum (I'm also using 2 RGB cameras with 1 Raspberry Pi, so I want to avoid USB bandwidth issues). I simply modify all packet sizes that I find in the firmware to 482, because I only use the camera in Y16 to obtain temperatures, so that's the minimum for them to work.

After doing the modification, compile and flash the firmware.

@sebasgra39

sebasgra39 commented Aug 12, 2021

Dear @agjunyent,

I still didn't manage to fix this issue. Could you maybe help me with more detailed information on how to change the firmware and what exactly needs to be changed? Or could you send me the adjusted firmware version that you are using?

I'm new to firmware issues and don't really know in which of those 100 files I need to look and how to compile my changes properly.

BR from Germany
Sebastian

@agjunyent

Hi again @sebasgra39! Let's try to work on this.

So, let's start: how are you reading the images? Could you provide a very simple script that you use to read them?

@sebasgra39

sebasgra39 commented Aug 12, 2021

Hi @agjunyent,

wow, thanks a lot for the fast reply!

I use two of the cameras as a stereo system. All my code is supposed to run in ROS using video_stream_opencv. The launch file that I use to run both cameras is attached below.

launchfile.txt

To comply with the upload rules, I copied the code into a text document.

@sebasgra39
Copy link
Author

sebasgra39 commented Aug 12, 2021

I also dived a little deeper into the firmware 1.3.0 to understand your solution quoted above (reducing all packet sizes to 482, per groupgets/purethermal1-firmware#24).

The code under the comment // USB Video device class specification version 1.10 says:

#ifdef UVC_1_1
#define UVC_VERSION                             0x0110      // UVC 1.1
#else
#define UVC_VERSION                             0x0100      // UVC 1.0
#endif

#define UVC_IN_EP                                     0x81  /* EP1 for data IN */
#define UVC_CMD_EP                                    0x82  /* EP2 for UVC commands */
#define VIDEO_PACKET_SIZE_ALT1                        ((unsigned int)(642)) /* 642 */
#define VIDEO_PACKET_SIZE_ALT2                        ((unsigned int)(962))
#define VIDEO_PACKET_SIZE_MAX                         ((unsigned int)(1023))
#define VIDEO_MAX_SETUP_PACKET_SIZE                   ((unsigned int)(1024))
#define CMD_PACKET_SIZE                               ((unsigned int)(8))

Which is already different from Kurt Kiefer's solution that you referred to later on.

Where do I select the alternative packet size?

And additionally, which video format would you recommend, and where do I set the format of my choice?

BR
Sebastian

@agjunyent

agjunyent commented Aug 12, 2021

Ok, so you are using the ros package to retrieve images.

I'd suggest you two things:

1.- Create a simple Python script to read from both cameras at the same time for a while. Here is what I use. Note that it sets some configuration that the ROS nodelet does not, namely the FOURCC, rgb=false, and width and height. I've found all these steps necessary for OpenCV to select the correct video format for both radiometry and low bandwidth:

import cv2

PORT = "/dev/video0"

cap = cv2.VideoCapture(PORT) 
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
cap.set(cv2.CAP_PROP_FPS, 9)

while True:
    ret, img = cap.read()

Leave this running for some time, starting it twice (once for each camera, editing PORT), and see if it fails.

2.- After 1 has run for some time without problems, try to download the ROS nodelet, compile it yourself, and modify the code to add the configuration parameters from the Python script, e.g.:

cap->set(cv::CAP_PROP_FOURCC, cv::VideoWriter_fourcc('Y', '1', '6', ' '));
cap->set(cv::CAP_PROP_CONVERT_RGB, false);
...

BEFORE starting to grab images with cap->read()

Let's see if the Python script is able to run for some time this way. I've had it running for over a month with no issues.
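
As a side note on the FOURCC line: OpenCV reports the currently negotiated pixel format as a packed integer via cap.get(cv2.CAP_PROP_FOURCC), so you can double-check that Y16 was actually selected. A small sketch of the decoding (pure Python, no camera needed; the helper name decode_fourcc is just for illustration):

```python
def decode_fourcc(value):
    # cap.get(cv2.CAP_PROP_FOURCC) returns this integer as a float
    v = int(value)
    return "".join(chr((v >> (8 * i)) & 0xFF) for i in range(4))

# A FOURCC is just four characters packed little-endian; 'Y16 ' becomes:
y16 = ord('Y') | (ord('1') << 8) | (ord('6') << 16) | (ord(' ') << 24)
print(repr(decode_fourcc(y16)))  # 'Y16 '
```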

@sebasgra39

sebasgra39 commented Aug 12, 2021

Yes, I am using ROS.

To be honest, I can't really follow your instructions. I'm not a software engineer originally and not familiar with python yet.

Coming to the second part of your answer, what nodelet do you mean exactly?

This one: https://github.com/ros-drivers/video_stream_opencv/blob/master/src/video_stream.cpp ?
If yes, how exactly do I need to insert the lines there?

Regarding the modification of the firmware, I was able to change the format from YUV to Y16, but timeouts occurred on one of the two cameras again after some minutes. Changing the packet size to 482 reproduced the error described under Kurt's bug fix from 2019, so 642 might really be the lowest option.

I really don't understand this problem, since both cameras worked fine for days when they were attached to my laptop, but are randomly buggy when attached to the Pi. What do you think the cause is?
BR
Sebastian

@agjunyent

Regarding your last question, the problem could be that the Pi only has one USB bus managing all USB ports, while computers usually have more than one. The cameras are sending too much data for the bus to handle, and eventually one times out because the bus is "full".
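
To put rough numbers on this (a back-of-envelope sketch: the 642-byte packet size is the VIDEO_PACKET_SIZE_ALT1 value from the firmware header quoted earlier, and the full-speed one-packet-per-millisecond assumption is mine, not measured):

```python
# What one Lepton 3 in Y16 actually needs:
FRAME_BYTES = 160 * 120 * 2           # Y16 = 2 bytes per pixel
FPS = 9                               # export-compliant Lepton frame rate
payload_bps = FRAME_BYTES * FPS * 8   # bits per second on the wire

# What the firmware *reserves* on the isochronous endpoint, assuming
# full-speed USB with one 642-byte packet every 1 ms frame:
reserved_bps = 642 * 1000 * 8

print(payload_bps)   # 2764800 -> ~2.8 Mbit/s needed
print(reserved_bps)  # 5136000 -> ~5.1 Mbit/s reserved per camera
```

If the boards enumerate at full speed (12 Mbit/s), two cameras would reserve most of that budget, which would explain why reducing the packet size helps.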

Let's go step by step. Even without knowledge of Python, try to copy this into a Python script and run it with python3:

import cv2
import time

LEFT_CAMERA = "/dev/video7"
RIGHT_CAMERA = "/dev/video6"

cap_left = cv2.VideoCapture(LEFT_CAMERA) 
cap_left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))
cap_left.set(cv2.CAP_PROP_CONVERT_RGB, False)
cap_left.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
cap_left.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
cap_left.set(cv2.CAP_PROP_FPS, 9)

cap_right = cv2.VideoCapture(RIGHT_CAMERA) 
cap_right.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))
cap_right.set(cv2.CAP_PROP_CONVERT_RGB, False)
cap_right.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
cap_right.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
cap_right.set(cv2.CAP_PROP_FPS, 9)

num_frames_left = 0
num_frames_right = 0
init_time = time.time()

while True:
    ret_left, img_left = cap_left.read()
    ret_right, img_right = cap_right.read()

    if ret_left:
        num_frames_left += 1
    if ret_right:
        num_frames_right += 1
    
    if (time.time() - init_time) > 60:
        print("Left frames:", num_frames_left)
        print("Right frames:", num_frames_right)
        init_time = time.time()

This script will initialize both cameras with devices 6 and 7 (as in your launch file), set all parameters, and start reading from them. Roughly every minute you should see on screen the number of frames each has read up to that moment. Leave it running for some time and see if it gets stuck, stops working, or an error appears (I was getting a 'select timeout' error before configuring things the way I do here).

If you don't have python3 you can install it and OpenCV with sudo apt install python3 python3-opencv, then run the script with python3 <script_name>.py

@sebasgra39

Hey Albert,

thanks for your stunning support!

When running the python script containing the code above, I receive this:


pi@stereoeins:~ $ python3 test.py 
select timeout
select timeout
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /build/opencv-L65chJ/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp, line 943
Traceback (most recent call last):
  File "test.py", line 26, in <module>
    ret_left, img_left = cap_left.read()
cv2.error: /build/opencv-L65chJ/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp:943: error: (-5) Unknown array type in function cvarrToMat

Is this error familiar to you?

BR
Sebastian

@sebasgra39

Hey @agjunyent

man, the cameras ran smoothly yesterday for the first time! Adjusting the (mysterious) quirks parameter was still necessary, yet successful this time. Big thanks!

Instead of my previous ROS code, I also switched to the solution provided here: https://automaticaddison.com/working-with-ros-and-opencv-in-ros-noetic/

There is only one thing that needs further testing now: changing the image pixel format to Y16 (which I'll need in my case) as you suggested in your code:

cap_left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))
cap_left.set(cv2.CAP_PROP_CONVERT_RGB, False)

returns all-black images with only a cryptic last image row. Is a modification of the firmware necessary beforehand (at the moment, the boards run fw 1.3.0)?

All the best from Aachen
Sebastian

@agjunyent

Hi, sorry for my very late reply. I don't know if you are still here... I've been on vacation.

Do you have experience with Linux services? Maybe you still need to 'configure' the camera and the driver to accept additional parameters.

The idea is to create a service that triggers on boot or on plug of the camera to configure the parameters. I'll post it on Monday

@sebasgra39

Hi @agjunyent ,
thanks a lot for this idea.
I'd be very grateful to see how you perform this configuration.

BR
Sebastian

@agjunyent

agjunyent commented Nov 10, 2021

Hi @sebasgra39

Let me try to explain how I do this.

  • Create a udev rule to rename the Lepton camera automatically every time it's plugged in. This means you later don't have to care which /dev/videoX it ends up on when booting.

Create a file in /etc/udev/rules.d/ called 99-lepton.rules (or the way you like) with one line with this contents:
SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1e4e", ATTRS{idProduct}=="0100", ATTR{index}=="0", ACTION=="add", SYMLINK+="lepton_camera", TAG+="systemd", MODE="0666"

What this does is create a symlink /dev/lepton_camera when it detects a Lepton connected. This is based on the PureThermal Mini 2, so if you have a different board, change ATTRS{idVendor} and ATTRS{idProduct} to the ones of yours, listed in lsusb:

Bus 002 Device 003: ID 18d1:9302 Google Inc.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 05a3:9422 ARC International Camera
Bus 001 Device 004: ID 2341:0042 Arduino SA Mega 2560 R3 (CDC ACM)
Bus 001 Device 003: ID 1e4e:0100 Cubeternet WebCam   <--- HERE
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
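
Once the rule file is in place, it can also be applied without rebooting (standard udev commands, nothing PT-specific):

```shell
# Re-read the rule files and replay the "add" events for video devices
sudo udevadm control --reload-rules
sudo udevadm trigger --attr-match=subsystem=video4linux
# The symlink should now exist if the camera is plugged in
ls -l /dev/lepton_camera
```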
  • Download this repo (clone) and copy the folder v4l2/uvcdynctrl to your home directory.
  • Install uvcdynctrl and v4l2-ctl: sudo apt install uvcdynctrl v4l-utils
  • Edit v4l2/uvcdynctrl/load.sh and change the contents to:
#!/bin/bash

sleep 1
user="<USERNAME>"

port=-1
ports=(0 1 2 3 4 5 6)

for p in "${ports[@]}"
do
    ctl="$(v4l2-ctl -d $p --list-formats)"
    if [[ $ctl == *"Y16 "* ]]; then
        port=$p
        break
    fi
done

device=${1:-/dev/video$port}
xml=${2:-/home/$user/uvcdynctrl/pt1.xml}

logger Start loading thermal camera params...
logger $device
logger $xml
/usr/bin/uvcdynctrl -d $device -i $xml -v
/usr/bin/v4l2-ctl -d $device -c lep_cid_vid_lut_select=6

This loads the parameters of the camera so they can be accessed later, which we do in the last line with lep_cid_vid_lut_select=6, selecting video type 6, which is "grayscale with temperature values".
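
To verify the control was actually applied, you can read it back with v4l2-ctl (same control name as in load.sh above; this only works after uvcdynctrl has loaded the XML):

```shell
# List the extension controls the XML added, then read back the LUT selection
v4l2-ctl -d /dev/lepton_camera --list-ctrls | grep lep_cid_vid_lut_select
v4l2-ctl -d /dev/lepton_camera --get-ctrl lep_cid_vid_lut_select
```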

  • Create a service in /etc/systemd/system/ called lepton.service (or the way you like) with this contents:
[Unit]
Description=Load params of lepton on plug
Requires=dev-lepton_camera.device
After=dev-lepton_camera.device

[Service]
ExecStart=/home/<USERNAME>/uvcdynctrl/load.sh

[Install]
WantedBy=dev-lepton_camera.device

Activate and enable the service with
sudo systemctl daemon-reload && sudo systemctl enable lepton.service

REBOOT.

  • Once rebooted, connect the camera. A device called /dev/lepton_camera should appear in /dev. If it is not there, recheck the instructions or post back with any error.
  • Use this code to test and grab frames:
import cv2
import time

cap = cv2.VideoCapture("/dev/lepton_camera")
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)  # if your cv2 version is different, you may need to change False to 0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
cap.set(cv2.CAP_PROP_FPS, 9)

while cap.isOpened():
    img, ret = cap.read()
    temperature_image = (img - 27315) / 100.0
    time.sleep(0.05)

In the variable temperature_image you have a 160x120 image where each pixel value is the temperature in °C.
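
The conversion in that loop can also be written as a small standalone helper and checked without a camera. A sketch, assuming the raw Y16 pixels are centi-Kelvin as the (img - 27315) / 100.0 formula above implies; the synthetic frame just stands in for cap.read() output:

```python
import numpy as np

def to_celsius(raw_y16):
    # Raw Y16 pixels are temperature in centi-Kelvin: 27315 cK = 0 °C
    return (raw_y16.astype(np.float32) - 27315.0) / 100.0

# Synthetic 120x160 frame instead of a real capture: 29715 cK = 24.0 °C
frame = np.full((120, 160), 29715, dtype=np.uint16)
temps = to_celsius(frame)
print(temps.shape, float(temps.min()), float(temps.max()))  # (120, 160) 24.0 24.0
```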

That's all I do, I think. Give it a try and post here if you have any error, problem or question.

Agjunyent

@ArtuursK

ArtuursK commented Jun 3, 2023

Hi @agjunyent

Thank you for your suggestions.
I followed your steps on raspberry pi 3A+ (Raspbian buster), but unfortunately no success.
My aim was to just get the array containing all pixel temperatures.

When I print out temperature_image I get only one value: "-273.14" which means that img is equal to 1 or "True".
I can see the image/save it to a file, but somehow I do not get the temperature values.

I use pureThermal2 with Flir 3.5 which is connected to Pi via USB.
I have also tested the libuvc approach with uvc-radiometry, but it always returns something like "....queue.Empty". And the same script with the same camera works fine on a Linux PC (I can get the temperature array). Having the camera work on an RPi would be much better for my use case.

Best regards,
Arturs

@agjunyent

@ArtuursK my bad! This has had a bug in it for a long time!!

The read line should be

ret, img = cap.read()

I had ret and img swapped!

@ArtuursK

Hi @agjunyent,

Thanks for your quick reply. Unfortunately I was not able to fix my issue after following your suggestion.

I have now decided to switch to and test some other SBCs, like the Khadas VIM1S, which can run Ubuntu Server.

Best regards,
Arturs

@agjunyent

Sorry to hear that.
Hopefully you can make it work
