Brokkr watches the stream of data from the sensor and writes to a Pi-attached drive. The sensor also writes data, so there are two places where data can be found. Typically:

- The sensor drive is smaller than the Pi drive.
- The sensor drive does not buffer, so under high flash rates it might drop triggers.

We want to:

- Make sure that every trigger on the sensor drive is also on the Pi drive. This will catch any case where brokkr fails to write data. Note that we don't need to worry about the converse, since the sensor always writes its own copy.
- Periodically clean the sensor drive. If it fills, the sensor goes into a reboot cycle; this can happen near daily, depending on the size of the USB drive.
The sensor drive does not buffer, so under high flash rates it might drop triggers.
This hopefully won't happen, because it's a fast and stable SSD (or a relatively fast and stable USB flash drive, depending on whether the latter is inserted), but it's not impossible.
Periodically clean sensor drive.
Yeah, I'd talked about this with David for a while, but we never actually got around to implementing it. The idea is to run every N minutes and delete the oldest max(0, ceil((2 * N min * 60 s/min * 0.022 GB/s − GB_FREE) / 1 GB/file)) files, such that the internal drive serves as a circular buffer in case something happens to the Pi, its software, or its drives.
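As a sketch, that formula could look something like this. The data rate (0.022 GB/s), file size (1 GB), and 2x safety margin are the rough numbers from the discussion above, not measured values:

```python
import math

def files_to_delete(n_minutes, free_gb,
                    rate_gb_s=0.022, file_gb=1.0, margin=2):
    """Rough count of oldest files to delete so the sensor drive keeps
    enough headroom for the next cleanup interval (circular buffer)."""
    # Worst-case data written before the next run, with a 2x margin.
    needed_gb = margin * n_minutes * 60 * rate_gb_s
    shortfall_gb = needed_gb - free_gb
    if shortfall_gb <= 0:
        return 0
    return math.ceil(shortfall_gb / file_gb)
```

For example, with a 10-minute interval and no free space, this asks for 27 files (2 * 10 * 60 * 0.022 = 26.4 GB, rounded up to whole files); with 100 GB free, it deletes nothing.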
I suggest implementing this on the Pi as a PipelineStep in a Brokkr pipeline that's set to run every N minutes. You could use Paramiko to execute the commands remotely on the sensor, but for a quick-and-dirty use case it's simpler to just run the SSH commands directly with subprocess. Of course, you could also implement it as a service unit, cron job, etc., but you may as well take advantage of Brokkr to do everything in one place rather than creating, installing, configuring, and maintaining something else. I'd be happy to help with further advice if needed, though it should be relatively straightforward.
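A minimal sketch of the quick-and-dirty subprocess approach; the hostname and data directory are placeholders, and the actual PipelineStep wrapper around this would follow Brokkr's plugin conventions:

```python
import shlex
import subprocess

SENSOR_HOST = "pi@sensor.local"   # hypothetical sensor hostname
DATA_DIR = "/media/sensor/data"   # hypothetical data directory

def delete_oldest_remote(n_files):
    """Delete the n oldest data files on the sensor over SSH,
    shelling out with subprocess rather than using Paramiko."""
    if n_files <= 0:
        return
    # ls -1tr sorts oldest first; head picks the first n; xargs rm deletes.
    remote_cmd = (
        f"cd {shlex.quote(DATA_DIR)} && "
        f"ls -1tr | head -n {int(n_files)} | xargs -r rm --"
    )
    subprocess.run(["ssh", SENSOR_HOST, remote_cmd], check=True)
```

This assumes key-based SSH auth is already set up between the Pi and the sensor, which it would need to be for Paramiko as well.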
Checking whether the sensor and Brokkr have the same files is not trivial, in either implementation effort or performance cost, but you could read the trigger GPS times by striding every 22 000 000 bytes of the sensor's data files, reading that chunk, parsing it into a header, and comparing it with those captured by Brokkr. Or you could do the same with CRC checksums, but that would require a lot more disk bandwidth on Brokkr's end, either to store them or to retrieve them. If a trigger is missing, you could send an error message and/or copy just the missing data chunks into a special directory.
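The striding read could be sketched like this, assuming each record starts on a fixed 22 000 000-byte boundary and begins with a small header; the 16-byte header length and the `parse_header` callable (which would decode the GPS time) are placeholders, not the sensor's actual format:

```python
def read_trigger_times(path, parse_header, stride=22_000_000, header_len=16):
    """Seek to the start of each fixed-size record in a sensor data file,
    read only the header bytes, and return the parsed trigger times."""
    times = []
    with open(path, "rb") as f:
        offset = 0
        while True:
            f.seek(offset)
            chunk = f.read(header_len)
            if len(chunk) < header_len:
                break  # past the last complete record
            times.append(parse_header(chunk))
            offset += stride
    return times
```

Because only the header bytes at each stride are read, this touches a few kilobytes per multi-gigabyte file rather than the whole thing.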
However, I'm not sure that's worth it, since you can check the packet_sequence number in the decoded headers; if that's non-sequential, you know you missed one.
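The sequence check is much cheaper. A sketch, assuming packet_sequence increments by one per packet and doesn't wrap within the window being checked:

```python
def find_sequence_gaps(seq_numbers):
    """Return (first_missing, next_seen) pairs wherever the
    packet_sequence counter jumps, indicating dropped triggers."""
    gaps = []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        if cur != prev + 1:
            gaps.append((prev + 1, cur))
    return gaps
```

For example, `find_sequence_gaps([1, 2, 3, 5, 6, 9])` reports `[(4, 5), (7, 9)]`, flagging the missing packets 4 and 7-8 without ever touching the data payloads.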