Commit

Added example documentation, upgrade of Dockerfile with inmemory buffer methods, fixed bugs on start.sh script
igorrendulic committed Jan 19, 2021
1 parent d2f46f3 commit 7078448
Showing 6 changed files with 40 additions and 19 deletions.
35 changes: 28 additions & 7 deletions README.md
@@ -110,7 +110,7 @@ Copy and paste `docker-compose.yml` to folder of your choice (recommended to be
version: '3.8'
services:
chrysedgeportal:
image: chryscloud/chrysedgeportal:0.0.6
image: chryscloud/chrysedgeportal:0.0.7
depends_on:
- chrysedgeserver
- redis
@@ -119,7 +119,7 @@ services:
networks:
- chrysnet
chrysedgeserver:
image: chryscloud/chrysedgeserver:0.0.6
image: chryscloud/chrysedgeserver:0.0.7
restart: always
depends_on:
- redis
@@ -287,6 +287,26 @@ Run example to turn storage off for camera `test`:
python storage_onoff.py --device test --on false
```
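
To turn storage back on for the same camera, the script can presumably be run with the flag flipped (the example above only shows `false`, so `true` is an assumption):
```
python storage_onoff.py --device test --on true
```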

### Running `opencv_inmemory_display.py`

The prerequisite for having an in-memory queue is to set the `buffer -> in_memory` value in the `conf.yaml` of your custom config.

This setting stores the compressed video stream in memory and enables you to query the complete queue or a portion of it. It also allows you to query the same queue (`timestamp_from` and `timestamp_to`) from parallel subprocesses (check `examples/opencv_inmemory_display_advanced.py` for an example).

Wait for some time for the in-memory queue to fill up, then run (for an added camera named `test`):
```
python opencv_inmemory_display.py --device test
```
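
For orientation, here is a minimal sketch of what such a client can look like. The generated proto module names (`video_streaming_pb2*`), the `ImageStub` service, the `VideoBufferedImage` call and its request fields are assumptions for illustration only; the real client is `examples/opencv_inmemory_display.py` and its generated API may differ.
```python
# Hypothetical sketch only: the proto module, stub and field names below are
# assumptions, not the actual Chrysalis generated API.
import argparse
import time

import cv2
import grpc
import numpy as np

import video_streaming_pb2 as pb           # assumed generated proto module
import video_streaming_pb2_grpc as pb_grpc  # assumed generated gRPC module


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", required=True, help="camera/device name")
    args = parser.parse_args()

    channel = grpc.insecure_channel("127.0.0.1:50001")  # assumed edge proxy address
    stub = pb_grpc.ImageStub(channel)                    # assumed service name

    now_ms = int(time.time() * 1000)
    request = pb.VideoFrameBufferedRequest(              # assumed request message
        device_id=args.device,
        timestamp_from=now_ms - 10_000,  # last 10 seconds of the in-memory queue
        timestamp_to=now_ms,
    )

    # Server-side streaming: iterate over the buffered frames in the range.
    for frame in stub.VideoBufferedImage(request):
        # Assumes the response carries raw BGR bytes plus width/height;
        # the real example may decode compressed frames instead.
        img = np.frombuffer(frame.data, dtype=np.uint8).reshape(
            (frame.height, frame.width, 3)
        )
        cv2.imshow(args.device, img)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```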

### Running `video_probe.py`

This example shows how to query the local system time and retrieve information about the incoming video for a specific camera/device.

Run the example to probe a video stream (for an added camera named `test`):
```
python video_probe.py --device test
```
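
As a rough illustration of what a probe client can look like (every proto/stub/field name below is an assumption, not the actual generated API; the real client is `examples/video_probe.py`):
```python
# Hypothetical sketch only: stub, request and field names are assumptions.
import argparse

import grpc

import video_streaming_pb2 as pb           # assumed generated proto module
import video_streaming_pb2_grpc as pb_grpc  # assumed generated gRPC module


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", required=True, help="camera/device name")
    args = parser.parse_args()

    channel = grpc.insecure_channel("127.0.0.1:50001")  # assumed edge proxy address
    stub = pb_grpc.ImageStub(channel)                    # assumed service name

    # Local system time of the edge proxy, handy for timestamp-range queries.
    sys_time = stub.SystemTime(pb.SystemTimeRequest())
    print("system time (ms):", sys_time.current_time)

    # Probe information about the incoming video for the given device.
    probe = stub.VideoProbe(pb.VideoProbeRequest(device_id=args.device))
    print("codec:", probe.codec, "| resolution:", probe.width, "x", probe.height,
          "| fps:", probe.fps)


if __name__ == "__main__":
    main()
```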

# Custom configuration

## Custom redis configuration
@@ -311,7 +331,7 @@ Modify folders accordingly for **Mac OS X and Windows**
Create a `conf.yaml` file in the `/data/chrysalis` folder. The configuration file is automatically picked up if it exists; otherwise the system falls back to its default configuration.

```yaml
version: 0.0.3
version: 0.0.7
title: Chrysalis Video Edge Proxy
description: Chrysalis Video Edge Proxy Service for Computer Vision
mode: release # "debug" or "release"
@@ -332,6 +352,7 @@ annotation:

buffer:
in_memory: 1 # number of images to store in memory buffer (1 = default)
in_memory_scale: "-1:-1" # scaling of the images. Examples: 400:-1 (keeps aspect ratio with width 400), 400:300, iw/3:ih/3, ...
on_disk: false # store key-frame separated mp4 file segments to disk
on_disk_folder: /data/chrysalis/archive # can be any custom folder you'd like to store video segments to
on_disk_clean_older_than: "5m" # remove mp4 segments older than 5m
@@ -347,10 +368,10 @@ buffer:
- `annotation -> poll_duration_ms`: poll every x milliseconds for batching purposes (default: 300ms)
- `annotation -> max_match_size`: maximum number of annotations per batch (default: 299)
- `buffer -> in_memory`: number of decoded frames to store in memory per camera (default: 1)
- `buffer -> in_memory_scale`: rescaling of decoded images in the in-memory buffer (default: `-1:-1`). See [FFmpeg Scaling](https://trac.ffmpeg.org/wiki/Scaling)
- `on_disk`: true/false, store key-frame chunked mp4 files to disk (default: false)
- `on_disk_folder`: path to the folder where segments will be stored
- `on_disk_clean_older_than`: remove mp4 segments older than the given age (default: 5m)
- `on_disk_schedule`: run the disk cleanup scheduler as a [cron](https://en.wikipedia.org/wiki/Cron) job

`on_disk` creates mp4 segments in format: `"current_timestamp in ms"_"duration_in_ms".mp4`. For example: `1600685088000_2000.mp4`
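
Because the naming scheme encodes both the start timestamp and the duration, the time range a segment covers can be recovered from the filename alone. A minimal sketch (this helper is not part of the project):
```python
# Parse the segment naming scheme described above:
# "<start_timestamp_ms>_<duration_ms>.mp4", e.g. 1600685088000_2000.mp4
import os


def segment_range_ms(path: str) -> tuple:
    """Return (start_ms, end_ms) covered by one mp4 segment."""
    stem, _ = os.path.splitext(os.path.basename(path))
    start_ms, duration_ms = (int(part) for part in stem.split("_"))
    return start_ms, start_ms + duration_ms


if __name__ == "__main__":
    # -> (1600685088000, 1600685090000)
    print(segment_range_ms("/data/chrysalis/archive/1600685088000_2000.mp4"))
```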

@@ -391,12 +412,12 @@ docker-compose build
- [X] Add API key to Chrysalis Cloud for enable/disable storage
- [X] Add configuration for in memory buffer pool of decoded image so they can be queried in the past
- [X] Configuration and a cron job to store mp4 segments (1 per key-frame) from cameras and a cron job to clean old mp4 segments (rotating file buffer)
- [ ] Add gRPC API to query in-memory buffer of images
- [X] Add gRPC API to query in-memory buffer of images
- [ ] Remote access Security (grpc TLS Client Authentication)
- [ ] Remote access Security (TLS Client Authentication for web interface)
- [ ] add RTMP container support (multiple streams, same treatment as RTSP cams)
- [ ] add v4l2 container support (e.g. Jetson Nano, Raspberry Pi?)
- [ ] Initial web screen to pull images (RTSP, RTMP, V4l2)
- [X] Initial web screen to pull images (RTSP, RTMP, V4l2)
- [ ] Benchmark NVDEC,NVENC, VAAPI hardware decoders

# Contributing
@@ -405,7 +426,7 @@ Please read `CONTRIBUTING.md` for details on our code of conduct, and the proces

# Versioning

Current version is initial release - v0.0.1 prerelease
Current version is initial release - v0.0.7 prerelease

# Authors

4 changes: 2 additions & 2 deletions docker-compose.yml
@@ -1,7 +1,7 @@
version: '3.8'
services:
chrysedgeportal:
image: chryscloud/chrysedgeportal:0.0.6
image: chryscloud/chrysedgeportal:0.0.7
build: web/
depends_on:
- chrysedgeserver
@@ -11,7 +11,7 @@ services:
networks:
- chrysnet
chrysedgeserver:
image: chryscloud/chrysedgeserver:0.0.6
image: chryscloud/chrysedgeserver:0.0.7
build: server/
restart: always
depends_on:
6 changes: 3 additions & 3 deletions examples/opencv_inmemory_display_advanced.py
@@ -61,10 +61,10 @@ def video_process(queue, device,ts_from, ts_to, grpc_channel, process_number):

img_count += 1

# queue.put({"img":re_img, "process": process_number})
queue.put({"img":re_img, "process": process_number})
print("image count: ", img_count)

# queue.put({"is_end":True, "process":process_number})
queue.put({"is_end":True, "process":process_number})

# debug function, checking if the display is smooth and looks fast-forwarded without any glitches
def display(num_processes):
@@ -133,7 +133,7 @@ def display(num_processes):
for p in processes:
p.start()

# display(len(processes))
display(len(processes))

for p in processes:
p.join()
2 changes: 1 addition & 1 deletion python/Dockerfile
@@ -30,7 +30,7 @@ RUN conda env create -f environment.yml

RUN mkdir /proto
COPY proto/ /proto/
COPY rtsp_to_rtmp.py start.sh global_vars.py read_image.py archive.py disk_cleanup.py /
COPY rtsp_to_rtmp.py start.sh global_vars.py read_image.py archive.py disk_cleanup.py inmemory_buffer.py /

RUN chmod +x /start.sh

4 changes: 2 additions & 2 deletions python/start.sh
@@ -35,8 +35,8 @@ fi
if [ ! -z "$in_memory_buffer" ]; then
cmd="$cmd --memory_buffer $in_memory_buffer"
fi
if [ ! -z "$in_memory_scame" ]; then
cmd="$cmd --memory_scale $in_memory_buffer"
if [ ! -z "$in_memory_scale" ]; then
cmd="$cmd --memory_scale $in_memory_scale"
fi
if [ ! -z "$disk_buffer_path" ]; then
cmd="$cmd --disk_path $disk_buffer_path"
8 changes: 4 additions & 4 deletions server/main.go
@@ -36,10 +36,10 @@ import (
)

var (
grpcServer *grpc.Server
grpcConn net.Listener
// defaultDBPath = "/data/chrysalis"
defaultDBPath = "/home/igor/Downloads"
grpcServer *grpc.Server
grpcConn net.Listener
defaultDBPath = "/data/chrysalis"
// defaultDBPath = "/home/igor/Downloads"
)

func main() {