diff --git a/README.md b/README.md index 8f0253c..e95b35b 100644 --- a/README.md +++ b/README.md @@ -23,7 +23,7 @@ A client side implementation of the ONVIF specification for Linux, Mac and Windo Onvif GUI is an integrated camera management and NVR system with an intuitive user interface that can easily manage a fleet of cameras and create high resolution recordings based on alarm conditions. A best of breed YOLO detector is included with the system to facilitate accurate alarm signals without false detections. -The system is designed to scale with available hardware and will run on simple configurations with minimal hardware requirements as well as high end multi core CPUs with NVIDIA GPU for maximum performance. The system can be configured with auto start settings and a user friendly icon so that non-technical users can feel comfortable working with the application without specialized training. +The system is designed to scale with available hardware and will run on simple configurations with minimal hardware requirements as well as high end multi core CPUs with NVIDIA GPU for maximum performance. Additionally, the system has integrated OpenVINO support for operation on CPU and iGPU on Intel hardware enabling high performance in low power environments. The system can be configured with auto start settings and a user friendly icon so that non-technical users can feel comfortable working with the application without specialized training. File management is easy with an automated disk space manager and file playback controls. @@ -139,16 +139,19 @@ Here is the application running 14 cameras through the yolox detector on an RTX * ### Step 4. 
Install Dependencies + ``` brew update brew upgrade brew install libxml2 brew install cmake brew install git - brew install ffmpeg@6 - export FFMPEG_INSTALL_DIR=/opt/homebrew/opt/ffmpeg@6 + brew tap homebrew-ffmpeg/ffmpeg + brew install homebrew-ffmpeg/ffmpeg/ffmpeg ``` + Please note that the standard Homebrew core ffmpeg version 7 is incompatible with onvif-gui. For this reason, the install procedure calls for the 3rd party tap [homebrew-ffmpeg](https://github.com/homebrew-ffmpeg/homebrew-ffmpeg). If you already have another version of ffmpeg installed, this will create a conflict. In order to install this version, it is necessary to run ```brew uninstall ffmpeg``` before this tap can be installed. + * ### Step 5. Create Virtual Environment ``` @@ -168,10 +171,6 @@ Here is the application running 14 cameras through the yolox detector on an RTX onvif-gui ``` -* ### Note: - - The ffmpeg@6 version installed under this sequence of commands is not the current version and is a keg only package. If there are issues during the pip install regarding avio, check the environment variable FFMPEG_INSTALL_DIR to insure that the lib and bin sub-directories exist. The message produced by brew when installing ffmeg@6 may help provide some info. - --- @@ -255,12 +254,8 @@ Here is the application running 14 cameras through the yolox detector on an RTX * ### Step 4. Install ``` - cd libonvif/libonvif - pip install -v . - cd ../libavio - pip install -v . - cd ../onvif-gui - pip install . + cd libonvif + assets/scripts/compile ``` * ### Step 5. Launch Program @@ -298,12 +293,8 @@ Here is the application running 14 cameras through the yolox detector on an RTX * ### Step 4. Install ``` - cd libonvif/libonvif - pip install -v . - cd ../libavio - pip install -v . - cd ../onvif-gui - pip install . + cd libonvif + assets/scripts/compile ``` * ### Step 5. 
Launch Program @@ -340,10 +331,12 @@ Here is the application running 14 cameras through the yolox detector on an RTX brew install libxml2 brew install cmake brew install git - brew install ffmpeg@6 - export FFMPEG_INSTALL_DIR=/opt/homebrew/opt/ffmpeg@6 + brew tap homebrew-ffmpeg/ffmpeg + brew install homebrew-ffmpeg/ffmpeg/ffmpeg ``` + Please note that the standard Homebrew core ffmpeg version 7 is incompatible with onvif-gui. For this reason, the install procedure calls for the 3rd party tap [homebrew-ffmpeg](https://github.com/homebrew-ffmpeg/homebrew-ffmpeg). If you already have another version of ffmpeg installed, this will create a conflict. In order to install this version, it is necessary to run ```brew uninstall ffmpeg``` before this tap can be installed. + * ### Step 5. Create Virtual Environment ``` @@ -360,12 +353,8 @@ Here is the application running 14 cameras through the yolox detector on an RTX * ### Step 7. Install ``` - cd libonvif/libonvif - pip install -v . - cd ../libavio - pip install -v . - cd ../onvif-gui - pip install . + cd libonvif + assets/scripts/compile ``` * ### Step 8. Launch Program @@ -374,11 +363,6 @@ Here is the application running 14 cameras through the yolox detector on an RTX onvif-gui ``` -* ### Note: - - The ffmpeg@6 version installed under this sequence of commands is not the current version and is a keg only package. If there are issues during the pip install regarding avio, check the environment variable FFMPEG_INSTALL_DIR to insure that the lib and bin sub-directories exist. The message produced by brew when installing ffmeg@6 may help provide some info. - - --- @@ -404,12 +388,8 @@ In order to build from source on Windows, development tools and python are requi * ### Step 3. Install ``` - cd libonvif\libonvif - pip install -v . - cd ..\libavio - pip install -v . - cd ..\onvif-gui - pip install onvif-gui + cd libonvif + assets\scripts\compile ``` * ### Step 4. 
Launch Program @@ -551,7 +531,7 @@ Camera audio can be controlled from the panel. The mute button can be clicked to * ### Aspect - Aspect ratio of the camera video stream. In some cases, particularly when using substreams, the aspect ratio may be distorted. Changing the aspect ratio by using the combo box can restore the correct appearance of the video. If the aspect ratio has been changed this way, the label of the box will have a * appended. This setting is not native to the camera, so it is not necessary to click the apply button for this change. + When using substreams, the aspect ratio may be distorted. Changing the aspect ratio by using the combo box can restore the correct appearance of the video. If the aspect ratio has been changed this way, the label of the box will have a * appended. This setting is not native to the camera, so it is not necessary to click the apply button for this change. * ### FPS @@ -561,6 +541,8 @@ Camera audio can be controlled from the panel. The mute button can be clicked to Keyframe interval of the video stream. Keyframes are a full frame encoding, whereas intermediate frames are differential representations of the changes between frames. Keyframes are larger and require more computing power to process. Higher GOP intervals mean fewer keyframes and as a result, less accurate represention of the video. Lower GOP rates increase the accuracy of the video at the expense of higher bandwidth and compute load. It is necessary to click the Apply button to enact these changes on the camera. + Note that some cameras may have an option for Dynamic GOP or Adaptive Framerate, or some other name for a process that reduces the GOP automatically based on the lack of motion in the camera view. It is advised to turn this feature off when using onvif-gui. + * ### Cache A read only field showing the size of the video packet input buffer for the camera prior to decoding. 
Higher cache values represent longer latency in the video processing, which may be observed as a delay between the time an event occurs and the event being shown in the video. @@ -579,21 +561,21 @@ Camera audio can be controlled from the panel. The mute button can be clicked to Initially, the Main Profile is selected by default. By changing the selection to a secondary profile, a lower order Sub Stream can be displayed. The term lower order implies that the Sub Stream has lower resolution, lower frame rate and lower bitrate than the Main Stream. Note that the application may be processing both streams, but only the Display Profile selected on the Video Tab is displayed. The other stream, referred to as the Record Stream, is not decoded, but its packets are collected for writing to disk storage. - The Display Profile will change automatically when the Video Tab Profile combo box is changed, so it is not necessary to click the Apply button when changing this setting. + The display will update automatically when the Video Tab Profile combo box is changed, so it is not necessary to click the Apply button when changing this setting. * ### Audio The audio encoder used by the camera is set here. If the camera does not have audio capability, the audio section will be disabled. Note that some cameras may have audio capability, but the stream is not available due to configuration issues or lack of hardware accessories. Available audio encoders will be shown in the combo box and may be set by the user. Changes to the audio parameter require that the Apply button is clicked to enact the change on the camera. - AAC encoding is highly recommended, as G style encoders may have issues during playback. Note that some cameras have incorrect implementations for encoders and the audio may not be usable in the stream recording to disk. + AAC encoding is highly recommended, as G style encoders may have issues during playback. 
Note that some cameras have incorrect implementations for encoders and the audio may not be usable in the stream recording to disk. Please be aware that currently onvif-gui is unable to process G726. * ### Samples - The sample rate of the audio stream. Available sample rates are shown in the combo box. Use the Apply button to enact the change on the camera. Higher sample rates increase the quality of the audio at the expense of higher bandwidth and disk space when recording. The audio bitrate is implied by the sample rate based on encoder parameters. + Available sample rates are shown in the combo box. Use the Apply button to enact the change on the camera. Higher sample rates increase the quality of the audio at the expense of higher bandwidth and disk space when recording. The audio bitrate is implied by the sample rate based on encoder parameters. * ### No Audio - Audio can be disabled by clicking this check box. This is different than mute in the sense that under mute, the audio stream is decoded, but not played on the computer speakers. If the No Audio check box is clicked, the audio stream is discarded, which can reduce compute load and may improve performance. If the No Audio checkbox is de-selected, the stream must restart in order to initialize the audio. The Apply button is not clicked when changing this parameter. + Audio can be disabled by clicking this check box. This is different than mute in the sense that under mute, the audio stream is decoded, but not played on the computer speakers. If the No Audio check box is clicked, the audio stream is discarded. If the No Audio checkbox is de-selected, the stream must restart in order to initialize the audio. The Apply button is not clicked when changing this parameter. * ### Video Alarm @@ -603,6 +585,10 @@ Camera audio can be controlled from the panel. The mute button can be clicked to This check box enables audio analytic processing for alarm generation. 
See the section on Audio Panel for reference to audio alarm functions. Note that the Audio Alarm check box must be selected in order to enable the Audio Panel for that camera. The Apply button is not used for this box. During Alarm condition, a solid red circle will show in the stream display if not recording, or a blinking red circle if the stream is being recorded. +* ### Audio Sync + + This check box will force synchronization of the audio and video feeds from the camera. This may cause the input cache to grow, resulting in latency in the video stream. It is usually only recommended to use this setting if it is important for the viewer of the real time display to view the synchronized streams, such as a condition where a speaking person is the subject of the video stream. Some cameras will perform well under this setting and not allow a significant delay between video and audio streams, but others may not. +
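Audio/video synchronization of the kind the Audio Sync option describes is typically decided by comparing presentation timestamps. The sketch below is purely illustrative (the function name and tolerance are hypothetical, not the application's internals): when video leads audio by more than a small tolerance, frames must be held in the input cache, which is the latency growth mentioned above.

```python
def video_lead_seconds(video_pts, audio_pts, tolerance=0.04):
    """Return how far (in seconds) video leads audio.

    A positive value beyond the tolerance means video frames must be
    buffered (cached) until the audio catches up; within the tolerance
    the streams are treated as synchronized and nothing is held back.
    """
    lead = video_pts - audio_pts
    return lead if abs(lead) > tolerance else 0.0
```

The larger the allowed lead, the deeper the cache can grow, which is why enabling synchronization may add visible latency on some cameras.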
@@ -768,7 +754,7 @@ Right clicking over the file will bring up a context menu that can be used to pe --- - + ### Common Username and Password @@ -806,6 +792,14 @@ In the case where a camera is configured to record during alarms, this length of A few default alarm sounds for selection. A system wide volume setting for the alarm volume can be made with the slider. +### Display Refresh Interval + +Performance on some lower powered systems may be improved by increasing the display refresh interval. + +### Maximum Input Stream Cache Size + +Adjust the maximum number of frames held in the cache before frames are dropped. This is the same cache referred to by the Video Tab of the Camera Panel. + ### Discovery Options * Discovery Broadcast @@ -860,7 +854,7 @@ Shows this file. --- -The Video Panel has two modes of operation, motion and yolox. The default setting is for motion, which can be used without further configuration and will run easily on a CPU only computer. Yolox requires the installation of the pytorch module and will consume significant computing resources for which a GPU is recommended, but not required. +The Video Panel has two modes of operation, motion and yolox. The default setting is for motion, which can be used without further configuration and will run easily on a CPU only computer. YoloX requires the installation of additional python packages, namely pytorch and openvino. An optional yolov8 package is available for [download](https://github.com/sr99622/yolov8-onvif-gui). In order for the panel to be enabled, either a camera or a file must be selected. If a camera is selected, the Video Alarm check box must also be selected on the Media Tab of the Camera Panel. If a file is selected, the Enable File check box on the Video Panel must also be selected. 
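The motion mode compares consecutive frames against each other. A minimal sketch of that idea (hypothetical helper functions and threshold, not the application's actual detector) might look like:

```python
def motion_score(prev_frame, curr_frame):
    """Mean absolute pixel difference between two grayscale frames,
    given as flat sequences of 8-bit pixel values, normalized to 0..1."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))
    return diff / (len(curr_frame) * 255)

def is_motion(prev_frame, curr_frame, threshold=0.02):
    """Flag motion when the normalized difference exceeds a threshold;
    a gain control would effectively scale this threshold."""
    return motion_score(prev_frame, curr_frame) > threshold
```

Because this requires only a subtraction per pixel, motion mode runs comfortably on a CPU-only system, while yolox performs full model inference per analyzed frame.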
@@ -878,32 +872,38 @@ The motion detector measures the difference between two consecutive frames by ca * ### YOLOX + + +  + YOLOX requires installation of [PyTorch](https://pytorch.org/get-started/locally/) and [OpenVINO](https://docs.openvino.ai/2024/get-started/install-openvino.html?VERSION=v_2024_1_0&OP_SYSTEM=LINUX&DISTRIBUTION=ARCHIVE) -TLDR: From a python virtual environment on Ubunu Linux, use the commands below. Windows please see above. +Please note that if you intend to run yolox using OpenVINO on Intel hardware, you will need to install the hardware drivers. Unfortunately, the Intel installation procedure is scattershot and not entirely reliable. For best results installing hardware drivers for iGPU or ARC in Ubuntu, please refer to the instructions for the latest version of the [Intel compute-runtime package](https://github.com/intel/compute-runtime/releases), then use ```pip install openvino```. + +TLDR: From a python virtual environment on Ubuntu Linux with the GPU drivers already installed, use the commands below. Otherwise please see above. ``` -pip install torch torchvision torchaudio +pip install torch torchvision pip install openvino ``` - - -  +The upper portion of the yolox panel has a model management box. Model parameters are system wide, as there will be one model running that is shared by all cameras. The Name combo box selects the model, which is named according to the size of the number of parameters in the model. Larger models may produce more accurate results at the cost of increased compute load. The Size combo box sets the resolution to which the video is scaled for model input. Larger sizes may increase accuracy at the cost of increased compute load. It is possible to change the backend API of the yolo detector by using the API combo box. The Device combo box will populate automatically with available hardware. -The upper portion of the yolox panel has a model managment box. 
Model parameters are system wide, as there will be one model running that is shared by all cameras. The Model Name selects the file containing the model, which is named according to the size of the number of parameters in the model. Larger models may produce more accurate results at the cost of increased compute load. The Model Size is the resolution to which the video is scaled for model input. Larger sizes may increase accuracy at the cost of increased compute load. +The model is initialized automatically by starting a camera stream with the Camera tab Video Alarm checked. By default the application is configured to download a model automatically when a stream is started for the first time. There may be a delay while the model is downloaded, during which time a wait box is shown. Subsequent stream launches will run the model with less delay. -By default the application is configured to download a model automatically when a stream is started with the yolox alarm option for the first time. There may be a delay while the model is downloaded. Subsequent stream launches will run the model with less delay. A model may be specified manually by de-selecting the Automatically download model checkbox and populating the Model file name box. Note that if a model is manually specified, it is still necessary to assign the correct Model Name corresponding to the parameter size. It is recommended to stop all streams before changing a running model. +A model may be specified manually by de-selecting the Automatically download model checkbox and populating the Model file name box. Note that if a model is manually specified, it is still necessary to assign the correct Name corresponding to the model parameter size. The lower portion of the panel has settings for detector configuration. Parameters on this section are assigned to each camera individually. 
-The yolox detector counts the number of frames during a one second interval in which at least one detection was observed, then normalizes that value by dividing by the number of frames. The value output from the detector algorithm can be adjusted using the Gain slider. Higher Gain slider values increase the sensitivity of the detector. +The Skip Frames spin box sets the number of frames to skip between model analysis runs. If the Skip Frames value is set to zero, every frame produced by the stream is sent through the detector. If the Skip Frames value is set to one, every other frame is sent through the detector, and so on. This setting can be used to reduce computational burden on the system. + +The yolox detector samples a number of frames as set by the Samples setting. The number of frames with positive detections required to trigger an alarm is set by the Limit slider. For example, if the Sample Size is 4 and the Limit slider is set to 2, at least two of the last four frames observed must have positive detections in order to trigger the alarm. There is also a Confidence slider that applies to the yolox model output. Higher confidence settings require stricter conformance to model expectations to qualify a positive detection. Lower confidence settings will increase the number of detections at the risk of false detections. It is necessary to assign at least one target to the panel in order to observe detections. The + button will launch a dialog box with a list of the available targets. Targets may be removed by using the - button or the delete key while the target is highlighted in the list. ---- +  
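The Samples/Limit behavior described above amounts to a sliding-window vote over recent frames. The following sketch illustrates the idea only; the class and method names are hypothetical, not the application's API:

```python
from collections import deque

class DetectionVote:
    """Trigger an alarm when at least `limit` of the last `samples`
    frames contained a positive detection (sliding-window vote)."""

    def __init__(self, samples=4, limit=2):
        self.window = deque(maxlen=samples)  # oldest results drop off automatically
        self.limit = limit

    def update(self, detected):
        """Record one frame's result and report the alarm state."""
        self.window.append(bool(detected))
        return sum(self.window) >= self.limit

# With samples=4 and limit=2, the second positive frame trips the alarm.
vote = DetectionVote(samples=4, limit=2)
results = [vote.update(d) for d in (False, True, False, True)]
```

Windowed voting like this smooths out single-frame false positives that a raw per-frame trigger would fire on.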
@@ -973,7 +973,7 @@ The control tab on the right of the application window may be toggled using the * ### Recommended Configuration -The application is optimized for performance on Ubuntu Linux. Apple Mac should have good performance as well due to similarity between the systems. The application will run on Windows, but performance will be lower. The difference is due primarily to the use of OpenGL for video rendering, which performs better on *nix style platforms. When using GPU, Ubuntu Linux NVIDIA drivers generally outperform those on other operating systems. +The application is optimized for performance on Ubuntu Linux, which will deliver the best overall performance, including yolo detection. The application will run on Windows or Mac, but the platforms are not officially supported, and lower performance should be expected. Linux offers additional advantages in network configuration as well. Linux can easily be configured to run a [DHCP server](https://ubuntu.com/server/docs/how-to-install-and-configure-isc-kea) to manage a separate network in which to isolate the cameras. A good way to configure the system is to use the wired network port of the host computer to manage the camera network, and use the wireless network connection of the host computer to connect with the wifi router and internet. The cameras will be isolated from the internet and will not increase network load on the wifi. @@ -987,15 +987,15 @@ Many camera substreams will have a distorted aspect ratio, which can be correcte * ### Performance Tuning -As the number of cameras and stream analytics added to the system increases, the host may become overwhelmed, causing cache buffer overflow resulting in dropped frames. If a camera stream is dropping frames, a yellow border will be displayed over the camera output. The Cache value for each camera is a good indicator of system performance, and reaches maximum capacity at 100. 
If a cache is overflowing, the load placed on the system by the camera can be reduced by lowering frame rate and to a lesser degree by lowering resolution. +As the number of cameras and stream analytics added to the system increases, the host may become overwhelmed, causing cache buffer overflow resulting in dropped frames. If a camera stream is dropping frames, a yellow border will be displayed over the camera output. The Cache value for each camera is a good indicator of system performance, and reaches the maximum capacity set on the Settings Panel (default 100). If a cache is overflowing, the load placed on the system by the camera can be reduced by lowering frame rate and to a lesser degree by lowering resolution. Using Skip Frames during yolox analysis can also greatly reduce compute load. -Lower powered CPUs with a small number of cores may benefit from hardware decoding. More powerful CPUs with a large core count will decode as easily as a hardware decoder. +Lower powered CPUs with a small number of cores, or systems running a large number of streams, may benefit from hardware decoding. More powerful CPUs with a large core count will work as well as a hardware decoder for smaller numbers of streams. -Stream analysis can potentially place significant burden on system resources. Motion detection and Audio Amplitude analysis have very little load. Audio Frequency analysis does present a moderate load which may be an issue for lower powered systems. 
Yolox is by far the most intensive load and will limit the number of streams it can process. A GPU or iGPU is recommended for Yolox, as a CPU only system will be able to process maybe one or two streams at the most. Intel Xe Graphics or later is recommended for iGPU. -If a system is intended for GPU use with yolox, it is advised to connect the monitor of the host computer to the motherboard output of the CPU integrated graphics chip. This has the effect of reducing memory transfers between CPU and GPU, which are a source of latency. +If a system is intended for GPU use with yolox, it is advised to connect the monitor of the host computer to the motherboard output of the CPU integrated graphics chip if possible. This has the effect of reducing memory transfers between CPU and GPU, which are a source of latency and may reduce throughput. -GPU cards with PCIe 4 compatability will outperform those designed for PCIe 3. Note that not all cards utilize the full 16 lanes of the bus. GPU cards with 16 lanes will outperform those with only 8 lanes. Memory transfer between CPU and GPU occurs on the PCIe bus and can be a bottleneck for the system. GPU memory requirements are minimal, the yolox small model (yolox_s) will consume less than 2 GB. Yolox will employ a large number of cuda cores, so more is better in this category. Ubutnu NVIDIA drivers will outperform those on other operating systems. +GPU cards with PCIe 4 compatibility will outperform those designed for PCIe 3. Note that not all cards utilize the full 16 lanes of the bus. GPU cards with 16 lanes will outperform those with only 8 lanes. Memory transfer between CPU and GPU occurs on the PCIe bus and can be a bottleneck for the system. Yolox will employ a large number of cuda cores, so more is better in this category. Ubuntu NVIDIA drivers will usually outperform those on other operating systems. For low powered systems with a small number of cameras, OpenVINO running on Intel iGPU is recommended. 
* ### Camera Compliance With Standards @@ -1377,6 +1377,55 @@ Exit session   +
+YOLOX - Apache +  + +--- + + YOLOX + Copyright (c) 2021-2022 Megvii Inc. All rights reserved. + + License: Apache + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + +## Cite YOLOX +If you use YOLOX in your research, please cite our work by using the following BibTeX entry: + +```latex + @article{yolox2021, + title={YOLOX: Exceeding YOLO Series in 2021}, + author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian}, + journal={arXiv preprint arXiv:2107.08430}, + year={2021} +} +``` +## In memory of Dr. Jian Sun +Without the guidance of [Dr. Sun Jian](http://www.jiansun.org/), YOLOX would not have been released and open sourced to the community. +The passing away of Dr. Sun Jian is a great loss to the Computer Vision field. We have added this section here to express our remembrance and condolences to our captain Dr. Sun. +It is hoped that every AI practitioner in the world will stick to the concept of "continuous innovation to expand cognitive boundaries, and extraordinary technology to achieve product value" and move forward all the way. + +
+没有孙剑博士的指导,YOLOX也不会问世并开源给社区使用。 +孙剑博士的离去是CV领域的一大损失,我们在此特别添加了这个部分来表达对我们的“船长”孙老师的纪念和哀思。 +希望世界上的每个AI从业者秉持着“持续创新拓展认知边界,非凡科技成就产品价值”的观念,一路向前。 + +--- + +  +
+
getopt-win.h - BSD-2-Clause-NETBSD   @@ -1481,52 +1530,3 @@ Exit session
-
-YOLOX - Apache -  - ---- - - YOLOX - Copyright (c) 2021-2022 Megvii Inc. All rights reserved. - - License: Apache - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. - -## Cite YOLOX -If you use YOLOX in your research, please cite our work by using the following BibTeX entry: - -```latex - @article{yolox2021, - title={YOLOX: Exceeding YOLO Series in 2021}, - author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian}, - journal={arXiv preprint arXiv:2107.08430}, - year={2021} -} -``` -## In memory of Dr. Jian Sun -Without the guidance of [Dr. Sun Jian](http://www.jiansun.org/), YOLOX would not have been released and open sourced to the community. -The passing away of Dr. Sun Jian is a great loss to the Computer Vision field. We have added this section here to express our remembrance and condolences to our captain Dr. Sun. -It is hoped that every AI practitioner in the world will stick to the concept of "continuous innovation to expand cognitive boundaries, and extraordinary technology to achieve product value" and move forward all the way. - -
-没有孙剑博士的指导,YOLOX也不会问世并开源给社区使用。 -孙剑博士的离去是CV领域的一大损失,我们在此特别添加了这个部分来表达对我们的“船长”孙老师的纪念和哀思。 -希望世界上的每个AI从业者秉持着“持续创新拓展认知边界,非凡科技成就产品价值”的观念,一路向前。 - ---- - -  -
- diff --git a/assets/images/media_tab.png b/assets/images/media_tab.png index 312c030..1a5e56b 100644 Binary files a/assets/images/media_tab.png and b/assets/images/media_tab.png differ diff --git a/assets/images/settings_panel.png b/assets/images/settings_panel.png index a3c7b8f..ceb4fa7 100644 Binary files a/assets/images/settings_panel.png and b/assets/images/settings_panel.png differ diff --git a/assets/images/yolox.png b/assets/images/yolox.png index e3179ea..d3b84d2 100644 Binary files a/assets/images/yolox.png and b/assets/images/yolox.png differ diff --git a/assets/scripts/build_pkgs.bat b/assets/scripts/build_pkgs.bat index 0dfdd22..ee77da7 100755 --- a/assets/scripts/build_pkgs.bat +++ b/assets/scripts/build_pkgs.bat @@ -12,6 +12,6 @@ cd .. cd onvif-gui python -m build cd .. -for /R libonvif\dist %%F in (*.whl) do pip install %%F -for /R libavio\dist %%F in (*.whl) do pip install %%F -for /R onvif-gui\dist %%F in (*.whl) do pip install %%F +for /R libonvif\dist %%F in (*.whl) do pip install "%%F" +for /R libavio\dist %%F in (*.whl) do pip install "%%F" +for /R onvif-gui\dist %%F in (*.whl) do pip install "%%F" diff --git a/assets/scripts/clean b/assets/scripts/clean index 11a818d..4ddead0 100755 --- a/assets/scripts/clean +++ b/assets/scripts/clean @@ -1,14 +1,34 @@ #!/bin/bash +find . -type f -name '._*' -delete cd libonvif -rm -R build -rm -R libonvif.egg-info -rm -R dist + +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=libonvif.egg-info +if [ -d "$FILE" ]; then + rm -R libonvif.egg-info +fi + cd ../libavio -rm -R build -rm -R avio.egg-info -rm -R dist +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=avio.egg-info +if [ -d "$FILE" ]; then + rm -R avio.egg-info +fi + cd ../onvif-gui -rm -R build -rm -R onvif_gui.egg-info -rm -R dist +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=onvif_gui.egg-info +if [ -d "$FILE" ]; then + rm -R onvif_gui.egg-info +fi + cd .. 
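The `clean` script revision above repeats the same guard-and-remove pattern for each artifact directory. That pattern could be factored into a helper function; this is only a sketch of the idiom, not part of the repository:

```shell
#!/bin/bash
# Remove a directory only if it exists, mirroring the guarded
# pattern used in the clean/compile scripts so that a missing
# directory does not produce an error from rm.
remove_dir() {
    if [ -d "$1" ]; then
        rm -R "$1"
    fi
}

remove_dir build
remove_dir libonvif.egg-info
```

The explicit existence check keeps the script's output quiet on a fresh checkout, where none of the build directories exist yet.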
diff --git a/assets/scripts/clean.bat b/assets/scripts/clean.bat index 5db4097..e0a2576 100755 --- a/assets/scripts/clean.bat +++ b/assets/scripts/clean.bat @@ -1,13 +1,22 @@ cd libonvif -rmdir /q /s build -rmdir /q /s libonvif.egg-info -rmdir /q /s dist +if exist build\ ( + rmdir /s /q build +) +if exist libonvif.egg-info\ ( + rmdir /s /q libonvif.egg-info +) cd ../libavio -rmdir /q /s build -rmdir /q /s avio.egg-info -rmdir /q /s dist +if exist build\ ( + rmdir /s /q build +) +if exist avio.egg-info\ ( + rmdir /s /q avio.egg-info +) cd ../onvif-gui -rmdir /q /s build -rmdir /q /s onvif_gui.egg-info -rmdir /q /s dist +if exist build\ ( + rmdir /s /q build +) +if exist onvif_gui.egg-info\ ( + rmdir /s /q onvif_gui.egg-info +) cd .. diff --git a/assets/scripts/compile b/assets/scripts/compile index 527ee81..9678c9d 100755 --- a/assets/scripts/compile +++ b/assets/scripts/compile @@ -1,14 +1,36 @@ #!/bin/bash + cd libonvif -rm -R build -rm -R libonvif.egg-info + +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=libonvif.egg-info +if [ -d "$FILE" ]; then + rm -R libonvif.egg-info +fi pip install -v . + cd ../libavio -rm -R build -rm -R avio.egg-info +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=avio.egg-info +if [ -d "$FILE" ]; then + rm -R avio.egg-info +fi pip install -v . + cd ../onvif-gui -rm -R build -rm -R onvif_gui.egg-info +FILE=build +if [ -d "$FILE" ]; then + rm -R build +fi +FILE=onvif_gui.egg-info +if [ -d "$FILE" ]; then + rm -R onvif_gui.egg-info +fi pip install . cd .. diff --git a/assets/scripts/compile.bat b/assets/scripts/compile.bat index 463a72d..1e1c0e4 100755 --- a/assets/scripts/compile.bat +++ b/assets/scripts/compile.bat @@ -1,13 +1,25 @@ cd libonvif -rmdir /s /q build -rmdir /s /q libonvif.egg-info +if exist build\ ( + rmdir /s /q build +) +if exist libonvif.egg-info\ ( + rmdir /s /q libonvif.egg-info +) pip install -v . 
cd ../libavio -rmdir /s /q build -rmdir /s /q avio.egg-info +if exist build\ ( + rmdir /s /q build +) +if exist avio.egg-info\ ( + rmdir /s /q avio.egg-info +) pip install -v . cd ../onvif-gui -rmdir /s /q build -rmdir /s /q onvif_gui.egg-info +if exist build\ ( + rmdir /s /q build +) +if exist onvif_gui.egg-info\ ( + rmdir /s /q onvif_gui.egg-info +) pip install . cd .. diff --git a/libavio b/libavio index d5683af..dd249be 160000 --- a/libavio +++ b/libavio @@ -1 +1 @@ -Subproject commit d5683af6031ea442b147483c3020e875c0ed56e0 +Subproject commit dd249be9bf2600d384ca58b3120eff1145250256 diff --git a/libonvif/CMakeLists.txt b/libonvif/CMakeLists.txt index 5fd9a0c..7a2a840 100644 --- a/libonvif/CMakeLists.txt +++ b/libonvif/CMakeLists.txt @@ -87,8 +87,8 @@ IF (NOT WITHOUT_PYTHON) ${LIBXML2_LIBRARIES} ) - message("-- LIBXML2_INCLUDE_DIRS: "${LIBXML2_INCLUDE_DIRS}) - message("-- LIBXML2_LIBRARIES: "${LIBXML2_LIBRARIES}) + message("-- LIBXML2_INCLUDE_DIRS: " ${LIBXML2_INCLUDE_DIRS}) + message("-- LIBXML2_LIBRARIES: " ${LIBXML2_LIBRARIES}) target_include_directories(pyonvif PUBLIC include @@ -96,3 +96,18 @@ IF (NOT WITHOUT_PYTHON) ) endif() + +IF (BUILD_TEST) + add_executable(onvif-test + test/onvif-test.cpp + ) + + target_link_libraries(onvif-test PRIVATE + libonvif + ) + + target_include_directories(onvif-test PUBLIC + include + ) + +endif() \ No newline at end of file diff --git a/libonvif/include/onvif.h b/libonvif/include/onvif.h index 6c19fe5..3110f71 100644 --- a/libonvif/include/onvif.h +++ b/libonvif/include/onvif.h @@ -151,6 +151,8 @@ struct OnvifData { bool analyze_audio; int desired_aspect; bool hidden; + int cache_max; + bool sync_audio; }; struct OnvifSession { diff --git a/libonvif/include/onvif_data.h b/libonvif/include/onvif_data.h index 1f33171..57c79ea 100644 --- a/libonvif/include/onvif_data.h +++ b/libonvif/include/onvif_data.h @@ -477,6 +477,7 @@ class Data Data profile(*this); getProfileToken(profile, index); if (profile.profile().length() == 0) + 
break; getStreamUri(profile.data); profiles.push_back(profile); @@ -677,6 +678,21 @@ class Data bool audio_multicast_auto_start() { return data->audio_multicast_auto_start; } //GUI INTERFACE + + /* + Please note that this class is intended to be self contained within the C++ domain. It will not + behave as expected if the calling python program attempts to extend the functionality of the + class by adding member variables in the python domain. This was done so that the profile could + be copied or filled with data by the C++ class exclusively, removing the need for additional + synchronization code in the python domain. + + The effect of this decision is that GUI persistence for profiles must be implemented in this + C++ class directly. The member variables are added to the OnvifData structure in onvif.h and + the copyData and clearData functions in onvif.c. GUI persistence is handled by passing setSetting + and getSetting from the calling python program for writing variable states to disk. + */ + + bool getDisableVideo() { std::stringstream str; str << serial_number() << "/" << profile() << "/DisableVideo"; @@ -721,6 +737,17 @@ class Data str << serial_number() << "/" << profile() << "/AnalyzeAudio"; setSetting(str.str(), arg ? "1" : "0"); } + bool getSyncAudio() { + std::stringstream str; + str << serial_number() << "/" << profile() << "/SyncAudio"; + return getSetting(str.str(), "0") == "1"; + } + void setSyncAudio(bool arg) { + data->sync_audio = arg; + std::stringstream str; + str << serial_number() << "/" << profile() << "/SyncAudio"; + setSetting(str.str(), arg ? 
"1" : "0"); + } bool getHidden() { std::stringstream str; str << serial_number() << "/" << profile() << "/Hidden"; @@ -748,6 +775,21 @@ class Data str_val << arg; setSetting(str_key.str(), str_val.str()); } + int getCacheMax() { + std::stringstream str_key, str_val; + str_key << serial_number() << "/" << profile() << "/CacheMax"; + str_val << getSetting(str_key.str(), "100"); + int result = 100; + str_val >> result; + return result; + } + void setCacheMax(int arg) { + data->cache_max = arg; + std::stringstream str_key, str_val; + str_key << serial_number() << "/" << profile() << "/CacheMax"; + str_val << arg; + setSetting(str_key.str(), str_val.str()); + } }; diff --git a/libonvif/pyproject.toml b/libonvif/pyproject.toml index e8ac330..de520f0 100644 --- a/libonvif/pyproject.toml +++ b/libonvif/pyproject.toml @@ -25,7 +25,7 @@ build-backend = "setuptools.build_meta" [project] name = "libonvif" -version = "3.1.1" +version = "3.2.0" authors = [ { name="Stephen Rhodes", email="sr99622@gmail.com" }, ] diff --git a/libonvif/setup.py b/libonvif/setup.py index 5b47533..97b7d04 100644 --- a/libonvif/setup.py +++ b/libonvif/setup.py @@ -1,19 +1,21 @@ #******************************************************************************* # libonvif/setup.py # -# Copyright (c) 2024 Stephen Rhodes +# Copyright (c) 2023, 2024 Stephen Rhodes # -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. 
# -# http://www.apache.org/licenses/LICENSE-2.0 +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. # -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. +# You should have received a copy of the GNU General Public License along +# with this program; if not, write to the Free Software Foundation, Inc., +# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. # #******************************************************************************/ @@ -26,7 +28,7 @@ from setuptools.command.build_ext import build_ext PKG_NAME = "libonvif" -VERSION = "3.1.1" +VERSION = "3.2.0" class CMakeExtension(Extension): def __init__(self, name, sourcedir=""): diff --git a/libonvif/src/onvif.c b/libonvif/src/onvif.c index 98974de..1740d80 100644 --- a/libonvif/src/onvif.c +++ b/libonvif/src/onvif.c @@ -1,7 +1,7 @@ /******************************************************************************* * onvif.c * -* copyright 2018, 2023 Stephen Rhodes +* copyright 2018, 2023, 2024 Stephen Rhodes * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -101,12 +101,12 @@ int getNetworkInterfaces(struct OnvifData *onvif_data) { xmlNodeSetPtr nodeset; xmlChar *enabled = NULL; xmlXPathObjectPtr xml_result = getNodeSet(reply, xpath); + xmlDocPtr temp_doc = xmlNewDoc(BAD_CAST "1.0"); if (xml_result) { nodeset = xml_result->nodesetval; for (int i=0; i < nodeset->nodeNr; i++) { xmlNodePtr cur = nodeset->nodeTab[i]; xmlChar *token = xmlGetProp(cur, BAD_CAST "token"); - xmlDocPtr
temp_doc = xmlNewDoc(BAD_CAST "1.0"); xmlDocSetRootElement(temp_doc, cur); bool dhcp = false; @@ -146,13 +146,15 @@ int getNetworkInterfaces(struct OnvifData *onvif_data) { i = nodeset->nodeNr; } } - xmlFreeDoc(temp_doc); + + xmlFree(token); } + xmlXPathFreeObject(xml_result); } + xmlFreeDoc(temp_doc); if (enabled != NULL) { xmlFree(enabled); } - xmlXPathFreeObject(xml_result); result = checkForXmlErrorMsg(reply, onvif_data->last_error); if (result < 0) @@ -163,6 +165,7 @@ int getNetworkInterfaces(struct OnvifData *onvif_data) { strcpy(onvif_data->last_error, "getNetworkInterfaces - No XML reply"); } return result; + return 0; } int setNetworkInterfaces(struct OnvifData *onvif_data) { @@ -918,8 +921,6 @@ for (int i=0; i < nodeset->nodeNr; i++) { xmlNodePtr cur = nodeset->nodeTab[i]->children; while(cur != NULL) { - //printf("iterating encoders %d : %s\n", i, cur->content); - //printf("strlen: %d\n", strlen(cur->content) ); strcpy(onvif_data->audio_encoders[i], cur->content); cur = cur->next; } @@ -940,7 +941,6 @@ while(cur != NULL) { item = xmlNodeListGetString(reply, cur->xmlChildrenNode, 1); if (item) { - //printf("iterating bitrates: %d %d %s\n", i, j, item); onvif_data->audio_bitrates[i][j] = atoi(item); j++; } @@ -961,7 +961,6 @@ while(cur != NULL) { item = xmlNodeListGetString(reply, cur->xmlChildrenNode, 1); if (item) { - //printf("iterating sample rates: %s %d %s\n", onvif_data->xaddrs, i, item); onvif_data->audio_sample_rates[i][j] = atoi(item); j++; } @@ -1058,7 +1057,6 @@ } int setAudioEncoderConfiguration(struct OnvifData *onvif_data) { - //printf("setAudioEncoderConfiguration: %s\n", onvif_data->audioEncoderConfigurationToken); memset(onvif_data->last_error, 0,
sizeof(onvif_data->last_error)); int result = 0; @@ -1126,9 +1124,6 @@ int getProfile(struct OnvifData *onvif_data) { memset(onvif_data->audioEncoderConfigurationToken, 0, sizeof(onvif_data->audioEncoderConfigurationToken)); memset(onvif_data->audioSourceConfigurationToken, 0, sizeof(onvif_data->audioSourceConfigurationToken)); memset(onvif_data->last_error, 0, sizeof(onvif_data->last_error)); - //memset(onvif_data->audio_encoding, 0, sizeof(onvif_data->audio_encoding)); - //memset(onvif_data->audio_source_token, 0, sizeof(onvif_data->audio_source_token)); - //memset(onvif_data->audio_name, 0, sizeof(onvif_data->audio_name)); int result = 0; xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0"); @@ -1149,55 +1144,11 @@ int getProfile(struct OnvifData *onvif_data) { xmlChar *xpath; - /* - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration//tt:Resolution//tt:Width"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) - onvif_data->width = atoi(temp_buf); - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration//tt:Resolution//tt:Height"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) - onvif_data->height = atoi(temp_buf); - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration//tt:RateControl//tt:FrameRateLimit"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) { - onvif_data->frame_rate = atoi(temp_buf); - } - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration//tt:RateControl//tt:BitrateLimit"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) { - onvif_data->bitrate = atoi(temp_buf); - } else { - onvif_data->bitrate = 0; - } - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration//tt:H264//tt:GovLength"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) { - onvif_data->gov_length = atoi(temp_buf); - } - */ - xpath = BAD_CAST 
"//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioEncoderConfiguration"; getNodeAttribute(reply, xpath, BAD_CAST "token", onvif_data->audioEncoderConfigurationToken, 128); xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioSourceConfiguration//tt:SourceToken"; getXmlValue(reply, xpath, onvif_data->audioSourceConfigurationToken, 128); - /* - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioEncoderConfiguration//tt:Encoding"; - getXmlValue(reply, xpath, onvif_data->audio_encoding, 128); - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioEncoderConfiguration//tt:Name"; - getXmlValue(reply, xpath, onvif_data->audio_name, 128); - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioEncoderConfiguration//tt:Bitrate"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) { - onvif_data->audio_bitrate = atoi(temp_buf); - } - else { - onvif_data->audio_bitrate = 0; - } - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:AudioEncoderConfiguration//tt:SampleRate"; - if (getXmlValue(reply, xpath, temp_buf, 128) == 0) { - onvif_data->audio_sample_rate = atoi(temp_buf); - } - else { - onvif_data->audio_sample_rate = 0; - } - */ - xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoEncoderConfiguration"; getNodeAttribute(reply, xpath, BAD_CAST "token", onvif_data->videoEncoderConfigurationToken, 128); xpath = BAD_CAST "//s:Body//trt:GetProfileResponse//trt:Profile//tt:VideoSourceConfiguration//tt:SourceToken"; @@ -3239,12 +3190,9 @@ void clearData(struct OnvifData *onvif_data) { onvif_data->host[i] = '\0'; onvif_data->serial_number[i] = '\0'; onvif_data->audio_encoding[i] = '\0'; - //onvif_data->audio_source_token[i] = '\0'; onvif_data->audio_name[i] = '\0'; onvif_data->audioEncoderConfigurationToken[i] = '\0'; onvif_data->audioSourceConfigurationToken[i] = '\0'; - //onvif_data->conf_audio_name[i] = '\0'; - //onvif_data->conf_audio_encoding[i] = 
'\0'; onvif_data->audio_session_timeout[i] = '\0'; onvif_data->audio_multicast_type[i] = '\0'; onvif_data->audio_multicast_address[i] = '\0'; @@ -3256,7 +3204,6 @@ void clearData(struct OnvifData *onvif_data) { onvif_data->camera_name[i] = '\0'; onvif_data->host_name[i] = '\0'; } - //onvif_data->extended_data_filled = false; onvif_data->gov_length_min = 0; onvif_data->gov_length_max = 0; onvif_data->frame_rate_min = 0; @@ -3290,19 +3237,13 @@ void clearData(struct OnvifData *onvif_data) { onvif_data->time_offset = 0; onvif_data->event_listen_port = 0; onvif_data->guaranteed_frame_rate = false; - //onvif_data->conf_width = 0; - //onvif_data->conf_height = 0; - //onvif_data->conf_frame_rate_limit = 0; onvif_data->encoding_interval = 0; - //onvif_data->conf_bitrate_limit = 0; onvif_data->datetimetype = '\0'; onvif_data->dst = false; onvif_data->ntp_dhcp = false; onvif_data->audio_bitrate = 0; onvif_data->audio_sample_rate = 0; onvif_data->audio_use_count = 0; - //onvif_data->conf_audio_bitrate = 0; - //onvif_data->conf_audio_sample_rate = 0; onvif_data->audio_multicast_port = 0; onvif_data->audio_multicast_TTL = 0; onvif_data->audio_multicast_auto_start = false; @@ -3312,6 +3253,8 @@ void clearData(struct OnvifData *onvif_data) { onvif_data->analyze_audio = false; onvif_data->desired_aspect = 0; onvif_data->hidden = false; + onvif_data->cache_max = 100; + onvif_data->sync_audio = false; } void copyData(struct OnvifData *dst, struct OnvifData *src) { @@ -3356,12 +3299,9 @@ void copyData(struct OnvifData *dst, struct OnvifData *src) { dst->host[i] = src->host[i]; dst->serial_number[i] = src->serial_number[i]; dst->audio_encoding[i] = src->audio_encoding[i]; - //dst->audio_source_token[i] = src->audio_source_token[i]; dst->audio_name[i] = src->audio_name[i]; dst->audioEncoderConfigurationToken[i] = src->audioEncoderConfigurationToken[i]; dst->audioSourceConfigurationToken[i] = src->audioSourceConfigurationToken[i]; - //dst->conf_audio_name[i] = src->conf_audio_name[i]; - 
//dst->conf_audio_encoding[i] = src->conf_audio_encoding[i]; dst->audio_session_timeout[i] = src->audio_session_timeout[i]; dst->audio_multicast_type[i] = src->audio_multicast_type[i]; dst->audio_multicast_address[i] = src->audio_multicast_address[i]; @@ -3374,7 +3314,6 @@ void copyData(struct OnvifData *dst, struct OnvifData *src) { dst->host_name[i] = src->host_name[i]; dst->last_error[i] = src->last_error[i]; } - //dst->extended_data_filled = src->extended_data_filled; dst->gov_length_min = src->gov_length_min; dst->gov_length_max = src->gov_length_max; dst->frame_rate_min = src->frame_rate_min; @@ -3408,19 +3347,13 @@ void copyData(struct OnvifData *dst, struct OnvifData *src) { dst->time_offset = src->time_offset; dst->event_listen_port = src->event_listen_port; dst->guaranteed_frame_rate = src->guaranteed_frame_rate; - //dst->conf_width = src->conf_width; - //dst->conf_height = src->conf_height; - //dst->conf_frame_rate_limit = src->conf_frame_rate_limit; dst->encoding_interval = src->encoding_interval; - //dst->conf_bitrate_limit = src->conf_bitrate_limit; dst->datetimetype = src->datetimetype; dst->dst = src->dst; dst->ntp_dhcp = src->ntp_dhcp; dst->audio_bitrate = src->audio_bitrate; dst->audio_sample_rate = src->audio_sample_rate; dst->audio_use_count = src->audio_use_count; - //dst->conf_audio_bitrate = src->conf_audio_bitrate; - //dst->conf_audio_sample_rate = src->conf_audio_sample_rate; dst->audio_multicast_port = src->audio_multicast_port; dst->audio_multicast_TTL = src->audio_multicast_TTL; dst->audio_multicast_auto_start = src->audio_multicast_auto_start; @@ -3430,6 +3363,8 @@ void copyData(struct OnvifData *dst, struct OnvifData *src) { dst->analyze_audio = src->analyze_audio; dst->desired_aspect = src->desired_aspect; dst->hidden = src->hidden; + dst->cache_max = src->cache_max; + dst->sync_audio = src->sync_audio; } void initializeSession(struct OnvifSession *onvif_session) { diff --git a/libonvif/src/onvif.cpp b/libonvif/src/onvif.cpp index 
441a4fb..b719144 100644 --- a/libonvif/src/onvif.cpp +++ b/libonvif/src/onvif.cpp @@ -1,7 +1,7 @@ /******************************************************************************* * onvif.cpp * -* copyright 2023 Stephen Rhodes +* copyright 2023, 2024 Stephen Rhodes * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -63,10 +63,6 @@ PYBIND11_MODULE(libonvif, m) .def("last_error", &Data::last_error) .def("profile", &Data::profile) .def("setProfile", &Data::setProfile) - // for some reason, windows doesn't like these - //.def("setHost", &Data::setHost) - //.def("syncProfile", &Data::syncProfile) - //.def("clearLastError", &Data::clearLastError) .def("resolutions_buf", &Data::resolutions_buf) .def("width", &Data::width) .def("setWidth", &Data::setWidth) @@ -164,8 +160,12 @@ .def("setAnalyzeAudio", &Data::setAnalyzeAudio) .def("getDesiredAspect", &Data::getDesiredAspect) .def("setDesiredAspect", &Data::setDesiredAspect) + .def("getSyncAudio", &Data::getSyncAudio) + .def("setSyncAudio", &Data::setSyncAudio) .def("getHidden", &Data::getHidden) .def("setHidden", &Data::setHidden) + .def("getCacheMax", &Data::getCacheMax) + .def("setCacheMax", &Data::setCacheMax) .def(py::self == py::self) .def_readwrite("profiles", &Data::profiles) .def_readwrite("displayProfile", &Data::displayProfile) diff --git a/libonvif/test/onvif-test.cpp b/libonvif/test/onvif-test.cpp new file mode 100644 index 0000000..8716f09 --- /dev/null +++ b/libonvif/test/onvif-test.cpp @@ -0,0 +1,152 @@ +#include <iostream> +#include <string> +#include <cstring> +#include <cstdio> +#include <cstdlib> +#include <vector> +#include <sstream> +#include "onvif.h" +#ifdef _WIN32 +#include "getopt-win.h" +#else +#include <getopt.h> +#endif + +int main(int argc, char **argv) +{ + std::cout << "Looking for cameras on the network..."
<< std::endl; + + struct OnvifSession *onvif_session = (struct OnvifSession*)calloc(sizeof(struct OnvifSession), 1); + + getActiveNetworkInterfaces(onvif_session); + for (int i = 0; i < 16; i++) { + std::cout << "interface: " << onvif_session->active_network_interfaces[i] << std::endl; + } + + std::string delimiter = " - "; + std::string thingy = onvif_session->active_network_interfaces[0]; + std::string token = thingy.substr(0, thingy.find(delimiter)); + std::cout << "---" << token << "---" << std::endl; + strcpy(onvif_session->preferred_network_address, "10.1.1.1"); + + struct OnvifData *tmp_onvif_data = (struct OnvifData*)calloc(sizeof(struct OnvifData), 1); + struct OnvifData *onvif_data = (struct OnvifData*)calloc(sizeof(struct OnvifData), 1); + + initializeSession(onvif_session); + int n = broadcast(onvif_session); + std::cout << "Found " << n << " cameras" << std::endl; + for (int i = 0; i < n; i++) { + if (prepareOnvifData(i, onvif_session, tmp_onvif_data)) { + char host[128]; + extractHost(tmp_onvif_data->xaddrs, host); + getHostname(tmp_onvif_data); + printf("%s %s(%s)\n",host, + tmp_onvif_data->host_name, + tmp_onvif_data->camera_name); + + if (!strcmp(host, "10.1.1.67")) { + std::cout << "FOUND HOST " << tmp_onvif_data->camera_name << std::endl; + copyData(onvif_data, tmp_onvif_data); + } + } + else { + std::cout << "found invalid xaddrs in device response" << std::endl; + } + } + + closeSession(onvif_session); + free(onvif_session); + free(tmp_onvif_data); + + std::cout << "subject camera - " << onvif_data->camera_name << std::endl; + + strcpy(onvif_data->username, "admin"); + strcpy(onvif_data->password, "admin123"); + if (getDeviceInformation(onvif_data)) + std::cout << "getDeviceInformation failure " << onvif_data->last_error << std::endl; + + if (getCapabilities(onvif_data)) + std::cout << "getCapabilities failure " << onvif_data->last_error << std::endl; + + if (getProfileToken(onvif_data, 0)) + std::cout << "getProfileToken failure " <<
onvif_data->last_error << std::endl; + + if (getProfile(onvif_data)) + std::cout << "getProfile failure " << onvif_data->last_error << std::endl; + + if (setSystemDateAndTime(onvif_data)) + std::cout << "setSystemDateAndTime failure " << onvif_data->last_error << std::endl; + + if (getStreamUri(onvif_data)) + std::cout << "getStreamUri failure " << onvif_data->last_error << std::endl; + + std::cout << onvif_data->stream_uri << std::endl; + + if(getVideoEncoderConfiguration(onvif_data)) + std::cout << "getVideoEncoderConfiguration failure " << onvif_data->last_error << std::endl; + + std::cout << " Width: " << onvif_data->width << "\n"; + std::cout << " Height: " << onvif_data->height << "\n"; + std::cout << " Frame Rate: " << onvif_data->frame_rate << "\n"; + std::cout << " Gov Length: " << onvif_data->gov_length << "\n"; + std::cout << " Bitrate: " << onvif_data->bitrate << "\n" << std::endl; + + if (getOptions(onvif_data)) + std::cout << "getOptions failure " << onvif_data->last_error << std::endl; + + std::cout << " Min Brightness: " << onvif_data->brightness_min << "\n"; + std::cout << " Max Brightness: " << onvif_data->brightness_max << "\n"; + std::cout << " Min ColorSaturation: " << onvif_data->saturation_min << "\n"; + std::cout << " Max ColorSaturation: " << onvif_data->saturation_max << "\n"; + std::cout << " Min Contrast: " << onvif_data->contrast_min << "\n"; + std::cout << " Max Contrast: " << onvif_data->contrast_max << "\n"; + std::cout << " Min Sharpness: " << onvif_data->sharpness_min << "\n"; + std::cout << " Max Sharpness: " << onvif_data->sharpness_max << "\n" << std::endl; + + if (getImagingSettings(onvif_data)) + std::cout << "getImagingSettings failure" << onvif_data->last_error << std::endl; + + std::cout << " Brightness: " << onvif_data->brightness << "\n"; + std::cout << " Contrast: " << onvif_data->contrast << "\n"; + std::cout << " Saturation: " << onvif_data->saturation << "\n"; + std::cout << " Sharpness: " << onvif_data->sharpness << 
"\n" << std::endl; + + if (getTimeOffset(onvif_data)) + std::cout << "getTimeOffset failure " << onvif_data->last_error << std::endl; + + std::cout << " Time Offset: " << onvif_data->time_offset << " seconds" << "\n"; + std::cout << " Timezone: " << onvif_data->timezone << "\n"; + std::cout << " DST: " << (onvif_data->dst ? "Yes" : "No") << "\n"; + std::cout << " Time Set By: " << ((onvif_data->datetimetype == 'M') ? "Manual" : "NTP") << "\n"; + + if (getNetworkInterfaces(onvif_data)) + std::cout << "getNetworkInterfaces failure " << onvif_data->last_error << std::endl; + if (getNetworkDefaultGateway(onvif_data)) + std::cout << "getNetworkDefaultGateway failure " << onvif_data->last_error << std::endl; + if (getDNS(onvif_data)) + std::cout << "getDNS failure " << onvif_data->last_error << std::endl; + + std::cout << " IP Address: " << onvif_data->ip_address_buf << "\n"; + std::cout << " Gateway: " << onvif_data->default_gateway_buf << "\n"; + std::cout << " DNS: " << onvif_data->dns_buf << "\n"; + std::cout << " DHCP: " << (onvif_data->dhcp_enabled ? 
"YES" : "NO") << "\n" << std::endl; + + if (getVideoEncoderConfigurationOptions(onvif_data)) + std::cout << "getVideoEncoderConfigurationOptions failure " << onvif_data->last_error << std::endl; + + std::cout << " Available Resolutions" << std::endl; + for (int i=0; i<16; i++) { + if (strlen(onvif_data->resolutions_buf[i])) + std::cout << " " << onvif_data->resolutions_buf[i] << std::endl; + } + + std::cout << " Min Gov Length: " << onvif_data->gov_length_min << "\n"; + std::cout << " Max Gov Length: " << onvif_data->gov_length_max << "\n"; + std::cout << " Min Frame Rate: " << onvif_data->frame_rate_min << "\n"; + std::cout << " Max Frame Rate: " << onvif_data->frame_rate_max << "\n"; + std::cout << " Min Bit Rate: " << onvif_data->bitrate_min << "\n"; + std::cout << " Max Bit Rate: " << onvif_data->bitrate_max << "\n" << std::endl; + + + free(onvif_data); +} diff --git a/onvif-gui/gui/components/directoryselector.py b/onvif-gui/gui/components/directoryselector.py index 36af17b..165e7f6 100644 --- a/onvif-gui/gui/components/directoryselector.py +++ b/onvif-gui/gui/components/directoryselector.py @@ -34,7 +34,7 @@ def __init__(self, mw, name, label, location=""): self.txtDirectory = QLineEdit() self.txtDirectory.setText(self.mw.settings.value(self.directoryKey, location)) - self.txtDirectory.textEdited.connect(self.txtDirectoryChanged) + self.txtDirectory.setEnabled(False) self.btnSelect = QPushButton("...") self.btnSelect.clicked.connect(self.btnSelectClicked) lblSelect = QLabel(label) diff --git a/onvif-gui/gui/components/target.py b/onvif-gui/gui/components/target.py index 75166c1..a2ef7b0 100644 --- a/onvif-gui/gui/components/target.py +++ b/onvif-gui/gui/components/target.py @@ -22,6 +22,7 @@ from PyQt6.QtCore import Qt, pyqtSignal, QObject from .warningbar import WarningBar, Indicator from gui.onvif.datastructures import MediaSource +from loguru import logger class Target(QListWidgetItem): def __init__(self, name, id): @@ -116,9 +117,11 @@ def __init__(self, 
mw, module): self.barLevel = WarningBar() self.indAlarm = Indicator(self.mw) self.sldGain = QSlider(Qt.Orientation.Vertical) - self.sldGain.setMinimum(1) + self.sldGain.setMinimum(0) self.sldGain.setMaximum(100) + self.sldGain.setValue(0) self.sldGain.valueChanged.connect(self.sldGainValueChanged) + self.lblGain = QLabel("0") pnlTargets = QWidget() pnlTargets.setMaximumWidth(200) @@ -134,8 +137,9 @@ def __init__(self, mw, module): lytMain = QGridLayout(self) lytMain.addWidget(pnlTargets, 1, 0, 2, 1) + lytMain.addWidget(self.lblGain, 1, 1, 1, 1, Qt.AlignmentFlag.AlignHCenter) lytMain.addWidget(self.sldGain, 2, 1, 1, 1, Qt.AlignmentFlag.AlignHCenter) - lytMain.addWidget(QLabel("Gain"), 3, 1, 1, 1, Qt.AlignmentFlag.AlignCenter) + lytMain.addWidget(QLabel("Limit"), 3, 1, 1, 1, Qt.AlignmentFlag.AlignHCenter) lytMain.addWidget(self.indAlarm, 1, 2, 1, 1) lytMain.addWidget(self.barLevel, 2, 2, 1, 1) lytMain.addWidget(QLabel(), 2, 3, 1, 1) @@ -146,45 +150,51 @@ def btnAddTargetClicked(self): self.dlgTarget.show() def btnDeleteTargetClicked(self): - item = self.lstTargets.currentItem() - if item: - print(item.text()) - ret = QMessageBox.warning(self, "Delete Target: " + item.text(), "You are about to delete target\n" - "Are you sure you want to continue?", - QMessageBox.StandardButton.Ok | QMessageBox.StandardButton.Cancel) - if ret != QMessageBox.StandardButton.Ok: - return - self.lstTargets.takeItem(self.lstTargets.row(item)) - - match self.mw.videoConfigure.source: - case MediaSource.CAMERA: - camera = self.mw.cameraPanel.getCurrentCamera() - if camera: - camera.videoModelSettings.setTargets(self.lstTargets.toString()) - case MediaSource.FILE: - self.mw.filePanel.videoModelSettings.setTargets(self.lstTargets.toString()) + try: + if item := self.lstTargets.currentItem(): + ret = QMessageBox.warning(self, "Delete Target: " + item.text(), "You are about to delete target\n" + "Are you sure you want to continue?", + QMessageBox.StandardButton.Ok | 
                QMessageBox.StandardButton.Cancel)
+            if ret != QMessageBox.StandardButton.Ok:
+                return
+            self.lstTargets.takeItem(self.lstTargets.row(item))
+
+            match self.mw.videoConfigure.source:
+                case MediaSource.CAMERA:
+                    if camera := self.mw.cameraPanel.getCurrentCamera():
+                        if camera.videoModelSettings:
+                            camera.videoModelSettings.setTargets(self.lstTargets.toString())
+                case MediaSource.FILE:
+                    if self.mw.filePanel.videoModelSettings:
+                        self.mw.filePanel.videoModelSettings.setTargets(self.lstTargets.toString())
+        except Exception as ex:
+            logger.error(ex)
 
     def dlgListAccepted(self):
         item = self.dlgTarget.list.item(self.dlgTarget.list.currentRow())
         self.onAddItemDoubleClicked(item)
 
     def onAddItemDoubleClicked(self, item):
-        target = Target(item.text(), item.id)
-        found = False
-        for i in range(self.lstTargets.count()):
-            if target.text() == self.lstTargets.item(i).text():
-                found = True
-                break
-        if not found:
-            self.lstTargets.addItem(target)
-
-        match self.mw.videoConfigure.source:
-            case MediaSource.CAMERA:
-                camera = self.mw.cameraPanel.getCurrentCamera()
-                if camera:
-                    camera.videoModelSettings.setTargets(self.lstTargets.toString())
-            case MediaSource.FILE:
-                self.mw.filePanel.videoModelSettings.setTargets(self.lstTargets.toString())
+        try:
+            target = Target(item.text(), item.id)
+            found = False
+            for i in range(self.lstTargets.count()):
+                if target.text() == self.lstTargets.item(i).text():
+                    found = True
+                    break
+            if not found:
+                self.lstTargets.addItem(target)
+
+            match self.mw.videoConfigure.source:
+                case MediaSource.CAMERA:
+                    if camera := self.mw.cameraPanel.getCurrentCamera():
+                        if camera.videoModelSettings:
+                            camera.videoModelSettings.setTargets(self.lstTargets.toString())
+                case MediaSource.FILE:
+                    if self.mw.filePanel.videoModelSettings:
+                        self.mw.filePanel.videoModelSettings.setTargets(self.lstTargets.toString())
+        except Exception as ex:
+            logger.error(ex)
 
     def setTargets(self, targets):
         while self.lstTargets.count() > 0:
@@ -201,19 +211,28 @@ def getTargets(self):
         return output
 
     def sldGainValueChanged(self, value):
-        match self.mw.videoConfigure.source:
-            case MediaSource.CAMERA:
-                camera = self.mw.cameraPanel.getCurrentCamera()
-                if camera:
-                    camera.videoModelSettings.setModelOutputGain(value)
-            case MediaSource.FILE:
-                self.mw.filePanel.videoModelSettings.setModelOutputGain(value)
+        try:
+            self.lblGain.setText(f'{value}')
+            match self.mw.videoConfigure.source:
+                case MediaSource.CAMERA:
+                    if camera := self.mw.cameraPanel.getCurrentCamera():
+                        if camera.videoModelSettings:
+                            camera.videoModelSettings.setModelOutputLimit(value)
+                case MediaSource.FILE:
+                    if self.mw.filePanel.videoModelSettings:
+                        self.mw.filePanel.videoModelSettings.setModelOutputLimit(value)
+        except Exception as ex:
+            logger.error(ex)
 
     def chkShowBoxesStateChanged(self, state):
-        match self.mw.videoConfigure.source:
-            case MediaSource.CAMERA:
-                camera = self.mw.cameraPanel.getCurrentCamera()
-                if camera:
-                    camera.videoModelSettings.setModelShowBoxes(bool(state))
-            case MediaSource.FILE:
-                self.mw.filePanel.videoModelSettings.setModelShowBoxes(bool(state))
+        try:
+            match self.mw.videoConfigure.source:
+                case MediaSource.CAMERA:
+                    if camera := self.mw.cameraPanel.getCurrentCamera():
+                        if camera.videoModelSettings:
+                            camera.videoModelSettings.setModelShowBoxes(bool(state))
+                case MediaSource.FILE:
+                    if self.mw.filePanel.videoModelSettings:
+                        self.mw.filePanel.videoModelSettings.setModelShowBoxes(bool(state))
+        except Exception as ex:
+            logger.error(ex)
diff --git a/onvif-gui/gui/components/thresholdslider.py b/onvif-gui/gui/components/thresholdslider.py
index fb58537..432b93b 100644
--- a/onvif-gui/gui/components/thresholdslider.py
+++ b/onvif-gui/gui/components/thresholdslider.py
@@ -20,6 +20,7 @@
 from PyQt6.QtWidgets import QWidget, QSlider, QLabel, QGridLayout
 from PyQt6.QtCore import Qt
 from gui.onvif.datastructures import MediaSource
+from loguru import logger
 
 class ThresholdSlider(QWidget):
     def __init__(self, mw, title, module):
@@ -37,26 +38,18 @@ def __init__(self, mw, title, module):
         lytThreshold.setContentsMargins(0, 0, 0, 0)
 
     def sldThresholdChanged(self, value):
-        self.lblValue.setText(str(value))
-        match self.mw.videoConfigure.source:
-            case MediaSource.CAMERA:
-                camera = self.mw.cameraPanel.getCurrentCamera()
-                if camera:
-                    camera.videoModelSettings.setModelConfidence(value)
-                    '''
-                    profile = self.mw.cameraPanel.getCurrentProfile()
-                    if profile:
-                        profile.setModelConfidence(value, self.module)
-                    '''
-            case MediaSource.FILE:
-                #self.mw.filePanel.setModelConfidence(value, self.module)
-                self.mw.filePanel.videoModelSettings.setModelConfidence(value)
-
-                '''
-                player = self.mw.pm.getCurrentPlayer()
-                if player:
-                    player.model_confidence = value / 100
-                '''
+        try:
+            self.lblValue.setText(str(value))
+            match self.mw.videoConfigure.source:
+                case MediaSource.CAMERA:
+                    if camera := self.mw.cameraPanel.getCurrentCamera():
+                        if camera.videoModelSettings:
+                            camera.videoModelSettings.setModelConfidence(value)
+                case MediaSource.FILE:
+                    if self.mw.filePanel.videoModelSettings:
+                        self.mw.filePanel.videoModelSettings.setModelConfidence(value)
+        except Exception as ex:
+            logger.error(ex)
 
     def value(self):
         # return a value between 0 and 1
diff --git a/onvif-gui/gui/glwidget.py b/onvif-gui/gui/glwidget.py
index d612cba..0ccf752 100644
--- a/onvif-gui/gui/glwidget.py
+++ b/onvif-gui/gui/glwidget.py
@@ -33,11 +33,9 @@ class GLWidget(QOpenGLWidget):
     def __init__(self, mw):
         super().__init__()
         self.mw = mw
-        self.setMouseTracking(True)
         self.focused_uri = None
         self.image_loading = False
         self.model_loading = False
-        self.last_update_time = None
         self.spinner = QMovie("image:spinner.gif")
         self.spinner.start()
         self.plain_recording = QMovie("image:plain_recording.gif")
@@ -50,19 +48,21 @@ def __init__(self, mw):
 
         self.timer = QTimer()
         self.timer.timeout.connect(self.timerCallback)
-        self.timer.start(10)
+        refreshInterval = self.mw.settingsPanel.spnDisplayRefresh.value()
+        self.timer.start(refreshInterval)
 
     def renderCallback(self, F, player):
         try :
-
-            if player.analyze_video:
+            if player.analyze_video and self.mw.videoConfigure.initialized:
                 F = self.mw.pyVideoCallback(F, player)
             else:
                 if player.uri == self.focused_uri:
                     if self.mw.videoWorker:
                         self.mw.videoWorker(None, None)
 
-            ary = np.array(F, copy = True)
+            ary = np.array(F, copy = False)
+            if len(ary.shape) < 2:
+                return
 
             h = ary.shape[0]
             w = ary.shape[1]
             d = 1
@@ -170,7 +170,6 @@ def isFocusedURI(self, uri):
 
     def paintGL(self):
         try:
-            self.last_update_time = datetime.now()
             painter = QPainter(self)
             painter.setRenderHint(QPainter.RenderHint.Antialiasing)
@@ -252,7 +251,6 @@ def paintGL(self):
                     painter.setPen(pen)
                     painter.drawRect(rect.adjusted(3, 3, -5, -5))
 
-                    show = False
                     if player.videoModelSettings:
                         show = player.videoModelSettings.show
diff --git a/onvif-gui/gui/main.py b/onvif-gui/gui/main.py
index 98a7bf9..c65c6a1 100644
--- a/onvif-gui/gui/main.py
+++ b/onvif-gui/gui/main.py
@@ -22,6 +22,8 @@
 if sys.platform == "linux":
     os.environ["QT_QPA_PLATFORM"] = "xcb"
 
+if sys.platform == "darwin":
+    os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'
 
 from loguru import logger
 
@@ -47,7 +49,7 @@
 import shutil
 import avio
 
-VERSION = "2.0.18"
+VERSION = "2.1.0"
 
 class PipeManager():
     def __init__(self, mw, uri):
@@ -73,6 +75,7 @@ def __init__(self, uri, mw):
         self.audioModelSettings = None
         self.detection_count = deque()
         self.last_image = None
+        self.zombie_counter = 0
 
         self.boxes = []
         self.labels = []
@@ -81,7 +84,6 @@ def __init__(self, uri, mw):
         self.save_image_filename = None
         self.pipe_output_start_time = None
         self.estimated_file_size = 0
-        self.bitrate_used_to_estimate = 0
         self.packet_drop_frame_counter = 0
 
         self.timer = QTimer()
@@ -155,9 +157,8 @@ def estimateFileSize(self):
         profile = self.mw.cameraPanel.getProfile(self.uri)
         if profile:
             audio_bitrate = min(profile.audio_bitrate(), 128)
-            bitrate = profile.bitrate() + audio_bitrate
-            if bitrate != self.bitrate_used_to_estimate:
-                self.bitrate_used_to_estimate = bitrate
+            video_bitrate = min(profile.bitrate(), 16384)
+            bitrate = video_bitrate + audio_bitrate
             result = (bitrate * 1000 / 8) * self.mw.STD_FILE_DURATION
             self.estimated_file_size = result
         return result
@@ -218,7 +219,7 @@ def manageDirectory(self, d):
                 oldest_file = self.getOldestFile(d)
                 if oldest_file:
                     QFile.remove(oldest_file)
-                    logger.debug(f'File has been deleted by auto process: {oldest_file}')
+                    #logger.debug(f'File has been deleted by auto process: {oldest_file}')
                 else:
                     logger.debug("Unable to find the oldest file for deletion during disk management")
                     break
@@ -250,49 +251,24 @@ def getFrameRate(self):
                 frame_rate = profile.frame_rate()
         return frame_rate
 
-    def processModelOutput(self, output):
+    def processModelOutput(self):
         while self.rendering:
             sleep(0.001)
 
-        self.boxes = []
-        self.labels = []
-        self.scores = []
-
-        if self.image and output is not None:
-            labels = output[:, 5].astype(int)
-
-            for i in range(len(labels)):
-                if labels[i] in self.videoModelSettings.targets:
-                    self.boxes.append(output[i, 0:4])
-                    self.scores.append(output[i, 4])
-                    self.labels.append(int(output[i, 5]))
-
-        # detection count is a deque of length equal to one second of video
-        frame_rate = self.getVideoFrameRate()
-        if frame_rate <= 0:
-            profile = self.mw.cameraPanel.getProfile(self.uri)
-            if profile:
-                frame_rate = profile.frame_rate()
 
         sum = 0
-        if frame_rate:
-            if len(self.detection_count) > (frame_rate - 1) and len(self.detection_count):
-                self.detection_count.popleft()
-            # if a desired label has been detected, mark the queue slot as true
-            if len(self.labels):
-                self.detection_count.append(1)
-            else:
-                self.detection_count.append(0)
+        if len(self.detection_count) > self.videoModelSettings.sampleSize - 1 and len(self.detection_count):
+            self.detection_count.popleft()
 
-            # add the number of detections in the deque
-            for count in self.detection_count:
-                sum += count
+        if len(self.boxes):
+            self.detection_count.append(1)
         else:
-            # fallback for cameras without correct frame rate parameter
-            if len(self.labels):
-                sum = 1
+            self.detection_count.append(0)
+
+        for count in self.detection_count:
+            sum += count
+
         return sum
 
 class TimerSignals(QObject):
@@ -372,7 +348,6 @@ def __init__(self, clear_settings=False):
         self.pm = Manager(self)
         self.timers = {}
 
-        self.glWidget = GLWidget(self)
         self.audioPlayer = None
 
         self.signals = MainWindowSignals()
@@ -380,6 +355,7 @@ def __init__(self, clear_settings=False):
         self.settingsPanel = SettingsPanel(self)
         self.signals.started.connect(self.settingsPanel.onMediaStarted)
         self.signals.stopped.connect(self.settingsPanel.onMediaStopped)
+        self.glWidget = GLWidget(self)
         self.cameraPanel = CameraPanel(self)
         self.signals.started.connect(self.cameraPanel.onMediaStarted)
         self.signals.stopped.connect(self.cameraPanel.onMediaStopped)
@@ -420,7 +396,6 @@ def __init__(self, clear_settings=False):
             self.setGeometry(rect)
 
         self.discoverTimer = None
-        #self.enableDiscoverTimer(self.settingsPanel.chkAutoDiscover.isChecked())
         if self.settingsPanel.chkAutoDiscover.isChecked():
             self.cameraPanel.btnDiscoverClicked()
@@ -520,9 +495,17 @@ def playMedia(self, uri, alarm_sound=False):
             logger.debug(f'Attempt to create player with null uri')
             return
 
-        if self.pm.getPlayer(uri):
-            logger.debug(f'Duplicate media uri from {self.getCameraName(uri)} " : " {uri}')
-            return
+        if existing := self.pm.getPlayer(uri):
+            logger.debug(f'Duplicate media uri from {self.getCameraName(uri)}')
+            existing_terminated = False
+            if not existing.running:
+                existing.zombie_counter += 1
+                if existing.zombie_counter > 60:
+                    logger.debug(f'Removing zombie player {self.getCameraName(uri)}')
+                    self.pm.removePlayer(uri)
+                    existing_terminated = True
+            if not existing_terminated:
+                return
 
         player = Player(uri, self)
 
@@ -536,13 +519,14 @@ def playMedia(self, uri, alarm_sound=False):
         player.infoCallback = lambda msg, uri : self.infoCallback(msg, uri)
         player.getAudioStatus = lambda : self.getAudioStatus()
         player.setAudioStatus = lambda status : self.setAudioStatus(status)
+
         player.hw_device_type = self.settingsPanel.getDecoder()
 
         if player.isCameraStream():
-            player.vpq_size = 100
-            player.apq_size = 100
-
-            profile = self.cameraPanel.getProfile(uri)
-            if profile:
+            if profile := self.cameraPanel.getProfile(uri):
+                player.vpq_size = self.settingsPanel.spnCacheMax.value()
+                player.apq_size = self.settingsPanel.spnCacheMax.value()
+                if profile.audio_encoding() == "AAC" and profile.audio_sample_rate() and profile.frame_rate():
+                    player.apq_size = int(player.vpq_size * profile.audio_sample_rate() / profile.frame_rate())
                 player.buffer_size_in_seconds = self.settings.value(self.settingsPanel.bufferSizeKey, 10)
                 player.onvif_frame_rate.num = profile.frame_rate()
                 player.onvif_frame_rate.den = 1
@@ -552,6 +536,7 @@ def playMedia(self, uri, alarm_sound=False):
                 player.desired_aspect = profile.getDesiredAspect()
                 player.analyze_video = profile.getAnalyzeVideo()
                 player.analyze_audio = profile.getAnalyzeAudio()
+                player.sync_audio = profile.getSyncAudio()
 
                 camera = self.cameraPanel.getCamera(uri)
                 if camera:
                     player.systemTabSettings = camera.systemTabSettings
@@ -604,23 +589,29 @@ def showEvent(self, event):
         super().showEvent(event)
 
     def closeEvent(self, event):
-        self.cameraPanel.closeEvent()
-        for timer in self.timers.values():
-            if timer.isActive():
+        try:
+            self.cameraPanel.closeEvent()
+            for player in self.pm.players:
+                player.requestShutdown()
+            for timer in self.timers.values():
                 timer.stop()
-        for player in self.pm.players:
-            player.requestShutdown()
+            count = 0
+            while len(self.pm.players):
+                sleep(0.1)
+                count += 1
+                if count > 200:
+                    logger.debug("not all players closed within the allotted time, flushing player manager")
+                    self.pm.players.clear()
+                    break
 
-        count = 0
-        while len(self.pm.players) > 0:
-            sleep(0.1)
-            count += 1
-            if count > 10:
-                break
+            self.pm.ordinals.clear()
+            self.pm.sizes.clear()
 
-        self.settings.setValue(self.geometryKey, self.geometry())
-        super().closeEvent(event)
+            self.settings.setValue(self.geometryKey, self.geometry())
+            super().closeEvent(event)
+        except Exception as ex:
+            logger.error(f'window close error: {ex}')
 
     def mediaPlayingStarted(self, uri):
         if self.isCameraStreamURI(uri):
@@ -636,8 +627,7 @@ def mediaPlayingStarted(self, uri):
                 if finished:
                     self.pm.auto_start_mode = False
 
-        player = self.pm.getPlayer(uri)
-        if player:
+        if player := self.pm.getPlayer(uri):
             player.clearCache()
             if player.systemTabSettings:
                 if player.systemTabSettings.record_enable and player.systemTabSettings.record_always:
@@ -662,24 +652,22 @@ def mediaPlayingStarted(self, uri):
         self.signals.started.emit(uri)
 
     def stopReconnectTimer(self, uri):
-        timer = self.timers.get(uri, None)
-        if timer:
+        if timer := self.timers.get(uri, None):
             while timer.rendering:
                 sleep(0.001)
             timer.stop()
 
     def mediaPlayingStopped(self, uri):
-        player = self.pm.getPlayer(uri)
-        if player:
+        if player := self.pm.getPlayer(uri):
             if player.request_reconnect:
-                camera = self.cameraPanel.getCamera(uri)
-                if camera:
+                if camera := self.cameraPanel.getCamera(uri):
                     logger.debug(f'Camera stream closed with reconnect requested {self.getCameraName(uri)}')
                     self.signals.reconnect.emit(uri)
             else:
                 if self.isCameraStreamURI(uri):
                     logger.debug(f'Stream closed by user {self.getCameraName(uri)}')
+                player.rendering = False
                 self.pm.removePlayer(uri)
                 self.glWidget.update()
 
         if self.signals:
@@ -714,28 +702,37 @@ def infoCallback(self, msg, uri):
         else:
             name = f'File: {uri}'
 
-        print(f'{name}, Message: {msg}')
+        if msg.startswith("Output file creation failure:") or msg.startswith("Record to file close error:"):
+            logger.error(f'{name}, Message: {msg}')
+
+        else:
+            print(f'{name}, Message: {msg}')
 
     def errorCallback(self, msg, uri, reconnect):
         if reconnect:
-            camera = self.cameraPanel.getCamera(uri)
-            if camera:
-                logger.debug(f'Error from camera: {self.getCameraName(uri)} : {msg}, attempting to re-connect')
+            camera_name = ""
+            last_msg = ""
 
-            player = self.pm.getPlayer(uri)
-            if player:
+            if camera := self.cameraPanel.getCamera(uri):
+                camera_name = self.getCameraName(uri)
+                last_msg = camera.last_msg
+                camera.last_msg = msg
+
+            if player := self.pm.getPlayer(uri):
                 if not player.getVideoWidth():
                     self.pm.removePlayer(uri)
                 self.signals.reconnect.emit(uri)
+
+            if msg != last_msg:
+                logger.debug(f'Error from camera: {camera_name} : {msg}, attempting to re-connect')
         else:
             name = ""
             if self.isCameraStreamURI(uri):
                 player = self.pm.getPlayer(uri)
-                if not player.getVideoWidth():
-                    self.pm.removePlayer(uri)
-                    self.pm.removeKeys(uri)
+                self.pm.removePlayer(uri)
+                self.pm.removeKeys(uri)
 
                 camera = self.cameraPanel.getCamera(uri)
                 if camera:
@@ -743,6 +740,7 @@ def errorCallback(self, msg, uri, reconnect):
                     camera.setIconIdle()
                 self.cameraPanel.syncGUI()
                 self.cameraPanel.setTabsEnabled(True)
+                self.signals.error.emit(msg)
             else:
                 name = f'File: {uri}'
                 self.pm.removePlayer(uri)
@@ -751,7 +749,6 @@ def errorCallback(self, msg, uri, reconnect):
 
             self.signals.error.emit(msg)
             logger.error(f'{name}, Error: {msg}')
-
     def mediaProgress(self, pct, uri):
         self.signals.progress.emit(pct, uri)
 
@@ -760,7 +757,7 @@ def packetDrop(self, uri):
         if player:
             frames = 10
             if player.onvif_frame_rate.num and player.onvif_frame_rate.den:
-                frames = int((player.onvif_frame_rate.num / player.onvif_frame_rate.den) * 10)
+                frames = int((player.onvif_frame_rate.num / player.onvif_frame_rate.den) * 2)
             player.packet_drop_frame_counter = frames
 
     def getAudioStatus(self):
@@ -792,6 +789,11 @@ def tabIndexChanged(self, index):
         else:
             self.tabIndex = index
 
+        if index == 0:
+            if camera := self.cameraPanel.getCurrentCamera():
+                if self.videoConfigure:
+                    self.videoConfigure.setCamera(camera)
+
     def isSplitterCollapsed(self):
         return self.split.sizes()[1] == 0
 
@@ -820,7 +822,7 @@ def getCameraName(self, uri):
         result = ""
         camera = self.cameraPanel.getCamera(uri)
         if camera:
-            result = camera.text() + " ( " + camera.profileName(uri) + ")"
+            result = camera.text() + " (" + camera.profileName(uri) + ")"
         return result
 
     def getLocation(self):
@@ -850,23 +852,6 @@ def getLogFilename(self):
         log_dir += os.path.sep + "logs" + os.path.sep + "onvif-gui" + os.path.sep + datestamp
         return log_dir + os.path.sep + source + "_" + timestamp + ".csv"
 
-    '''
-    def enableDiscoverTimer(self, state):
-        DISCOVER_TIME_INTERVAL = 60000
-        if int(state) == 0:
-            logger.debug("Auto Discover has been turned off")
-            if self.discoverTimer:
-                self.discoverTimer.stop()
-        else:
-            logger.debug("Auto Discover has been turned on")
-            self.cameraPanel.btnDiscoverClicked()
-            if self.settingsPanel.radDiscover.isChecked():
-                if not self.discoverTimer:
-                    self.discoverTimer = QTimer()
-                    self.discoverTimer.timeout.connect(self.cameraPanel.btnDiscoverClicked)
-                self.discoverTimer.start(DISCOVER_TIME_INTERVAL)
-    #'''
-
     def style(self):
         blDefault = "#5B5B5B"
         bmDefault = "#4B4B4B"
diff --git a/onvif-gui/gui/manager.py b/onvif-gui/gui/manager.py
index 0cd96e7..bb49c15 100644
--- a/onvif-gui/gui/manager.py
+++ b/onvif-gui/gui/manager.py
@@ -19,7 +19,7 @@
 from time import sleep
 from loguru import logger
-from PyQt6.QtCore import QRectF, QSize, Qt, QSizeF
+from PyQt6.QtCore import QRectF, QSize, Qt, QSizeF, QPointF
 
 class Manager():
     def __init__(self, mw):
@@ -161,8 +161,6 @@ def removePlayer(self, uri):
                 sleep(0.001)
 
             if not player.request_reconnect:
-                while self.remove_lock:
-                    sleep(0.001)
                 self.removeKeys(uri)
 
             self.players.remove(player)
@@ -237,17 +235,16 @@ def computeRowsCols(self, size_canvas, aspect_ratio):
         return valid_layouts[index].width(), valid_layouts[index].height()
 
     def displayRect(self, uri, canvas_size):
-        self.remove_lock = True
         ar = self.getMostCommonAspectRatio()
         num_rows, num_cols = self.computeRowsCols(canvas_size, ar / 1000)
         if num_cols == 0:
-            return QRectF(0, 0, 0, 0)
+            return QRectF(QPointF(0, 0), QSizeF(canvas_size))
 
         ordinal = -1
         if uri in self.ordinals.keys():
             ordinal = self.ordinals[uri]
         else:
-            return QRectF(0, 0, 0, 0)
+            return QRectF(QPointF(0, 0), QSizeF(canvas_size))
 
         if ordinal > num_rows * num_cols - 1:
             ordinal = self.getOrdinal()
@@ -281,5 +278,4 @@ def displayRect(self, uri, canvas_size):
         x = (col * cell_width) + x_offset
         y = (row * cell_height) + y_offset
 
-        self.remove_lock = False
         return QRectF(x, y, w, h)
diff --git a/onvif-gui/gui/onvif/datastructures.py b/onvif-gui/gui/onvif/datastructures.py
index 4c47bd9..6d1d5b5 100644
--- a/onvif-gui/gui/onvif/datastructures.py
+++ b/onvif-gui/gui/onvif/datastructures.py
@@ -76,6 +76,7 @@ def __init__(self, onvif_data, mw):
         self.icnRecord = QIcon("image:recording_hi.png")
         self.defaultForeground = self.foreground()
         self.filled = False
+        self.last_msg = ""
 
         onvif_data.setSetting = self.setSetting
         onvif_data.getSetting = self.getSetting
diff --git a/onvif-gui/gui/onvif/systemtab.py b/onvif-gui/gui/onvif/systemtab.py
index 158e434..25b64aa 100644
--- a/onvif-gui/gui/onvif/systemtab.py
+++ b/onvif-gui/gui/onvif/systemtab.py
@@ -178,16 +178,18 @@ def __init__(self, cp):
         self.btnBrowser = QPushButton("Browser")
         self.btnBrowser.clicked.connect(self.btnBrowserClicked)
         self.btnBrowser.setFocusPolicy(Qt.FocusPolicy.NoFocus)
-        self.btnJPEG = QPushButton("JPEG")
-        self.btnJPEG.clicked.connect(self.writeJPEG)
-        self.btnJPEG.setFocusPolicy(Qt.FocusPolicy.NoFocus)
+
+        self.btnSnapshot = QPushButton("JPEG")
+        self.btnSnapshot.clicked.connect(self.btnSnapshotClicked)
+        self.btnSnapshot.setFocusPolicy(Qt.FocusPolicy.NoFocus)
 
         pnlButton = QWidget()
         lytButton = QGridLayout(pnlButton)
         lytButton.addWidget(self.btnReboot, 0, 0, 1, 1)
         lytButton.addWidget(self.btnSyncTime, 1, 0, 1, 1)
         lytButton.addWidget(self.btnBrowser, 2, 0, 1, 1)
-        lytButton.addWidget(self.btnJPEG, 3, 0, 1, 1)
+        lytButton.addWidget(self.btnSnapshot, 3, 0, 1, 1)
+        lytButton.setContentsMargins(6, 0, 6, 0)
 
         lytMain = QGridLayout(self)
         lytMain.addWidget(self.grpRecord, 0, 0, 1, 1)
@@ -295,9 +297,8 @@ def btnBrowserClicked(self):
             host = "http://" + camera.onvif_data.host()
             webbrowser.get().open(host)
 
-    def writeJPEG(self):
-        player = self.cp.getCurrentPlayer()
-        if player:
+    def btnSnapshotClicked(self):
+        if player := self.cp.getCurrentPlayer():
             root = self.cp.mw.settingsPanel.dirPictures.txtDirectory.text() + "/" + self.cp.getCamera(player.uri).text()
             pathlib.Path(root).mkdir(parents=True, exist_ok=True)
             filename = '{0:%Y%m%d%H%M%S.jpg}'.format(datetime.datetime.now())
diff --git a/onvif-gui/gui/onvif/videotab.py b/onvif-gui/gui/onvif/videotab.py
index f40cc21..83df6cd 100644
--- a/onvif-gui/gui/onvif/videotab.py
+++ b/onvif-gui/gui/onvif/videotab.py
@@ -21,6 +21,8 @@
     QGridLayout, QWidget, QLabel, QCheckBox, QPushButton
 from PyQt6.QtCore import Qt
 from loguru import logger
+import datetime
+import pathlib
 
 class SpinBox(QSpinBox):
     def __init__(self, qle):
@@ -94,6 +96,10 @@ def __init__(self, cp):
         self.cmbSampleRates.setMaximumWidth(50)
         self.lblSampleRates = QLabel("Samples")
 
+        self.chkSyncAudio = QCheckBox("Sync Audio")
+        self.chkSyncAudio.clicked.connect(self.chkSyncAudioChecked)
+        self.chkSyncAudio.setFocusPolicy(Qt.FocusPolicy.NoFocus)
+
         pnlRow1 = QWidget()
         lytRow1 = QGridLayout(pnlRow1)
         lytRow1.addWidget(self.lblResolutions, 0, 0, 1, 1)
@@ -137,10 +143,12 @@ def __init__(self, cp):
 
         pnlRow5 = QWidget()
         lytRow5 = QGridLayout(pnlRow5)
-        lytRow5.addWidget(QLabel(), 0, 0, 1, 1)
-        lytRow5.addWidget(self.chkAnalyzeVideo, 0, 1, 1, 1)
+        #lytRow5.addWidget(self.btnSnapshot, 0, 0, 1, 1, Qt.AlignmentFlag.AlignCenter)
+        lytRow5.addWidget(self.chkAnalyzeVideo, 0, 0, 1, 1)
+        #lytRow5.addWidget(QLabel(), 0, 1, 1, 1)
+        lytRow5.addWidget(self.chkAnalyzeAudio, 0, 1, 1, 1)
         lytRow5.addWidget(QLabel(), 0, 2, 1, 1)
-        lytRow5.addWidget(self.chkAnalyzeAudio, 0, 3, 1, 1)
+        lytRow5.addWidget(self.chkSyncAudio, 0, 3, 1, 1)
         lytRow5.setColumnStretch(0, 10)
         lytRow5.setContentsMargins(0, 0, 0, 0)
@@ -265,6 +273,7 @@ def syncGUI(self):
                 self.cmbSampleRates.setEnabled(True)
                 self.lblSampleRates.setEnabled(True)
                 self.chkAnalyzeAudio.setEnabled(True)
+                self.chkSyncAudio.setEnabled(True)
             else:
                 self.chkDisableAudio.setChecked(False)
                 self.chkDisableAudio.setEnabled(False)
@@ -273,6 +282,7 @@ def syncGUI(self):
                 self.cmbSampleRates.setEnabled(False)
                 self.lblSampleRates.setEnabled(False)
                 self.chkAnalyzeAudio.setEnabled(False)
+                self.chkSyncAudio.setEnabled(False)
 
             self.chkDisableAudio.setChecked(profile.getDisableAudio())
             if self.chkDisableAudio.isChecked():
@@ -281,7 +291,9 @@ def syncGUI(self):
                 self.cmbSampleRates.setEnabled(False)
                 self.lblSampleRates.setEnabled(False)
                 self.chkAnalyzeAudio.setEnabled(False)
+                self.chkSyncAudio.setEnabled(False)
 
+            self.chkSyncAudio.setChecked(profile.getSyncAudio())
             self.chkAnalyzeVideo.setChecked(profile.getAnalyzeVideo())
             self.chkAnalyzeAudio.setChecked(profile.getAnalyzeAudio())
@@ -379,6 +391,13 @@ def chkDisableAudioChanged(self, state):
             self.cp.syncGUI()
             self.syncGUI()
 
+    def chkSyncAudioChecked(self, state):
+        if profile := self.cp.getCurrentProfile():
+            profile.setSyncAudio(state)
+
+            if player := self.cp.getCurrentPlayer():
+                player.sync_audio = bool(state)
+
     def chkAnalyzeVideoChecked(self, state):
         profile = self.cp.getCurrentProfile()
         if profile:
@@ -427,8 +446,7 @@ def updateCacheSize(self, size):
             self.lblCacheSize.setText("Cache: " + arg)
 
     def btnClearCacheClicked(self):
-        player = self.cp.getCurrentPlayer()
-        if player:
+        if player := self.cp.getCurrentPlayer():
             player.clearCache()
 
     def cmbAudioChanged(self):
diff --git a/onvif-gui/gui/panels/camerapanel.py b/onvif-gui/gui/panels/camerapanel.py
index 11184de..2cf30db 100644
--- a/onvif-gui/gui/panels/camerapanel.py
+++ b/onvif-gui/gui/panels/camerapanel.py
@@ -60,10 +60,8 @@ def keyPressEvent(self, event):
         return super().keyPressEvent(event)
 
     def remove(self):
-        camera = self.currentItem()
-        if camera:
-            player = self.mw.pm.getPlayer(camera.uri())
-            if player:
+        if camera := self.currentItem():
+            if self.mw.pm.getPlayer(camera.uri()):
                 ret = QMessageBox.warning(self, camera.name(), "Camera is currently playing. Please stop before deleting.", QMessageBox.StandardButton.Ok)
@@ -153,6 +151,7 @@ def __init__(self, mw):
         self.lstCamera = CameraList(mw)
         self.lstCamera.currentItemChanged.connect(self.onCurrentItemChanged)
         self.lstCamera.itemDoubleClicked.connect(self.onItemDoubleClicked)
+        self.lstCamera.itemClicked.connect(self.onItemClicked)
         self.lstCamera.setContextMenuPolicy(Qt.ContextMenuPolicy.CustomContextMenu)
         self.lstCamera.customContextMenuRequested.connect(self.showContextMenu)
@@ -303,6 +302,9 @@ def discovered(self):
             self.btnDiscover.setEnabled(True)
 
     def getCredential(self, onvif_data):
+        if not onvif_data:
+            return
+
         if self.lstCamera:
             cameras = [self.lstCamera.item(x) for x in range(self.lstCamera.count())]
             for camera in cameras:
@@ -321,11 +323,6 @@ def getCredential(self, onvif_data):
 
             else:
                 while self.dlgLogin.active:
-                    #print("1", onvif_data.host())
-                    #print("2", self.dlgLogin.lblCameraIP.text())
-                    #if onvif_data.host() == self.dlgLogin.lblCameraIP.text():
-                    #    onvif_data.cancelled = True
-                    #    return onvif_data
                     sleep(0.01)
 
                 self.dlgLogin.active = True
@@ -339,6 +336,9 @@ def onShowLogin(self, onvif_data):
             self.dlgLogin.exec(onvif_data)
 
     def getData(self, onvif_data):
+        if not onvif_data:
+            return
+
         if onvif_data.last_error().startswith("Error initializing camera data during manual fill:"):
             logger.debug(onvif_data.last_error())
             return
@@ -376,8 +376,10 @@ def getData(self, onvif_data):
         onvif_data.startFill(synchronizeTime)
 
     def filled(self, onvif_data):
-        camera = self.getCamera(onvif_data.uri())
-        if camera:
+        if not onvif_data:
+            return
+
+        if camera := self.getCamera(onvif_data.uri()):
             camera.restoreForeground()
             key = f'{camera.serial_number()}/XAddrs'
             self.mw.settings.setValue(key, camera.xaddrs())
@@ -404,6 +406,7 @@ def filled(self, onvif_data):
             if not camera.isRunning():
                 while not self.mw.isVisible():
                     sleep(0.1)
+                self.lstCamera.itemClicked.emit(camera)
                 self.lstCamera.itemDoubleClicked.emit(camera)
 
         if len(onvif_data.last_error()):
@@ -418,14 +421,17 @@ def saveCameraList(self):
 
     def onCurrentItemChanged(self, current, previous):
        if current:
-            player = self.mw.pm.getPlayer(current.uri())
-            if player:
+            if self.mw.pm.getPlayer(current.uri()):
                 self.mw.glWidget.focused_uri = current.uri()
             else:
                 self.mw.glWidget.focused_uri = None
             self.signals.fill.emit(current.onvif_data)
             self.syncGUI()
 
+    def onItemClicked(self, camera):
+        if self.mw.videoConfigure:
+            self.mw.videoConfigure.setCamera(camera)
+
     def onItemDoubleClicked(self, camera):
         if not camera:
             return
@@ -450,7 +456,10 @@ def onItemDoubleClicked(self, camera):
         else:
             if len(players):
                 for player in players:
-                    player.requestShutdown()
+                    if not player.running:
+                        self.mw.pm.removePlayer(player.uri)
+                    else:
+                        player.requestShutdown()
             else:
                 for i, profile in enumerate(profiles):
                     if i == 0:
@@ -549,8 +558,7 @@ def btnStopClicked(self):
     def onMediaStarted(self, uri):
         if self.mw.tabVisible:
             if self.mw.glWidget.focused_uri is None:
-                camera = self.getCurrentCamera()
-                if camera:
+                if camera := self.getCurrentCamera():
                     self.mw.glWidget.focused_uri = camera.uri()
         self.syncGUI()
@@ -569,11 +577,9 @@ def onMediaStopped(self, uri):
         self.syncGUI()
 
     def syncGUI(self):
-        camera = self.getCurrentCamera()
-        if camera:
+        if camera := self.getCurrentCamera():
             self.btnStop.setEnabled(True)
-            player = self.mw.pm.getPlayer(camera.uri())
-            if player:
+            if player := self.mw.pm.getPlayer(camera.uri()):
                 self.btnStop.setStyleSheet(self.getButtonStyle("stop"))
                 ps = player.systemTabSettings
@@ -595,7 +601,9 @@ def syncGUI(self):
                 if camera.isRecording():
                     self.btnRecord.setStyleSheet(self.getButtonStyle("recording"))
                     record_always = player.systemTabSettings.record_always if player.systemTabSettings else False
-                    if camera.isAlarming() or record_always:
+                    record_alarm = player.systemTabSettings.record_alarm if player.systemTabSettings else False
+                    record_enable = player.systemTabSettings.record_enable if player.systemTabSettings else False
+                    if record_enable and ((camera.isAlarming() and record_alarm) or record_always):
                         self.btnRecord.setEnabled(False)
                 else:
                     self.btnRecord.setStyleSheet(self.getButtonStyle("record"))
@@ -612,8 +620,7 @@ def syncGUI(self):
             self.btnStop.setStyleSheet(self.getButtonStyle("play"))
             self.setTabsEnabled(True)
 
-            profile = camera.getProfile(camera.uri())
-            if profile:
+            if profile := camera.getProfile(camera.uri()):
                 if camera.mute:
                     self.btnMute.setStyleSheet(self.getButtonStyle("mute"))
                 else:
@@ -704,19 +711,18 @@ def getCurrentCamera(self):
         return result
 
     def setCurrentCamera(self, uri):
-        camera = self.getCamera(uri)
-        if camera:
+        if camera := self.getCamera(uri):
             self.lstCamera.setCurrentItem(camera)
             self.signals.fill.emit(camera.onvif_data)
             self.syncGUI()
 
-            if self.mw.videoConfigure.source != MediaSource.CAMERA:
-                if camera:
-                    self.mw.videoConfigure.setCamera(camera)
+            if self.mw.videoConfigure:
+                if self.mw.videoConfigure.source != MediaSource.CAMERA:
+                    self.mw.videoConfigure.setCamera(camera)
 
-            if self.mw.audioConfigure.source != MediaSource.CAMERA:
-                if camera:
-                    self.mw.audioConfigure.setCamera(camera)
+            if self.mw.audioConfigure:
+                if self.mw.audioConfigure.source != MediaSource.CAMERA:
+                    self.mw.audioConfigure.setCamera(camera)
 
     def enableAutoTimeSync(self, state):
         AUTO_TIME_SYNC_INTERVAL = 3600000
diff --git a/onvif-gui/gui/panels/filepanel.py b/onvif-gui/gui/panels/filepanel.py
index 3402efe..5ff3f14 100644
--- a/onvif-gui/gui/panels/filepanel.py
+++ b/onvif-gui/gui/panels/filepanel.py
@@ -360,6 +360,7 @@ def __init__(self, mw):
         self.model.fileRenamed.connect(self.onFileRenamed)
         self.tree = TreeView(mw)
         self.tree.setModel(self.model)
+        self.tree.clicked.connect(self.treeClicked)
         self.tree.doubleClicked.connect(self.treeDoubleClicked)
         self.tree.setContextMenuPolicy(Qt.ContextMenuPolicy.CustomContextMenu)
         self.tree.customContextMenuRequested.connect(self.showContextMenu)
@@ -398,6 +399,11 @@ def dirChanged(self, path):
             self.tree.setRootIndex(self.model.index(path))
             self.setDirectory(path)
 
+    def treeClicked(self, index):
+        if index.isValid():
+            fileInfo = self.model.fileInfo(index)
+            self.mw.videoConfigure.setFile(fileInfo.canonicalFilePath())
+
     def treeDoubleClicked(self, index):
         if index.isValid():
             fileInfo = self.model.fileInfo(index)
@@ -554,7 +560,7 @@ def onMenuInfo(self):
             strInfo = "Invalid Index"
 
         msgBox = QMessageBox(self)
-        msgBox.setWindowTitle("File Info")
+        msgBox.setWindowTitle("")
         msgBox.setText(strInfo)
         msgBox.exec()
diff --git a/onvif-gui/gui/panels/settingspanel.py b/onvif-gui/gui/panels/settingspanel.py
index 40643ae..c680833 100644
--- a/onvif-gui/gui/panels/settingspanel.py
+++ b/onvif-gui/gui/panels/settingspanel.py
@@ -32,6 +32,7 @@
 import shutil
 from time import sleep
 import webbrowser
+import platform
 
 class AddCameraDialog(QDialog):
     def __init__(self, mw):
@@ -127,9 +128,14 @@ def readLogFile(self, path):
         self.editor.scrollToBottom()
 
     def btnArchiveClicked(self):
-        path = QFileDialog.getOpenFileName(self, "Select File", self.windowTitle())[0]
-        if len(path) > 0:
-            self.readLogFile(path)
+        path = None
+        if platform.system() == "Linux":
+            path = QFileDialog.getOpenFileName(self, "Select File", self.windowTitle(), options=QFileDialog.Option.DontUseNativeDialog)[0]
+        else:
+            path = QFileDialog.getOpenFileName(self, "Select File", self.windowTitle())[0]
+        if path:
+            if len(path) > 0:
+                self.readLogFile(path)
 
     def btnClearClicked(self):
         filename = self.windowTitle()
@@ -178,8 +184,14 @@ def __init__(self, mw):
         self.alarmSoundVolumeKey = "settings/alarmSoundVolume"
         self.diskLimitKey = "settings/diskLimit"
         self.mangageDiskUsagekey = "settings/manageDiskUsage"
+        self.displayRefreshKey = "settings/displayRefresh"
+        self.cacheMaxSizeKey = "settings/cacheMaxSize"
 
-        decoders = ["NONE", "CUDA", "VAAPI", "VDPAU", "DXVA2", "D3D11VA"]
+        decoders = ["NONE"]
+        if sys.platform == "win32":
+            decoders += ["CUDA", "DXVA2", "D3D11VA"]
+        if sys.platform == "linux":
+            decoders += ["CUDA", "VAAPI", "VDPAU"]
 
         self.dlgLog = None
 
@@ -238,6 +250,25 @@ def __init__(self, mw):
         self.spnLagTime.valueChanged.connect(self.spnLagTimeChanged)
         lblLagTime = QLabel("Post-Alarm Lag Time (in seconds)")
 
+        self.spnDisplayRefresh = QSpinBox()
+        self.spnDisplayRefresh.setMinimum(1)
+        self.spnDisplayRefresh.setMaximum(1000)
+        self.spnDisplayRefresh.setMaximumWidth(80)
+        refresh = 10
+        if sys.platform == "win32":
+            refresh = 20
+        self.spnDisplayRefresh.setValue(int(self.mw.settings.value(self.displayRefreshKey, refresh)))
+        self.spnDisplayRefresh.valueChanged.connect(self.spnDisplayRefreshChanged)
+        lblDisplayRefresh = QLabel("Display Refresh Interval (in milliseconds)")
+
+        self.spnCacheMax = QSpinBox()
+        self.spnCacheMax.setMaximum(200)
+        self.spnCacheMax.setValue(100)
+        self.spnCacheMax.setMaximumWidth(80)
+        self.spnCacheMax.setValue(int(self.mw.settings.value(self.cacheMaxSizeKey, 100)))
+        self.spnCacheMax.valueChanged.connect(self.spnCacheMaxChanged)
+        lblCacheMax = QLabel("Maximum Input Stream Cache Size")
+
         self.cmbSoundFiles = QComboBox()
         d = f'{self.mw.getLocation()}/gui/resources'
         sounds = [f for f in os.listdir(d) if os.path.isfile(os.path.join(d, f)) and f.endswith(".mp3")]
@@ -258,11 +289,15 @@ def __init__(self, mw):
 
         pnlBuffer = QWidget()
         lytBuffer = QGridLayout(pnlBuffer)
-        lytBuffer.addWidget(lblBufferSize, 0, 0, 1, 3)
-        lytBuffer.addWidget(self.spnBufferSize, 0, 3, 1, 1)
-        lytBuffer.addWidget(lblLagTime, 1, 0, 1, 3)
-        lytBuffer.addWidget(self.spnLagTime, 1, 3, 1, 1)
-        lytBuffer.addWidget(pnlSoundFile, 2, 0, 1, 4)
+        lytBuffer.addWidget(lblBufferSize, 1, 0, 1, 3)
+        lytBuffer.addWidget(self.spnBufferSize, 1, 3, 1, 1)
+        lytBuffer.addWidget(lblLagTime, 2, 0, 1, 3)
+        lytBuffer.addWidget(self.spnLagTime, 2, 3, 1, 1)
+        lytBuffer.addWidget(pnlSoundFile, 3, 0, 1, 4)
+        lytBuffer.addWidget(lblDisplayRefresh, 4, 0, 1, 3)
+        lytBuffer.addWidget(self.spnDisplayRefresh, 4, 3, 1, 1)
+        lytBuffer.addWidget(lblCacheMax, 5, 0, 1, 3)
+        lytBuffer.addWidget(self.spnCacheMax, 5, 3, 1, 1)
         lytBuffer.setContentsMargins(0, 0, 0, 0)
 
         self.grpDiscoverType = QGroupBox("Set Camera Discovery Method")
@@ -395,6 +430,13 @@ def autoTimeSyncChecked(self, state):
     def autoStartChecked(self, state):
         self.mw.settings.setValue(self.autoStartKey, state)
 
+    def spnDisplayRefreshChanged(self, i):
+        self.mw.settings.setValue(self.displayRefreshKey, i)
+        self.mw.glWidget.timer.setInterval(i)
+
+    def spnCacheMaxChanged(self, i):
+        self.mw.settings.setValue(self.cacheMaxSizeKey, i)
+
     def spnBufferSizeChanged(self, i):
         self.mw.settings.setValue(self.bufferSizeKey, i)
@@ -414,33 +456,40 @@ def onMediaStopped(self):
 
     def btnCloseAllClicked(self):
-        if self.btnCloseAll.text() == "Close All Streams":
-            for player in self.mw.pm.players:
-                player.requestShutdown()
-            for timer in self.mw.timers.values():
-                timer.stop()
-            self.mw.pm.auto_start_mode = False
-            lstCamera = self.mw.cameraPanel.lstCamera
-            if lstCamera:
-                cameras = [lstCamera.item(x) for x in range(lstCamera.count())]
-                for camera in cameras:
-                    camera.setIconIdle()
-
-            count = 0
-            while len(self.mw.pm.players):
-                sleep(0.01)
-                count += 1
-                if count > 200:
-                    logger.debug("not all players closed within the allotted time")
-                    break
-            self.mw.pm.ordinals.clear()
-            self.mw.cameraPanel.syncGUI()
-        else:
-            lstCamera = self.mw.cameraPanel.lstCamera
-            if lstCamera:
-                cameras = [lstCamera.item(x) for x in range(lstCamera.count())]
-                for camera in cameras:
-                    self.mw.cameraPanel.onItemDoubleClicked(camera)
+        try:
+            if self.btnCloseAll.text() == "Close All Streams":
+                for player in self.mw.pm.players:
+                    player.requestShutdown()
+                for timer in self.mw.timers.values():
+                    timer.stop()
+                self.mw.pm.auto_start_mode = False
+                lstCamera = self.mw.cameraPanel.lstCamera
+                if lstCamera:
+                    cameras = [lstCamera.item(x) for x in range(lstCamera.count())]
+                    for camera in cameras:
+                        camera.setIconIdle()
+
+                count = 0
+                while len(self.mw.pm.players):
+                    sleep(0.1)
+                    count += 1
+                    if count > 200:
+                        logger.debug("not all players closed within the allotted time, flushing player manager")
+                        self.mw.pm.players.clear()
+                        break
+
+                self.mw.pm.ordinals.clear()
+                self.mw.pm.sizes.clear()
+                self.mw.cameraPanel.syncGUI()
+            else:
+                lstCamera = self.mw.cameraPanel.lstCamera
+                if lstCamera:
+                    cameras = [lstCamera.item(x) for x in range(lstCamera.count())]
+                    for camera in cameras:
+                        self.mw.cameraPanel.setCurrentCamera(camera.uri())
+                        self.mw.cameraPanel.onItemDoubleClicked(camera)
+        except Exception as ex:
+            logger.error(ex)
 
     def scanAllNetworksChecked(self, state):
         self.mw.settings.setValue(self.scanAllKey, state)
diff --git a/onvif-gui/gui/panels/videopanel.py b/onvif-gui/gui/panels/videopanel.py
index 080b407..6f75aa5 100644
--- a/onvif-gui/gui/panels/videopanel.py
+++ b/onvif-gui/gui/panels/videopanel.py
@@ -92,16 +92,9 @@ def cmbWorkerChanged(self, worker):
                 self.mw.videoConfigure.setCamera(media)
             case MediaSource.FILE:
                 self.mw.videoConfigure.setFile(media)
-        #self.mw.videoConfigure.enableControls(True)
 
-        existing = False
-        for player in self.mw.pm.players:
-            if player.analyze_video:
-                existing = True
-
-        if existing:
-            self.mw.videoWorkerHook = None
-            self.mw.videoWorker = None
+        self.mw.videoWorkerHook = None
+        self.mw.videoWorker = None
 
         player = self.mw.pm.getCurrentPlayer()
         if player:
diff --git a/onvif-gui/modules/video/motion.py b/onvif-gui/modules/video/motion.py
index 6690227..b47a339 100644
--- a/onvif-gui/modules/video/motion.py
+++ b/onvif-gui/modules/video/motion.py
@@ -34,11 +34,11 @@ def __init__(self, mw, camera=None):
         self.camera = camera
         self.mw = mw
         self.id = "File"
-        self.show = False
         if camera:
             self.id = camera.serial_number()
-
+        self.show = False
         self.gain = self.getModelOutputGain()
+        self.limit = 0
 
     def getModelOutputGain(self):
         key = f'{self.id}/{MODULE_NAME}/ModelAlarmLimit'
@@ -57,6 +57,7 @@ def __init__(self, mw):
             self.name = MODULE_NAME
             self.source = None
             self.media = None
+            self.initialized = False
 
             self.chkShow = QCheckBox("Show Diff Image")
             self.barLevel = WarningBar()
@@ -81,6 +82,12 @@ def __init__(self, mw):
             lytMain.addWidget(QLabel(), 1, 0, 1, 2)
 
             self.enableControls(False)
+            if camera := self.mw.cameraPanel.getCurrentCamera():
+                self.setCamera(camera)
+            else:
+                if file := self.mw.filePanel.getCurrentFileURI():
+                    self.setFile(file)
+            self.initialized = True
 
         except:
             logger.exception("sample configuration failed to load")
@@ -101,7 +108,7 @@ def setCamera(self, camera):
             if not self.isModelSettings(camera.videoModelSettings):
                 camera.videoModelSettings = MotionSettings(self.mw, camera)
             self.mw.videoPanel.lblCamera.setText(f'Camera - {camera.name()}')
-            self.sldGain.setValue(camera.videoModelSettings.gain)
+            self.sldGain.setValue(camera.videoModelSettings.limit)
             self.barLevel.setLevel(0)
             self.indAlarm.setState(0)
             profile = self.mw.cameraPanel.getProfile(camera.uri())
@@ -115,7 +122,7 @@ def setFile(self, file):
         if not self.isModelSettings(self.mw.filePanel.videoModelSettings):
             self.mw.filePanel.videoModelSettings = MotionSettings(self.mw)
         self.mw.videoPanel.lblCamera.setText(f'File - {os.path.split(file)[1]}')
-        self.sldGain.setValue(self.mw.filePanel.videoModelSettings.gain)
+        self.sldGain.setValue(self.mw.filePanel.videoModelSettings.limit)
         self.barLevel.setLevel(0)
         self.indAlarm.setState(0)
         self.enableControls(self.mw.videoPanel.chkEnableFile.isChecked())
@@ -176,7 +183,7 @@ def __call__(self, F, player):
             diff = cv2.morphologyEx(diff, cv2.MORPH_CLOSE, self.kernel, iterations=1)
             motion = diff.sum() / (diff.shape[0] * diff.shape[1])
 
-            level = math.exp(0.2 * (player.videoModelSettings.gain - 50)) * motion
+            level = math.exp(0.2 * (player.videoModelSettings.limit - 50)) * motion
 
             player.last_image = img
diff --git a/onvif-gui/modules/video/yolox.py b/onvif-gui/modules/video/yolox.py
index c8dde52..454a604 100644
--- a/onvif-gui/modules/video/yolox.py
+++ b/onvif-gui/modules/video/yolox.py
@@ -18,6 +18,8 @@
 #*********************************************************************/
 
 IMPORT_ERROR = ""
+MODULE_NAME = "yolox"
+
 try:
     import os
     import sys
@@ -27,7 +29,7 @@ from gui.components import ComboSelector,
FileSelector, ThresholdSlider, TargetSelector from gui.onvif.datastructures import MediaSource from PyQt6.QtWidgets import QWidget, QGridLayout, QLabel, QCheckBox, QMessageBox, \ - QGroupBox, QDialog + QGroupBox, QDialog, QSpinBox from PyQt6.QtCore import Qt, QSize, QObject, pyqtSignal from PyQt6.QtGui import QMovie import time @@ -36,14 +38,16 @@ import torch.nn as nn from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead from yolox.utils import postprocess - import openvino as ov + if sys.platform != "darwin": + import openvino as ov except ModuleNotFoundError as ex: IMPORT_ERROR = str(ex) logger.debug("Import Error has occurred, missing modules need to be installed, please consult documentation: ", ex) + QMessageBox.critical(None, MODULE_NAME + " Import Error", "Modules required for running this function are missing: " + IMPORT_ERROR) os.environ['KMP_DUPLICATE_LIB_OK']='True' -MODULE_NAME = "yolox" + class YoloxWaitDialog(QDialog): def __init__(self, p): @@ -74,9 +78,13 @@ def __init__(self, mw, camera=None): self.id = camera.serial_number() self.targets = self.getTargetsForPlayer() - self.gain = self.getModelOutputGain() + self.limit = self.getModelOutputLimit() self.confidence = self.getModelConfidence() self.show = self.getModelShowBoxes() + self.skipFrames = self.getSkipFrames() + self.skipCounter = 0 + self.sampleSize = self.getSampleSize() + self.orig_img = None def getTargets(self): key = f'{self.id}/{MODULE_NAME}/Targets' @@ -109,13 +117,13 @@ def setModelConfidence(self, value): self.confidence = value self.mw.settings.setValue(key, value) - def getModelOutputGain(self): - key = f'{self.id}/{MODULE_NAME}/ModelOutputGain' - return int(self.mw.settings.value(key, 50)) + def getModelOutputLimit(self): + key = f'{self.id}/{MODULE_NAME}/ModelOutputLimit' + return int(self.mw.settings.value(key, 0)) - def setModelOutputGain(self, value): - key = f'{self.id}/{MODULE_NAME}/ModelOutputGain' - self.gain = value + def setModelOutputLimit(self, value): + key = 
f'{self.id}/{MODULE_NAME}/ModelOutputLimit' + self.limit = value self.mw.settings.setValue(key, value) def getModelShowBoxes(self): @@ -127,6 +135,24 @@ def setModelShowBoxes(self, value): self.show = value self.mw.settings.setValue(key, int(value)) + def getSkipFrames(self): + key = f'{self.id}/{MODULE_NAME}/SkipFrames' + return int(self.mw.settings.value(key, 0)) + + def setSkipFrames(self, value): + key = f'{self.id}/{MODULE_NAME}/SkipFrames' + self.skipFrames = int(value) + self.mw.settings.setValue(key, int(value)) + + def getSampleSize(self): + key = f'{self.id}/{MODULE_NAME}/SampleSize' + return int(self.mw.settings.value(key, 1)) + + def setSampleSize(self, value): + key = f'{self.id}/{MODULE_NAME}/SampleSize' + self.sampleSize = int(value) + self.mw.settings.setValue(key, int(value)) + class YoloxSignals(QObject): showWaitDialog = pyqtSignal() hideWaitDialog = pyqtSignal() @@ -139,8 +165,13 @@ def __init__(self, mw): self.name = MODULE_NAME self.source = None self.media = None + self.initialized = False self.autoKey = "Module/" + MODULE_NAME + "/autoDownload" + if len(IMPORT_ERROR): + self.mw.videoPanel.lblCamera.setText(f'Configuration error - {IMPORT_ERROR}') + return + self.dlgWait = YoloxWaitDialog(self.mw) self.signals = YoloxSignals() self.signals.showWaitDialog.connect(self.showWaitDialog) @@ -151,9 +182,14 @@ def __init__(self, mw): self.chkAuto.stateChanged.connect(self.chkAutoClicked) self.txtFilename = FileSelector(mw, MODULE_NAME) self.txtFilename.setEnabled(not self.chkAuto.isChecked()) - self.cmbRes = ComboSelector(mw, "Size", ("160", "320", "480", "640", "960", "1240"), "640", MODULE_NAME) + self.cmbRes = ComboSelector(mw, "Size", ("160", "320", "480", "640", "960", "1280"), "640", MODULE_NAME) self.cmbModelName = ComboSelector(mw, "Name", ("yolox_tiny", "yolox_s", "yolox_m", "yolox_l", "yolox_x"), "yolox_s", MODULE_NAME) - self.cmbAPI = ComboSelector(mw, "API", ("PyTorch", "OpenVINO"), "OpenVINO", MODULE_NAME) + + apis = ["PyTorch"] + if 
sys.platform != "darwin": + apis.append("OpenVINO") + + self.cmbAPI = ComboSelector(mw, "API", apis, "PyTorch", MODULE_NAME) self.cmbAPI.cmbBox.currentTextChanged.connect(self.cmbAPIChanged) self.cmbDevice = ComboSelector(mw, "Device", self.getDevices(self.cmbAPI.currentText()), "AUTO", MODULE_NAME) @@ -161,6 +197,17 @@ def __init__(self, mw): self.sldConfThre = ThresholdSlider(mw, "Confidence", MODULE_NAME) self.selTargets = TargetSelector(self.mw, MODULE_NAME) + self.spnSkipFrames = QSpinBox() + self.lblSkipFrames = QLabel("Skip Frames") + self.spnSkipFrames.setValue(0) + self.spnSkipFrames.valueChanged.connect(self.spnSkipFramesChanged) + + self.spnSampleSize = QSpinBox() + self.lblSampleSize = QLabel("Sample Size") + self.spnSampleSize.setMinimum(1) + self.spnSampleSize.setValue(1) + self.spnSampleSize.valueChanged.connect(self.spnSampleSizeChanged) + grpSystem = QGroupBox("System wide model parameters") lytSystem = QGridLayout(grpSystem) lytSystem.addWidget(self.chkAuto, 0, 0, 1, 4) @@ -172,9 +219,13 @@ def __init__(self, mw): self.grpCamera = QGroupBox("Check camera video alarm to enable") lytCamera = QGridLayout(self.grpCamera) - lytCamera.addWidget(self.sldConfThre, 0, 0, 1, 1) - lytCamera.addWidget(QLabel(), 1, 0, 1, 1) - lytCamera.addWidget(self.selTargets, 2, 0, 1, 1) + lytCamera.addWidget(self.sldConfThre, 0, 0, 1, 4) + lytCamera.addWidget(self.lblSkipFrames, 1, 0, 1, 1) + lytCamera.addWidget(self.spnSkipFrames, 1, 1, 1, 1) + lytCamera.addWidget(self.lblSampleSize, 1, 2, 1, 1) + lytCamera.addWidget(self.spnSampleSize, 1, 3, 1, 1) + lytCamera.addWidget(QLabel(), 2, 0, 1, 4) + lytCamera.addWidget(self.selTargets, 3, 0, 1, 4) lytMain = QGridLayout(self) lytMain.addWidget(grpSystem, 0, 0, 1, 1) @@ -184,25 +235,51 @@ def __init__(self, mw): lytMain.setRowStretch(3, 10) self.enableControls(False) + if camera := self.mw.cameraPanel.getCurrentCamera(): + self.setCamera(camera) + else: + if file := self.mw.filePanel.getCurrentFileURI(): + self.setFile(file) - if 
len(IMPORT_ERROR) > 0: - QMessageBox.critical(None, MODULE_NAME + " Import Error", "Modules required for running this function are missing: " + IMPORT_ERROR) + self.initialized = True - except: + except Exception as ex: logger.exception(MODULE_NAME + " configure failed to load") + QMessageBox.critical(None, f'{MODULE_NAME} Error', f'{MODULE_NAME} configure failed to initialize: {ex}') def chkAutoClicked(self, state): self.mw.settings.setValue(self.autoKey, state) self.txtFilename.setEnabled(not self.chkAuto.isChecked()) + def spnSkipFramesChanged(self, value): + if self.source == MediaSource.CAMERA: + if self.media: + if self.media.videoModelSettings: + self.media.videoModelSettings.setSkipFrames(value) + if self.source == MediaSource.FILE: + if self.mw.filePanel.videoModelSettings: + self.mw.filePanel.videoModelSettings.setSkipFrames(value) + + def spnSampleSizeChanged(self, value): + if self.source == MediaSource.CAMERA: + if self.media: + if self.media.videoModelSettings: + self.media.videoModelSettings.setSampleSize(value) + if self.source == MediaSource.FILE: + if self.mw.filePanel.videoModelSettings: + self.mw.filePanel.videoModelSettings.setSampleSize(value) + self.selTargets.sldGain.setMaximum(value) + def getDevices(self, api): devices = [] - if api == "OpenVINO": + if api == "OpenVINO" and sys.platform != "darwin": devices = ["AUTO"] + ov.Core().available_devices if api == "PyTorch": devices = ["auto", "cpu"] if torch.cuda.is_available(): devices.append("cuda") + if torch.backends.mps.is_available(): + devices.append("mps") return devices def cmbAPIChanged(self, text): @@ -213,31 +290,39 @@ def setCamera(self, camera): self.source = MediaSource.CAMERA self.media = camera - if camera: + if camera and not len(IMPORT_ERROR): if not self.isModelSettings(camera.videoModelSettings): camera.videoModelSettings = YoloxSettings(self.mw, camera) self.mw.videoPanel.lblCamera.setText(f'Camera - {camera.name()}') 
self.selTargets.setTargets(camera.videoModelSettings.targets) self.sldConfThre.setValue(camera.videoModelSettings.confidence) - self.selTargets.sldGain.setValue(camera.videoModelSettings.gain) + self.spnSkipFrames.setValue(camera.videoModelSettings.skipFrames) + self.spnSampleSize.setValue(camera.videoModelSettings.sampleSize) + self.selTargets.sldGain.setMaximum(camera.videoModelSettings.sampleSize) + self.selTargets.sldGain.setValue(camera.videoModelSettings.limit) self.selTargets.chkShowBoxes.setChecked(camera.videoModelSettings.show) self.selTargets.barLevel.setLevel(0) self.selTargets.indAlarm.setState(0) - profile = self.mw.cameraPanel.getProfile(camera.uri()) - if profile: + if profile := self.mw.cameraPanel.getProfile(camera.uri()): self.enableControls(profile.getAnalyzeVideo()) def setFile(self, file): self.source = MediaSource.FILE self.media = file - if file: + if file and not len(IMPORT_ERROR): if not self.isModelSettings(self.mw.filePanel.videoModelSettings): self.mw.filePanel.videoModelSettings = YoloxSettings(self.mw) - self.mw.videoPanel.lblCamera.setText(f'File - {os.path.split(file)[1]}') + file_dir = "File" + if os.path.isdir(file): + file_dir = "Directory" + self.mw.videoPanel.lblCamera.setText(f'{file_dir} - {os.path.split(file)[1]}') self.selTargets.setTargets(self.mw.filePanel.videoModelSettings.targets) self.sldConfThre.setValue(self.mw.filePanel.videoModelSettings.confidence) - self.selTargets.sldGain.setValue(self.mw.filePanel.videoModelSettings.gain) + self.spnSkipFrames.setValue(self.mw.filePanel.videoModelSettings.skipFrames) + self.spnSampleSize.setValue(self.mw.filePanel.videoModelSettings.sampleSize) + self.selTargets.sldGain.setMaximum(self.mw.filePanel.videoModelSettings.sampleSize) + self.selTargets.sldGain.setValue(self.mw.filePanel.videoModelSettings.limit) self.selTargets.chkShowBoxes.setChecked(self.mw.filePanel.videoModelSettings.show) self.selTargets.barLevel.setLevel(0) self.selTargets.indAlarm.setState(0) @@ -263,15 
+348,23 @@ def hideWaitDialog(self): class VideoWorker: def __init__(self, mw): try: - print("Video Worker initialization") self.mw = mw self.last_ex = "" + self.api = self.mw.videoConfigure.cmbAPI.currentText() + self.lock = False + self.callback_lock = False - if self.mw.videoConfigure.name != MODULE_NAME or len(IMPORT_ERROR) > 0: + if self.mw.videoConfigure.name != MODULE_NAME or len(IMPORT_ERROR) > 0 or self.mw.glWidget.model_loading: return self.mw.glWidget.model_loading = True - self.lock = True + time.sleep(1) + + for player in self.mw.pm.players: + player.boxes = [] + + self.mw.videoConfigure.selTargets.indAlarm.setState(0) + self.mw.videoConfigure.selTargets.barLevel.setLevel(0) self.torch_device = None self.torch_device_name = None @@ -284,14 +377,12 @@ def __init__(self, mw): initializer_data = torch.rand(1, 3, self.res, self.res) self.model_name = self.mw.videoConfigure.cmbModelName.currentText() - self.api = self.mw.videoConfigure.cmbAPI.currentText() if self.api == "PyTorch": self.torch_device_name = self.mw.videoConfigure.cmbDevice.currentText() if self.api == "OpenVINO": self.ov_device = self.mw.videoConfigure.cmbDevice.currentText() - - if self.api == "OpenVINO" and Path(self.get_ov_model_filename()).is_file(): + if self.api == "OpenVINO" and Path(self.get_ov_model_filename()).is_file() and sys.platform != "darwin": ov_model = ov.Core().read_model(self.get_ov_model_filename()) if (self.api == "OpenVINO" and not ov_model) or self.api == "PyTorch": @@ -299,6 +390,8 @@ def __init__(self, mw): if self.api == "PyTorch": if torch.cuda.is_available(): self.torch_device_name = "cuda" + if torch.backends.mps.is_available(): + self.torch_device_name = "mps" if self.mw.videoConfigure.cmbDevice.currentText() == "cpu": self.torch_device_name = "cpu" self.torch_device = torch.device(self.torch_device_name) @@ -333,45 +426,123 @@ def __init__(self, mw): self.model.load_state_dict(torch.load(self.ckpt_file, map_location="cpu")["model"]) 
self.model(initializer_data.to(self.torch_device)) - if self.api == "OpenVINO": + if self.api == "OpenVINO" and sys.platform != "darwin": + if not ov_model: ov_model = ov.convert_model(self.model, example_input=initializer_data) ov.save_model(ov_model, self.get_ov_model_filename()) - if self.api == "OpenVINO": self.ov_device = self.mw.videoConfigure.cmbDevice.currentText() - core = ov.Core() if self.ov_device != "CPU": ov_model.reshape({0: [1, 3, self.res, self.res]}) ov_config = {} - if "GPU" in self.ov_device or ("AUTO" in self.ov_device and "GPU" in core.available_devices): + if "GPU" in self.ov_device or ("AUTO" in self.ov_device and "GPU" in ov.Core().available_devices): ov_config = {"GPU_DISABLE_WINOGRAD_CONVOLUTION": "YES"} self.compiled_model = ov.compile_model(ov_model, self.ov_device, ov_config) self.compiled_model(initializer_data) + self.infer_queue = ov.AsyncInferQueue(self.compiled_model) + self.infer_queue.set_callback(self.callback) if not self.torch_device: self.torch_device = torch.device("cpu") - except: + except Exception as ex: logger.exception(MODULE_NAME + " initialization failure") - self.mw.signals.error.emit(MODULE_NAME + " initialization failure, please check logs for details") + self.mw.signals.error.emit(f'{MODULE_NAME} initialization failure - {ex}') self.mw.glWidget.model_loading = False - self.lock = False + + def preprocess(self, player): + h = player.videoModelSettings.orig_img.shape[0] + w = player.videoModelSettings.orig_img.shape[1] + + test_size = (self.res, self.res) + ratio = min(test_size[0] / h, test_size[1] / w) + inf_shape = (int(h * ratio), int(w * ratio)) + bottom = test_size[0] - inf_shape[0] + side = test_size[1] - inf_shape[1] + pad = (0, 0, side, bottom) + + timg = functional.to_tensor(player.videoModelSettings.orig_img).to(self.torch_device) + timg *= 255 + timg = functional.resize(timg, inf_shape) + timg = functional.pad(timg, pad, 114) + timg = timg.unsqueeze(0) + return timg + + def postprocess(self, outputs, 
player): + confthre = player.videoModelSettings.confidence / 100 + nmsthre = 0.65 + if isinstance(outputs, np.ndarray): + outputs = torch.from_numpy(outputs) + outputs = postprocess(outputs, self.num_classes, confthre, nmsthre) + output = None + boxes = [] + test_size = (self.res, self.res) + h = player.videoModelSettings.orig_img.shape[0] + w = player.videoModelSettings.orig_img.shape[1] + ratio = min(test_size[0] / h, test_size[1] / w) + + if outputs[0] is not None: + output = outputs[0].cpu().numpy().astype(float) + output[:, 0:4] /= ratio + output[:, 4] *= output[:, 5] + output = np.delete(output, 5, 1) + + labels = output[:, 5].astype(int) + for i in range(len(labels)): + if labels[i] in player.videoModelSettings.targets: + boxes.append(output[i, 0:4]) + + player.boxes = boxes + result = player.processModelOutput() + alarmState = result >= player.videoModelSettings.limit if result else False + player.handleAlarm(alarmState) + + if camera := self.mw.cameraPanel.getCamera(player.uri): + if camera.isFocus(): + + level = 0 + if player.videoModelSettings.limit: + level = result / player.videoModelSettings.limit + else: + if result: + level = 1.0 + + self.mw.videoConfigure.selTargets.barLevel.setLevel(level) + + if alarmState: + self.mw.videoConfigure.selTargets.indAlarm.setState(1) + + def callback(self, infer_request, player): + try: + if not self.mw.glWidget.model_loading: + while self.callback_lock: + time.sleep(0.001) + self.callback_lock = True + outputs = infer_request.get_output_tensor(0).data + self.postprocess(outputs, player) + + except Exception as ex: + logger.exception(f'{MODULE_NAME} callback error: {ex}') + + self.callback_lock = False def __call__(self, F, player): try: + if len(IMPORT_ERROR) or self.mw.glWidget.model_loading: + return if not F or not player or self.mw.videoConfigure.name != MODULE_NAME: self.mw.videoConfigure.selTargets.barLevel.setLevel(0) self.mw.videoConfigure.selTargets.indAlarm.setState(0) return - + camera = 
self.mw.cameraPanel.getCamera(player.uri) if not self.mw.videoConfigure.isModelSettings(player.videoModelSettings): if player.isCameraStream(): if camera: if not self.mw.videoConfigure.isModelSettings(camera.videoModelSettings): - camera.videoModelSettings = YoloxSettings(self.mw, camera) + self.mw.cameraPanel.setCurrentCamera(camera) player.videoModelSettings = camera.videoModelSettings else: if not self.mw.videoConfigure.isModelSettings(self.mw.filePanel.videoModelSettings): @@ -381,75 +552,40 @@ def __call__(self, F, player): if not player.videoModelSettings: raise Exception("Unable to set video model parameters for player") - img = np.array(F, copy=False) - - test_size = (self.res, self.res) - ratio = min(test_size[0] / img.shape[0], test_size[1] / img.shape[1]) - inf_shape = (int(img.shape[0] * ratio), int(img.shape[1] * ratio)) - bottom = test_size[0] - inf_shape[0] - side = test_size[1] - inf_shape[1] - pad = (0, 0, side, bottom) - - timg = functional.to_tensor(img).to(self.torch_device) - timg *= 255 - timg = functional.resize(timg, inf_shape) - timg = functional.pad(timg, pad, 114) - timg = timg.unsqueeze(0) + player.videoModelSettings.orig_img = np.array(F, copy=False) - confthre = player.videoModelSettings.confidence / 100 - nmsthre = 0.65 + if player.videoModelSettings.skipCounter < player.videoModelSettings.skipFrames: + player.videoModelSettings.skipCounter += 1 + return + player.videoModelSettings.skipCounter = 0 while self.lock: time.sleep(0.001) self.lock = True - if self.api == "PyTorch": + if self.api == "OpenVINO" and self.compiled_model: + timg = self.preprocess(player) + self.infer_queue.start_async({0: timg}, player, False) + self.lock = False + + if self.api == "PyTorch" and self.model: + timg = self.preprocess(player) with torch.no_grad(): outputs = self.model(timg) - if self.api == "OpenVINO": - outputs = torch.from_numpy(self.compiled_model(timg)[0]) - - self.lock = False - - output = None - outputs = postprocess(outputs, 
self.num_classes, confthre, nmsthre) - if outputs[0] is not None: - output = outputs[0].cpu().numpy().astype(float) - output[:, 0:4] /= ratio - output[:, 4] *= output[:, 5] - output = np.delete(output, 5, 1) - - result = player.processModelOutput(output) - frame_rate = player.getVideoFrameRate() - if frame_rate <= 0: - profile = self.mw.cameraPanel.getProfile(player.uri) - if profile: - frame_rate = profile.frame_rate() - - gain = 1 - if frame_rate: - gain = player.videoModelSettings.gain / frame_rate - - alarmState = result * gain >= 1.0 - - if camera: - if camera.isFocus(): - self.mw.videoConfigure.selTargets.barLevel.setLevel(result * gain) - if alarmState: - self.mw.videoConfigure.selTargets.indAlarm.setState(1) - - player.handleAlarm(alarmState) + self.postprocess(outputs, player) + self.lock = False if self.parameters_changed(): self.__init__(self.mw) except Exception as ex: if self.last_ex != str(ex) and self.mw.videoConfigure.name == MODULE_NAME: - logger.exception(MODULE_NAME + " runtime error") + logger.exception(f'{MODULE_NAME} runtime error - {ex}') self.last_ex = str(ex) - self.lock = False + + self.lock = False def parameters_changed(self): result = False @@ -474,11 +610,11 @@ def parameters_changed(self): def get_ov_model_filename(self): model_name = self.mw.videoConfigure.cmbModelName.currentText() openvino_device = self.mw.videoConfigure.cmbDevice.currentText() - return f'{torch.hub.get_dir()}/checkpoints/{model_name}/{openvino_device}/model.xml' + return Path(f'{torch.hub.get_dir()}/checkpoints/{model_name}/{openvino_device}/model.xml').absolute() def get_auto_ckpt_filename(self): model_name = self.mw.videoConfigure.cmbModelName.currentText() - return f'{torch.hub.get_dir()}/checkpoints/{model_name}.pth' + return Path(f'{torch.hub.get_dir()}/checkpoints/{model_name}.pth').absolute() def get_model(self, num_classes, depth, width, act): def init_yolo(M): diff --git a/onvif-gui/pyproject.toml b/onvif-gui/pyproject.toml index 4b9575b..54aa3db 100644 
--- a/onvif-gui/pyproject.toml +++ b/onvif-gui/pyproject.toml @@ -19,7 +19,7 @@ [project] name = "onvif-gui" -version = "2.0.9" +version = "2.1.0" dynamic = ["gui-scripts"] description = "A client gui for Onvif" readme = "README.md" @@ -38,7 +38,11 @@ classifiers = [ ] dependencies = [ - "libonvif==3.1.1", "avio==3.1.2", "PyQt6-Qt6==6.6.1", "pyqt6==6.6.1", "numpy", "loguru", "opencv-python" + 'libonvif==3.2.0', 'avio==3.2.0', 'numpy', 'loguru', 'opencv-python', + 'PyQt6-Qt6==6.6.1; platform_system != "Darwin"', + 'pyqt6==6.6.1; platform_system != "Darwin"', + 'PyQt6-Qt6; platform_system == "Darwin"', + 'pyqt6; platform_system == "Darwin"' ] [project.urls] diff --git a/onvif-gui/setup.py b/onvif-gui/setup.py index 4e7ecc2..cc93646 100644 --- a/onvif-gui/setup.py +++ b/onvif-gui/setup.py @@ -18,25 +18,19 @@ #******************************************************************************/ from setuptools import setup, find_packages -#from setuptools.command.install import install with open("README.md", "r", encoding = 'cp850') as fh: long_description = fh.read() setup( name="onvif-gui", - version="2.0.9", + version="2.1.0", author="Stephen Rhodes", author_email="sr99622@gmail.com", description="GUI program for onvif", long_description=long_description, long_description_content_type="text/markdown", packages=find_packages(), - classifiers=[ - "Programming Language :: Python :: 3", - "License :: OSI Approved :: MIT License", - "Operating System :: OS Independent", - ], python_requires='>=3.10', entry_points={ 'gui_scripts': [ diff --git a/onvif-gui/yolox/models/yolo_head.py b/onvif-gui/yolox/models/yolo_head.py index 0389828..3d0a9b3 100644 --- a/onvif-gui/yolox/models/yolo_head.py +++ b/onvif-gui/yolox/models/yolo_head.py @@ -278,8 +278,13 @@ def decode_outputs(self, outputs, dtype): shape = grid.shape[:2] strides.append(torch.full((*shape, 1), stride)) - grids = torch.cat(grids, dim=1).type(dtype) - strides = torch.cat(strides, dim=1).type(dtype) + if 
dtype.startswith("torch.mps"): + grids = torch.cat(grids, dim=1).to("cpu") + strides = torch.cat(strides, dim=1).to("cpu") + outputs = outputs.to("cpu") + else: + grids = torch.cat(grids, dim=1).type(dtype) + strides = torch.cat(strides, dim=1).type(dtype) outputs = torch.cat([ (outputs[..., 0:2] + grids) * strides,